By 2024, it is evident that we are living in a post-truth era. The digital revolution, which brought us social media, has accelerated the spread of information to unprecedented levels. While this has its advantages, the proliferation of false and misleading information online is becoming increasingly perilous, and the integration of artificial intelligence (AI) has amplified the threat. AI now allows realistic images of fictional scenarios to be created from simple text prompts, eliminating the need for specialized skills to produce fake images. This has driven a significant rise in deepfakes in recent years.
Farhad Oroumchian, a Professor of Information Sciences at the University of Wollongong in Dubai, highlights incidents in the USA and Europe where deepfakes have been used for malicious purposes, such as creating inappropriate videos of students for degradation, revenge, or ransom. He emphasizes the importance of distinguishing between real and AI-generated content to protect against fraud, safeguard personal reputations, and maintain trust in digital interactions.
Deepfakes are a type of synthetic media created using AI techniques, particularly deep learning algorithms, to fabricate realistic content. These technologies manipulate videos, audio recordings, or images to make it appear as though individuals are doing or saying things they never did. As these technologies advance, distinguishing deepfakes from genuine media becomes more challenging, raising concerns about privacy, security, and potential abuse.
Deepfakes can spread false information, manipulate public opinion, and be made without individuals' consent, leading to privacy invasion and identity theft. Recent examples include a deepfake video of US Vice President Kamala Harris shared by Elon Musk and fake videos of Bollywood actors criticizing the Indian Prime Minister. Celebrity deepfakes even have dedicated TikTok accounts, raising serious ethical concerns.
Identifying AI-generated images is becoming increasingly difficult. Yohan Wadia, a UAE-based entrepreneur and digital artist, notes that while some images carry subtle telltale signs, many are indistinguishable from real photos. Professor Oroumchian suggests that the sophistication of the tools used to create them determines their detectability. Visual inspection, cross-validation, reverse image searches, and metadata analysis are all strategies for identifying AI-generated images.
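The metadata-analysis strategy can be sketched in code. As a minimal illustration (not an endorsed detection method, and the function names here are hypothetical), the snippet below scans a PNG file's `tEXt` chunks for keywords that some AI image generators embed, such as a "parameters" chunk holding the text prompt. Absence of such metadata proves nothing, since it is trivially stripped; this is a heuristic first pass at best.

```python
import struct
import zlib

# Keywords some generators write into PNG text chunks (assumption:
# e.g. Stable Diffusion web UIs store the prompt under "parameters").
AI_HINT_KEYS = {"parameters", "prompt", "software", "comment"}

def png_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG byte string."""
    if not data.startswith(b"\x89PNG\r\n\x1a\n"):
        raise ValueError("not a PNG file")
    chunks, pos = {}, 8  # skip the 8-byte PNG signature
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt body = keyword, NUL separator, Latin-1 text
            key, _, text = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return chunks

def looks_ai_generated(data: bytes) -> bool:
    """Heuristic only: True if any text-chunk keyword hints at a generator."""
    return any(k.lower() in AI_HINT_KEYS for k in png_text_chunks(data))
```

In practice such a check would be combined with reverse image search and visual inspection, exactly because metadata survives only when the creator has not bothered to remove it.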
Specialized tools like Deepware Scanner and Sensity AI can help detect AI-generated content. Wadia believes there should be clear indications when an image is AI-generated, especially in media, to maintain public trust and prevent misinformation. Until regulations are in place, combining these approaches and maintaining a discerning eye online can help identify AI-generated images.