An Indian billionaire has disclosed a startling personal encounter in which an AI-driven scam nearly swindled his company in Dubai. The fraud used an artificial intelligence (AI) clone of his voice to deceive one of his senior executives into almost authorizing a substantial financial transfer.
Sunil Bharti Mittal, the founder and chairman of Bharti Enterprises, a multinational conglomerate, shared this experience during the NDTV World Summit on Monday. Mittal detailed how the fraudster replicated his voice so convincingly that even he was left 'stunned' upon hearing the recording. 'One of my senior finance executives in Dubai, who manages our Africa headquarters, received a call in my voice, my tone, instructing him to make a large money transfer,' Mittal recounted. 'He was wise enough to recognize that I would never make such a request over the phone.'
The executive, whose name was not disclosed, promptly reported the suspicious call, averting a significant financial loss. 'When I heard the recording, I was quite stunned by how perfectly it was articulated. It sounded exactly like how I would speak,' Mittal added.
The incident comes amid rising concerns, both globally and in the UAE, about the misuse of AI, particularly deepfake technology. The UAE Cyber Security Council recently cautioned about the perils of deepfake content, highlighting the risks of fraud, privacy breaches, and misinformation. Deepfakes, AI-generated media designed to mimic real people, can produce highly convincing yet entirely fabricated videos, images, or audio, posing serious threats to individuals and organizations.
The UAE Cyber Security Council has also initiated an awareness campaign, warning that sharing deepfake content could result in fraud or legal repercussions, urging the public to verify the authenticity of digital content before disseminating it. A recent Kaspersky Business Digitisation survey revealed that while 75% of UAE employees believed they could identify a deepfake, only 37% were successful in distinguishing between real and AI-generated images during testing. Cyber experts noted that organizations remain highly susceptible to deepfake scams, such as those involving fake videos or audio of CEOs authorizing wire transfers.
Dmitry Anikin, senior data scientist at Kaspersky, underscored the necessity for constant vigilance. 'Many employees overestimate their ability to recognize deepfakes, which poses a significant security risk. Cybercriminals are increasingly utilizing this technology to impersonate executives, enabling scams and extortion,' he said.
Source link: https://www.khaleejtimes.com