
The Year Of The Deepfake: Combating Digital Deception In 2024 And Beyond
As I reflect on the past year, it’s clear that 2024 will go down in history as the “year of the deepfake.” News cycles have been flooded with scams involving AI-manipulated videos, sounds and images. From fake celebrity endorsements to the dissemination of misinformation, what was once a niche novelty has exploded into a pervasive and alarming threat.
In this new reality, it’s crucial that we distinguish between real and fake content online. With generative AI continuing to accelerate, identifying deception will become increasingly difficult, posing serious risks to individuals, businesses and societies. Therefore, we must act swiftly to develop solutions that keep genuine internet users safe and reduce fraud.
The threat posed by deepfakes is a growing concern for the FBI: nearly 40% of online scam victims in 2023 were targeted with these manipulated videos, sounds and images. Furthermore, over half of finance professionals in the U.S. and U.K. have been targeted by deepfake-powered financial scams, and a staggering 43% have fallen victim to such attacks. The cryptocurrency sector has not been spared, with deepfake-related incidents rising by an alarming 654% from 2023 to 2024.
The threat is not limited to finance. In the summer of 2024, New York Attorney General Letitia James sounded the alarm about investment scams that used AI-manipulated videos of celebrities such as Warren Buffett and Bill Gates to lure investors into ill-advised financial decisions. In another case, a deepfaked chief financial officer’s voice was used to convince an unsuspecting individual to transfer funds, underscoring the critical need for swift action.
In this digital age where our daily lives are increasingly online—ranging from work meetings to telehealth appointments to banking and financial planning—it is more vital than ever that we can trust the people we interact with. The stakes are far too high.
Unfortunately, current regulatory efforts to address deepfakes are fragmented at best. No federal law in the U.S. comprehensively addresses the creation, dissemination and use of these manipulated videos, sounds and images. Some states, such as Florida, Texas and Washington, have enacted their own legislation, and Congress is weighing regulations of its own, but these measures are still in their early stages.
On the technological front, several defenses are emerging to help tackle this challenge. Google DeepMind has released its AI text watermarking tool as open-source software, allowing anyone to use it. While a step forward, this solution primarily identifies AI-generated text and does not extend to audio or video manipulations. Facebook, meanwhile, is testing facial recognition tools to rapidly restore compromised accounts and identify fake celebrity endorsements. These efforts are promising but still in the pilot phase and limited in scope: they can help detect deepfakes in specific situations, such as user-generated content, videos, livestreams and cross-platform sharing, but they do not provide a comprehensive solution.
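To make the watermarking approach concrete, here is a minimal conceptual sketch of generation-time statistical text watermarking. It is not DeepMind’s actual SynthID algorithm; the secret key, vocabulary size and green-list fraction below are illustrative assumptions, and the scheme shown is the generic “green list” technique from the research literature, in which a secret key biases the generator toward a pseudorandom subset of tokens at each step, and a detector who holds the key checks whether that subset appears more often than chance.

```python
import hashlib
import random

SECRET_KEY = "demo-key"   # illustrative shared secret (assumption)
VOCAB_SIZE = 50_000       # illustrative vocabulary size (assumption)
GREEN_FRACTION = 0.5      # fraction of the vocabulary marked "green" per step

def green_list(prev_token: int) -> set:
    """Derive a pseudorandom 'green' subset of the vocabulary from the
    secret key and the preceding token (the watermark's context)."""
    seed = hashlib.sha256(f"{SECRET_KEY}:{prev_token}".encode()).digest()
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(VOCAB_SIZE * GREEN_FRACTION)))

def green_score(token_ids: list) -> float:
    """Fraction of tokens that land in their context's green list.
    Unwatermarked text scores near GREEN_FRACTION; text generated with
    a bias toward green tokens scores noticeably higher."""
    pairs = list(zip(token_ids, token_ids[1:]))
    if not pairs:
        return 0.0
    hits = sum(1 for prev, tok in pairs if tok in green_list(prev))
    return hits / len(pairs)

# A detector holding the key flags text whose score sits far above
# GREEN_FRACTION (e.g., via a z-test on the green-token count); without
# the key, the bias is statistically invisible to readers.
```

As the sketch suggests, this technique can only mark content at generation time, which is why such watermarks help verify cooperative AI output but cannot, on their own, catch manipulated audio or video produced by bad actors.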
McAfee has also launched a tool that helps users determine whether the audio in videos on platforms like YouTube or X (formerly Twitter) is genuine or fake, and a Google Chrome extension from Hiya uses AI to assess the legitimacy of voices in on-screen video and audio. While these tools can be useful for detecting some audio-based deepfakes, they leave a significant portion of the problem unaddressed, as AI-manipulated video and images remain undetected.
In conclusion, we urgently require more advanced and ubiquitous solutions capable of swiftly and accurately identifying deepfakes, particularly as they become increasingly sophisticated. These tools must be integrated into social media platforms, video hosting sites, and financial systems to safeguard both consumers and businesses.
Governments, tech companies, financial institutions and law enforcement agencies must work together more effectively to combat deepfake fraud. This means creating standardized strategies and protocols for deepfake detection, sharing best practices and building stronger partnerships to mitigate the risks associated with this technology.
The time to act is now; otherwise, the digital landscape will continue to be plagued by manipulation and deceit.
Source: www.forbes.com