Generative AI abuse: a growing threat to online security
ActiveFence report reveals how generative AI is being used for child abuse material production, disinformation propagation and extremism
Malicious actors are abusing generative artificial intelligence (AI) to produce child sexual abuse material (CSAM), disinformation, fraud and extremist content, according to ActiveFence. Noam Schwartz, CEO and founder of ActiveFence, says: "The explosion of generative AI has far-reaching implications for all corners of the internet." Schwartz identifies three key areas of concern. First, malicious actors are scaling up their operations, resulting in an unprecedented mass production of harmful content. Second, they are looking for ways to exploit generative AI, exposing the inherent vulnerabilities of these models. Third, these evolving threats put pressure on digital services to improve the accuracy and efficiency of their training-data protocols.
Forms of Abuse of Generative Artificial Intelligence
Abuses of generative AI include the creation of child sexual abuse material, the generation of fraudulent AI-generated images, and the production of deepfake audio files advocating extremism. ActiveFence researchers found a 172% increase in the volume of CSAM produced with generative AI in the first quarter of the year. They also uncovered a survey conducted by the administrators of a hidden child predator forum on the dark web, which questioned nearly 3,000 predators about their use of generative AI.
Child Predators and the Abuse of Generative AI
The survey revealed that 78% of respondents had already used generative AI to produce CSAM, while the remaining 22% said they planned to try the technology. These predator forums use generative AI to produce sexual images, text descriptions, stories and narratives. Predators have also been found to use generative AI to create tutorials on their methods, gaining credibility within the predator community and incentivizing others to replicate their efforts, and to share recommended phrases and keywords for circumventing platform safeguards.
Extremism, Fraud and Disinformation Amplified by Generative AI
Finally, while fraud and disinformation are not new, generative AI allows malicious actors to create fraudulent images faster, more convincingly, and with greater reach. Malicious actors also use generative AI to create racist, nationalist, or extremist manifestos and speeches. ActiveFence discovered an AI-generated deepfake audio file that exploited the current political and economic situation to spread disinformation and incite violence.
05/30/2023 08:52
Marco Verro