
LLMs reduce the barrier to entry into cybercrime

The growing threat of chatbots in cybercrime: a new ally for cybercriminals

Cybercriminals' use of chatbots and advanced language models makes phishing campaigns increasingly effective, with threats constantly evolving. Traditional security tools often fail to detect these attacks, causing growing concern in the cybersecurity industry.



According to Egress, cybercriminals are using evolving attack methodologies to get past traditional network defenses, including secure email gateways. “Without a doubt, chatbots or large language models (LLMs) lower the barrier to entry into cybercrime, making it possible to create well-written phishing campaigns and generate malware that less capable developers couldn't produce themselves,” said Jack Chapman, VP Threat Intelligence at Egress.

Evolving attack techniques

As threats evolve, the cybersecurity industry must work together to keep addressing human risk in email. From RingCentral impersonation scams to attacks through social media, security-software threats, and sextortion, there has been no shortage of phishing attacks in 2023. The biggest phishing theme has been missed voice calls, which accounted for 18.4% of phishing attacks between January and September 2023, making it the most popular lure of the year so far. Many of these attacks use HTML obfuscation techniques to hide their payload from scanning tools; a sketch of one such technique follows.
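To make the obfuscation point concrete, here is a minimal Python sketch of one well-known trick (an illustration, not a technique taken from the Egress report): splitting a keyword with invisible HTML elements, so a naive filter scanning the raw markup never sees the contiguous string even though the rendered email reads normally.

```python
# Minimal sketch of HTML keyword splitting, a common phishing obfuscation
# trick. The hidden <span> elements render as nothing, so the recipient
# still reads "verify", but a filter scanning the raw HTML does not find
# the contiguous keyword.

def obfuscate_keyword(word: str) -> str:
    junk = '<span style="display:none">x</span>'
    return junk.join(word)  # v<span...>x</span>e<span...>x</span>r...

raw_html = f"<p>Please {obfuscate_keyword('verify')} your account.</p>"

# A naive keyword rule misses the lure entirely:
print("verify" in raw_html)  # False
```

Real campaigns typically layer several such tricks (encoded entities, CSS positioning, remotely loaded content), which is part of why payload inspection at the gateway so often fails.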

The potential use of chatbots by cybercriminals

The possibility of cybercriminals using chatbots to create phishing and malware campaigns is a concern, but is it possible to tell whether a phishing email was written by a chatbot? The report concluded that no person or tool can say for certain whether an attack was written by one. Because most detection tools themselves rely on large language models, their accuracy increases with longer text samples, and they often need at least 250 characters to produce a verdict at all. With 44.9% of phishing emails falling below that 250-character threshold and a further 26.5% falling between 250 and 500 characters, AI detectors currently work unreliably, or not at all, on 71.4% of attacks. The sketch below illustrates the length problem.
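As an illustration, here is a minimal Python sketch of gating a detector's verdict by sample length, assuming a hypothetical detect_ai_generated() callable standing in for any detector API; the 250- and 500-character thresholds are the figures cited above.

```python
# Minimal sketch: trust an AI-text detector's verdict only when the sample
# is long enough. Thresholds mirror the figures discussed above;
# detect_ai_generated() is a hypothetical stand-in that returns the
# probability (0..1) that the text was machine-written.

MIN_USABLE_CHARS = 250    # below this, most detectors cannot score at all
MIN_RELIABLE_CHARS = 500  # above this, treat the verdict as reasonably stable

def classify_email(body: str, detect_ai_generated) -> str:
    n = len(body)
    if n < MIN_USABLE_CHARS:
        # 44.9% of phishing emails land here: too short to score.
        return "no verdict: sample too short for the detector"
    score = detect_ai_generated(body)
    if n < MIN_RELIABLE_CHARS:
        # A further 26.5% land here: a score exists but is low-confidence.
        return f"low-confidence verdict: {score:.2f} ({n} chars)"
    return f"verdict: {score:.2f}"

# Usage with a dummy detector that always answers 0.9:
print(classify_email("Missed call. Listen now: http://example.test", lambda t: 0.9))
```

The takeaway matches the report: the shorter and more formulaic the lure, the less any AI detector, however accurate on long text, can say about it.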


