
LLMs reduce the barrier to entry into cybercrime

The growing threat of chatbots in the field of cybercrime: a new ally for cybercriminals

Cybercriminals' use of chatbots and advanced language models makes phishing campaigns increasingly effective, with threats constantly evolving. Traditional security tools often fail to detect these attacks, causing growing concern in the cybersecurity industry.

This pill is also available in Italian.

According to Egress, cybercriminals are using evolving attack methodologies to breach traditional network defenses, including secure email gateways. “Without a doubt, chatbots or large language models (LLMs) lower the barrier to entry into cybercrime, making it possible to create well-written phishing campaigns and generate malware that less capable developers could not produce on their own,” said Jack Chapman, VP of Threat Intelligence at Egress.

Evolving attack techniques

As threats evolve, the cybersecurity industry must work together to continue addressing human risk in email. From RingCentral impersonation scams to attacks through social media, security-software threats, and sextortion, there has been no shortage of phishing attacks in 2023. The biggest phishing theme has been missed voice calls, which accounted for 18.4% of phishing attacks between January and September 2023, making them the most popular lure of the year so far. Many of these attacks use HTML obfuscation techniques to hide their payload.
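The HTML obfuscation mentioned above typically hides the real payload from naive keyword filters using tricks like zero-width characters, invisible spans, or long runs of numeric character entities. The following sketch flags a few such patterns; the pattern list, names, and thresholds are illustrative assumptions for this article, not Egress's actual detection logic.

```python
import html
import re

# Illustrative (not exhaustive) indicators of HTML obfuscation in email bodies.
OBFUSCATION_PATTERNS = {
    "zero_width_chars": re.compile(r"[\u200b\u200c\u200d\ufeff]"),
    "hidden_style": re.compile(r"display\s*:\s*none|font-size\s*:\s*0", re.I),
    "numeric_entities": re.compile(r"(?:&#x?[0-9a-f]+;){4,}", re.I),
}

def obfuscation_flags(html_body: str) -> list[str]:
    """Return the names of obfuscation patterns found in an HTML email body."""
    decoded = html.unescape(html_body)  # also catch entity-encoded tricks
    return [
        name
        for name, pattern in OBFUSCATION_PATTERNS.items()
        if pattern.search(html_body) or pattern.search(decoded)
    ]

sample = '<p>Your voice&#x200b;mail is <span style="display:none">x</span>ready</p>'
print(obfuscation_flags(sample))  # ['zero_width_chars', 'hidden_style']
```

A real secure email gateway would combine many more signals (rendering the DOM, scoring CSS tricks, following redirects); this only shows why plain string matching on the raw HTML misses hidden payloads.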

The potential use of chatbots by cybercriminals

The possibility of cybercriminals using chatbots to create phishing and malware campaigns is a concern, but is it possible to tell whether a phishing email was written by a chatbot? The report concluded that no person or tool can say for sure. Because detection tools themselves rely on large language models, their accuracy improves with longer text samples, and most need at least 250 characters to work. With 44.9% of phishing emails falling below that 250-character threshold and a further 26.5% below 500 characters, AI detectors currently either do not work reliably or do not work at all for 71.4% of attacks.
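The coverage gap described above is just a length-gating problem: before running an AI-text detector, one can check how many messages even meet its minimum sample size. This sketch computes that fraction; the function name and the 250-character default are assumptions mirroring the threshold cited in the report, not a real detector API.

```python
def detector_coverage(bodies: list[str], threshold: int = 250) -> float:
    """Fraction of messages long enough for an AI-text detector to assess.

    Messages shorter than `threshold` characters are treated as
    undetectable, matching the ~250-character minimum cited above.
    """
    if not bodies:
        return 0.0
    long_enough = sum(1 for body in bodies if len(body) >= threshold)
    return long_enough / len(bodies)

# Four synthetic email bodies: two under the threshold, two over it.
emails = ["x" * 120, "y" * 300, "z" * 80, "w" * 600]
print(detector_coverage(emails))  # 0.5
```

Applied to the report's figures, with 44.9% of emails under 250 characters, coverage would be at most 55.1% even before accounting for the detectors' error rate on the remaining messages.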


10/05/2023 07:18

Marco Verro
