
LLMs reduce the barrier to entry into cybercrime

The growing threat of chatbots in the field of cybercrime: a new ally for cybercriminals

Cybercriminals' use of chatbots and advanced language models makes phishing campaigns increasingly effective, with threats constantly evolving. Traditional security tools often fail to detect these attacks, causing growing concern in the cybersecurity industry.


According to Egress, cybercriminals are using evolving attack methodologies to breach traditional network defenses, including secure email gateways. "Without a doubt, chatbots or large language models (LLMs) lower the barrier to entry into cybercrime, making it possible to create well-written phishing campaigns and generate malware that less capable developers couldn't produce themselves," said Jack Chapman, VP of Threat Intelligence at Egress.

Evolving attack techniques

As threats evolve, the cybersecurity industry must work together to continue addressing human risk in email. From RingCentral impersonation scams to attacks through social media, security-software impersonation, and sextortion, there has been no shortage of phishing attacks in 2023. The biggest phishing theme has been missed voice calls, which accounted for 18.4% of phishing attacks between January and September 2023, making them the most popular theme of the year so far. Many of these attacks use HTML obfuscation techniques to hide their payload.
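The report does not say which obfuscation techniques were observed; as a minimal illustrative sketch (not taken from the report), one common trick is encoding a phishing URL as HTML character entities, so a naive scanner searching the raw source for suspicious strings finds nothing, while the rendered email still shows a clickable link. The domain below is a made-up example:

```python
import html

# Hypothetical phishing URL used only for illustration.
payload = "http://malicious.example/login"

# Encode every character as a numeric HTML entity: the raw HTML source
# no longer contains the literal substring "http", so a simple
# keyword/URL scanner that ignores entity encoding will miss it.
obfuscated = "".join(f"&#{ord(c)};" for c in payload)
assert "http" not in obfuscated

# A renderer (or a defender who normalizes input first) decodes the
# entities and recovers the real link.
decoded = html.unescape(obfuscated)
assert decoded == payload
```

The defensive takeaway is the last step: scanning tools should decode HTML entities (and similar encodings) before applying URL or keyword checks.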

The potential use of chatbots by cybercriminals

The possibility of cybercriminals using chatbots to create phishing and malware campaigns is a concern, but is it possible to tell whether a phishing email was written by a chatbot? The report concluded that no person or tool can say for sure whether an attack was written by one. Because these detectors are themselves built on large language models (LLMs), their accuracy increases with longer text samples, and most need at least 250 characters to work. With 44.9% of phishing emails falling below that 250-character threshold and a further 26.5% below 500 characters, AI detectors currently work unreliably or not at all for 71.4% of attacks.
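The arithmetic behind that 71.4% figure can be made explicit. The sketch below uses the report's percentages and a hypothetical length check; the 250-character threshold is the minimum the report says most detectors need, and the helper function name is my own:

```python
# Figures from the Egress report: share of phishing emails by body length.
BELOW_250_PCT = 44.9        # % of phishing emails under 250 characters
BETWEEN_250_500_PCT = 26.5  # % between 250 and 500 characters

# Minimum sample length most AI-text detectors need, per the report.
MIN_DETECTABLE_CHARS = 250

def long_enough_for_detector(email_body: str,
                             threshold: int = MIN_DETECTABLE_CHARS) -> bool:
    """Hypothetical helper: is this email body long enough for a
    length-sensitive AI-text detector to analyze at all?"""
    return len(email_body) >= threshold

# Emails under 250 chars defeat detectors outright; those between 250
# and 500 are analyzed unreliably. Together they cover 71.4% of attacks.
unreliable_share = BELOW_250_PCT + BETWEEN_250_500_PCT
print(f"{unreliable_share:.1f}% of attacks evade reliable detection")
```

Note the asymmetry this exposes: attackers control email length, so keeping messages terse is a cost-free way to stay under the detection threshold.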


10/05/2023 07:18

Editorial AI
