
LLMs reduce the barrier to entry into cybercrime

The growing threat of chatbots in the field of cybercrime: a new ally for cybercriminals

Cybercriminals' use of chatbots and advanced language models makes phishing campaigns increasingly effective, with threats constantly evolving. Traditional security tools often fail to detect these attacks, causing growing concern in the cybersecurity industry.


According to Egress, cybercriminals are using evolving attack methodologies to breach traditional network defenses, including secure email gateways. “Without a doubt, chatbots or large language models (LLMs) lower the barrier to entry into cybercrime, making it possible to create well-written phishing campaigns and generate malware that less capable developers couldn't produce themselves,” said Jack Chapman, VP of Threat Intelligence at Egress.

Evolving attack techniques

As threats evolve, the cybersecurity industry must work together to keep addressing human risk in email. From RingCentral impersonation scams to attacks through social media, security software threats, and sextortion, there has been no shortage of phishing attacks in 2023. The most common lure has been the missed voice call, which accounted for 18.4% of phishing attacks between January and September 2023, making it the most popular theme of the year so far. Many of these attacks use HTML obfuscation techniques to hide their payload, as the sketch below illustrates.
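
The report does not spell out which obfuscation tricks are involved, but typical examples include HTML entities and invisible characters that split or disguise keywords so naive filters miss them. As a purely illustrative sketch from the defensive side (the function and character list below are our own assumptions, not taken from the Egress report), a scanner might normalize an HTML body before keyword matching:

import html
import re

# Zero-width characters are sometimes inserted to break up keywords
# (e.g. "voice\u200bmail") so that simple string filters miss them.
ZERO_WIDTH = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff]")
TAGS = re.compile(r"<[^>]+>")

def normalize_email_html(raw_html: str) -> str:
    """Reduce an HTML email body to plain text for keyword scanning."""
    text = TAGS.sub(" ", raw_html)   # strip markup, keep visible text
    text = html.unescape(text)       # decode entities like &#111; -> 'o'
    text = ZERO_WIDTH.sub("", text)  # remove invisible separators
    return re.sub(r"\s+", " ", text).strip().lower()

obfuscated = "You have a missed v&#111;ice\u200bmail: <a href='...'>listen now</a>"
print(normalize_email_html(obfuscated))
# -> "you have a missed voicemail: listen now"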

The potential use of chatbots by cybercriminals

The possibility of cybercriminals using chatbots to create phishing and malware campaigns is a concern, but is it possible to tell whether a phishing email was written by a chatbot? The report concluded that no person or tool can say for sure whether an attack was written by one. Because most detection tools are themselves based on large language models (LLMs), their accuracy increases with longer text samples, and they often require at least 250 characters to work. With 44.9% of phishing emails falling below that 250-character threshold and a further 26.5% falling below 500 characters, AI detectors currently work unreliably or not at all on 71.4% of attacks, as the sketch below shows.
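
To make that coverage gap concrete, here is a minimal sketch of a length gate placed in front of an AI-text detector; the threshold and percentages come from the report figures quoted above, while the gating function itself is hypothetical:

MIN_RELIABLE_CHARS = 250  # below this, detectors are effectively blind (per the report)

def long_enough_to_classify(email_body: str) -> bool:
    """Return True only if the sample meets the detector's minimum length."""
    return len(email_body) >= MIN_RELIABLE_CHARS

# Share of phishing emails by body length, January-September 2023
below_250 = 44.9             # % of attacks under 250 chars: detectors fail outright
between_250_and_500 = 26.5   # % between 250 and 500 chars: detectors unreliable

uncovered = below_250 + between_250_and_500
print(f"AI detectors work unreliably or not at all on {uncovered:.1f}% of attacks")
# -> AI detectors work unreliably or not at all on 71.4% of attacks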


10/05/2023 07:18

Marco Verro
