The growing threat of chatbots in cybercrime: a new ally for cybercriminals
Cybercriminals' use of chatbots and advanced language models makes phishing campaigns increasingly effective, with threats constantly evolving. Traditional security tools often fail to detect these attacks, causing growing concern in the cybersecurity industry.
According to Egress, cybercriminals are using evolving attack methodologies to breach traditional network defenses, including secure email gateways. “Without a doubt, chatbots or large language models (LLMs) lower the barrier to entry into cybercrime, making it possible to create well-written phishing campaigns and generate malware that less capable developers couldn't produce themselves,” said Jack Chapman, VP of Threat Intelligence at Egress.
Evolving attack techniques
As threats evolve, the cybersecurity industry must work together to continue addressing human risk in email. From RingCentral impersonation scams to attacks through social media, security software threats, and sextortion, there has been no shortage of phishing attacks in 2023. The biggest phishing theme has been missed voice calls, which accounted for 18.4% of phishing attacks between January and September 2023, making it the most popular theme of the year so far. Many of these attacks use HTML obfuscation techniques to hide their payload.
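To illustrate what HTML obfuscation can look like in practice, here is a minimal sketch of one well-known trick: encoding visible text as HTML character entities so that naive keyword scanners miss strings like "password" while a browser still renders the original text. This is an illustrative example, not taken from the Egress report.

```python
def entity_encode(text: str) -> str:
    """Encode every character as a decimal HTML entity.

    A browser renders '&#82;&#101;...' identically to the plain text,
    but a scanner doing simple substring matching no longer sees it.
    """
    return "".join(f"&#{ord(c)};" for c in text)

visible = "Reset your password now"
obfuscated = entity_encode(visible)
print(obfuscated)                     # decimal-entity version of the sentence
print("password" in obfuscated)      # False: a naive keyword match fails
```

Real campaigns layer several such techniques (entity encoding, zero-width characters, CSS tricks), which is why payload-hiding is effective against signature-based filters.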
The potential use of chatbots by cybercriminals
The possibility of cybercriminals using chatbots to create phishing and malware campaigns is a concern, but is it possible to tell whether a phishing email was written by a chatbot? The report concluded that no person or tool can say for sure whether an attack was written by a chatbot. Because detection tools themselves rely on large language models (LLMs), their accuracy increases with longer text samples, and most need at least 250 characters to work. With 44.9% of phishing emails falling below the 250-character threshold and a further 26.5% falling between 250 and 500 characters, AI detectors currently work unreliably or not at all on 71.4% of attacks.
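The 71.4% figure is the simple sum of the two shorter-email buckets reported by Egress; a quick sketch makes the coverage gap explicit:

```python
# Figures from the Egress report: share of phishing emails by length bucket.
under_250_chars = 44.9      # % below the ~250-character minimum most detectors need
between_250_500 = 26.5      # % between 250 and 500 characters, where accuracy is still weak

# Emails in either bucket are too short for reliable AI-authorship detection.
coverage_gap = under_250_chars + between_250_500
print(f"{coverage_gap:.1f}% of attacks fall below typical detector thresholds")
```

In other words, for roughly seven in ten phishing emails, current AI-text detectors simply do not have enough material to judge.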