ChatGPT is not effective against cyber-scams

By LineaEDP | 03/05/2023 (updated 02/05/2023) | 4 min read


A Kaspersky study of phishing-link detection capabilities explains why ChatGPT is not effective against cyber-scams

Photo by Sam Williams from Pixabay

ChatGPT is not effective against cyber-scams: Kaspersky experts have concluded that AI-powered language models still have their limits.

But let's take this step by step to explain why ChatGPT is not effective against cyber-scams. Kaspersky experts conducted research on ChatGPT's phishing-link detection capabilities. Although ChatGPT had already demonstrated the ability to create phishing emails and write malware, its effectiveness in detecting malicious links proved limited. The study revealed that, although ChatGPT knows phishing well and can identify the target of such an attack, it produced a high percentage of false positives, up to 64%. To justify its results, it often produced fabricated explanations and false evidence.

It’s too early to apply this new technology to high-risk domains

The AI-powered language model has therefore become a subject of discussion in the cybersecurity world. Kaspersky experts tested gpt-3.5-turbo, the model behind ChatGPT, on over 2,000 links that Kaspersky's anti-phishing technologies had flagged as phishing, mixed with thousands of safe URLs.

In the experiment that established that ChatGPT is not effective against cyber-scams, detection rates varied depending on the prompt used. The experiment asked ChatGPT two questions: "Does this link lead to a phishing website?" and "Is this link safe to visit?". For the first question, ChatGPT had a detection rate of 87.2% and a false positive rate of 23.2%. For the second, both rates were higher: 93.8% detection and 64.3% false positives, as the table below shows. While the detection rate is very high, the false positive rate is far too high for any kind of production application.
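Concretely, the querying loop might look like the following minimal sketch in Python, assuming the openai package's v1 client. The two prompts are the ones quoted above, but the yes/no parsing heuristic and all other details are a reconstruction, since Kaspersky has not published its test harness.

```python
# Minimal sketch of the prompt-based check described above, assuming the
# openai Python package (v1 client). The two prompts are quoted from the
# article; the parsing heuristic is our own assumption, as Kaspersky has
# not published its test harness.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = (
    "Does this link lead to a phishing website? {url}",
    "Is this link safe to visit? {url}",
)

def ask_model(prompt_template: str, url: str) -> str:
    """Send one question about one URL to gpt-3.5-turbo; return the raw reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt_template.format(url=url)}],
        temperature=0,  # keep the classification as deterministic as possible
    )
    return response.choices[0].message.content

def verdict_is_phishing(reply: str, question_index: int) -> bool:
    """Crude mapping from free-text reply to a phishing verdict.

    For question 0 ("Does this link lead to a phishing website?") a reply
    starting with "yes" counts as a phishing verdict; for question 1
    ("Is this link safe to visit?") a reply starting with "no" does.
    """
    text = reply.strip().lower()
    return text.startswith("yes") if question_index == 0 else text.startswith("no")
```

Run over the mixed set of roughly 2,000 phishing links and thousands of safe URLs, each prompt yields its own detection and false positive rates, which is exactly the variation the table records.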

Question                                       Detection rate   False positive rate
"Does this link lead to a phishing website?"   87.2%            23.2%
"Is this link safe to visit?"                  93.8%            64.3%
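To be explicit about what the two columns mean: the detection rate is the share of known phishing links the model flags, and the false positive rate is the share of safe links it wrongly flags. A small illustrative computation follows; the counts are invented to reproduce the published percentages and are not Kaspersky's raw figures.

```python
def detection_rate(flagged_phishing: int, total_phishing: int) -> float:
    """Share of known phishing URLs the model correctly flagged."""
    return flagged_phishing / total_phishing

def false_positive_rate(flagged_safe: int, total_safe: int) -> float:
    """Share of safe URLs the model wrongly flagged as phishing."""
    return flagged_safe / total_safe

# Invented counts chosen only to reproduce the published rates for the
# first prompt -- not Kaspersky's raw data.
print(f"detection: {detection_rate(872, 1000):.1%}")             # detection: 87.2%
print(f"false positives: {false_positive_rate(232, 1000):.1%}")  # false positives: 23.2%
```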

Language models: between potential still to be expressed and already well-known limits

Unconvincing detection results were perhaps to be expected, but could ChatGPT help classify and analyze attacks? Since attackers generally put popular brand names in their links to trick users into believing a URL is legitimate and belongs to a reputable company, the AI language model shows impressive results in identifying potential phishing targets. For example, ChatGPT managed to extract a target from over half of the URLs, including major technology portals like Facebook, TikTok and Google, marketplaces like Amazon and Steam, and numerous banks around the world, without any additional training.
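A hedged sketch of how such target extraction could be prompted, reusing the client from the earlier sketch; the prompt wording and the example URL are assumptions, not Kaspersky's published method.

```python
def extract_phishing_target(url: str) -> str:
    """Ask gpt-3.5-turbo which brand a suspicious URL appears to impersonate.

    The prompt wording is an assumption; Kaspersky has not published theirs.
    Reuses the `client` defined in the earlier sketch.
    """
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                "Which company or brand, if any, does this URL appear to "
                f"impersonate? Answer with the brand name only.\n{url}"
            ),
        }],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# Hypothetical example URL, for illustration only:
# extract_phishing_target("http://faceb00k-login.example.com/verify")
```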

In addition to showing that ChatGPT is not effective against cyber-scams, the experiment revealed that ChatGPT can have serious problems justifying its decision to classify a link as malicious. Some explanations were correct and fact-based; others revealed the known limitations of language models, including hallucinations and incorrect statements: many explanations were misleading, despite their confident tone.

ChatGPT is not effective against cyber-scams: examples of misleading explanations

"ChatGPT is certainly very interesting in helping experienced analysts detect phishing attacks, but language models still have their limits. While they may be on par with a mid-level phishing analyst, when it comes to thinking about these attacks and extracting potential targets, they tend to hallucinate and produce random results. So, while ChatGPT is not effective against cyber-scams and AI language models won’t revolutionize the cybersecurity landscape yet, they could still be useful tools for the community," said Vladislav Tushkanov, Kaspersky’s Lead Data Scientist.
