
Cybercriminals can’t agree on GPTs

A recent surge in media coverage highlighted the availability of large language models (LLMs) designed specifically for use by cybercriminals. Notable examples include WormGPT and FraudGPT, which were being sold on underground forums. Concerns were raised about the potential for threat actors to use these models to create “mutating malware,” prompting a flurry of activity on the underground forums. While the dual-use aspect of LLMs is undoubtedly a concern, it is unclear how threat actors perceive and use these tools beyond a few publicly reported incidents.

To gain a better understanding of the current state of play and explore threat actors’ perspectives on LLMs, Sophos X-Ops conducted an investigation into LLM-related discussions on four prominent criminal forums and marketplaces. The focus was on understanding what threat actors are using LLMs for, their perceptions of these tools, and their thoughts on specific models like WormGPT.

The findings revealed the existence of multiple GPT derivatives claiming to offer capabilities similar to those of WormGPT and FraudGPT. However, there was skepticism surrounding some of these models, with several alleged to be scams. More broadly, there was considerable doubt and criticism of tools like ChatGPT, with arguments that it is overrated, overhyped, redundant, and unsuitable for generating malware.

Threat actors expressed cybersecurity-specific concerns about LLM-generated code, including worries about operational security and about detection of their activities by antivirus and endpoint detection and response (EDR) systems. Many posts focused on jailbreaks: prompts intended to bypass the self-censorship of LLMs and elicit harmful or illegal responses. Compromised ChatGPT accounts were also being sold on the forums.

Actual use of LLMs to generate malware and attack tools was limited, and mostly confined to proof-of-concept work. Some threat actors used LLMs effectively for mundane coding tasks, while others developed chatbots and auto-responders to enhance the forums they frequented, with varying levels of success. Unskilled “script kiddies” showed interest in using LLMs to generate malware but often struggled to bypass prompt restrictions and to understand errors in the resulting code.

The research also highlighted examples of AI-related thought leadership on the forums, suggesting that threat actors grapple with the same logistical, philosophical, and ethical questions surrounding this technology as everyone else.

It is worth noting that the opinions expressed in this article are not representative of all threat actors; they are based on exploratory assessments of LLM-related discussions on the selected forums. The research corroborated some of the findings of Trend Micro’s research on the same topic.

The four forums examined in this research were Exploit, XSS, Breach Forums, and Hackforums. However, it is possible that more active discussions on LLMs are taking place in other, less visible channels.

Overall, this research provides insight into the current landscape of LLM-related discussions among threat actors. While threat actors are using LLMs for cybercrime, that use is not as widespread as some may fear. The findings shed light on the skepticism, limitations, and potential misuse of these tools by threat actors.
