
Cybercriminals Using ChatGPT to Build Hacking Tools, Write Code


January 8, 2023
(Credit: NurPhoto/Getty Images)

Expert and novice cybercriminals have already started to use OpenAI’s chatbot ChatGPT in a bid to build hacking tools, security analysts have said. 

In one documented example, the Israeli security company Check Point spotted a thread on a popular underground hacking forum in which a hacker said he was experimenting with the AI chatbot to “recreate malware strains.”

The hacker went on to compress the ChatGPT-written Android malware and share it across the web. The malware was able to steal files of interest, Forbes reports.

The same hacker showed off a further tool that installed a backdoor on a computer and could infect a PC with more malware. 

Check Point noted in its assessment that some hackers were using ChatGPT to create their first scripts. On the same forum, another user shared Python code he said could encrypt files and had been written using ChatGPT, adding that it was the first script he had ever written.

While such code could be used for harmless reasons, Check Point said that it could “easily be modified to encrypt someone’s machine completely without any user interaction.”

The security company stressed that while ChatGPT-coded hacking tools appeared “pretty basic,” it is “only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad.”

A third case of ChatGPT being used for fraudulent activity, flagged by Check Point, involved a cybercriminal who showed it was possible to create a Dark Web marketplace using the AI chatbot. The hacker posted on the underground forum that he had used ChatGPT to write code that calls a third-party API to retrieve up-to-date cryptocurrency prices, which feeds the marketplace’s payment system.
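Pulling live cryptocurrency prices from a public API is an ordinary programming task in itself, which is part of Check Point’s point: the chatbot lowers the bar by handling routine building blocks like this. The sketch below shows roughly what such a price lookup looks like in Python; CoinGecko’s free public API and the specific coins are assumptions for illustration only, since the report does not say which service or currencies the forum post relied on.

```python
# Minimal sketch of a live crypto price lookup, assuming CoinGecko's free
# public API. The service and coins are illustrative; Check Point's report
# does not identify which third-party API the forum post used.
import requests

COINGECKO_URL = "https://api.coingecko.com/api/v3/simple/price"

def get_prices(coins=("bitcoin", "monero"), currency="usd"):
    """Return a dict like {'bitcoin': {'usd': 43000.0}, ...}."""
    params = {"ids": ",".join(coins), "vs_currencies": currency}
    response = requests.get(COINGECKO_URL, params=params, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for coin, quote in get_prices().items():
        print(f"{coin}: ${quote['usd']:,}")
```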

ChatGPT’s developer, OpenAI, has implemented some controls that block obvious requests for the AI to build spyware. However, the chatbot has come under further scrutiny after security analysts and journalists found it could write grammatically correct phishing emails free of typos.

OpenAI did not immediately respond to a request for comment.


About Marco Marcelline

Contributor

I am interested in how technology and human rights intersect, and how technology shapes cultural trends. I have a master's degree in Investigative Journalism from City University London.
