More than 100,000 ChatGPT users have had their data stolen in malware attacks over the past year, according to research into dark web transactions.
The cyber intelligence firm Group-IB discovered the compromised data within the logs of info-stealing malware traded on various underground websites.
Info-stealers are a form of malware that targets account data stored in web browsers. This can include passwords, cookies, browsing history and bank payment details.
In this instance, the researchers believe the attackers targeted users’ ChatGPT login credentials, but this is only the tip of the iceberg. Once inside a user’s account, criminal hackers can access their previous conversations and prompts, which may reveal valuable information.
This incident is the latest in a series of security concerns surrounding machine learning tools. Their popularity has soared in recent months, and the pace at which they have been adopted has been matched only by reports of their vulnerabilities.
Indeed, over a quarter of all the stolen information found by Group-IB came last month, with threat actors posting 26,800 new pieces of data in May 2023.
How was this possible?
Despite the scale of this breach, the criminals’ technique is simple. They purchase off-the-shelf info-stealing malware, which extracts login credentials from a web browser’s SQLite database and then decrypts the stolen information.
This means that the attackers could steal information from ChatGPT without breaching its systems.
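To illustrate the mechanism, here is a minimal, hypothetical sketch of the first step the researchers describe: browser credential stores are ordinary SQLite files sitting on the victim’s device. The sketch assumes Chrome’s default profile location on Windows and Chromium’s documented ‘logins’ table; it lists saved-login origins and usernames only and makes no attempt to decrypt the encrypted ‘password_value’ column.

```python
import os
import shutil
import sqlite3
import tempfile

# Assumption: Chrome on Windows with the default profile. Other
# Chromium-based browsers and platforms keep the "Login Data"
# SQLite file in a different location.
LOGIN_DB = os.path.expandvars(
    r"%LOCALAPPDATA%\Google\Chrome\User Data\Default\Login Data"
)


def list_saved_login_origins(db_path: str) -> list[tuple[str, str]]:
    """Return (origin_url, username) pairs from the browser's login store.

    The file is copied first because the browser locks the live database
    while it is running. The 'password_value' column is encrypted
    (DPAPI / AES-GCM in Chromium) and is deliberately not read here.
    """
    with tempfile.TemporaryDirectory() as tmp:
        working_copy = os.path.join(tmp, "login_data_copy.db")
        shutil.copy2(db_path, working_copy)
        conn = sqlite3.connect(working_copy)
        try:
            return conn.execute(
                "SELECT origin_url, username_value FROM logins"
            ).fetchall()
        finally:
            conn.close()


if __name__ == "__main__":
    for origin, username in list_saved_login_origins(LOGIN_DB):
        print(f"{origin} -> {username}")
```

Commodity info-stealers automate this kind of local extraction, plus the decryption step, at scale; the theft happens entirely on the victim’s device, which is why no breach of OpenAI’s infrastructure is required.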
OpenAI, the company behind ChatGPT, confirmed as much in a statement issued this morning, with a spokesperson saying:
“The findings from Group-IB’s Threat Intelligence report [are] the result of commodity malware on people’s devices and not an OpenAI breach.
“We are currently investigating the accounts that have been exposed.
“OpenAI maintains industry best practices for authenticating and authorizing users to services including ChatGPT, and we encourage our users to use strong passwords and install only verified and trusted software to personal computers.”
According to Group-IB’s research, the majority of breaches came via Raccoon Stealer, a type of info-stealer that’s typically delivered via email. Crooks can pay $75 (about £58) for a week’s access to the malware or $200 (about £156) for a month.
This is relatively inexpensive as far as malware goes, but with the dark web market saturated with stolen login credentials, compromised data doesn’t sell for as much as you might think.
Prices vary greatly depending on the type of data, but among the 100,000-plus stolen credentials, few are likely to sell for more than a few dollars.
Does ChatGPT pose a cyber security risk?
For all the detailed and considered discussions surrounding ChatGPT and other AI tools, the rhetoric has largely come from two groups: those panicking that their jobs, and their entire skillsets, are about to be made redundant, and those who see large language models as a deus ex machina that can produce content quickly and cheaply amid a faltering economy and a cost-of-living crisis.
Cyber security experts currently sit somewhere between those parties. Some of the biggest names in the industry, such as Google and Microsoft, have launched tools that incorporate machine learning into threat detection systems, while others remain sceptical.
Samsung, for instance, has banned staff from using ChatGPT on work computers and threatened to terminate employees who fail to follow the policy.
Elsewhere, IT Governance Consultant William Gamble noted that ChatGPT, like many technological solutions, is only as useful as the people using it.
He observed how susceptible people are to misinterpreting its capabilities. “We see it as providing accurate, objective answers,” Gamble said, but “algorithm predictions are a matter of probability.”
As a result, the technology is “designed to work on very specific problems in very specific environments”, and it requires human intervention to parse the information provided by AI.
Indeed, these issues mirror the industry’s general concern about the way people view cyber security technology. Impressive though AI chatbots are, they’re not magic truth-telling devices that can take the human out of the equation – whether that’s providing automated responses to prompts or analysing information security risks.
We cannot deny the benefits that the technology will provide to businesses, but we must exercise care regarding the way we use it and the information we hand over.