https://www.thenewdaily.com.au/life/tech/2023/07/20/scams-criminal-ai
ScamGPT: Hackers and criminals are harnessing the power of AI
Parker McKenzie
Jul 20, 2023
Hackers and criminals are using older versions of AI language models to create targeted and sophisticated scams, with the potential for greater damage when more powerful technology becomes available.
WormGPT, an alternative to ChatGPT which “lets you do all sorts of illegal stuff”, according to its developer, uses a two-year-old language model without the ethical constraints placed on other publicly available artificial intelligence models.
Professor Seyedali Mirjalili, director of the Centre for Artificial Intelligence Research and Optimisation at Torrens University, said that just as people can use ChatGPT to assist and automate their work, hackers and malicious actors can use the same technology for nefarious purposes.
“The dark web is full of leaked personal data from companies like Optus, which means a data set that has leaked can be used by hackers to train something like ChatGPT,” he said.
“It produces not just a spam or phishing email, but it can also be personalised or target the victim using their data. It’s a big concern.”
WormGPT is available on a well-known hacking forum. Built on the open-source GPT-J language model with ethical constraints removed, the chatbot can be instructed to write malware, craft phishing emails and give advice on how to attack networks.
Older technology
Dr Andrew Lensen, senior lecturer in artificial intelligence at Victoria University of Wellington, said WormGPT is based on an older language model from 2021 because nefarious actors don’t have access to the newest technology.
“It’s very likely that as we see large language models developing further and more are being used, things like it will become more convincing and more misused as well,” he said.
“Facebook just released their open source version yesterday, and I can really see that, for example, being used for nefarious purposes.”
WormGPT has been used for wide-scale phishing attacks against businesses, where emails and text messages are sent to employees in an attempt to gain access to networks and sensitive data.
Professor Mirjalili said hackers repurpose older models because of the huge cost of building a large data set for generative AI.
“Large language models require billions of dollars in infrastructure and computing devices,” he said.
“Instead they take an existing model, retrain and rewire it so it can be used for other purposes. It’s not that hard to do, but what is difficult to do is to scrape the data from the dark web because it isn’t indexed on other platforms.”
The dark web consists of sites on the internet that aren’t listed on search engines like Google, making them difficult to find unless you know their address.
Legality
Dr Lensen said many of the scams that models like WormGPT can automate are already illegal, which makes further regulation difficult.
“All this is doing is making it easier to automate on a large scale, so rather than having to, for example, handcraft the phishing attack you’re going to use to target a specific corporation or individual, you may be able to have an automated approach where the model is tailored to everyone in your database,” he said.
“The question is then should big models be released publicly? Should we have more constraints on how companies develop them and what they release them for, and who can access them?”
He said we need more education about cyber crimes and how not to fall victim to an attack, as well as measures to prevent attacks in the first place.
“If you’re using a chatbot, you may think it’s a real person, but it could well be a large language model or a bot,” Dr Lensen said.
“When you start to combine these things with AI voices and you get a convincing phone call that could be from someone at the bank, people are going to struggle and be much more likely to be victimised.”
In 2022, the Australian Cyber Security Centre recorded 76,000 reports of cyber crime in Australia, an increase of 13 per cent from 2021.
Personalised and targeted scams may become a reality for many Australians in the near future. Photo: Getty
Professor Mirjalili said he would like to see greater collaboration between cyber security experts, law enforcement and developers to ensure AI isn’t used for the wrong reasons.
“You can’t blame anyone because regulation always lags behind technology, but what makes this space different is we can’t wait for an incident to happen because by then it will already be too late,” he said.
“Mandating ethical guidelines and frameworks for companies, businesses and organisations is important.”
Further regulation
The Australian government announced its intention to regulate AI technology in June, in an effort to ensure there are safeguards against the risks associated with the technology.
Professor Mirjalili said there is always fear and excitement around any new technology, but AI can be harnessed as a force for good.
“I’m a big advocate for responsible and inclusive AI; I buy into a more balanced view in this type of discussion,” he said.
“I believe we have the capability, expertise and resources in businesses, organisations and government to address this and kill it …”
Generative AI is set to contribute between $45 billion and $115 billion annually to the Australian economy by 2030, according to research from the Tech Council of Australia and Microsoft, but questions remain over whether widespread adoption should continue at its current pace given the potential for harm.
Dr Lensen said the issue of widespread AI adoption comes down to a question for society about how much we want things to become automated.
“Do you want to have a robot ring up and book an appointment with you or talk through your mortgage repayments? Is that something we want because it is cheaper and more efficient?” he said.
“Or are we going to say that’s not how we want to interact, that we still want that human interaction? Whether or not there will be pushback on some of this technology, I think there are some really interesting conversations and questions in there.”