Artificial intelligence has delivered a tremendous productivity boost to the criminal underworld.

Vincenzo Ciancaglini, a senior researcher at the security company Trend Micro, said that generative artificial intelligence offers a range of powerful new tools that allow malicious actors to operate internationally and with unprecedented efficiency.

Ciancaglini said that most criminals "do not live in dark basements plotting evil every day; most of them are ordinary people who also have jobs."

In 2023, we witnessed the rise and fall of WormGPT, an artificial intelligence language model built on top of an open-source model and trained on malware-related data. It was designed to help hackers carry out cyberattacks and had no ethical rules or constraints.

However, in the summer of 2023, after the model began to attract media attention, its developers shut it down. Since then, cybercriminals have mostly stopped developing their own artificial intelligence models and have instead turned to ready-made, reliable artificial intelligence tools. Ciancaglini explained that this is because criminals want an easy life and quick profits: the unknown risks of adopting any new technology have to be worth the higher returns.

If a new technology carries a higher risk of getting caught, it has to be better than what they are currently using and bring in higher returns.

Here are five ways criminals are currently using artificial intelligence.

Phishing

Mislav Balunović, an artificial intelligence security researcher at the Swiss Federal Institute of Technology in Zurich, said that phishing, the attempt to trick people into revealing private information, is currently criminals' most common use of generative artificial intelligence. Researchers have found that the rise of ChatGPT has been accompanied by a surge in the number of phishing emails.

Ciancaglini said that spam-generation services like GoMail Pro have integrated ChatGPT, allowing malicious users to translate or polish the messages they send to potential victims.

Ciancaglini said that OpenAI's policies restrict people from using its products for illegal activities, but this is difficult to enforce in practice, because many seemingly harmless prompts can also be used for malicious purposes.

OpenAI stated that it uses human review and automated systems to identify and combat abuse of its models, and that it issues warnings, temporarily suspends accounts, or bans users outright when they violate the company's policies.

A spokesperson for OpenAI told us: "We take the safety of our products seriously and continuously improve our security measures based on how people use our products." The spokesperson added: "We have been working hard to make the model safer and more robust to prevent abuse and jailbreaking, while maintaining the model's usefulness and performance."

In a report released in February, OpenAI said it had closed the accounts of five state-backed malicious actors.

Ciancaglini said that the so-called "Nigerian prince" scam used to be relatively easy to spot. In this scam, someone promises the victim a large sum of money but asks for a small advance payment first. These scam emails were often written in poor English and riddled with grammatical errors.

Now, language models allow scammers to generate messages in fluent, authentic-sounding English.

Ciancaglini said: "English speakers used to be relatively safe from non-English-speaking criminals, because you could tell it was a scam, but that is not the case anymore." Better artificial intelligence translation tools also allow criminal gangs around the world to communicate with one another more easily. Ciancaglini said the risk is that they could coordinate large-scale cross-border operations and target victims in other countries.

Deepfake audio scams

Generative artificial intelligence has pushed deepfakes forward dramatically: artificial intelligence-synthesized images, videos, and audio are more realistic than ever. Criminals have clearly taken notice.

Earlier in 2024, a Hong Kong employee was reportedly scammed out of 25 million US dollars after online criminals used a deepfake of the company's chief financial officer to persuade the employee to transfer the money into the scammers' account. Ciancaglini said: "We have seen deepfakes finally begin to enter the underground market."

His team found people showing off their deepfake "products" on platforms such as Telegram and selling their services for 10 dollars per image or 500 dollars per minute of video.

Ciancaglini said that one of criminals' most popular deepfake subjects is Elon Musk.

Although deepfake videos are still complicated to produce and easier to spot, the same is not true of deepfake audio.

Deepfake audio is cheap to produce: a few seconds of the target's voice, easily lifted from their social media accounts, is enough to generate a high-quality forgery. In the United States, there have been several high-profile cases in which people received phone calls from what sounded like loved ones claiming to have been kidnapped and demanding ransom, only to discover that the caller was a scammer using deepfake audio.

Ciancaglini said, "People need to be aware that these things are now possible, and they also need to be aware that the 'Nigerian prince' no longer speaks broken English."

He added, "People can call you with another person's voice, and they can put you in a very stressful situation."

There are ways to protect yourself, he said. Ciancaglini suggested that family members agree on a secret security word that is changed regularly, which can help confirm the identity of the person on the other end of the phone.

"I now use the security word to protect my grandmother," he said.

Bypassing Identity Checks

Another way criminals use deepfakes is to bypass "Know Your Customer" (KYC) verification systems. Banks and cryptocurrency exchanges use these systems to verify that their customers are real people and are who they claim to be.

These institutions require new users to photograph themselves holding an identity document in front of the camera. However, criminals have started selling apps on platforms like Telegram that can get around this requirement.

They work by presenting a fake or stolen identity document and overlaying a deepfake image, much like a filter, onto a real person's face to fool the verification system. Ciancaglini found examples of people offering these services for the cryptocurrency exchange Binance for as little as 70 US dollars.

"The technology they use is quite common," said Ciancaglini. The techniques are similar to Instagram filters that place someone else's face onto your own.

He said, "We can expect that in the future, criminals will use real deepfakes to bypass more sophisticated authentication checks."

Jailbreak as a Service

If you ask most artificial intelligence systems how to make a bomb, you won't get useful answers.

This is because artificial intelligence companies have taken various protective measures to prevent their models from generating harmful or dangerous information.

Because of the technical effort and cost involved, cybercriminals are reluctant to build their own artificial intelligence models without safeguards. Instead, they have embraced a new trend: "jailbreaking as a service."

Most models come with policies governing how they can be used. "Jailbreaking" lets users manipulate an artificial intelligence system into generating outputs that violate those policies, such as writing code for ransomware or producing text for scam emails.

Services like EscapeGPT and BlackhatGPT provide application programming interfaces (APIs) and jailbreaking prompts for connecting to language models; they are updated continuously and claim to offer anonymous access. To counter this burgeoning black market, artificial intelligence companies such as OpenAI and Google must regularly plug security holes that could allow their models to be abused.

Jailbreak services use different tricks to break through security mechanisms, such as posing hypothetical questions or asking in foreign languages.

Artificial intelligence companies work to keep their models' rules from being broken, while malicious actors devise ever more creative jailbreak prompts; the two sides are locked in a constant game of cat and mouse.

Ciancaglini said these services are becoming criminals' new favorites.

"Finding ways to jailbreak is a tedious activity. You come up with a new method, and then you need to test it," he said. It will work for a few weeks, until OpenAI updates its model. He added, "For criminals, jailbreaking is a very interesting service."

Doxxing and Surveillance

Balunović said that artificial intelligence language models are perfect tools not only for phishing but also for doxxing (revealing someone's private identifying information online).

This is because artificial intelligence language models are trained on vast amounts of internet data, including personal data, and can even infer someone's location. For example, you could have a chatbot pose as an experienced private detective and then ask it to analyze text written by the victim, deducing personal details from small clues.

It might infer the victim's age from when they attended high school, for example, or guess where they live from landmarks they mention on their commute. The more information about someone there is on the internet, the easier they are to identify.

The research team Balunović is part of found at the end of 2023 that large language models such as GPT-4, Llama 2, and Claude can deduce sensitive information such as a person's race, location, and occupation purely from everyday conversations with a chatbot.

In theory, anyone who has access to these models can use them in this way.

Since their paper was published, new services that exploit this capability of language models have emerged. The existence of these services does not necessarily indicate criminal activity, but it highlights new capabilities that malicious actors could acquire.

Balunović said that if ordinary people can build surveillance tools like these, state-backed malicious actors may well have even better systems.

He said, "The only way we can prevent these things from happening is by strengthening defenses." He added that companies should invest in data protection and security.

For individuals, awareness is key. Balunović said that people should think twice and carefully decide whether they are willing to share their personal information with language models.