Opinion: One group that’s embraced AI: Criminals


From deepfakes to enhanced password cracking, hackers are discovering the power of AI for malicious use.

Lawmakers are still figuring out how best to use artificial intelligence. Lawbreakers are doing the same.

The malicious use of artificial intelligence is growing. Officials are warning of attacks that use deepfake technology, AI-enhanced “phishing” campaigns and software that guesses passwords based on big-data analysis.

“We have crime-as-a-service, we have AI-as-a-service,” said Philipp Amann, head of strategy at EU law enforcement agency Europol’s European Cybercrime Centre. “We’ll have AI-for-crime-as-a-service too.”

Most concerning to cybersecurity officials is deepfake technology, which uses reams of photos and videos to develop uncanny likenesses of real people, or entirely new avatars. The technology can generate pictures and videos convincing enough to pass for the real thing, and that is precisely what cybersecurity experts worry about.

If cybercriminals “manage to come up with ways of assuming your identity or my identity, or create somebody from scratch that doesn’t exist, and they then manage to get through the online verification processes, that’s a huge risk,” Amann said.

“Once you’ve broken the process, you can quickly generate a large number of accounts,” he said, adding that this would make money laundering easier and help criminals carry out fraud on online platforms.

In one case, deepfake technology is thought to have been used to impersonate a chief executive officer for major corporate fraud. In other cases, the techniques have been used to generate fake profiles as part of elaborate phishing scams.

The fear of being duped by deepfakes is so pervasive that it has sometimes served as cover for being fooled in other ways. Politicians across Europe blamed deepfakes when they were tricked into taking meetings with a man posing as Leonid Volkov, chief of staff to Russian opposition figure Alexei Navalny. Russian pranksters later claimed the stunt as theirs and said it involved no deepfake technology.

“Which technology is used to what extent, we often don’t know. But we’re constantly questioning ourselves, and questioning who we can trust,” said Agnes Venema, a tech and national security researcher at the University of Malta.

“Sometimes that’s simply the purpose: to spread doubts and raise questions,” she said.

The threat goes beyond deepfakes. Malicious uses of artificial intelligence range from AI-powered malware and the automated farming of fake social media accounts to AI-aided distributed denial-of-service attacks, deep generative models that create fake data, and AI-supported password cracking, according to a report published in December by ENISA, the EU’s cybersecurity agency.

Europol, together with cybersecurity firm Trend Micro and the U.N.’s research institute UNICRI, found software that guesses passwords based on an AI-powered analysis of 1.4 billion leaked passwords, allowing hackers to break into systems more quickly.
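The report does not describe that tool’s internals, but the core idea is well documented in security research: human-chosen passwords are highly non-random, so a model trained on leaked corpora can rank candidate guesses by likelihood instead of trying them blindly. The sketch below illustrates the principle from the defender’s side, scoring how “leak-like” a password is with a simple character-level bigram model. The tiny training list, smoothing constant and vocabulary size are all illustrative stand-ins, not details of the tool Europol found.

```python
import math
from collections import defaultdict

# Tiny stand-in for a leaked-password corpus; a real model would be
# trained on hundreds of millions of entries.
LEAKED = ["password", "123456", "qwerty", "letmein", "dragon", "iloveyou"]

def train_bigram(corpus):
    """Count character bigrams, with ^ and $ as start/end markers."""
    counts = defaultdict(lambda: defaultdict(int))
    for pw in corpus:
        chars = ["^"] + list(pw) + ["$"]
        for a, b in zip(chars, chars[1:]):
            counts[a][b] += 1
    return counts

def avg_log_likelihood(pw, counts, alpha=1.0, vocab=128):
    """Smoothed average log-probability per character transition.
    Higher (less negative) means 'more like a typical leaked password',
    i.e. easier to guess."""
    chars = ["^"] + list(pw) + ["$"]
    total = 0.0
    for a, b in zip(chars, chars[1:]):
        row = counts[a]
        denom = sum(row.values()) + alpha * vocab  # additive smoothing
        total += math.log((row[b] + alpha) / denom)
    return total / (len(chars) - 1)  # normalize so lengths compare fairly

model = train_bigram(LEAKED)
for candidate in ["password1", "x7#Qm!f0Zr"]:
    print(candidate, round(avg_log_likelihood(candidate, model), 2))
```

A weak password such as password1 scores much closer to the training data than a random string, and that asymmetry is exactly what AI-assisted guessers exploit when they try high-likelihood candidates first.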

They also found cheap software offerings that can mislead platforms like streaming services and social media networks in order to create smart bot accounts. In France, a group of independent music labels, collecting societies and producers is complaining to the government about “fake streams,” in which bots, or real people hired for the task, artificially inflate play counts to the benefit of the artist whose tracks are streamed.

Other fraudsters are developing AI tools that generate more convincing “phishing” emails to trick people into handing over login credentials or banking information.

Cybercriminals “aim for scale,” Venema said. In the case of phishing emails, for instance, “they can send millions of emails and can profit from just one of them.”
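That scale argument is easy to make concrete. The figures below are invented purely for illustration and do not come from the article, but they show why near-zero sending costs make even vanishingly small success rates profitable.

```python
# All figures are hypothetical, chosen only to illustrate the scale
# argument; none of these numbers appear in the article.
emails_sent = 1_000_000
cost_per_email = 0.0001      # near-zero marginal sending cost, in USD
success_rate = 0.00005       # one victim per 20,000 emails
payout_per_victim = 500.0    # average take per compromised account, USD

cost = emails_sent * cost_per_email                                # $100
expected_revenue = emails_sent * success_rate * payout_per_victim  # $25,000
print(f"cost: ${cost:,.0f}  expected revenue: ${expected_revenue:,.0f}")
```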

Source: www.politico.eu
