Researchers create dangerous AI-powered malware


A cyberattack with the new computer “worm” could wrest control of systems built on internet-connected, generative-AI-powered applications away from their human operators.

Using generative artificial intelligence (AI), a team of researchers from Cornell Tech (U.S.), the Technion – Israel Institute of Technology, and Intuit (Israel) has created malware capable of spreading from one computer to another on its own and infecting every system in its path.

If released into the wild, the new “worm” could cripple systems that rely on internet-connected GenAI applications, causing disruption on a global scale, warn the three scientists behind a yet-to-be-peer-reviewed paper titled “ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications”.

Named Morris II, after the infamous Morris worm of 1988, the malware is the first “worm” designed to target GenAI ecosystems through the use of adversarial self-replicating prompts, and it was tested in a controlled experiment.

“The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication), engaging in malicious activities (payload). Additionally, these inputs compel the agent to deliver them (propagate) to new agents by exploiting the connectivity within the GenAI ecosystem,” the paper reads.

In the wrong hands, this malware could enable a kind of cyberattack never witnessed before. Although AI-powered worms have not yet been observed in real-world scenarios, the researchers stress that their appearance is only a matter of time.

In their experiment, the researchers focused on email assistants powered by OpenAI’s GPT-4, Google’s Gemini Pro, and an open-source large language model named LLaVA. Employing the “adversarial self-replicating prompt,” they induced the AI models to generate cascading outputs that infected the assistants and extracted sensitive information such as names, phone numbers, credit card details, and social security numbers.

The researchers demonstrated the ability to “poison” an email database, prompting the receiving AI to pilfer sensitive details. They also found that the “worm” could spread to new machines without raising any suspicion among human users.

When the team embedded a malicious prompt in an image, it triggered the AI to infect additional email clients. Encoding the self-replicating prompt into images could allow spam, abusive content, or propaganda to be forwarded to new clients after the initial email is sent.

The three researchers have shared their findings with OpenAI and Google, urging the two tech giants to take swift action to address the vulnerabilities.

They warned that AI-powered “worms” could start proliferating in the wild “within a few years,” with significant and undesired consequences.

The study also underscores the cybersecurity risks companies take on when they integrate generative AI assistants into their products, leaving them exposed to precisely this kind of attack.

***
NewsCafe is a small, independent outlet that cares about big issues. Our only sources of income are ads and donations from readers. You can support us via PayPal: office[at]rudeana.com or paypal.me/newscafeeu. We promise to reward this gesture with more captivating and important topics.

***
NewsCafe’s reporting relies on research papers that need to be broken down for the average reader. Some are even paywalled. Help us pay for access to science reports so we can bring you more interesting stories. Use PayPal: office[at]rudeana.com or paypal.me/newscafeeu.