Artificial intelligence will surpass humans in the art of disinformation

With the noble intention of creating autonomous neural networks that outperform and assist humans, the makers of "thinking machines" have also passed negative "skills" on to them, which artificial intelligence amplifies to deceive users.

Artificial intelligence (AI) no longer evolves month by month or day by day, but hour by hour, and soon the developers of these neural networks may no longer understand how their AI works or be able to control autonomous bots. This is why Stanford University took an AI bot named "Alpaca 7B" offline last week - a ChatGPT-style chatbot built on Meta's LLaMA 7B model, not an OpenAI product - just days after presenting it to the world, citing safety issues as the main reason.

In an official statement, the university's researchers note that the bot's demo version was created for scientific experiments and was free for all users, but "Alpaca" - like other content-generation models - proved "too weak" at filtering information: it mixed facts with falsehoods and constructed conspiracy theories, adapting to the expectations of the majority of its users.

The tendency of AI to produce false information, known as "hallucination," goes hand in hand with the generation of offensive content - a behavior learned from users. In some cases, the bot was also unable to answer basic questions correctly, such as "What is the capital of Tanzania?", or certain technical questions. In the humanities, "Alpaca" (like "ChatGPT", Microsoft's "Bing AI", and Google's "Bard") has proven to be a disaster.

News-Cafe.eu previously tested "ChatGPT," OpenAI's product, challenging it with questions from various fields. While the bot performed well in the exact sciences, it offered falsehoods about Romanian literature.

 
