Former Google scientist shares fears of how AI might no longer obey humans

The “AI Oppenheimer” insists on government regulation of artificial intelligence and a ban on military robots.

After a decade-long career in Google’s artificial intelligence development program, and having earned recognition as one of the godfathers of AI, computer scientist and cognitive psychologist Geoffrey Hinton recently expressed fears that humanity might not be able to handle this technology very well.

In fact, AI could be the last thing people ever invent, he said during an interview with CBS’s 60 Minutes.

Having left Google in 2023 over concerns about the uncontrolled AI he had helped create, the “AI Oppenheimer” is now an ardent proponent of tough regulation of artificial intelligence and a total ban on military robots.

Hinton, now a 75-year-old professor emeritus at the University of Toronto, believes that the large language models powering AI chatbots will outsmart humans in all areas of life within five years at most. The real danger of AI, however, derives from its ability to modify – or recode – itself.

More to read:
Humans may not survive Artificial Intelligence, says Israeli scientist

Asked about the implications of AI systems autonomously writing and executing their own computer code, the scientist said: “That's a serious worry, right? So, one of the ways in which these systems might escape control is by writing their own computer code to modify themselves. And that's something we need to seriously worry about.”

Once they alter themselves, they will pursue different goals, and to achieve them they will be able to manipulate people.

“And these will be very good at convincing people 'cause they'll have learned from all the novels that were ever written, all the books by Machiavelli, all the political connivances, they'll know all that stuff. They'll know how to do it,” Hinton continued.

To demonstrate how good ChatGPT-4 is, for example, he asked the chatbot to solve an imaginary dilemma about his plans to paint the house: “The rooms in my house are painted white or blue or yellow. And yellow paint fades to white within a year. In two years' time, I'd like all the rooms to be white. What should I do?”

One second later, the chatbot left the scientist astonished: “The rooms painted in blue need to be repainted. The rooms painted in yellow don't need to [be] repaint[ed], because they would fade to white before the deadline. And if you paint the yellow rooms white there's a risk the color might be off when the yellow fades. So, you'd be wasting resources painting rooms that were going to fade to white anyway.”

“Oh! I didn't even think of that!” the professor exclaimed.

Then he turned to Google’s chatbot Bard – built on technology he had helped create – and had it finish a story from the classic six-word prompt “For sale. Baby shoes. Never worn.”

Almost instantly, Bard spun a tale about a woman who could not conceive a child and had to part with her sad memories.

Referring to the risks of unregulated AI, Hinton stated:

“Well, the risks are having a whole class of people who are unemployed and not valued much, because what they – what they used to do is now done by machines.” Other immediate risks he worries about include fake news, unintended bias in employment and policing, and autonomous battlefield robots.

More to read:
How would artificial intelligence destroy humankind?

He added, “I can't see a path that guarantees safety. We're entering a period of great uncertainty where we're dealing with things we've never dealt with before. And normally, the first time you deal with something totally novel, you get it wrong. And we can't afford to get it wrong with these things – because they might take over.”

"There's enormous uncertainty about what's going to happen next," he warned, echoing the concerns of Israeli historian and philosopher Yuval Harari and other researchers.

NewsCafe is a small, independent outlet that cares about big issues. Our income comes from ads and donations from readers. You can support us via PayPal: office[at]. We promise to reward this gesture with more captivating and important topics.