Google’s CEO on AI | False Information, Unintended Self-Learning, & Replacing Humanity

May 1st, 2023 | Tech Trends

Google CEO Sundar Pichai and AI expert James Manyika sat down with CBS’s Scott Pelley and answered the questions on everyone’s mind: How do we discern truth in a post-AI world? Does AI have sentience (spoiler – it doesn’t)? And will AI replace us all (nope)?

On False Information 

Scott Pelley: Are you getting a lot of hallucinations [made-up information]?

Sundar Pichai: Yes, you know, which is expected. No one in the field has yet solved the hallucination problems. All models do have this as an issue.

Scott Pelley: Is it a solvable problem?

Sundar Pichai: It’s a matter of intense debate. I think we’ll make progress.

Scott Pelley: How great a risk is the spread of disinformation?

Sundar Pichai: AI will challenge that in a deeper way. The scale of this problem will be much bigger.

Bigger problems, he says, with fake news and fake images. 

To combat this issue, Google has built safety filters into its AI systems to screen for hate speech, bias, and other harmful content. It has also added a “Google it” button that leads to an old-fashioned search, helping users distinguish real information from fake.

Pichai goes on to warn that AI will deepen the disinformation problem and dramatically expand its scale. AI technology now makes it possible to create videos, images, and audio that look and sound real but are completely fake. The potential effects on society are harmful: distrust, misinformation, and chaos. It is important to maintain a healthy dose of skepticism about anything you see or read online (or hear from your relatives during the holidays).

Ability to Self-Learn 

Scott Pelley: Of the AI issues we talked about, the most mysterious is called emergent properties. Some AI systems are teaching themselves skills that they weren’t expected to have. How this happens is not well understood. For example, one Google AI program adapted, on its own, after it was prompted in the language of Bangladesh, which it was not trained to know.

James Manyika: We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali. So now, all of a sudden, we now have a research effort where we’re now trying to get to a thousand languages.

Sundar Pichai: There is an aspect of this which we call – all of us in the field call it – a “black box.” You know, you don’t fully understand. And you can’t quite tell why it said this, or why it got [it] wrong. We have some ideas, and our ability to understand this gets better over time. But that’s where the state of the art is.

Scott Pelley: You don’t fully understand how it works. And yet, you’ve turned it loose on society?

Sundar Pichai: Yeah. Let me put it this way. I don’t think we fully understand how a human mind works either. 

AI models are loosely inspired by the human brain – they try to replicate how we sense and interpret data. Fittingly, researchers can no more fully explain their own creation than neuroscientists can fully explain the brain it imitates.

On Sentience 

Scott Pelley: Bard, to my eye, appears to be thinking. Appears to be making judgments. That’s not what’s happening? These machines are not sentient. They are not aware of themselves. 

James Manyika: They’re not sentient. They’re not aware of themselves. They can exhibit behaviors that look like that. Because keep in mind, they’ve learned from us. We’re sentient beings. We have feelings, emotions, ideas, thoughts, perspectives. We’ve reflected all that in books, in novels, in fiction. So, when they learn from that, they build patterns from that. So, it’s no surprise to me that the exhibited behavior sometimes looks like maybe there’s somebody behind it. There’s nobody there. These are not sentient beings.

If you’ve ever seen the movie Ex Machina, you already know that robots aren’t people. So don’t accidentally fall in love with your language model; it won’t end well for you.

Replacing Humans 

Another significant concern with AI is the possibility of it replacing human jobs. James Manyika, former chairman of the McKinsey Global Institute, believes that AI will create new job categories, though some occupations will decline over time. The biggest shift, he says, won’t be jobs eliminated but jobs transformed: over two-thirds of occupations will have their definitions change as AI and automation assist the people doing them. This has implications for skills development and retraining, and it’s essential to help people build new skills to work alongside machines.

However, the impact of AI on jobs is not entirely negative. AI can assist workers across industries, making their jobs easier and more efficient. For example, radiologists can use AI to triage their work and prioritize the most critical cases. AI assistants can also help students learn math or history, making education more accessible and effective.

Scott Pelley: AI can utilize all the information in the world. What no human could ever hold in their head. And I wonder if humanity is diminished by this enormous capability that we’re developing. 

James Manyika: I think the possibilities of AI do not diminish humanity in any way. And in fact, in some ways, I think they actually raise us to even deeper, more profound questions.  

Watch The Full Interview 

Highlights from CBS News’ 60 Minutes episode with Google CEO Sundar Pichai, going in depth on Google’s latest AI release, Bard, and what the future might hold for AI: 60 Minutes with Google’s Sundar Pichai