We talk about AI in a very nuts-and-bolts way, but much of the discussion centers on whether it will ultimately be a utopian boon or the end of humanity. What is your position on those long-term questions?
AI is one of the most profound technologies we will ever work on. There are short-term risks, medium-term risks and long-term risks. It's important to take all of these concerns seriously, but depending on the stage you're at, you have to balance where you put your resources. In the near term, sophisticated LLMs have problems with hallucination: they can make things up. There are areas where that's appropriate, like dreaming up names for your dog, but not for "What's the right dose of medicine for a 3-year-old?" So right now, the responsibility is to test it for safety and make sure it doesn't violate privacy or introduce bias. In the medium term, I worry about whether AI will displace or augment the labor market. There will be areas where it will be a disruptive force. And there are long-term risks around developing powerful intelligent agents. How do we make sure they are aligned with human values? How do we control them? Those are all valid concerns to me.
Have you seen the movie Oppenheimer?
I'm actually reading the book. I'm a big fan of reading the book before watching the movie.
I ask because you are one of the people with the most influence over a powerful and potentially dangerous technology. Does Oppenheimer's story resonate with you in that way?
All of us who are working on powerful technologies in one form or another, not just AI but also genetic engineering like Crispr, must be held accountable. We must make sure we are an important part of the debate on these matters. You want to learn from history where you can, of course.
Google is a big company. Current and former employees complain that they are slowed down by bureaucracy and red tape. All eight authors of the influential "Transformer" paper, which you cite in your letter, have left the company, some saying that Google moves too slowly. Can you cut through that and make Google feel like a startup again?
Anytime you're growing a company, you want to make sure you're working to reduce bureaucracy and stay as lean and agile as possible. There are many, many areas where we move very quickly. If we hadn't moved fast, we wouldn't have been able to scale in the cloud. I look at what the YouTube Shorts team has done, I look at what the Pixel team has done, I look at how much the search team has evolved with AI. There are many, many areas where we are making rapid progress.
Yet we hear those complaints, including from people who loved the company but left.
Obviously, when you're running a big company, there are times when you look around and say, maybe we didn't move fast enough in some areas, and you work hard to fix that. (Pichai raises his voice.) Do I recruit candidates who come and join us because they felt another big company was too bureaucratic and they couldn't move fast enough? Absolutely. Are we attracting some of the best talent in the world every week? Yes. It's equally important to remember that we have an open culture: people talk a lot about the company. Yes, we lost some people. But we're retaining people, long term, better than we used to. Did OpenAI lose some of the original team that worked on GPT? The answer is yes. You know, I think the company moves faster in pockets than I remember from 10 years ago.