The Scoop
In an exclusive interview, Alphabet and Google CEO Sundar Pichai told Semafor he’s ready to work on a “Manhattan Project” for AI when Donald Trump moves into the White House next year.
“I think there is a chance for us to work as a country together,” he told Semafor earlier this week. “These big, physical infrastructure projects to accelerate progress is something we would be very excited by.”
On the Pichai mood meter, the cerebral boss known for his zen-like calm was downright giddy at the company’s Mountain View campus Wednesday afternoon after a day of major product announcements, which followed a big breakthrough in quantum computing and Nobel prizes awarded for work done within Google. He was set to leave the next day to see President-elect Donald Trump, The Information reported and Semafor confirmed.
The head of the $2.4 trillion company seemed to be enjoying a moment of relief after a few years of intense pressure that followed the launch of ChatGPT in 2022. Less than a year ago, the conventional wisdom was that Pichai may not last long in his role. On Wednesday, he told me he never had any doubt about the company’s direction.
“Internally, I had a palpable sense of the progress we were making,” he said. “It’s definitely very satisfying to see the momentum, but we plan to do a lot more. We’re just getting started.”
At the core of that innovation is Gemini, the company’s flagship frontier AI model. Google took a different approach from competitors, putting its research energy and chops into building a “multi-modal” model from the ground up instead of a text-based large language model.
When Gemini launched a year ago, it didn’t stack up well against competitors on some key benchmarks, like coding. But Gemini 2.0, which came out Wednesday, seems to have leapfrogged ahead of Anthropic into the lead on SWE-bench, a benchmark that tests models on real-world software engineering tasks.
The metric shows Google is on the right track. And for Pichai, that goes back to decisions he made almost a decade ago when he became CEO of the search giant.
“In 2015, I set the company in this AI-first direction. As part of that, we said we would do a deep, full-stack approach to AI, all the way from world class research, building the infrastructure … all the way from silicon on,” he said. “That’s the foundation.”
When ChatGPT came out (what Pichai calls “the current gen AI moment”), he says he decided to invest up front in restructuring the company, combining most of its AI firepower into one organization called Google DeepMind, with the aim of eventually turning its cutting-edge research into consumer products.
That may have made it seem slow to respond to OpenAI, but Pichai said he was playing the long game. “In the current gen AI moment, sometimes you invest to get things right up front. For me, that was getting Google DeepMind set up from the ground up,” he said.
He added that Gemini 2.0 has finally reached a level of capability that will allow a wave of new consumer products. “We already have capable enough models. We can build many, many use cases on top of it. That progress is going to be very real,” he said. “With Gemini 2.0, we are laying the foundation for it to be more agentic.”
Agentic AI refers to the ability of models to take actions on behalf of people. This week, Google showed off a new tool called Project Mariner. While not yet released publicly, Mariner is capable of taking control of a web browser to follow instructions — for instance, “fill out my expense report for me.”
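To make the idea concrete, here is a minimal, hypothetical sketch of the loop at the heart of any such agent: a model proposes the next action toward a goal, a controller executes it in the browser, and the observation is fed back so the model can plan the following step. Every name here (`BrowserStub`, `plan_next_action`, the fixed three-step plan) is invented for illustration — this is not Project Mariner’s API, and a real system would use a frontier model and a real browser driver rather than the stubs below.

```python
# Hypothetical agentic loop: plan a step, act in the browser, observe,
# repeat until the "model" has no further action to propose.

class BrowserStub:
    """Stand-in for a real browser controller (not a real API)."""
    def __init__(self):
        self.log = []

    def execute(self, action):
        self.log.append(action)          # record what was done
        return f"done: {action}"         # fake observation

def plan_next_action(goal, history):
    """Toy 'model': emits a fixed plan one step at a time.
    A real agent would call an LLM with the goal and history."""
    plan = ["open expense portal", "fill form fields", "submit report"]
    return plan[len(history)] if len(history) < len(plan) else None

def run_agent(goal):
    browser, history = BrowserStub(), []
    while (action := plan_next_action(goal, history)) is not None:
        observation = browser.execute(action)   # act, then observe
        history.append((action, observation))   # feed back for next step
    return [action for action, _ in history]

steps = run_agent("fill out my expense report")
```

The hard part Pichai alludes to — the “last 10%” — lives in `plan_next_action`: getting a model to reliably choose correct actions from messy, real web pages rather than a scripted plan.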
“To watch a model being able to use the browser is pretty incredible, but we have to break through some barriers,” Pichai said. “The saying goes, the final 20% takes 80% of the effort. In this case, the last 10% may take 90% of the effort.”
In addition to AI research breakthroughs, the small handful of companies in the AI race are fighting for three major resources today: compute power, energy and data. Pichai said that Google’s data centers are on par with, if not more powerful than, rivals’. “We are cutting edge, we are scaling it up. And everything I see, everything we benchmark against, I think we are at the frontier there as well,” he said.
And he said the company is making strides on energy, signing small modular nuclear deals, exploring geothermal energy and looking at a huge increase in solar. “I’ve always felt, if you put your mind to it, we should be dealing with the energy surplus,” he said. “Energy should be an accelerant, not a constraint. It’s only our imagination and resolve that’s in the way.”
While the company’s AI research has been thrust into the core of its product development strategy, its quantum computing division has been busy in a satellite office in sleepy Santa Barbara, Calif.
Thanks to a confluence of theoretical physics and engineering elbow grease, the company was able to break through a long-standing barrier — the fact that as quantum computers get bigger, they make more errors.
Pichai said he now expects quantum computing to make meaningful contributions to Google within five years. “Quantum, to me, looks like where AI was in the 2010s. Few people know about it, but you’re working on it methodically,” he said. “This one has definitely been one of the more positive surprises, this is definitely a deeper breakthrough, tackling error correction while you’re scaling up in your quantum computer. It’s definitely been one of the tougher challenges in the field.”
“We published state-of-the-art weather forecasting models with GenCast, but in a future when we can use quantum computing, you shouldn’t underestimate our ability to predict these things on a much deeper, better scale,” he added. “These are profound implications.”
Reed’s view
Last week, Pichai said at the DealBook conference that the next stage of AI development would get more difficult and that the “low hanging fruit” was over. People heard what they wanted to hear from that statement. One headline read: “AI development is finally slowing down.”
But Pichai told me that shouldn’t have been the takeaway. It’s not that progress in AI will plateau — a common refrain these days from tech industry critics — it’s that the winners will be a small handful of teams with the ability and resources to keep pushing the frontier. We know who those players are: Anthropic+Amazon, Google DeepMind, OpenAI+Microsoft and Oracle, xAI and Meta.
There’s a disconnect between the way the outside world looks at AI and the way a lot of AI people see it. And I think it explains why so many people believed — or maybe still believe — that Google’s failure to be the first to build a viral AI chatbot meant its future was doomed.
If you look at AI development on a time scale from November 2022 to now, you could picture a graph of capability over time showing a massive spike, going from almost nothing to something in a matter of months. But since then, things have tapered off. It looks like the tech industry discovered something cool, hyped it up, and two years later, it turned out to be a dud.
People who work in AI have different timelines, some going back to 2011 or 2012, when major breakthroughs like AlexNet showed that more compute power tracks with better AI capabilities. Some go back a lot further. DeepMind’s founders, for instance, would fall into the “much further” category.
For almost 15 years, neural network capabilities have increased as the amount of compute power and data has increased. That’s a pretty solid upward trend line with some blips along the way.
If you’re Pichai, you’re betting on that upward trajectory and you need your company to be at the upper right, relative to competitors. You also need Alphabet to keep making enough money to fund an increasingly big investment in that area. And public market investors have to believe in your vision.
It’s that last part — investor confidence — that forced Google to respond to the generative AI moment. If not for external forces, Google probably would have waited to roll out generative AI products until they were absolutely ready.
So with hindsight, I think Pichai’s strategy seems to be working, both in the long term and the short term. The company is right at the upper right corner of the graph, even if the race has gotten a bit tighter than it was.
Short term, Google didn’t panic because of ChatGPT. It put its resources into the models it believed would be better in the long run. Pichai was dragged through the mud by his critics, but that was a small price to pay.
It does seem like Google’s ability to gather valuable training data could be a key differentiator as it begins to roll out new products. I think the company is reluctant to brag about that advantage because of the touchiness around consumer privacy.
Here’s how Pichai responded to my question about using data from its generative AI products to train the next multimodal models: “There’s nothing like real world feedback across everything we do. People using Google Lens in search, people as they use Astra, I think that virtuous cycle becomes super important for our products.”
Public relations aside, consumers always seem to be OK trading personal data for amazingly convenient products. And if the data helps build even more convenient products, many are happy to do it. And if this becomes a “Manhattan Project” for AI, it’s in the national interest to do it.
Obviously, the race is far from over. Anthropic is linked up with Amazon, combining high level AI research with world class data center capability. OpenAI is still arguably in the lead with ChatGPT and has Microsoft and Oracle to help it on the infrastructure side. And then Elon Musk’s xAI could end up being a dark horse leader in the long run.