Sundar Pichai was blindsided by ChatGPT. Soon after being named Google CEO in 2015, he’d declared that the world was entering an AI-first era. He went on to bet his stewardship of the entire company on his belief that the technology would be “an intelligent assistant helping you throughout your day,” as he put it in his first shareholder letter. Yet his prescience hadn’t prevented OpenAI from swooping in on November 30, 2022, with the first product that truly demonstrated the epoch-shifting power of generative AI, a breakthrough that had emerged from Google’s research labs in the first place.
Pichai remembers his instinctive response to ChatGPT: “Wow, this technology is going to diffuse earlier and faster than we were expecting.” The feeling, he says, was “uncomfortably exciting.” He knew that if AI was entering hyperdrive ahead of schedule, Google would have to scramble.
Pichai is sharing this memory in a conference room at Google’s expansive office at Manhattan’s Pier 57, a former steamship cargo facility. As we talk, in early January, he radiates his usual air of genial unflappability—the same manner with which he apparently received the arrival of ChatGPT just over three years ago. “I felt we had all the right building blocks in place,” he explains. “And so my genuine reaction was, ‘How do we meet that moment with the resources we have?’ I was deeply focused on what I needed to do.”
Assembling those building blocks was a yearslong process that led to the company’s newest series of AI models, Gemini 3. It debuted in November with Gemini 3 Pro, which beat its rivals from OpenAI and Anthropic across an array of industry-standard benchmarks for gauging AI capabilities—sometimes by dramatic margins. A faster, more computationally efficient version, Gemini 3 Flash, followed the next month. Both are already powering Google Search and other products, impressing AI watchers. Even OpenAI CEO Sam Altman acknowledged the wind in Google’s sails: “I expect the vibes out there to be rough for a bit,” he told staffers in an internal memo after Gemini 3 Pro’s release.
Gemini 3’s strong start capped a year of steady AI progress mirrored in the stock price of Alphabet, Google’s parent company. Alphabet’s shares underperformed throughout the broader AI rally, bottoming out in April 2025; since then, they have more than doubled. In January 2026, when Google and Apple announced a deal to run future versions of Siri and other Apple AI features on Gemini, Alphabet hit a $4 trillion market cap for the first time.
That Google is suddenly so widely regarded as one of AI’s biggest winners is striking given the skepticism that once clouded its efforts. Many observers saw the 28-year-old company’s previous success—particularly in monetizing its market-dominating search engine—as an obstacle to reimagining itself around the technology.
“Google may be only a year or two away from total disruption,” tweeted ex-Googler and Gmail inventor Paul Buchheit the day after ChatGPT’s appearance. “AI will eliminate the Search Engine Result Page, which is where they make most of their money. Even if they catch up on AI, they can’t fully deploy it without destroying the most valuable part of their business!”
Though the next couple of years didn’t spell Google’s doom, they also failed to quell doubts about its future. Outside the company, “there were questions around, ‘Will we be able to do new things? Can we catch up? Can we have momentum?’ ” says Josh Woodward, the VP in charge of the Gemini app. The tech giant with the most AI juice seemed to be Microsoft, thanks in large part to the OpenAI partnership it had established by plowing billions into the startup, starting in 2019.
By some measures, Google is still playing catch-up. According to market intelligence firm Sensor Tower, monthly downloads of the Gemini app grew by 480% over 2025, but its 376 million monthly active users fall far short of ChatGPT’s 945 million. (Using its own methodology, which includes both the Gemini app and its web-based interface, Google says that Gemini has 750 million monthly users.) Another firm, Similarweb, reports that Gemini accounts for 22% of traffic to AI chatbot sites—up more than 670% year over year, but still barely a third of ChatGPT’s 63% share.
But with Gemini coming into its own, there are signs Google can finally take full advantage of some of its defining strengths as a company. The company’s myriad products for work and home, running in data centers equipped with Google-designed Tensor Processing Unit chips, provide it with a wealth of touchpoints for Gemini. Even Alphabet’s Waymo robotaxis call on Gemini to help with particularly tricky scenarios, such as what to do if a vehicle ahead is in flames. “The same underlying technology is driving momentum across what look like very different businesses,” says Pichai.
OpenAI also had a busy 2025, but much of it involved trying to be, well, more like Google. For instance, it released a web browser, Atlas, and started production on its own bespoke AI processors. It’s also in the earliest stages of competing with Google’s $295 billion ad business and—with its $6.5 billion acquisition of Jony Ive’s hardware startup Io—getting into consumer electronics, where Google already offers its Pixel, Home, and Nest gadgets. (Google doesn’t disclose its hardware sales, which it rolls up into a “subscriptions, platforms, and devices” revenue line that totaled $48 billion in 2025.)
Both companies have much left to prove, but even observers who thought AI might be a textbook example of Clayton Christensen’s “innovator’s dilemma” in action have reconsidered their gut reactions. “Google has definitely woken up,” says Gmail creator Buchheit, who now believes it might be the best-positioned company in tech. Suddenly, Pichai’s vision of useful AI everywhere is feeling more like a reality.
Pichai may have been optimistic about Google’s ability to take on ChatGPT, but he treated its arrival as an emergency. Just three weeks after the OpenAI chatbot appeared, The New York Times reported that Google had declared a “Code Red,” instructing staffers to set aside other projects to fast-track new AI products and features. In early February 2023, the company announced a decidedly ChatGPT-esque bot called Bard.
Actually, Bard had been in the works all along but hadn’t previously been considered ready for deployment. Google, which Pichai says aspires to be “bold but responsible,” had been bothered by generative AI’s tendency to hallucinate misinformation. Watching the world become smitten with ChatGPT, the company steeled its nerves and moved forward.
In its initial form, Bard felt like it had been rushed to market. Widely regarded as a tepid response to the white-hot OpenAI, it had so little brand equity after its first year that Google relaunched it as Gemini, aligning the chatbot’s name with the LLM that powered it.
Even as Bard foundered, though, Google was making consequential moves behind the scenes. Cofounders Larry Page and Sergey Brin, who had long been absent from day-to-day operations, threw their weight behind the effort to quicken Google’s AI progress. Brin in particular returned to active duty, participating in everything from hiring decisions to code reviews. “Having the founder of the company sitting together with your engineers sweating out the details of the model—I can’t imagine a more motivating thing for people,” says Pichai, his ego apparently unbruised by Brin’s return.
The need for speed also led Google to take a hard look at its AI research organization—or, rather, organizations. The company had two of them, each managed separately and bulging with world-class talent. One, Google Brain, had been catalyzed within Google X, Google’s incubator for big ideas known as “moonshots.” Eight Google Brain scientists had coauthored “Attention Is All You Need,” the groundbreaking 2017 paper that introduced the concept of transformers, the technology that makes all generative AI possible.
Google’s other AI research arm was London-based DeepMind, a 2014 acquisition. Formed to pursue artificial general intelligence, or AGI—AI capable of at least equaling human cognitive abilities across all domains—it had thrived under Google ownership. Its breakthroughs included the creation of AlphaFold, a protein research technology with the potential to dramatically accelerate drug discovery for which DeepMind cofounder and CEO Demis Hassabis and director John Jumper won the 2024 Nobel Prize in Chemistry.
This sprawl and overlap of responsibilities wasn’t unusual at Google. “While it was great to have two teams, that moment called for more focus,” says Pichai. In April 2023, the labs joined forces to become Google DeepMind, with Hassabis running the combined operation and Google Brain cofounder Jeff Dean as its chief scientist.
The merger acknowledged that Google needed to shift more aggressively from pure research to turning innovations into products. “It’s still research, but it’s research that has impacted the real world,” says Google DeepMind chief technology officer Koray Kavukcuoglu, who joined DeepMind as a research scientist when it was a two-year-old startup. “It has to be done with that mentality and with that collaboration across all of Google.” (In June 2025, Kavukcuoglu pushed this integration even further by taking on an additional role—chief AI architect for all of Google, reporting directly to Pichai.)
Shortly after Google Brain and DeepMind became one, Google hosted I/O, its annual developer conference. The event was its first big chance to steal back some of the attention that OpenAI had sucked up. Among the announcements: Google was reviving an opt-in program, called Google Labs, as a way for users to try AI features under development, with the understanding that they were works in progress.
One of Google Labs’ first rough drafts was an update to Google Search called the Search Generative Experience, or SGE. Its results pages retained the familiar blue links to external websites. In some cases, however, it preceded them with AI-generated summaries.
Google spent a year refining the SGE before fully deploying it. But when the feature—renamed AI Overviews—started showing up in search results in volume in spring 2024, it made the news for all the wrong reasons. In a mishap demonstrating AI’s inability to recognize an old Reddit post’s absurdist humor, one AI Overview suggested using glue to help cheese stick to pizza. Another recommended eating a rock a day.
According to VP of search Liz Reid, these goofs were few in number and sometimes stemmed from Google underestimating the degree to which people would prankishly mess around with AI. As she dryly notes, “Before we had AI Overviews, nobody went to us and was like, ‘How many rocks should I eat?’ ” As the company ironed out AI Overviews’ bugs, it was heartened by research indicating that users valued the feature. “They really wanted to be able to continue this conversation,” she says. And when the overview didn’t show up, “they were grumpy.”
That led to Google Search’s second major foray into generative AI, a tab called AI Mode. Introduced as a Labs experiment in March 2025, it lets users click into a chatbot-style experience that provides more detailed responses than AI Overviews and permits follow-up questions. Reid likens it to the engine’s long-standing tabs for images, news, and shopping—an optional complement to search in its classic, general-purpose form, not a substitute for it.
Nobody thinks Google Search is anywhere near its AI end state. Like everyone else in the tech industry, Google is certain that we’ll increasingly call on agents to perform complex jobs with minimal supervision. Already, a Google Labs experiment called Gemini Agent can assist with tasks such as researching and booking a car rental, though Woodward acknowledges that agentic AI can be “hit or miss” and “slow.”
For now, Search strikes a balance that’s tough to get right: enough AI, but not too much. “It wasn’t like we were going to go change the default to AI Mode,” says Reid. “I don’t think AI for the sake of AI is useful. [Google Search] exists because 2 billion people like using it. You don’t want to betray that trust. You want to continue to live up to that promise.”
Last August, Google DeepMind product manager Naina Raisinghani uploaded a cutting-edge new generative AI image model to LMArena, a widely used AI benchmarking platform. When it came time to fill out a field specifying its name, she didn’t give the matter a whole lot of thought—it was 2:30 a.m.—and mashed up two of her own nicknames. Ta-da: The new model was known as Nano Banana.
The wacky moniker was an attention grabber, but so was the model’s skill set. In seconds, it could perform practically any photo-editing trick that popped into someone’s head—say, replacing a portrait subject’s hoodie with a sequined tuxedo jacket. Google quickly rolled it into the Gemini app, where it became a sensation.
“We almost put a superpower in people’s hands,” says Woodward. “And you could see how fast people were like, ‘Did you see this? Look what I created. Look what I did.’ ” As word spread, Gemini downloads in Apple’s and Google’s app stores surged, briefly passing even those of ChatGPT.
This publicity bonanza was reminiscent of OpenAI’s knack for seizing the spotlight by helping its users create shareable content, such as when ChatGPT added a filter that could give photos a Studio Ghibli–esque anime look. “Google, historically, has not been as good at that,” says independent investor and writer (and Google alum) M.G. Siegler. “Part of it, I think, is just a cultural reticence around wanting to do these viral moments. But they honed it in with Nano Banana.” (Meanwhile, OpenAI’s biggest launch of 2025—the much-anticipated GPT-5—was widely deemed a dud, though its recently released GPT-5.3 Codex is getting rave reviews.)
Google was still riding a wave of buzzy goodwill when it announced Gemini 3 Pro in November. Instead of tentatively making it available to a subset of users for testing purposes, the company went wide.
The new model immediately began powering the Gemini app and, for paying subscribers, Google Search’s AI Mode. Google also made it available as a service for developers via Google Cloud and incorporated it into a new coding platform called Antigravity, a competitor to hot products such as Claude Code and Cursor. Within weeks, it shipped two additional versions: the high-end Gemini 3 Deep Think, optimized for math and science questions, and the lighter-weight Gemini 3 Flash.
Gemini 3’s big bang effect isn’t just evidence of Google’s confidence in its quality. It’s a reflection of its yearslong build-out of the cloud infrastructure necessary to deliver AI to billions of people and do it with optimal speed. Thanks to the Google DeepMind merger, the company has also gotten better at putting new models into the hands of internal teams so they can begin building with them. “We were able to simultaneously bring it to life across many of our products, and that made the launch much, much better,” says Pichai.
More than anything else, Gemini 3 is a foundation—both for future models and useful features Google hasn’t even thought of yet. There’s no lack of work left to do. For example, like Microsoft, Google hasn’t made AI feel essential inside productivity mainstays such as word processing and spreadsheets. “A lot of the things they built specifically for Gemini are great, and then, when they’re throwing Gemini into existing apps . . . it’s basically not useful,” says Creative Strategies analyst Max Weinbach.
Even Pichai concedes that users are wary about AI until it proves its worth. “Forcing the technology on people just because it’s a moment and you think you can put it everywhere, I think that’s where there’s backlash,” he says.
That said, Google is not shy about leveraging its existing apps to Gemini’s advantage. For example, after the company’s search engine business was declared a monopoly under U.S. antitrust law in August 2024, it argued that new restrictions imposed by a U.S. District judge on its distribution tactics shouldn’t prevent it from bundling the Gemini app with Google staples such as Maps and YouTube. OpenAI—whose only blockbuster app is ChatGPT itself—couldn’t pursue a similar strategy.
Two basic facts about generative AI have been in conflict. Running the technology in enormous data centers is pricey—in February, Alphabet startled analysts by saying it may spend $185 billion on capital expenditures in 2026, more than double its 2025 total—yet the overwhelming majority of people who use AI chatbots haven’t been paying or seeing advertising. More than anything else, that explains OpenAI’s estimated $9 billion loss in 2025—and why, in January, the company announced that it had begun testing targeted advertising in ChatGPT, sharing an example in which a user asks the chatbot for Mexican recipes and sees a small boxed promo for hot sauce.
Intermingling organic generative AI responses with paid messaging is still a new proposition. Done badly, it might damage users’ faith that AI-based services are working on their behalf—a point Google’s Hassabis made during an Axios interview at the World Economic Forum in Davos, Switzerland. He expressed surprise that OpenAI was moving ahead with ads in ChatGPT and said Google had no immediate plans to follow suit with Gemini. It’s a competitive advantage that Google—the world’s largest seller of advertising—can afford, at least for now.
That’s not to say that Google refuses to sully new AI products with ads. In 2025, it started testing them in Google Search’s AI Overviews and AI Mode. Rather than selling ads specifically into these features, its algorithms pluck relevant ads from its massive inventory for display. Google is also working with Target, Walmart, Etsy, and others to integrate commerce links into both AI Mode and the Gemini app.
What AI will do to Google’s search revenue over time is anyone’s guess. Early third-party data indicates a high click rate for ads associated with AI Overviews. But it also reports reduced interaction with ads positioned among the classic blue links, which users might ignore altogether if an AI Overview has done its job. Talking about the future, Pichai exhibits the same sort of self-assurance that once led Page and Brin to launch their groundbreaking search engine without having a business model in place at all.
“I’ve always felt if you solve problems for users in meaningful ways, there will be commercial value,” he says. “And inherently, a lot of what people are looking for is also commercial in nature. So I think it’ll tend to work out fine in the long run.”
As Google goes about selling ads and signing up paid users—Google offers three AI plans with progressively unfettered access to its latest models and features, priced from $8 to $250 a month—the company is also managing the tricky economics of AI computational resources. Here, too, it has an underappreciated head start on OpenAI, which wasn’t even founded until four months after Pichai became Google’s CEO.
“They have advantages in a bunch of areas,” says Zach Lloyd, the CEO of AI coding platform Warp and a former Google principal engineer. “They make their own chips, and that really matters. They have all of the cloud infrastructure for serving these models. They have an extremely profitable business with which to fund capital expenditures and train models.”
Google’s investment in AI computing capacity hasn’t attracted much attention, at least compared to Stargate, OpenAI’s splashy collaboration with SoftBank and Oracle to sink up to $500 billion into state-of-the-art AI farms. But Google is spending $40 billion in Texas, where it’s building three huge new AI and cloud data center campuses. It’s pouring tens of billions more into Arkansas, Iowa, Missouri, Oklahoma, South Carolina, and Virginia. Outside the U.S., it’s building out infrastructure in India, Germany, Belgium, and Thailand.
Should Wall Street develop jitters over the tech industry’s present level of spending on AI, even Google might have to dial back. “The market might say, ‘Sorry, but not right now—let’s revisit this in a couple of years,’ ” says investor/writer and ex-Googler Siegler.
Asked whether we’re currently in an AI bubble, Pichai pauses long enough to suggest he’s taking the question seriously. Eventually, an answer comes: “We are going to go through periods of underinvestment and then periods of overinvestment. It’s always tough to predict that. But if I were to take a decade-long view, no, I don’t think we are in an AI bubble.”
Every tech CEO claims to think 10 years into the future. Many move on to new grand pronouncements within a couple of years, well before making the old ones a reality. But when Pichai says he’s taking a decade-long view of where the technology is going, it’s not just a platitude. That intelligent assistant he wrote about in that 2016 shareholder letter? Google is on the cusp of creating it.
Explore the full 2026 list of Fast Company’s Most Innovative Companies, 720 honorees that are reshaping industries and culture. We’ve selected the companies making the biggest impact across 59 categories, including advertising, applied AI, biotech, retail, sustainability, and more.
