Google’s Custom Chips Drive 29% Growth in Cloud Revenue

Google’s Custom AI Chips

At Google’s headquarters in Mountain View, California, a significant focus is the development and testing of its custom microchips, known as Tensor Processing Units (TPUs). These chips, initially designed for internal workloads, have been available to cloud customers since 2018. Google’s TPUs are integral to training AI models, including those used by Apple for its AI initiatives and Google’s own Gemini chatbot.

Daniel Newman, CEO of Futurum Group, noted, “The world sort of has this fundamental belief that all AI, large language models, are being trained on Nvidia, and of course Nvidia has the lion’s share of training volume. But Google took its own path here.” Google was the first cloud provider to introduce custom AI chips, a move followed by Amazon Web Services and Microsoft in subsequent years. Despite being a pioneer, Google has faced challenges in the competitive AI landscape, particularly with delayed product releases like Gemini, which came out more than a year after OpenAI’s ChatGPT.

Google Cloud has seen growth partly due to its AI offerings, with Alphabet reporting a 29% increase in cloud revenue in the most recent quarter, surpassing $10 billion for the first time. 

“The AI cloud era has completely reordered the way companies are seen, and this silicon differentiation, the TPU itself, may be one of the biggest reasons that Google went from the third cloud to being seen truly on parity, and in some eyes, maybe even ahead of the other two clouds for its AI prowess,” Newman added.

Development and Partnerships

The development of Google’s TPUs began with a thought experiment in 2014, as explained by Amin Vahdat, head of custom cloud chips at Google. The team realized that to support voice interactions with Google for just 30 seconds a day, they would need to double the number of computers in their data centers. This led to the creation of custom hardware, specifically TPUs, which are significantly more efficient than general-purpose hardware.

Google’s TPUs are application-specific integrated circuits (ASICs) designed for AI tasks. They dominate the custom cloud AI accelerator market with a 58% share, according to The Futurum Group. Google has also developed other custom chips, such as the Tensor G4 for its Pixel 9 and the A1 chip for the Pixel Buds Pro 2.
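Because TPUs are exposed to cloud customers through standard frameworks rather than direct hardware access, the typical way to target them is through an XLA-backed library such as JAX. The sketch below is illustrative, not Google's internal workflow: the same JIT-compiled matrix multiply runs on TPU cores on a Cloud TPU VM and falls back to CPU elsewhere.

```python
# Minimal sketch: a JIT-compiled matrix multiply with JAX. XLA, the
# compiler behind jax.jit, is the same toolchain that targets TPUs;
# on a Cloud TPU VM this dispatches to TPU cores, elsewhere to CPU.
import jax
import jax.numpy as jnp

@jax.jit
def matmul(a, b):
    # Matrix multiplication is the core workload TPUs accelerate.
    return a @ b

a = jnp.ones((128, 128))
b = jnp.ones((128, 128))
out = matmul(a, b)

# jax.devices() reports the backend actually in use (e.g. TpuDevice
# on a Cloud TPU VM, CpuDevice otherwise).
print(jax.devices()[0].platform, out.shape)
```

The point of the sketch is that the code itself is hardware-agnostic; the accelerator choice happens at the XLA compilation layer, which is what lets the same model code move between CPUs, GPUs, and TPUs.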

The complexity and cost of developing alternatives to Nvidia’s AI engines are significant. Google’s sixth-generation TPU, Trillium, is set to be released later this year. Stacy Rasgon, a senior analyst at Bernstein Research, highlighted the challenges, stating, “It’s expensive. You need a lot of scale. And so it’s not something that everybody can do. But these hyperscalers, they’ve got the scale and the money and the resources to go down that path.”

Google has partnered with Broadcom for the development of its TPUs, with Broadcom investing over $3 billion in these partnerships. The final designs are manufactured primarily by Taiwan Semiconductor Manufacturing Company (TSMC), which produces 92% of the world’s most advanced semiconductors. Vahdat acknowledged the geopolitical risks associated with this reliance but expressed hope that these risks would not materialize.

Expansion into CPUs

Google recently announced its first general-purpose CPU, Axion, which will be available by the end of the year. This move follows similar developments by Amazon, Alibaba, and Microsoft. Vahdat explained the timing, saying, “Our focus has been on where we can deliver the most value for our customers, and there it has been starting with the TPU, our video coding units, our networking. We really thought that the time was now.”

These processors, including Google’s, are based on Arm chip architecture, known for its power efficiency. This efficiency is crucial as AI servers are projected to consume as much power annually as a country like Argentina by 2027. Google’s latest environmental report showed a nearly 50% increase in emissions from 2019 to 2023, partly due to data center growth for AI.

Google has implemented direct-to-chip cooling for its third-generation TPUs to reduce water usage, a method also used by Nvidia for its latest GPUs. Despite the challenges, Vahdat emphasized Google’s commitment to generative AI tools and custom chip development, stating, “I’ve never seen anything like this and there’s no sign of it slowing down quite yet. And hardware is going to play a really important part there.”
