00:00 Julie
Google has announced two new AI processors in its latest effort to take on Nvidia in AI hardware. Each of these chips will handle a distinct task, one best suited for training AI models and the other for inference work. Google had previously made chips that could do both. Dan Howley is tracking all of this. So tell me why this is a big deal and what kind of leap this represents for Google.
00:27 Dan Howley
Oh, yeah, Julie. I mean, the big deal here is it’s just more confirmation that Google is competing against its customer and partner Nvidia, as well as AMD, and that they’re building more and more capable products. So these are the TPUs, their Tensor Processing Units. Basically they’re ASICs, which are application-specific integrated circuits. More or less, they just do AI. All right? That’s all you’ve got to know. There are two different versions here: the TPU 8T and the TPU 8i. Now, the TPU 8T is meant for training, hence the T. The TPU 8i is meant for inferencing, with the I. Inferencing basically just means running AI models. And Google says each can do both as well, but they’re more suited for the individual applications they’re named after. Basically, what this means for Google, and why these chips are important, is that they’re increasingly powerful and more power efficient, and so that makes them more attractive to potential customers. Usually, when these companies start out, they just run their chips for themselves to train their own models. Obviously, Google uses a lot of its own chips to run and train its AI models, but it’s increasingly selling them to outside parties. That includes places like Anthropic, places like OpenAI, and places like Meta. That Meta deal is according to The Information. So Google is getting out there and making these big deals to sell its own chips, something we had been hearing about and speculating about when they started to really get further and further into this game. But now that moment is here, and they keep putting out better and better chips. And it’s not just Google, by the way. Amazon does the exact same thing with their Trainium chips. They have a deal with Anthropic; they just announced an expanded deal the other day.
Microsoft is working on its own AI chips, and Meta has its own AI chips and is building even better ones, ones that it says can even match what’s on the market now from Nvidia. We don’t know much about whether Meta or Microsoft would sell theirs, but Amazon and Google are certainly doing that. And they’re two of Nvidia’s biggest customers. Don’t forget, about 50% of Nvidia’s data center sales come from these hyperscalers. So as we continue to see Google, Amazon, and the others push further into their own chip game, it does start to raise the stakes for Nvidia, considering these are such important customers.
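[Editor's note: to make the training-versus-inference distinction Dan describes concrete, here is a minimal, illustrative sketch in plain NumPy. This is a hypothetical toy model, not Google's TPU stack or any real API: training is the repeated, compute-heavy loop that updates a model's weights from data, while inference is a single, cheaper forward pass with those weights frozen.]

```python
import numpy as np

# Toy linear model y = w * x. Training fits w; inference just applies it.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x  # ground truth: w = 3

# --- Training: iterative gradient updates (the workload a "training" chip targets) ---
w = 0.0
lr = 0.1
for _ in range(100):
    pred = w * x
    grad = 2 * np.mean((pred - y) * x)  # d/dw of mean squared error
    w -= lr * grad

# --- Inference: one forward pass with frozen weights (the workload an "inference" chip targets) ---
def infer(x_new, w):
    return w * x_new

print(round(w, 2))              # ≈ 3.0 after training
print(round(infer(2.0, w), 2))  # ≈ 6.0
```

Roughly speaking, training hardware is optimized for the gradient loop at the top, while inference hardware is optimized for serving the final forward pass at high volume.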
03:13 Julie
Um, at the same time, Dan, it seems like there is such a thirst for compute out there that everybody is sort of buying everything they can get their hands on.
03:26 Dan Howley
Yeah, for sure. And that’s also something to keep in mind. When you talk to analysts, or even companies, or just look at what companies are doing with these deals, right, they’re making massive multi-year, multi-generation deals, as far as the chips go, meaning spanning different chip generations, for gigawatts’ worth of chips. A lot of those haven’t been stood up yet. We’ve talked about deals in the gigawatt range before; those haven’t necessarily been built out in data centers yet. They’re still to come online. But that just gives you the idea that it’s multi-year, multi-generation. The biggest issue for a lot of the hyperscalers is, as you say, basically capacity. They want as many chips as they can get, and the lack of them is constraining them from being able to sell as much of their AI as they want. Case in point: Microsoft. They’ve been dealing with capacity constraints; they’ve mentioned it on their earnings calls. Some of the other companies have also mentioned it on their earnings calls, but it’s particularly been a bugaboo for Microsoft more recently as far as its stock price goes, along with Copilot uptake, things like that. But you’re right, all these companies are trying to get as much compute as they can. But hey, if they can make a couple of bucks on the side selling their chips, and potentially even more as we start to see governments and different companies try to get into the chip game, that could be a big benefit for them down the line
05:19 Julie
Yeah.
05:20 Dan Howley
and a bigger problem for Nvidia.