BIG TECH TALKS -- SUMMARY OF COMMENTS ON CAPEX (early 2026 results)
BIG TECH TALKS: Justifying the biggest capex spend in history
I have put together quotes from the latest earnings calls on what Big Tech are seeing and why they are investing: MSFT, META, AMZN, GOOG.
CONTEXT
Below are selective quotes from the four big AI spenders' recent results. MSFT is perhaps the most conservative and has to manage the OpenAI relationship, which is a double-edged sword for them. Without a huge in-house AI pedigree, their challenge is staying relevant to their large commercial customer base; embedding an interface between clients and LLMs is how they plan to defend their turf. META is perhaps the most zealous and effusive on AI, being founder-led. It is also perhaps the company that has best proven the AI use case in an operational sense, with spectacular profit growth in its ads business. As per below, they are enthusiastic to continue the progress in the core ads business and are also spending on personal superintelligence, which is more speculative. GOOG is historically perhaps the most guarded in its comments, but it has the full AI stack and can invest across the chain; its ambitions run across the board, consumer and corporate. AMZN I find the most open and forthcoming with detail: a big leading cloud business plus a big retail business with optionality for AI uses. The challenge is to keep the incumbent AWS business in the lead as the focus moves from cloud to AI workloads.
SUMMARY
All these players share common themes. Demand is immense, supply is constrained, and monetisation is already occurring, encouraging them to invest further. They also believe future demand is far more immense still, and that it is only a matter of time before all companies move to where the leaders are now (Jassy (AMZN) gives the best explanation below: the barbell analogy). It is a once-in-a-generation opportunity, and they all want their part in it. Management point to the return on vintage capex, which they imply, and we can calculate, is attractive. ROIC is falling for all of them, but if we compare current returns with past capex, the ROIC looks much better. That is, ROIC is falling because of the large increase in the denominator. Will the numerator catch up? Can the returns hold as the capex balloons in size? That is the big question, and the answer boils down to incremental returns on capex, as it almost always does.
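The vintage-ROIC point can be made concrete with toy numbers. This is a sketch with entirely hypothetical inputs (the 25% vintage return and the capex path are my assumptions, not company figures): if every past vintage keeps earning steadily but the newest spend balloons, blended ROIC falls purely because the denominator grows before the new vintage starts earning.

```python
# Toy illustration of falling blended ROIC despite healthy vintage returns.
# All numbers are hypothetical assumptions, not reported company figures.

VINTAGE_RETURN = 0.25              # assumed steady return on each matured vintage
capex_by_year = [50, 60, 70, 300]  # assumed $bn per year, with a step-up at the end

capital_base = 0.0
for year, capex in enumerate(capex_by_year, start=1):
    profit = capital_base * VINTAGE_RETURN  # only prior-year vintages earn yet
    capital_base += capex                   # this year's spend joins the base
    roic = profit / capital_base            # blended return on total capital
    print(f"year {year}: profit {profit:5.1f}bn, "
          f"capital {capital_base:5.1f}bn, blended ROIC {roic:.1%}")
```

With these inputs, blended ROIC climbs while capex is steady and then drops sharply in the step-up year, even though every deployed dollar is still earning 25%: exactly the denominator effect described above.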
I tried to put these numbers in context. It would require roughly a 3-5% EPS increase across the entire S&P 500 for this capex to earn 15-20% returns, assuming the whole return is paid to the providers (some broad assumptions here). That's big.
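As a minimal sketch of that back-of-envelope maths: the capex pool and aggregate S&P 500 earnings below are my own placeholder assumptions, not the author's inputs, chosen only to show the shape of the calculation.

```python
# Rough check of the EPS claim above. CAPEX_POOL and SP500_EARNINGS are
# assumed placeholder values, not reported figures.

def eps_uplift_needed(capex_pool, target_return, sp500_earnings):
    """Fractional S&P 500 EPS increase required for the capex pool to earn
    target_return, assuming the entire return is paid by S&P 500 firms."""
    return capex_pool * target_return / sp500_earnings

CAPEX_POOL = 500e9       # assumed combined annual AI capex of the four
SP500_EARNINGS = 2.2e12  # assumed aggregate S&P 500 net income

for r in (0.15, 0.20):
    uplift = eps_uplift_needed(CAPEX_POOL, r, SP500_EARNINGS)
    print(f"{r:.0%} return needs ~{uplift:.1%} EPS uplift")
```

With these placeholder inputs the required uplift lands in the 3-5% range quoted above; different capex or earnings assumptions move it proportionally.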
MSFT
As agents proliferate, every customer will need new ways to deploy, manage and protect them. We believe this creates a major new category and significant growth opportunity for us.
We continue to see strong demand across workloads, customer segments and geographic regions, and demand continues to exceed available supply.
And the way to think about that is the majority of the capital that we're spending today, and a lot of the GPUs that we're buying, are already contracted for most of their useful life. And so much of that risk that I think you're pointing to isn't there, because they're already sold for the entirety of their useful life. And part of it exists because you have this shorter-dated RPO because of some of the M365 stuff. If you look at Azure only, RPO is a little bit more extended. A lot of that is CPU-based, not just GPU. And the GPU contracts that we've talked about, including for some of our largest customers, are sold for the entire useful life of the GPU… As you go through the useful life, you actually get more and more efficient at delivery. So where you've sold the entirety of its life, the margins actually improve with time.
And when you think about that portion (non-OpenAI Azure backlog) alone growing 28%, it's really impressive work on the breadth as well as the adoption curve that we're seeing, which I think is what I get asked about most frequently. It's grown by customer segment, by industry and by geo, and so it's very consistent.
That's, I think, the most magical thing: you deploy these things, and suddenly the agents are helping you coordinate and bring more leverage to your enterprise.
Then on top of it, of course, there is the transformation, which is what businesses are doing: how should we think about customer service? How should we think about marketing? How should we think about finance? How should we think about all of that and build our own agents?
META
Our vision is building personal superintelligence. We're starting to see the promise of AI that understands our personal context, including our history, our interests, our content and our relationships. A lot of what makes agents valuable is the unique context that they can see. And we believe that Meta will be able to provide a uniquely personal experience.
But soon, we'll be able to understand people's unique personal goals and tailor feeds to show each person content that helps them improve their lives in the ways that they want.
Our feeds will become more interactive overall. Today, our apps feel like algorithms that recommend content. Soon, you'll open our apps and you'll have an AI that understands you and also happens to be able to show you great content, or even generate great personalised content for you.
We're architecting our systems so that we can be flexible in the systems that we use, and we expect the cost per gigawatt to decrease significantly over time through optimising both our technology and supply chain.
…while continuing to make our systems more responsive to people's real-time interests. We're also focused on incorporating LLMs to understand content more deeply across our platform, which will enable more personalised recommendations.
We're seeing in our early testing that personalised responses drive higher levels of engagement, and we expect to significantly advance the personalisation of Meta AI this year.
This dovetails with our investments in content understanding.
Since the beginning of 2025, we've seen a 30% increase in output per engineer, with the majority of that growth coming from the adoption of agentic coding, which saw a big jump in Q4. We're seeing even stronger gains with power users of AI coding tools, whose output has increased 80% year-over-year. We expect this growth to accelerate through the next half.
We think that there are going to be opportunities, both in terms of subscriptions and advertising and all of the different things that you see on that.
…so that way we can just offer more integrated solutions for the many, many millions of businesses that use and rely on our platforms, which is going to be really powerful, both for accelerating their results using the existing products that we have and, I think, for adding new lines as well.
So we expect over the course of 2026 to have significantly more capacity this year as we add cloud. But we'll likely still be constrained through much of 2026, until additional capacity from our own facilities comes online later in the year.
I think the important thing is we're not just launching one thing; we're building a lot of things. AI is going to enable a lot of new experiences. I outlined thematically a bunch of these in the upfront comments, around personal AI and around LLMs combining with the recommendation systems… There are all these different things, as well as several things that we think are new that we're going to try, that are not just extensions of the current things that we're doing.
But I just think the fact that agents are really starting to work now is quite profound. We're already starting to see that the people who adopt them are just being significantly more productive, and there's a big delta between the people who do it and do it well and the people who don't. I think that's going to be a very profound dynamic across the whole sector and probably the whole economy going forward, in terms of the productivity and efficiency with which we can run these companies. My hope is that we can use that to just get a lot more done than we were able to before.
This is the first time we have found a recommendation model architecture that can scale with similar efficiency as LLMs. And we're hoping that this will unlock the ability for us to significantly scale up the size of our ranking models while preserving an attractive ROI.
GOOGLE
As we scale, we are getting dramatically more efficient. We were able to lower the Gemini serving unit cost by 78% over 2025 through model optimizations, efficiency and utilization improvements.
Our first-party models like Gemini now process over 10 billion tokens per minute via direct API use by our customers, up from 7 billion last quarter.
Our 10-year track record in building our own accelerators, with expertise in chips, systems, networking and software, translates to leading power and performance efficiency for large-scale inference and training.
We're investing in AI compute capacity to support frontier model development by Google DeepMind, ongoing efforts to improve the user experience and drive higher advertiser ROI in Google Services, significant cloud customer demand, as well as strategic investments in Other Bets (Waymo).
I expect the demand we are seeing across the board, across our services, what we need to invest for future work for Google DeepMind as well as for cloud, is exceptionally strong. And so I do expect to go through the year in a supply-constrained way.
It was exciting to see that we're already monetising the investments that we've made in AI, and you saw it in the results that we just issued this quarter.
It's already delivering results across the business. In cloud it's very obvious externally, but you've heard the comments on the success we're seeing in search.
But I think specifically at this moment, maybe the top question is definitely around compute capacity and all the constraints, be it power, land or supply chain: how do you ramp up to meet this extraordinary demand for this moment, get our investments right for the long term, and do it all in a way that we are driving efficiencies and doing it in a world-class way?
Approximately 60% of our investment in 2025, and it's going to be fairly similar in 2026, went towards machines, so the servers. And then 40% is what you referred to as long-duration assets (land and buildings).
We see AI Overviews and AI Mode continue to drive greater search usage and growth in overall queries, including important commercial queries. Gemini-based improvements in search ads help us better match queries and craft creatives for advertisers. I talked about the understanding of intent and how this has significantly expanded our ability to deliver ads on longer and more complex searches that were, frankly, previously difficult to monetise. AI Max, for example, is already used by hundreds of thousands of advertisers and continues to unlock billions of net new queries. We see strength with SMB advertisers expanding their budgets and adopting automation tools, leading to better ROI. On the creative side, we're using Gemini to generate millions of creative assets via text customisation in AI Max and PMax and so on. So we're very pleased with what we're seeing here.
AMZN
We're seeing strong growth, and with the incremental opportunities available to us in areas like AI, chips, low earth orbit satellites, quick commerce and serving more consumers' everyday essentials needs, we have a chance to build an even more meaningful business in Amazon in the coming years with strong return on invested capital, and we're investing to do so.
We're continuing to see strong growth in core non-AI workloads as enterprises return to focusing on moving infrastructure from on-premises to the cloud.
We expect to invest about $200 billion in capital expenditures across Amazon, but predominantly in AWS, because we have very high demand, customers really want AWS for core and AI workloads, and we're monetising capacity as fast as we can install it. We have deep experience in understanding demand signals in the AWS business and then turning that capacity into a strong return on invested capital. We're confident this will be the case here as well.
We are putting into service with customers all the capacity that we're getting, and it's immediately useful. And we're also seeing a long arc of additional revenue from other customers, and backlog and commitments that people are anxious to make with us, especially for AI services. We see a strong return on invested capital. We see strong demand for these services, and we continue to like the investments in this area.
I would add to that. If you look at the capital we're spending and intend to spend this year, it's predominantly in AWS. And some of it is for our core workloads, which are non-AI workloads, because they're growing at a faster rate than we anticipated. But most of it is in AI. And we just have a lot of growth and a lot of demand… And what we're continuing to see is that as fast as we install this capacity, this AI capacity, we are monetising it. And so it's just a very unusual opportunity.
And I think the other thing is that if you really want to use AI in an expansive way, you need your data in the cloud and you need your applications in the cloud. Those are all big tailwinds pushing people towards the cloud. So we're going to invest aggressively here, and we're going to invest to be the leader in this space, as we have been for the last number of years.
I'm very confident we're going to have strong return on invested capital here.
The way I would describe what we see right now in the AI space is it's really a barbelled market demand. On one end, you have the AI labs, who are spending gobs and gobs of compute right now, along with what I would consider a couple of runaway applications (Claude, ChatGPT). At the other side of the barbell, you've got a lot of enterprises that are getting value out of AI in productivity and cost-avoidance types of workloads. These are things like customer service, business process automation or some of the fraud pieces. And then in the middle of the barbell are all the enterprise production workloads. I would say that the enterprises are in various stages at this point of evaluating how to move those, working on moving those and then putting them into production. But I think that the middle part of the barbell very well may end up being the largest and the most durable. And I would also put in the middle of that barbell, by the way, the altogether brand-new businesses and applications that companies build that run in production on top of AI right from the get-go.
And so to me, when I look at what's happening, it's kind of unbelievable if you look at the demand you're seeing already with AI, but the lion's share of that demand is still yet to come in the middle of that barbell. And that will come over time. It will come as more and more companies have AI talent, as more and more people get educated with an AI background, as inference continues to get less expensive (and that's a big piece of what we're trying to do with Trainium and our hardware strategy), and as companies have further and further success in moving those workloads to run on top of AI. So it's just a huge opportunity, and it's still in the relatively early stages, even though it's growing at an unprecedented clip, as we've talked about.
So we're growing at a really unprecedented rate, yet I think every provider would tell you, including us, that we could actually grow faster if we had all the supply that we could take.
Disclosure
I hold all these companies.