Nvidia 🍟 💻 - NVDA

Makes sense.

It should still be illegal though imo. It’s one of the many things we have Reagan to thank for :roll_eyes:

There is some rubbish being written here.
You buy back when shares are cheap.
Buying back at this p/e is ridiculous. It’s the equivalent of an investment trust buying back and cancelling shares when the trust is trading at a premium.
Do you think 3i Group should buy back shares when they are trading at a 45% premium?
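To put rough numbers on that (completely made up, not 3i’s or anyone’s actual figures), here’s a quick sketch of why buying back at a premium to NAV hurts the holders who stay in, while buying back at a discount helps them:

```python
# Toy numbers, purely to illustrate the mechanism.
def nav_per_share_after_buyback(nav, shares, price, bought):
    """NAV per share after repurchasing `bought` shares at `price`, paid out of the trust's assets."""
    return (nav - price * bought) / (shares - bought)

nav, shares = 100_000_000, 100_000_000  # NAV per share starts at 1.00

print(nav_per_share_after_buyback(nav, shares, price=1.45, bought=10_000_000))  # 45% premium  -> ~0.95 per share
print(nav_per_share_after_buyback(nav, shares, price=0.80, bought=10_000_000))  # 20% discount -> ~1.02 per share
```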

And again, the CEO is selling shares while they are buying back… do you think he might benefit from that?

How long before this post is cancelled?

No it doesn’t, as he hasn’t explained the benefits well, has he? @Cameron

Yeah, I mean that does make sense in some ways. Although it gives the board / mgmt a lot of power to decide when it’s “cheap”. And given most CEO pay is around 10% cash / 90% stock (and often tied to the stock price), it does create perverse incentives to keep buying back to boost the stock price / hit SP targets / juice the EPS number.
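As a quick illustration of the EPS point (hypothetical numbers, nothing to do with Nvidia’s actual figures), a buyback lifts EPS even when earnings are completely flat:

```python
# Hypothetical numbers, purely to illustrate the mechanism.
earnings = 50_000_000_000        # net income, flat year over year
shares_before = 25_000_000_000
shares_after = 24_000_000_000    # ~4% of shares repurchased and cancelled

print(earnings / shares_before)  # EPS before buyback: 2.00
print(earnings / shares_after)   # EPS after buyback: ~2.08, i.e. ~4% EPS "growth" with zero earnings growth
```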

I don’t really have time to write a properly sourced post atm (and it’s very off topic), but I think a lot of the stuff pushed by Reagan (inc. legalising buybacks), combined with the impact of Jack Welch’s management philosophies, created a lot of the biggest issues we’re currently living through.

:+1:


Why?

Look at the rise it’s had, look at the valuation, the forecast, the month we are in and the insider sells. On top of that, the U.S. Department of Justice has sent Nvidia a subpoena.


I also think that there has been massive FOMO from retail investors bumping this to a level that just wasn’t sustainable.


What happened to all the technical chart experts?? I was enjoying those posts. Haven’t seen a bull flag post in a while


The ones who enjoyed the posts, like yourself, seemingly remained silent, while the negative ninnies flagged them. So it was a pointless exercise. I stick to Discords now for such analysis.

Another factor is that the narrative of AI labs moving away from GPUs is starting to accelerate, e.g. more rumours about OpenAI’s ASIC progress this week:

GDM: In-house TPUs 2015/2017
Anthropic: AWS/GCP TPUs 2023
Microsoft: In-house Maia
ByteDance: AWS
Apple: TPU and In-house
Meta: In-house 2025-2026? (MTIA First gen 2024)
OpenAI: In-house 2026


For a layman, what is involved in switching away from GPUs? What provides the compute if not a GPU?

I can see from that link this is a way off (2nd half 2026).


Basically it involves moving to software that doesn’t require Nvidia’s proprietary CUDA libraries, which used to be quite tricky but now isn’t so bad: mostly some extra work for the engineering teams there.
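To make the “extra work” concrete, here’s a rough sketch assuming you use something like JAX (just one example of a framework that doesn’t depend on CUDA; each lab has its own stack): the same model code gets compiled for whatever accelerator is attached, so nothing CUDA-specific lives in the model itself.

```python
# Sketch only: JAX/XLA compiles the same function for CPU, an Nvidia GPU or a TPU,
# so the model code contains nothing CUDA-specific.
import jax
import jax.numpy as jnp

@jax.jit  # compiled by XLA for whichever backend jax detects
def attention_scores(q, k):
    return jnp.einsum("bqd,bkd->bqk", q, k) / jnp.sqrt(q.shape[-1])

key = jax.random.PRNGKey(0)
q = jax.random.normal(key, (8, 128, 64))
k = jax.random.normal(key, (8, 128, 64))

print(jax.devices())                 # e.g. [CudaDevice(id=0)] or a list of TPU devices
print(attention_scores(q, k).shape)  # (8, 128, 128) regardless of the hardware underneath
```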

Then for the hardware it really depends - either you rent Google or AWS chips, which means you don’t need to go through the big, expensive, long chip design process, but it’s a bit more expensive and you don’t get something fully custom.

Or you design your own chip in-house, partnering with other companies to design some parts, tape it out with a foundry (TSMC/Samsung), book wafers from them, wait for it to be built, tested, shipped and installed, etc. Realistically it’s a 2+ year process that will cost hundreds of millions of dollars, but at the end you get something entirely custom for your use cases (which can give big performance, cost and efficiency benefits).

If you think you are going to spend $10bn+ on compute over the next few years and already have talented engineers, it’s really a no-brainer to skip the middleman.


It seems the DOJ story was just a rumour :face_with_monocle:


Ah interesting, so this would mean that if Nvidia don’t stay constantly two steps ahead of an in-house design team, that multi-billion contract becomes harder to justify?

Is there a Moore’s law scenario going on here where development keeps going, or are there cost/benefit limits?


Yeah, if Nvidia can make something so much better then it becomes worthwhile to pay the Nvidia tax. In the medium term I’m sure there will still be vast orders basically for that reason (e.g. Microsoft will buy huge numbers of GPUs alongside Maia 100 and AMD), and outside of big tech plenty of companies can’t design their own chips.

However, it’s pretty hard to make something better for everyone in the long run, because vertical integration is incredibly important here: when the question isn’t ‘what’s the best chip?’ but ‘what’s the best 1GW data center?’, you get a lot of different design considerations.

There are some fundamental limits on how good each individual chip can be, because the logic for ML is actually incredibly simple (compared to general-purpose computing): really it’s just multiplying vast matrices.
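Rough sketch of what I mean (illustrative shapes, not any particular model): a transformer feed-forward block is basically two big matmuls plus a cheap elementwise op, so the matrix multiplies completely dominate the arithmetic.

```python
# Illustrative shapes only, to show the matmuls dominate the FLOP count.
import numpy as np

tokens, d_model, d_ff = 512, 1024, 4096
x = np.random.randn(tokens, d_model).astype(np.float32)
w1 = np.random.randn(d_model, d_ff).astype(np.float32)
w2 = np.random.randn(d_ff, d_model).astype(np.float32)

h = np.maximum(x @ w1, 0.0)  # big matmul + cheap elementwise ReLU
y = h @ w2                   # second big matmul

matmul_flops = 2 * tokens * d_model * d_ff + 2 * tokens * d_ff * d_model
relu_flops = tokens * d_ff
print(f"matmul share of FLOPs: {matmul_flops / (matmul_flops + relu_flops):.3%}")  # ~99.98%
```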

So lots of the innovation hasn’t been within the chip, it’s everything around it - interconnect (moving enough data fast enough to keep chips ‘fed’), cooling efficiency, power delivery, network latency etc…

Yesterday’s SemiAnalysis piece kind of indicates the scale of these other challenges and why the extra step of designing their own chips isn’t such a big deal for OA/Microsoft:
