https://x.com/The_AI_Investor/status/1766346389943128080?s=20
" even when the competitorās chips are offered for free, itās still not cheap enough"
You left out the important part: "that is our goal".
It's clearly not the current situation, because TPUv5 is already more efficient, and although the MI300X is less efficient it's still going to end up much cheaper from a TCO perspective.
This whole situation is feeling very similar to Tesla, where there was a misguided sense of a lack of competition/deep moats and people extrapolating temporary aberrations into a new trend. I wasn't brave enough to short the stock, but I called the margin collapse, so I'm willing to do the same here:
I have as much in AMD as NVDA. I can see the MI300X doing well. As for Google, unless I'm missing something, the TPU is only good for AI, whereas Nvidia's H100 and H200 can do everything.
Yeah, but the AI compute overhang is what is driving the explosive revenue growth; as an extreme example, if you remove that story it's very hard to come up with a target above $200.
So yes, ASICs by design have limited applications, but in return they give huge efficiency, and at the moment the workflows they do cover make up a huge fraction of Nvidia's profits.
I think Nvidia is going to continue to do well as a company but the valuation is getting absurd.
Just read an article in the Telegraph:
"Every investor should own Nvidia - here's why.
Don't be rattled by the tech company's red-hot share price."
That's that then.
Destined to go to zero now the mainstream rags are pushing it.
Top has been called
At least you made a million while it was going good.
https://www.tomshardware.com/pc-components/gpus/nvidia-expected-to-give-developers-a-peek-at-next-gen-blackwell-b100-gpu-next-week
The catalyst to push us over $1000?
Unstoppable. It's like a train with no brakes.
Blackwell
There's no stopping to get off now
Choo choo Nvda train
NVLink
Chips talking to chips
If we get a drop tomorrow on all this news
Because people just don't understand
I'll be buying more
Nvidiaās new Blackwell GPU platform is coming to AWS
- AWS to offer NVIDIA Grace Blackwell GPU-based Amazon EC2 instances and NVIDIA DGX Cloud to accelerate performance of building and running inference on multi-trillion parameter LLMs
When you think it can't get any better
Microsoft Azure to adopt NVIDIA Grace Blackwell Superchip to accelerate customer and first-party AI offerings
- NVIDIA DGX Cloud's native integration with Microsoft Fabric to streamline custom AI model development with customers' own data
- NVIDIA Omniverse Cloud APIs, first on Azure, to power an ecosystem of industrial design and simulation tools
- Microsoft Copilot enhanced with NVIDIA AI and accelerated computing platforms
- New NVIDIA generative AI Microservices for enterprise, developer and healthcare applications coming to Microsoft Azure AI
The "Blackwell" chip marks a historic achievement of 1,000x AI compute in just 8 years. In 2016, the "Pascal" chip had just 19 teraflops vs. 20,000 teraflops in the "Blackwell" chip today.
The first "Blackwell" chip, dubbed "GB200", will be available later this year, and will have 208 billion transistors vs. 80 billion transistors in Nvidia's "Hopper" H100 in the same 4nm process.
"GB200" AI performance is expected to be 5x that of the "H100", from 4 petaflops to 20 petaflops.
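A quick back-of-the-envelope check of the ratios claimed above (the throughput figures are taken from the posts themselves, not from official benchmarks):

```python
# Sanity-check the claimed generational gains using the figures quoted above.
pascal_tflops = 19          # 2016 "Pascal" AI throughput, teraflops (as quoted)
blackwell_tflops = 20_000   # "Blackwell" AI throughput, teraflops (as quoted)
print(blackwell_tflops / pascal_tflops)  # ~1053, i.e. roughly the "1,000x in 8 years" claim

h100_pflops = 4             # "H100" AI throughput, petaflops (as quoted)
gb200_pflops = 20           # claimed "GB200" AI throughput, petaflops (as quoted)
print(gb200_pflops / h100_pflops)        # 5.0, matching the "5x" claim
```

So the headline "1,000x" figure is consistent with the raw teraflop numbers quoted, though note the generations are usually measured at different precisions, so it is not a like-for-like comparison.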
They are so far ahead of the game now
Interesting new announcement
Are you referring to GB200?
Or have I missed a new announcement?