Nvidia vs. the Whole World: Assessing Risks in the Changing AI Chip Industry

The titan of AI is Nvidia (NVDA). Its share of the worldwide AI chip market is estimated at between 70% and 90%. Obtaining its powerful graphics processors, which are ideal for training and running AI models, is a job in itself because demand is so high. With the AI boom in full force, Nvidia’s market capitalization surpassed $1 trillion in June, and the company’s shares reached an all-time high of $549.91 on Friday.


15 Years of Innovation: Nvidia’s Trailblazing Position in the Development of AI

Nvidia stays ahead of its competitors for reasons beyond its hardware. A key component of Nvidia’s longevity is its CUDA software, which programmers use to build AI platforms. “Nvidia’s strategic moat is still software,” said Gartner VP analyst Chirag Dekate. “With these turnkey experiences, Nvidia can lead in both mindshare and adoption.”
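To make those “turnkey experiences” concrete, here is a minimal sketch, assuming PyTorch as a stand-in for the many frameworks built on top of CUDA (the code is illustrative, not Nvidia’s own example):

```python
# A minimal sketch of the CUDA software moat from a developer's view.
# PyTorch stands in for the many frameworks built on CUDA; nothing
# here is Nvidia's own example code.
import torch

# One check and one argument are all it takes to target an Nvidia GPU;
# CUDA handles kernel dispatch, memory transfers, and scheduling.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b  # runs as CUDA kernels when an Nvidia GPU is present

print(c.shape, "on", device)
```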

Nvidia’s advantage wasn’t built overnight. It has been developing AI products for years, despite investor doubts.

According to Patrick Moorhead, CEO of Moor Insights & Strategy, “Nvidia, to its credit, started about 15 years ago working with universities to find novel things that you could do with GPUs, aside from gaming and visualization.” “What Nvidia does is they help create markets, and by the time competitors catch up, Nvidia is on to the next new thing,” he continued, putting rivals in a very difficult position.

Threats to Nvidia’s hegemony, however, are growing. Competitors AMD (AMD) and Intel (INTC) are preparing to fight for a piece of the AI market. AMD unveiled the MI300 accelerator in December to compete directly with Nvidia’s data-center accelerators, and Intel is developing its Gaudi 3 AI accelerator to rival Nvidia’s products.


A Growing Threat to Nvidia’s AI Hegemony: AMD and Intel

The threat goes beyond AMD and Intel, however. Hyperscalers such as Microsoft (MSFT), Google (GOOG, GOOGL), Amazon (AMZN), and Meta (META) are increasingly using their own custom processors, called application-specific integrated circuits, or ASICs.

Think of Nvidia, AMD, and Intel’s AI graphics accelerators as jacks of all trades. The chips can handle a wide range of AI workloads, so a company can apply them to whatever it needs. ASICs, conversely, are specialists in a single field. Because they are designed for a company’s particular AI needs, they are frequently more efficient than the general-purpose graphics processors from Nvidia, AMD, and Intel. That presents a challenge for Nvidia: hyperscalers currently spend heavily on AI GPUs, but they may require fewer Nvidia processors as they concentrate on building their own ASICs.

Nevertheless, Nvidia’s technology generally outperforms that of its rivals. According to Dekate, “They have a long-term research pipeline to continue driving the future of GPU leadership.” Another consideration for AI chips is how they are applied. The first workload is model training, or simply training. The second is putting trained models into action so that users can produce a desired output, be it text, graphics, or something else entirely; that is called inference.
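For concreteness, here is a minimal sketch of both workloads, assuming PyTorch and a toy model with made-up data (purely illustrative, not any production pipeline):

```python
# Toy illustration of the two AI chip workloads: training vs. inference.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training: repeatedly adjust the model's weights against labeled data.
x = torch.randn(64, 16)          # 64 made-up example inputs
y = torch.randint(0, 2, (64,))   # 64 made-up labels
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()              # gradient computation dominates training cost
    optimizer.step()

# Inference: run the finished model on new input to produce an output.
with torch.no_grad():
    prediction = model(torch.randn(1, 16)).argmax(dim=1)
print(prediction)
```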

Microsoft runs inference for Copilot, and OpenAI runs inference for ChatGPT; each time you submit a request to either program, AI accelerators produce the desired text or image. As more businesses look to leverage AI models, inference will probably become the main application for AI processors. The AI boom, moreover, is still just getting started, and the great majority of businesses that stand to gain from AI have not yet entered the fray. If the AI industry grows, Nvidia’s revenue can therefore rise even if its market share declines.


Nvidia Has a Strong Historical Record

A trip to CES, the yearly Consumer Electronics Show in Las Vegas, would be incomplete without a mention of artificial intelligence. Last week, all of the major participants in the race for AI PCs were in attendance, including Microsoft (MSFT), Intel (INTC), AMD (AMD), and Qualcomm (QCOM).

Intel created the biggest buzz at the event, with a press conference and keynote address showcasing the Intel Core Ultra, the company’s most recent consumer CPU. AMD delivered a pre-recorded “special address” on its progress in AI-powered PCs, with additional details about its new Ryzen 8040 series of processors, which feature an integrated neural processing unit (NPU) to accelerate AI. To counter the perception that Intel’s latest chip is displacing its own, AMD also unveiled new benchmarks pitting these chips against Intel’s Core Ultra. As one might expect, the comparisons favor AMD in both AI results and integrated graphics performance.

However, one name has been oddly absent from the discussion since the “AI PC” slogan emerged around the middle of last year: Nvidia (NVDA). The company hopes to change that by pushing hard on the promotion and positioning of AI on PCs. Nvidia, after all, has history on its side: as the company that invented CUDA, the programming platform for its graphics processing unit (GPU) chips, and built a software ecosystem that rivals anyone in the business, it laid the foundation for the path the AI markets are all traveling down today.

What does Nvidia gain from this AI PC race, given that it reached its $1 trillion valuation on the strength of the data-center AI market, selling GPUs that cost tens of thousands of dollars to companies training the largest and most significant AI models? The primary advantage is selling more consumer GPUs for computers that don’t include a discrete GPU in the base configuration. Gaming laptops and PCs invariably include graphics cards, but more affordable devices often omit them to keep costs down. If Nvidia can argue that every genuine AI PC needs a GeForce GPU, sales will rise across the board.


Other advantages include persuading investors and the industry that the introduction of the NPU won’t dramatically change the landscape of AI computing, and keeping Nvidia GPUs the cornerstone around which the next big AI application is built.

The raw performance of Nvidia GPUs makes them an appealing option for AI PCs. A high-end GeForce GPU can deliver more than 800 TOPS (tera-operations per second, a common measure of AI performance), whereas the integrated NPU on the Intel platform delivers only about 10 TOPS. An 80x advantage in AI computing power implies far more headroom to develop novel, groundbreaking AI applications. This year’s NPUs will fall well short of even Nvidia’s mainstream discrete GPUs.
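As a back-of-the-envelope illustration of that gap (assuming a purely compute-bound workload and directly comparable TOPS ratings, which real AI workloads rarely offer):

```python
# Back-of-the-envelope comparison of the quoted throughput figures.
gpu_tops = 800   # high-end GeForce GPU, per the figure above
npu_tops = 10    # integrated NPU on the Intel platform

workload_ops = 5e15  # hypothetical job: 5,000 tera-operations

gpu_seconds = workload_ops / (gpu_tops * 1e12)
npu_seconds = workload_ops / (npu_tops * 1e12)

print(f"GPU: {gpu_seconds:.2f} s, NPU: {npu_seconds:.0f} s "
      f"({npu_seconds / gpu_seconds:.0f}x slower)")
# GPU: 6.25 s, NPU: 500 s (80x slower)
```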

Moreover, these GPUs are available in notebooks as well as desktop computers. A laptop “AI PC” can therefore include a high-performance GPU to run the most demanding AI applications, rather than relying solely on an Intel Core Ultra.

From the outset, graphics processors have served as the foundation for the development of AI applications, and Nvidia hardware is currently the default choice for the generative AI push that has greatly increased the popularity of AI in the consumer market. The majority of local Stable Diffusion programs that generate images from text prompts are designed to work with Nvidia GPU hardware by default; they can also be used with Intel or AMD NPUs, but only with careful configuration and the addition of specialized software modules.
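As a rough sketch of that Nvidia-first default, here is what a minimal local image generator looks like with Hugging Face’s open-source diffusers library (the model ID and settings are illustrative; actual local front-ends vary):

```python
# Minimal Stable Diffusion sketch using Hugging Face's diffusers library.
# The model ID and settings are illustrative; most local front-ends
# default to CUDA (Nvidia) and need extra work for other backends.
import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
)
pipe = pipe.to(device)  # the Nvidia GPU path is the well-trodden one

image = pipe("a watercolor of a desert at dawn").images[0]
image.save("output.png")
```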

During CES, Nvidia showcased a few impressive demos that emphasized its vision for AI on the PC. First, it showed a collaboration with Convai on a service that aims to transform how game creators build content and how players engage with non-player characters in virtual worlds. In essence, the implementation lets a game developer create a virtual character with a realistic personality by using a large language model, in the vein of ChatGPT, and adding some flavor and details about the character’s past, traits, likes, and dislikes.

Much like with a contemporary AI chatbot, a player can speak into a microphone, have their speech converted to text by another AI model, and send that text to a game character, which responds with speech and animation inside the game.
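In code terms, one turn of that loop looks something like the sketch below; every component is a hypothetical stub, since Convai’s and Nvidia’s actual APIs aren’t shown in the demo:

```python
# Hypothetical sketch of the NPC dialogue pipeline described above.
# All three models are stubs; the real Convai/Nvidia APIs differ.

def speech_to_text(audio: bytes) -> str:
    return "What happened to the old mill?"        # stub for an ASR model

def llm_generate(prompt: str) -> str:
    return "It burned down the winter I arrived."  # stub for an LLM

def text_to_speech(text: str) -> bytes:
    return text.encode()                           # stub for a TTS model

# Backstory, traits, and likes/dislikes keep replies in character.
CHARACTER = ("You are Mira, a retired blacksmith. Gruff but kind. "
             "Loves riddles, hates rain.")

def npc_reply(player_audio: bytes) -> bytes:
    """One turn of the loop: speech -> text -> LLM -> speech."""
    player_text = speech_to_text(player_audio)
    reply_text = llm_generate(f"{CHARACTER}\nPlayer: {player_text}\nMira:")
    return text_to_speech(reply_text)  # the game would also animate this

print(npc_reply(b"raw microphone audio"))
```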

I observed several users interacting with this demo, posing various questions and scenarios to an AI character. The method enabled real-time interaction with a game- and context-aware AI character, and it performed remarkably quickly and well. And in a true best-case scenario for Nvidia, some of this AI processing takes place on a local gaming PC and its GPU, while some is done in the cloud on a group of Nvidia GPUs.


Another demo used an open-source language model to create a customized, ChatGPT-like assistant that gains access to personal material by being pointed at a folder containing papers, articles, and other documents. This demonstration made use of the GPU in a desktop machine. The extra data, which can include private emails and past writings, “fine-tunes” the AI model so the user can converse with the chatbot or pose queries about it. This is one of the core claims of the AI PC, and here everything was powered by an Nvidia GPU. It was merely a tech demo, though, and not yet ready for widespread deployment.
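The demo’s implementation isn’t public, so as a hedged sketch of the general idea, here is a toy document assistant that substitutes naive keyword retrieval for the demo’s fine-tuning and for a real local LLM (the folder name and helper functions are made up):

```python
# Toy sketch of a local document assistant. Naive keyword retrieval
# stands in for the demo's fine-tuning and for a real local LLM.
from pathlib import Path

def load_documents(folder: str) -> dict[str, str]:
    """Read every .txt file in the folder into memory."""
    return {p.name: p.read_text(errors="ignore")
            for p in Path(folder).glob("*.txt")}

def retrieve(question: str, docs: dict[str, str], k: int = 2) -> list[str]:
    """Rank documents by crude keyword overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(docs.items(),
                    key=lambda kv: -len(words & set(kv[1].lower().split())))
    return [text for _, text in ranked[:k]]

def answer(question: str, folder: str) -> str:
    context = "\n---\n".join(retrieve(question, load_documents(folder)))
    # An actual AI PC demo would hand this prompt to a local LLM on the GPU.
    return f"Context:\n{context}\nQuestion: {question}"

print(answer("What did I write about CES?", "./my_documents"))
```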

Of course, there are trade-offs. The integrated NPU on a CPU like the Intel Core Ultra will typically consume far less power than the Nvidia GeForce GPUs found in laptops and desktop computers. When you need an output quickly, however, the power of a discrete GPU lets you get through AI work much faster.

AI will undoubtedly improve how humans interact with and use computers, and it will do so much sooner than most people think. This will be made possible by a variety of technologies, including cloud and edge computing, high-performance GPUs from Nvidia and AMD, and low-power integrated NPUs found in the newest laptop chips from Intel, AMD, and Qualcomm. All of these options will be combined to give customers the greatest experiences possible. But it is a major error to talk about the “AI PC” revolution coming to us without Nvidia.

The Future is Here: Examining CES 2024’s AI-Powered Innovations with Nvidia

The PC business never stops coming up with new ideas, even when you think they have run out.

With an early peek at the newest innovations in computers, phones, laptops, TVs, monitors, and all the wearable tech in between, CES 2024 has come and gone. Oh, and some manufacturers decided to force AI into everything since they heard that you enjoy it. Now that the dust has settled and I’ve had time to sift through the flood of computer innovation that came flying our way (and filter out the AI noise), it’s time to look back on the coolest and most intriguing desktop and PC components we saw at CES 2024.

CES 2024 provided a clear indication of the direction things are heading. Naturally, the excitement surrounding AI is exploding, but there have also been some exciting advancements for PC games and novel approaches to portable computing. These new developments demonstrate why now is a terrific moment to be a PC user:

1. Nvidia makes its RTX 40-series GPUs supersized.


More a price cut than a refresh, but still worth it

Rather than waiting for its next-generation Blackwell GPUs, Nvidia decided to gatecrash the CES party with its 40-series Super graphics cards. Built on the same beloved silicon as the current 40-series portfolio, the lineup brings us the RTX 4070 Super, RTX 4070 Ti Super, and RTX 4080 Super. I see this update more as a price reduction, since it makes the new graphics cards, especially the RTX 4080 Super, more competitive.

Nvidia isn’t claiming any remarkable performance feats for the RTX 40-series Super GPUs. The RTX 4080 Super should provide, on average, only a 2% to 3% performance boost over the RTX 4080. The RTX 4070 Ti Super is about 10% faster than its non-Super cousin and receives an upgrade to the AD103 GPU used in the RTX 4080. Finally, Nvidia claims that the RTX 4070 Super is quicker than the RTX 3090 and around 15% faster than the standard RTX 4070. Amazingly, the Zotac RTX 4070 Super Trinity Black Edition is nearly as good as my overclocked RTX 4070 Ti graphics card, at $200 less.

2. Alienware 32 4K QD-OLED


240Hz 4K gaming monitors will be introduced this year.

I have written previously about the monitors at CES, but one is worth mentioning again: the Alienware AW3225QF 4K QD-OLED. While the others announced at the show are scheduled for availability in the coming weeks and months, this is the only 4K QD-OLED monitor from CES that is already on sale. It is pricey, but its 240Hz refresh rate and 4K resolution make it an excellent choice for both movie watching and intense gaming sessions. As envious as I am of the superior panels manufacturers have lined up (LG’s future WOLED panels, for example, allow refresh rates of up to 480Hz), I’m not sure I want to wait until Q3 2024.

3. AMD RX 7600 XT


Beats the RTX 4060 Ti in price

The RX 7600 XT, priced at $330, is a new addition to AMD’s RX 7000 line of graphics cards. It uses the same GPU as the RX 7600 but carries 16GB of VRAM, more than even the RTX 4060 Ti and the recently released RTX 4070 Super. Since AMD has long provided large memory capacities on comparatively inexpensive GPUs, the RX 7600 XT’s copious video memory comes as no surprise.

How it compares with other GPUs on the market remains to be seen, but given its price advantage over Nvidia’s RTX 4060 Ti, it should have little trouble finding a home.

4. AMD Ryzen 8000G APUs


The fastest integrated GPU on the planet

AMD’s introduction of the Ryzen 8000G APUs at CES brought APUs back into the mainstream. These chips combine a mobile-class graphics chip, such as the Radeon 780M used in handhelds like the ASUS ROG Ally, with up to eight Zen 4 CPU cores. According to AMD, the graphics processor is now even quicker, enabling its top-tier Ryzen 8000G chip to run even the most demanding games, such as Cyberpunk 2077 and Alan Wake 2, at 60 frames per second.

That’s only true at 1080p with low graphics settings, but even then, it’s nothing to sneeze at. I’m eager to test it out, and $330 for the top-tier Ryzen 7 8700G isn’t a horrible asking price either. For anyone wishing to build an entry-level gaming PC for occasional use, there are plenty of options, and the other variants in the lineup only get cheaper from there.

5. The Monokei Systems


Makes me long for a discrete keyboard

These days, few keyboards catch my attention the way the Monokei Systems does. For starters, even though it’s a low-profile keyboard, you can customize everything about it, including the switches and keycaps. Its manufacturer, Monokei, is well-known in the mechanical keyboard community for its collaborations with Singa, TGR, and others, which is another reason I appreciate it. You can be sure you’re getting a tried-and-true product created by experts with a long history in the enthusiast keyboard scene.

Monokei has made it incredibly simple to customize this keyboard’s parts to your liking. Although I wasn’t able to try this 75% keyboard firsthand, my colleague João Carrasqueira was so impressed with it at CES that he wanted to take it home. That’s high praise from someone who has recently experimented with many low-profile mechanical keyboards, such as the Cherry KW X ULP.

6. Thermaltake Tower 300


A micro-ATX tower with lots of functionality

At CES 2024, not many manufacturers made a splash with their PC cases, but a handful stood out. The first that springs to mind is Thermaltake’s Tower 300, a micro-ATX case. The tower is octagonal, with a ton of vents for airflow, and you can even place it on the optional stand to expose more vents on the sides. It’s quite the focal point of a setup and is sure to draw attention. The best part is the micro-ATX form factor, which I have personally preferred since building in the Asus Prime AP201.

Other highlights include a 3.9-inch LCD, tool-free panels for easy access, two pre-installed case fans, support for up to a 420mm radiator, and plenty of room for the best components available. Not everyone needs a case like this, of course, but it’s ideal for anyone who wants a showpiece mid-to-high-end gaming PC in 2024.

Concluding remarks

Even though CES isn’t exactly known for big reveals in the PC hardware arena, there was still a ton of fascinating announcements, first looks, and other news. Gaming handhelds were scarce, with the MSI Claw the only notable new one at the show. I anticipate more in the months leading up to Computex, though, so stay tuned.
