09.03.2024 - 22:49 / wccftech.com / Muhammad Zuhair
A recently conducted survey suggests that a sizable share of AI professionals are open to switching from NVIDIA GPUs to AMD's Instinct MI300X.
Jeff Tatarchuk of TensorWave recently surveyed 82 engineers and AI professionals and claims that around 50% of them expressed confidence in adopting the AMD Instinct MI300X, citing its better price-to-performance ratio and wider availability compared to counterparts such as NVIDIA's H100. Tatarchuk also says that TensorWave itself will deploy MI300X AI accelerators, which is more promising news for Team Red, given that its Instinct lineup hasn't seen the level of adoption enjoyed by competitors like NVIDIA.
For a quick rundown, the Instinct MI300X is built on the CDNA 3 architecture and packs a lot under the hood: it combines 5nm and 6nm chiplets for a total of up to 153 billion transistors. Memory is another area of major upgrade, with the MI300X offering 192 GB of HBM3, 50% more capacity than the 128 GB of its predecessor, the MI250X. Here is how it compares to NVIDIA's H100:
We recently reported how AMD's flagship Instinct AI accelerator has been causing "headaches" for market competitors. Not only are the MI300X's performance gains substantial, but AMD has also timed its release well: NVIDIA is currently weighed down by order backlogs, which has hindered its ability to take on new clients. While Team Red didn't get the start it wanted, the coming period could prove fruitful for the firm, potentially putting it head-to-head with its competitors.
News Source: Jeff Tatarchuk