Your guide to a better future
Stephen Shankland has been a reporter at CNET since 1998 and covers browsers, microprocessors, digital photography, quantum computing, supercomputers, drone delivery, and other new technology. He has a soft spot in his heart for standards groups and I/O interfaces. His first big scoop was about radioactive cat poop.
Nvidia is releasing a new chip, the H100 “Hopper,” that has the potential to accelerate the artificial intelligence that’s sweeping the tech industry.
The chip helps cement Nvidia’s lead in technology that’s revolutionizing everything in computing from self-driving cars to translating language as people speak.
Nvidia will begin selling a new AI acceleration chip later this year, part of the company’s efforts to secure its leadership in a computing revolution. The faster chip should let AI developers speed up their research and build more advanced AI models, especially for complex challenges like understanding human language and piloting self-driving cars.
The H100 “Hopper” processor, which Nvidia Chief Executive Jensen Huang unveiled in March, is expected to begin shipping next quarter. The processor has a whopping 80 billion transistors and measures 814 square millimeters, which is almost as big as is physically possible with today’s chipmaking equipment. (CNET got an advance look at the H100 Hopper chips and Nvidia’s new Voyager building that will house hardware and software development work.)
The H100 competes with huge, power-hungry AI processors like AMD’s MI250X, Google’s TPU v4 and Intel’s upcoming Ponte Vecchio. Such chips are goliaths most often found in the preferred environment for AI training systems, data centers packed with racks of computing gear and laced with fat copper power cables.
The new chip embodies Nvidia’s evolution from a designer of graphical processing units used for video games to an AI powerhouse. The company did this by adapting GPUs for the particular mathematics of AI like multiplying arrays of numbers.
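That “particular mathematics” is, at its core, multiplying large arrays of numbers. As an illustrative sketch (not Nvidia’s actual code), here is how a single neural-network layer boils down to one matrix multiplication, the operation GPU tensor cores are built to accelerate:

```python
import numpy as np

rng = np.random.default_rng(0)

batch = rng.standard_normal((4, 8))    # 4 input samples, 8 features each
weights = rng.standard_normal((8, 3))  # a layer mapping 8 features to 3 outputs

# One dense layer: a single matrix multiply, then a nonlinearity (ReLU).
activations = np.maximum(batch @ weights, 0.0)

print(activations.shape)  # (4, 3)
```

A full model stacks thousands of such multiplications, which is why hardware that performs them in parallel dominates AI training.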
Circuitry for speeding up AI is becoming increasingly important as the technology arrives in everything from iPhones to Aurora, expected to be the world’s fastest supercomputer. Chips like the H100 are critical for speeding up tasks such as training an AI model to translate live speech from one language to another or to automatically generate video captions. Faster performance means AI developers can speed up their experimentation and tackle harder challenges like autonomous vehicles, but one of the biggest areas of improvement is in processing language.
Linley Gwennap, an analyst at TechInsights, says the H100, along with Nvidia’s software tools, cements its position in the AI processor market.
“Nvidia towers over its competitors,” Gwennap wrote in a report in April.
Pindrop, a longtime Nvidia customer that uses AI-based voice analysis to help customer service representatives authenticate legitimate clients and spot scammers, says the chipmaker’s steady progress has let it expand to identifying audio deepfakes. Deepfakes are sophisticated computer simulations that can be used to perpetrate fraud or spread misinformation.
“We couldn’t get there if we didn’t have the latest generation of Nvidia GPUs,” said Ellie Khoury, the company’s director of research.
Training its AI system involves processing an enormous quantity of information, including audio data from 100,000 voices, each one manipulated in several ways to simulate things like background chatter and bad telephone connections. That’s why H100 advancements, like expanded memory and faster processing, are important to AI customers.
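That kind of manipulation is a form of data augmentation. A hypothetical sketch of one such step, mixing background noise into a clean voice sample at a chosen signal-to-noise ratio (Pindrop’s actual pipeline is not public):

```python
import numpy as np

def add_noise(voice, noise, snr_db):
    """Scale the noise so it sits snr_db below the voice signal, then mix."""
    voice_power = np.mean(voice ** 2)
    noise_power = np.mean(noise ** 2)
    target_noise_power = voice_power / (10 ** (snr_db / 10))
    scaled = noise * np.sqrt(target_noise_power / noise_power)
    return voice + scaled

rng = np.random.default_rng(2)
voice = np.sin(np.linspace(0, 100, 16000))  # stand-in for 1 second of speech
chatter = rng.standard_normal(16000)        # stand-in for background chatter
noisy = add_noise(voice, chatter, snr_db=10.0)

print(noisy.shape)  # (16000,)
```

Running every one of 100,000 voices through many such variations multiplies the training set, and the compute bill, which is where faster accelerators pay off.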
Nvidia estimates the H100 is six times faster overall than its predecessor, the A100, which the company launched two years ago. One area that particularly benefits is natural language processing. Also known as NLP, the AI domain helps computers understand your speech, summarize documents and translate languages, among other tasks.
Nvidia is a strong player in NLP, a field at the vanguard of AI. Google’s PaLM AI system can tease apart cause and effect in a sentence, write programming code, explain jokes and play the emoji movie game. Nvidia’s flexible GPUs, meanwhile, are popular with researchers: Meta, Facebook’s parent company, this week released sophisticated NLP technology for free to accelerate AI research, and it runs on 16 Nvidia GPUs.
With the H100, NLP researchers and product developers can work faster, said Ian Buck, vice president of Nvidia’s hyperscale and high-performance computing group. “What took months should take less than a week.”
The H100 offers a big step up in transformers, an AI technology created by Google that can assess the importance of context around words and detect subtle relationships between information in one area and another. Data like photos, speech and text that’s used to train AI often must be carefully labeled before use, but transformer-based AI models can use raw data like vast tracts of text on the web, said Aidan Gomez, co-founder of AI language startup Cohere.
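The mechanism transformers use to “assess the importance of context around words” is called attention. A simplified, illustrative sketch of scaled dot-product attention (real models add learned projections, multiple heads and masking):

```python
import numpy as np

def attention(Q, K, V):
    """Weight each value by how relevant its key is to each query."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # pairwise relevance between tokens
    # Softmax: each token's attention over the context sums to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V             # context-aware mixture of values

rng = np.random.default_rng(1)
x = rng.standard_normal((5, 16))  # 5 tokens, 16-dimensional embeddings
out = attention(x, x, x)          # self-attention: tokens attend to each other

print(out.shape)  # (5, 16)
```

Because every token attends to every other token, the operation again reduces to big matrix multiplications, exactly the workload the H100’s dedicated transformer circuitry targets.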
“It reads the internet. It consumes the internet,” Gomez said. The AI model then turns that raw data into useful information that captures what humans know about the world. Transformers, he said, “shot my timeline forward decades” when it comes to the pace of AI progress.
We all stand to benefit from the H100’s ability to accelerate AI research and development, said Hang Liu, an assistant professor at the Stevens Institute of Technology. Amazon can spot more fake reviews, chipmakers can lay out chip circuitry better and a computer can turn your words into Chinese as you speak them, he said. “Right now AI is completely reshaping almost any sector of commercial life.”