Advanced Micro Devices (AMD) is maintaining its share gains late Monday morning after announcing an AI chip partnership with ...
The broad strokes of how to build a great AI training cluster are pretty settled: get as many GPUs together as you can, densely pack them with fast networking, and pump in as much data as ...
The AI industry is undergoing a transformation right now, one that could define the stock market's winners and losers for the rest of the year and beyond. That is, the AI model-making ...
Nvidia has decided that building GPUs the size of small towns is the way to stay ahead in AI, and its latest weapon is the Rubin CPX, which is aimed squarely at inferencing and large-context models.
Good inferencing chips can move data very quickly. The number of new inferencing chip companies announced this past year is enough to make your head spin. With so many chips and no lack of quality ...
Lenovo has launched an entry-level AI inferencing server designed to make edge AI accessible and affordable for SMBs and enterprises. Showcased as part of a full stack of cost-effective, sustainable, ...
Lenovo has unveiled its new ThinkEdge SE100 AI inferencing server at Mobile World Congress. In a statement, the company said the offering was the “first-to-market, entry-level AI inferencing server,” ...
Nvidia has announced a new GPU designed for large-scale inferencing tasks. This week the company unveiled the Rubin CPX, a new class of GPU purpose-built for massive-context processing. The chip ...
Data center provider Equinix has launched its Distributed AI infrastructure, which includes a new AI-ready backbone to ...