Monument Updates On Cerebras' Progress


Dublin, Ireland Mar 23, 2025 (Issuewire.com) - Cerebras Systems, an AI hardware startup that has been steadily challenging Nvidia's dominance in the artificial intelligence market, announced Tuesday a major expansion of its data center footprint and two significant enterprise partnerships. These moves position Cerebras Systems to become a leading provider of high-speed AI inference services.

The company's inference capacity will increase twentyfold to over 40 million tokens per second with the addition of six new AI data centers in North America and Europe. The facilities are spread among Dallas, Minneapolis, Oklahoma City, Montreal, New York, and France, with 85% of the expansion's capacity located in the US.

The data center expansion reflects the company's audacious bet that the market for high-speed AI inference (the process by which trained AI models produce outputs for real-world applications) will grow significantly as businesses look for faster alternatives to Nvidia's GPU-based solutions.

In addition to the infrastructure expansion, Cerebras announced partnerships with Hugging Face, a well-known AI development platform, and AlphaSense, a market intelligence platform widely used in the financial services sector.

Through the Hugging Face integration, the platform's five million developers will be able to access Cerebras Inference with a single click, without registering for Cerebras individually. This could become a significant distribution channel for Cerebras, especially among developers using open-source models like Llama 3.3 70B.

The AlphaSense relationship marks a major enterprise customer win, with the financial intelligence platform switching to Cerebras from a global, top-three closed-source AI model vendor. Cerebras is helping AlphaSense, which serves over 85% of Fortune 100 organizations, speed up its AI-powered market intelligence search capabilities.

About Cerebras Systems
Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world's largest and fastest commercially available AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered together to make the largest AI supercomputers in the world, and make placing models on the supercomputers dead simple by avoiding the complexity of distributed computing. Cerebras Inference delivers breakthrough inference speeds, empowering customers to create cutting-edge AI applications. Leading corporations, research institutions, and governments use Cerebras solutions to develop pathbreaking proprietary models and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on-premises. For further information, visit cerebras.ai.


Source: Monument

This article was originally published by IssueWire.
