Cerebras Alliance Offers CS-2 AI Supercomputer As Cloud Service

High-performance AI chip maker Cerebras, creator of the Wafer-Scale Engine, the world’s largest computer chip, today announced Cerebras Cloud @ Cirrascale, which provides its AI accelerator as a cloud service. The alliance combines Cerebras’ CS-2 HPC/AI system, featuring 850,000 AI-optimized compute cores, with Cirrascale’s deep learning cloud services infrastructure.

“With Cerebras Cloud @ Cirrascale, (the CS-2) is available at your fingertips,” said Gil Haberman, senior director of product marketing at Cerebras. “Cerebras Cloud is available as weekly or monthly flat-rate allocations, and as you grow there are discounts offered for predictable longer-term usage. In our experience, as users observe the meteoric performance of the CS-2, ideas for new models and experiments emerge – such as training from scratch on domain-specific datasets, using more efficient sparse models or experimenting with smaller batch sizes – resulting in more efficient production models and a faster pace of innovation.”

Cerebras was one of the early entrants into the growing field of AI chips – processors specifically designed for training machine learning models, rather than general-purpose chips pressed into training workloads. Contrary to the usual trend in the tech industry toward miniaturization, Cerebras’ unusually large, dinner-plate-sized chips have gained wide attention, as shown in a profile of Cerebras published in The New Yorker last month.

“Almost every day, we engage with scientists and machine learning (ML) engineers who seek to push the boundaries of deep learning, but find themselves limited by the long training times of existing offerings,” said Haberman. “In contrast, our solution was designed from the ground up for AI. It offers hundreds or thousands of times more performance than the alternatives, allowing data scientists and ML practitioners to train and iterate over large, advanced models in minutes or hours rather than days or weeks.”

Haberman said that the Cerebras Cloud offering with Cirrascale extends the reach of the business, “an important step in the real democratization of high performance AI computing.”

Cerebras Wafer-Scale Engine

“The CS-2 systems that power Cerebras Cloud deliver cluster-scale performance with the programming ease of a single node,” he said. “Whether the model is large or small, our compiler optimizes the runtime to get the most out of the system. As a result, cluster orchestration, synchronization, and model tuning are eliminated, allowing you to focus on innovation rather than the overhead of cluster management.”

The software platform integrates with popular machine learning frameworks, such as TensorFlow and PyTorch, “so you can use familiar tools to immediately start running models on the CS-2. The Cerebras Graph Compiler automatically translates your neural network from your framework representation into a CS-2 executable, optimizing compute, memory and communication to maximize utilization and performance,” said Haberman.
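To illustrate the workflow Haberman describes, the sketch below defines an ordinary PyTorch model using only standard framework constructs – per Cerebras, it is this framework-level representation that the Cerebras Graph Compiler translates into a CS-2 executable. The model architecture, dimensions, and names here are purely illustrative, and no CS-2-specific API is shown.

```python
# Illustrative only: a standard PyTorch model built with familiar tools.
# Per Cerebras, the Graph Compiler consumes this framework representation
# and produces a CS-2 executable; nothing here is Cerebras-specific.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """A minimal feed-forward classifier (hypothetical example model)."""

    def __init__(self, in_dim: int = 784, hidden: int = 128, classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Ordinary eager-mode usage; targeting an accelerator would not change
# how the model itself is written.
model = TinyClassifier()
out = model(torch.randn(4, 784))
print(out.shape)  # torch.Size([4, 10])
```

The point of the example is that nothing about the model definition changes for the CS-2: the compiler, not the practitioner, handles the mapping to hardware.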

He added that for customers whose data is stored in other cloud services, Cirrascale can integrate the Cerebras Cloud with other cloud-based workflows to create a multi-cloud solution.
