Glossary

Activations, Batches, Deltas, Gradients, Weights – The data in neural network training. A batch is a set of training examples presented to the network. Activations are data that flow from layer to layer in the network as it propagates a batch in the forward direction. Deltas are derivatives of the loss on the batch with respect to the activations; they flow from layer to layer in the reverse direction during backpropagation. Gradients are the derivatives of the batch loss with respect to the model weights at each network layer, used by the learning method to update the weights after each batch.
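
For readers who think in code, the sketch below labels these quantities in a single PyTorch training step. It is a minimal illustration only; the two-layer model and batch shapes are assumptions, not a Cerebras API.

```python
import torch

# A hypothetical two-layer model; sizes are illustrative only.
model = torch.nn.Sequential(torch.nn.Linear(784, 128), torch.nn.ReLU(),
                            torch.nn.Linear(128, 10))
loss_fn = torch.nn.CrossEntropyLoss()

batch_x = torch.randn(32, 784)           # a batch of 32 training examples
batch_y = torch.randint(0, 10, (32,))

hidden = model[0](batch_x)               # activations flow forward,
activations = model[1](hidden)           # layer to layer,
logits = model[2](activations)           # until the loss is computed
loss = loss_fn(logits, batch_y)

loss.backward()                          # deltas (dloss/dactivations) flow backward,
                                         # producing gradients (dloss/dweights)
gradients = model[2].weight.grad         # used by the optimizer to update the weights
```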

Appliance – A cluster of CS-3 systems and supporting CPU servers that, together with Cerebras software, lets users interact with one or more CS-3 systems and all supporting CPU nodes as a single entity: an appliance for model training.

Cerebras System, CS System – A 16RU rack-mounted computing device that contains one Wafer-Scale Engine (WSE) processor. The system provides power, delivers data to the WSE, and keeps it comfortably cool.

CPU cluster – Each CS system is deployed together with a supporting CPU cluster. A CPU cluster runs Cerebras software and is responsible for interaction with one CS system or with a cluster of CS systems. ML users interact directly with one of the CPU nodes in the CPU cluster.

CS-2 – A second-generation CS system containing one WSE-2, a second-generation Cerebras Wafer-Scale Engine.

CS-3 – A third-generation CS system containing one WSE-3, a third-generation Cerebras Wafer-Scale Engine.

Input Pre-Processing Server – A CPU server in the cluster that provides training data batches to compute nodes.

(Layer) Pipelined Execution – An execution mode in which all model weights are stored in on-chip memory for the duration of a job. It relies on model parallelism, within each layer and across layers via a layer pipeline, to distribute a training job over all of the AI cores of a WSE. It is best for models that fit in WSE on-chip SRAM, and it does not support distributed training across multiple CS-3 systems. Pipelined execution is supported only on Original Cerebras Installations, and on Wafer-Scale Clusters in releases prior to R1.9.

MemoryX – A large-capacity off-wafer memory service used to store model weights, gradients, and optimizer state when using Weight Streaming execution on a Cerebras Wafer-Scale Cluster.

Original Cerebras Installation – An installation designed for a single CS-3 deployment; it supports only models below 1B parameters, using Pipelined execution. It consists of a CS-3 system and a CPU cluster whose nodes play the roles of coordinator and input workers.

Processing Element – The PE is the replicated element on the WSE; the WSE-3 contains about 900,000 of them, interconnected as a two-dimensional mesh. Each PE has a compute engine, a memory, and a router. The router connects to the compute engine and to the routers of the four nearest neighboring PEs in the mesh.
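
To make the mesh topology concrete, here is a small sketch (plain Python, not Cerebras software) that computes a PE's nearest neighbors from its mesh coordinates; interior PEs have four, edge and corner PEs fewer.

```python
def mesh_neighbors(row, col, rows, cols):
    """Return the mesh coordinates of a PE's nearest neighbors.

    Interior PEs have four neighbors; PEs on the edge of the
    two-dimensional mesh have fewer. Purely illustrative.
    """
    candidates = [(row - 1, col), (row + 1, col),   # north, south
                  (row, col - 1), (row, col + 1)]   # west, east
    return [(r, c) for r, c in candidates if 0 <= r < rows and 0 <= c < cols]

print(mesh_neighbors(0, 0, rows=10, cols=10))  # corner PE: [(1, 0), (0, 1)]
```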

SwarmX – A broadcast/reduce fabric that connects the memory service (MemoryX) to each of the CS-3 systems in a Wafer-Scale Cluster. SwarmX coordinates the broadcast of model layer weights, giving each CS-3 a local copy, and it receives and aggregates (by addition) the independent weight gradients that each of the data-parallel systems produces during backpropagation, communicating the aggregated gradients to MemoryX for learning (weight update) at the end of each input data batch.
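
The data flow amounts to a broadcast of weights followed by a sum-reduction of gradients. The sketch below shows that pattern in plain NumPy; it is a stand-in for the concept, not the SwarmX implementation.

```python
import numpy as np

def broadcast(weights, num_systems):
    """Give each data-parallel CS-3 system a local copy of the layer weights."""
    return [weights.copy() for _ in range(num_systems)]

def reduce_gradients(per_system_grads):
    """Aggregate (by addition) the independent gradients from each system."""
    return np.sum(per_system_grads, axis=0)

layer_weights = np.random.randn(128, 128)
local_copies = broadcast(layer_weights, num_systems=4)

# Each system computes a gradient on its own shard of the batch (stubbed here);
# the aggregated result is communicated to MemoryX for the weight update.
per_system_grads = [np.random.randn(*layer_weights.shape) for _ in local_copies]
aggregated = reduce_gradients(per_system_grads)
```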

Wafer-Scale Cluster – An installation designed to support large-scale models (up to and well beyond 1 billion parameters) and large-scale inputs. It can contain one or more CS-3 systems, with the ability to distribute jobs across all or a subset of the CS-3 systems in the cluster. The supporting CPU cluster in this installation consists of MemoryX, SwarmX, management, and input worker nodes. Starting with release 1.9, the Wafer-Scale Cluster supports only Weight Streaming execution.

Weight Streaming Execution – An execution mode in which model weights are stored off-wafer in MemoryX and streamed onto the WSE one layer at a time, while activations remain resident on the wafer. During backpropagation, weight gradients are streamed back to MemoryX, where the weights are updated at the end of each batch. Because on-chip memory never has to hold the whole model, this mode supports models far larger than WSE on-chip SRAM, and together with SwarmX it supports data-parallel training across multiple CS-3 systems in a Wafer-Scale Cluster.
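
In outline, the per-batch data flow pairs the forward and backward passes with weight traffic to and from MemoryX. The NumPy sketch below illustrates the idea under stated assumptions: the tiny three-layer model and the `memoryx` dict are stand-ins for this glossary's concepts, not Cerebras APIs.

```python
import numpy as np

# `memoryx` plays the role of off-wafer weight storage (illustrative only).
layer_names = ["layer0", "layer1", "layer2"]
memoryx = {name: 0.1 * np.random.randn(64, 64) for name in layer_names}

def train_one_batch(batch, lr=0.01):
    # Forward pass: each layer's weights are streamed onto the "wafer" in
    # turn; the activations stay resident.
    inputs, acts = {}, batch
    for name in layer_names:
        inputs[name] = acts
        acts = np.tanh(acts @ memoryx[name])

    # Backward pass: deltas flow in reverse; each layer's weight gradient
    # is streamed back out, and the update happens off-wafer in MemoryX.
    delta = acts  # stand-in for dloss/dactivations at the output
    for name in reversed(layer_names):
        delta = delta * (1.0 - np.tanh(inputs[name] @ memoryx[name]) ** 2)
        grad = inputs[name].T @ delta      # dloss/dweights for this layer
        delta = delta @ memoryx[name].T    # delta for the layer below
        memoryx[name] -= lr * grad         # weight update in MemoryX

train_one_batch(np.random.randn(32, 64))
```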

WSE, Wafer-Scale Engine – Cerebras's revolutionary processor: a single silicon chip roughly 200 mm on a side, the largest square that can be cut from a single wafer. It contains hundreds of thousands of independent processing elements (AI cores), each with fast local memory and a connection to a network-on-wafer that interconnects the cores as a two-dimensional mesh. Its memory and interconnect bandwidths per unit of compute performance are orders of magnitude greater than those of a conventional compute node or a cluster of conventional nodes.

WSE-3 – A third-generation WSE.