Workflow for PyTorch on CS¶
When you are targeting the Cerebras system for your neural network jobs, start with the high-level workflow described here.
Attention
Several PyTorch models, together with their run scripts, are provided in the Cerebras Model Zoo.
Note
Cerebras has moved away from Hugging Face model implementations in favor of our own PyTorch Layer API. One of the many benefits of this API is that it is designed to be (near) drop-in compatible with the transformer layers included in PyTorch. However, it is not possible (at least for T5 and Transformer) to maintain the same naming scheme in the migrated model as in the original.
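To illustrate the drop-in compatibility claim, the sketch below uses the standard PyTorch transformer layer that the Cerebras PyTorch Layer API is designed to mirror; the dimensions are arbitrary placeholders:

```python
import torch
import torch.nn as nn

# Standard PyTorch transformer encoder layer. The Cerebras PyTorch Layer API
# is designed to be (near) drop-in compatible with layers like this one.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)

x = torch.randn(2, 16, 64)  # (batch, sequence, features)
out = layer(x)              # output keeps the input shape
```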
1. Port PyTorch to CS¶
See Porting PyTorch Model to CS for detailed documentation on how to port your PyTorch model to work on a Cerebras System.
2. Load the data¶
As mentioned in Porting PyTorch Model to CS, you need to provide get_train_dataloader for training and get_eval_dataloader for evaluation. See cerebras.framework.torch.dataloader() for details.
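As a minimal sketch, get_train_dataloader can wrap a standard torch.utils.data.DataLoader. The dataset, the params dict, and its keys below are placeholder assumptions; a real Model Zoo implementation reads these settings from a YAML config, and the exact signature expected by cerebras.framework.torch may differ:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def get_train_dataloader(params):
    """Return a PyTorch DataLoader for training.

    `params` is assumed here to be a plain dict of input settings
    (hypothetical structure for illustration only).
    """
    # Placeholder data: 100 samples of 32 features with binary labels.
    features = torch.randn(100, 32)
    labels = torch.randint(0, 2, (100,))
    dataset = TensorDataset(features, labels)
    return DataLoader(
        dataset,
        batch_size=params.get("batch_size", 8),
        shuffle=True,
        drop_last=True,  # fixed-shape batches every step
    )

loader = get_train_dataloader({"batch_size": 10})
x, y = next(iter(loader))
```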
3. Compile on CPU¶
Before you run on the Cerebras system, we recommend that you iterate until your model compiles successfully on a CPU node. Make sure that this CPU node has the Cerebras Singularity container client software installed.
As described in Porting PyTorch Model to CS, you can verify that your model will compile by passing the --compile_only flag to the run function's CLI.
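As a sketch of how such a flag is typically wired up (the actual argument names and parsing come from the Model Zoo run scripts and may differ), a --compile_only switch could be declared with argparse:

```python
import argparse

def build_parser():
    # Hypothetical subset of a run-script CLI, for illustration only.
    parser = argparse.ArgumentParser(description="Run a model on CS or CPU")
    parser.add_argument(
        "--compile_only",
        action="store_true",
        help="Compile the model on CPU without executing it",
    )
    return parser

# Passing the flag enables compile-only mode; omitting it leaves it off.
args = build_parser().parse_args(["--compile_only"])
```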
4. Train on the Cerebras system¶
As described in Porting PyTorch Model to CS, you can train or evaluate your model by providing the IP address of the Cerebras system and the mode (train or eval) you want to run in.
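Sketching the same idea (the flag names --cs_ip and --mode are illustrative assumptions, not the exact Model Zoo interface), the system address and run mode might be passed like so:

```python
import argparse

# Hypothetical subset of a run-script CLI, for illustration only.
parser = argparse.ArgumentParser(description="Train or evaluate on a CS system")
parser.add_argument("--cs_ip", help="IP address of the Cerebras system")
parser.add_argument(
    "--mode",
    choices=["train", "eval"],
    default="train",
    help="Whether to train or evaluate the model",
)

args = parser.parse_args(["--cs_ip", "192.168.1.10", "--mode", "train"])
```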