Tutorials
Training and fine-tuning a Large Language Model (LLM)
Extend context length using Position Interpolation
Train an LLM with a large or small context window
Instruction fine-tuning for LLMs
Train a model with weight sparsity
Optimize Performance with Automatic Microbatching
Train an LLM using Maximal Update Parameterization
    Configure μP for GPT-Style Models
    Configure μP for BERT Pretrain (Beta)
    Configure μP for T5 (Beta)
Train an LLM using Maximal Update Parameterization with Legacy Params
Dynamic loss scaling
Training with number of tokens loss scaling