Convert checkpoints and configurations#
Overview#
The deep learning models in the Cerebras Model Zoo are designed to be highly generalizable, allowing you to make architectural modifications from a single configuration file. However, these general implementations make it difficult to directly use model configs and checkpoints from other code repositories (e.g., Hugging Face) with the Cerebras Model Zoo. The Checkpoint and Config Converter tool was built so that you can easily convert model implementations between the Cerebras Model Zoo and other code repositories.
Some use cases include:
Take a pretrained checkpoint from another code repository and convert it into the equivalent Cerebras Model Zoo compatible config & checkpoint so that you can continue training on the CS system.
Train a model within the Cerebras ecosystem using the Cerebras Model Zoo, then convert to another equivalent implementation (e.g., Hugging Face) to run inference.
“Upgrade” an old Cerebras Model Zoo config and checkpoint to a new one (e.g., convert Cerebras Model Zoo rel 1.6 checkpoints to 1.7) to ensure that old checkpoints can continue to be used in new releases if our model implementations evolve.
Note
We only support conversions between Hugging Face (HF) and Cerebras Model Zoo (CS) implementations.
Location#
The tool is located in the Cerebras Model Zoo at modelzoo/common/pytorch/model_utils/convert_checkpoint.py.
Definition#
The tool offers three commands:
| Command | Description |
| --- | --- |
| list | Displays all the available conversions (models and formats). |
| convert-config | Performs config conversion only. If you intend to convert from the Cerebras Model Zoo to another repository at any point, we recommend running config conversion before training the model. This way, you can determine whether your Cerebras Model Zoo configuration is a candidate for conversion. Conversions are not always possible because other repositories are less general than the Cerebras Model Zoo implementations (e.g., many Hugging Face NLP model implementations support only a limited range of positional embeddings). |
| convert | Performs both config and checkpoint conversion. In other words, the tool is supplied the old config and checkpoint and produces a new config and checkpoint. |
Note
Cerebras configuration files contain model parameters as well as configurations for the optimizer, train_input, eval_input, and runconfig. Most other open-source repositories (e.g., Hugging Face) do not include this information in their configs. Since the converter tool cannot infer these values, you will need to add these properties to the output config yourself. As a starting point, you can look at the example configs in the Cerebras Model Zoo.
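As a quick sanity check, the minimal Python sketch below (the file name is illustrative) reports which of these sections a converted config is still missing:
import yaml

REQUIRED_SECTIONS = ["train_input", "eval_input", "optimizer", "runconfig"]

# Output of the convert or convert-config command (illustrative file name).
with open("config_to_cs-1.9.yaml") as f:
    params = yaml.safe_load(f)

missing = [s for s in REQUIRED_SECTIONS if s not in params]
print("Sections to copy from a Model Zoo example config:", missing)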
Usage#
The three commands introduced above are used as follows:
To get a list of all models/conversions that we support, use the following command:
(venv_cerebras_pt) $ python <modelzoo path>/modelzoo/common/pytorch/model_utils/convert_checkpoint.py \
list
Note
It is essential that you read the notes section of the list command output before using the converter! This section explains the exact model classes that are being converted from/to. It also lists any caveats about the conversion process. For example, many NLP models offer -headless variants which are missing a language model head.
To convert a config file only, use the following command:
(venv_cerebras_pt) $ python <modelzoo path>/modelzoo/common/pytorch/model_utils/convert_checkpoint.py \
convert-config \
--model <model name> \
--src-fmt <format of input config> \
--tgt-fmt <format of output config> \
--output-dir <location to save output config> \
<config file path>
To convert a checkpoint and its corresponding config, use the following command:
(venv_cerebras_pt) $ python <modelzoo path>/modelzoo/common/pytorch/model_utils/convert_checkpoint.py \
convert \
--model <model name> \
--src-fmt <format of input checkpoint> \
--tgt-fmt <format of output checkpoint> \
--output-dir <location to save output checkpoint> \
<input checkpoint file path> \
--config <input config file path>
To learn more about the usage and optional parameters of a particular subcommand, pass the -h flag. For example:
(venv_cerebras_pt) $ python <modelzoo path>/modelzoo/common/pytorch/model_utils/convert_checkpoint.py \
convert -h
Models supported#
The following is a list of models supported by the Checkpoint and Config Converter tool:
bert
bert-sequence-classifier
bert-token-classifier
bert-summarization
bert-q&a
codegen
codegen-headless
gpt2
gpt2-headless
gptj
gptj-headless
gpt-neox
gpt-neox-headless
llama
llama-headless
opt
opt-headless
t5
transformer
falcon
falcon-headless
Examples#
Converting Eleuther AI GPT-J 6B (from model card) to Cerebras Model Zoo#
Eleuther’s final GPT-J checkpoint can be accessed on Hugging Face at EleutherAI/gpt-j-6B. Rather than manually entering the values from the model architecture table into a config file and writing a script to convert their checkpoint, we can auto-generate these with a single command.
First, we need to download the config and checkpoint files from the model card locally:
$ mkdir opensource_checkpoints
$ wget -P opensource_checkpoints https://huggingface.co/EleutherAI/gpt-j-6B/raw/main/config.json
$ wget -P opensource_checkpoints https://huggingface.co/EleutherAI/gpt-j-6B/resolve/main/pytorch_model.bin
Note
Use the appropriate https link when downloading files from Hugging Face model card pages. For config files, use the path that contains …/raw/…; for checkpoint files, use the path that contains …/resolve/….
Hugging Face configs contain the architectures property, which specifies the class with which the checkpoint was generated. According to config.json, the HF checkpoint is from the GPTJForCausalLM class. Using this information, we can use the checkpoint converter tool's list command to find the appropriate converter. In this case, we want to use the gptj model, with a source format of hf and a target format of cs-1.9.
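If you prefer to check this programmatically, a short Python snippet (paths follow the example above) prints the class name recorded in the downloaded config:
import json

with open("opensource_checkpoints/config.json") as f:
    hf_config = json.load(f)

# The class(es) the checkpoint was saved from, e.g. ["GPTJForCausalLM"]
print(hf_config.get("architectures"))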
Now to convert the config & checkpoint, run the following command:
(venv_cerebras_pt) $ python <modelzoo path>/modelzoo/common/pytorch/model_utils/convert_checkpoint.py \
convert \
--model gptj \
--src-fmt hf \
--tgt-fmt cs-1.9 \
--output-dir opensource_checkpoints/ \
opensource_checkpoints/pytorch_model.bin \
--config opensource_checkpoints/config.json
This produces two files:
opensource_checkpoints/pytorch_model_to_cs-1.9.mdl
opensource_checkpoints/config_to_cs-1.9.yaml
The output YAML config file contains the auto-generated model parameters from the Eleuther implementation. Before you can train/eval the model on the Cerebras cluster, add the train_input, eval_input, optimizer, and runconfig parameters to the YAML. Examples for these parameters can be found in the configs/ folder for each model within the Model Zoo. In this case, we can copy the missing information from modelzoo/transformers/pytorch/gptj/configs/params_gptj_6B.yaml into opensource_checkpoints/config_to_cs-1.9.yaml. Make sure you modify the dataset paths under train_input and eval_input if they are stored elsewhere.
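One way to do this copy, shown as a hedged sketch below (paths follow this example and should be adjusted to your setup), is to load both YAML files and fill in the missing sections programmatically:
import yaml

# Auto-generated config from the converter and a Model Zoo reference config.
with open("opensource_checkpoints/config_to_cs-1.9.yaml") as f:
    converted = yaml.safe_load(f)
with open("<modelzoo path>/modelzoo/transformers/pytorch/gptj/configs/params_gptj_6B.yaml") as f:
    reference = yaml.safe_load(f)

# Copy only the sections the converter could not infer; keep the converted
# model parameters untouched.
for section in ["train_input", "eval_input", "optimizer", "runconfig"]:
    converted.setdefault(section, reference[section])

with open("opensource_checkpoints/config_to_cs-1.9.yaml", "w") as f:
    yaml.safe_dump(converted, f, sort_keys=False)
Remember to edit the dataset paths under train_input and eval_input afterwards if your data lives elsewhere.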
The following command demonstrates using the converted config and checkpoint for continuous pretraining:
(venv_cerebras_pt) $ python <modelzoo path>/modelzoo/transformers/pytorch/gptj/run.py \
CSX \
--mode train \
--params opensource_checkpoints/config_to_cs-1.9.yaml \
--checkpoint_path opensource_checkpoints/pytorch_model_to_cs-1.9.mdl \
--model_dir gptj6b_continuous_pretraining \
--mount_dirs {paths to modelzoo and to data} \
--python_paths {paths to modelzoo and other python code if used}
Note
First navigate to the directory of the model (GPT-J in this case) before executing run.py. Additional details about the run.py command can be found on the Launch your job page.
Converting a Hugging Face model without a model card to Cerebras Model Zoo#
Not all pretrained checkpoints on Hugging Face have corresponding model card web pages. You can still download these checkpoints and configs to convert them into a Model Zoo-compatible format.
For example, Hugging Face has a model card for BertForMaskedLM, accessible through the name bert-base-uncased. However, it doesn't have a webpage for BertForPreTraining, which is the class we're interested in.
We can manually get the config and checkpoint for this model as follows:
>>> from transformers import BertForPreTraining
>>> model = BertForPreTraining.from_pretrained("bert-base-uncased")
>>> model.save_pretrained("bert_checkpoint")
This saves two files: bert_checkpoint/config.json and bert_checkpoint/pytorch_model.bin.
Now that you have downloaded the required files, you can convert the checkpoint. Use the --model bert flag since the Hugging Face checkpoint is from the BertForPreTraining class. If you want to use a checkpoint from a different variant (such as a finetuning model), see the other bert- model converters.
The final conversion command is:
(venv_cerebras_pt) $ python <modelzoo path>/modelzoo/common/pytorch/model_utils/convert_checkpoint.py \
convert \
--model bert \
--src-fmt hf \
--tgt-fmt cs-1.9 \
bert_checkpoint/pytorch_model.bin \
--config bert_checkpoint/config.json
...
Checkpoint saved to bert_checkpoint/pytorch_model_to_cs-1.9.mdl
Config saved to bert_checkpoint/config_to_cs-1.9.yaml
Converting Cerebras Model Zoo GPT-2 checkpoint to Hugging Face#
Suppose you just finished training GPT-2 on a CS system and want to run the model within the Hugging Face ecosystem. In this example, the configuration file is saved at model_dir/train/params_train.yaml and the checkpoint (corresponding to step 10k) is at model_dir/checkpoint_10000.mdl.
To convert to Hugging Face, run the following command:
(venv_cerebras_pt) $ python <modelzoo path>/modelzoo/common/pytorch/model_utils/convert_checkpoint.py \
convert \
--model gpt2 \
--src-fmt cs-1.9 \
--tgt-fmt hf \
model_dir/checkpoint_10000.mdl \
--config model_dir/train/params_train.yaml
Since the --output-dir flag is omitted, the two output files are saved to the same directories as the original files: model_dir/train/params_train_to_hf.json and model_dir/checkpoint_10000_to_hf.bin.
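As a quick smoke test of the converted files, you can load them with Hugging Face Transformers. The sketch below is illustrative, not part of the converter tool, and assumes your model uses the standard GPT-2 vocabulary (substitute your own tokenizer otherwise):
import torch
from transformers import GPT2Config, GPT2LMHeadModel, GPT2Tokenizer

# Build the model from the converted config, then load the converted weights
# (pass strict=False to load_state_dict if you hit unexpected-key errors).
config = GPT2Config.from_json_file("model_dir/train/params_train_to_hf.json")
model = GPT2LMHeadModel(config)
state_dict = torch.load("model_dir/checkpoint_10000_to_hf.bin", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()

# Assumes the standard GPT-2 tokenizer matches the vocabulary used in training.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
inputs = tokenizer("Hello, my name is", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0]))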
Converting a GPT2 muP checkpoint to Hugging Face#
Hugging Face does not support muP models. However, if you have a Cerebras GPT-2/GPT-3 checkpoint that uses muP, it is possible to convert it to a Hugging Face model to run inference. This process only works for models that can be converted to a Hugging Face GPT-2 model; in particular, it is not compatible with models that use ALiBi or SwiGLU.
Proceed with the following steps to convert:
Use the transformers/pytorch/gpt2/scripts/fold_mup.py script to fold the muP scaling constants into the weights of the model. This script takes a path to a muP checkpoint and its associated config file and outputs a folded checkpoint and a matching params file. For example,
# Create /path/to/sP/output/checkpoint.mdl and /path/to/sP/output/params.yaml
$ python fold_mup.py --src /path/to/muP/checkpoint.mdl --params /path/to/muP/params.yaml --dest /path/to/sP/output/checkpoint.mdl
Once you have folded the muP constants into the weights of the model, use the existing monolith checkpoint conversion scripts to convert. For example,
$ python convert_checkpoint.py convert \
/path/to/sP/output/checkpoint.mdl \
--config /path/to/sP/output/params.yaml \
--src-fmt cs-1.9 \
--tgt-fmt hf \
--output-dir /path/to/hf/output/dir
Upgrading Checkpoints and Configs to the current release#
As our Model Zoo implementations evolve over time, the changes may sometimes break out-of-the-box compatibility when moving to a new release. To ensure that you can continue using your old checkpoints, we offer converters that allow you to “upgrade” configs and checkpoints when necessary. The section below covers conversions that are required when moving to a particular release. If a converter doesn’t exist, no explicit conversion is necessary.
Release 1.9#
All configs & checkpoints from 1.8 can continue to be used in 1.9 without any conversion.
Release 1.8#
T5 / Vanilla Transformer
As described in the release notes, the behavior of the use_pre_encoder_decoder_layer_norm flag has been flipped. In order to continue using rel 1.7 checkpoints in rel 1.8, you'll need to update the config to reflect this change. You can do this automatically using the config converter tool as follows:
(venv_cerebras_pt) $ python <modelzoo path>/modelzoo/common/pytorch/model_utils/convert_checkpoint.py \
convert-config \
--model <model type> \
--src-fmt cs-1.7 \
--tgt-fmt cs-1.8 \
<config file path>
In the command above, --model should be either t5 or transformer, depending on which model you're using. The config file path should point to the train/params_train.yaml file within your model directory.
BERT
As described in the release notes, we expanded the BERT model configurations to expose two additional parameters: pooler_nonlinearity and mlm_nonlinearity. Due to a change in the default value of the mlm_nonlinearity parameter, you will need to update the config when using a rel 1.7 checkpoint in rel 1.8. You can do this automatically using the config converter tool as follows:
(venv_cerebras_pt) $ python <modelzoo path>/modelzoo/common/pytorch/model_utils/convert_checkpoint.py \
convert-config \
--model bert \
--src-fmt cs-1.7 \
--tgt-fmt cs-1.8 \
<config file path>
The config file path should point to the train/params_train.yaml file within your model directory.
Frequently Asked Questions#
| Question | Answer |
| --- | --- |
| Which models, formats, classes, etc. are supported? | See the list command, which displays all available conversions. |
| Which frameworks are supported? | PyTorch only. |
| Does the optimizer state get converted? | No. Hugging Face checkpoints contain model state information only; unlike CS checkpoints, they do not contain optimizer state information. |
| Sometimes, when I run the checkpoint converter tool, it runs for a while before saying … | The program hit the memory limit. PyTorch pickling stores the whole checkpoint in a single file, forcing everything to be read into memory at once. Ensure that the system you're running on has at least as much RAM as the size of the checkpoint file. |
| Conversion failed with a … | Conversions are not always possible because other repositories are less general than our Model Zoo implementations (e.g., many Hugging Face NLP model implementations support limited types of positional embeddings while the Model Zoo supports an expanded range). For this reason, we recommend running config conversion before training the model if you intend to convert a Model Zoo model to another repository at any time. This lets you determine whether the configuration you are using within the Model Zoo can be converted before you train the model. |
| Sometimes during config conversion, I see the following: … | No. Not all keys in one config format need to be converted to another. This warning message simply prints the keys that will be ignored. |
| Model conversion failed with the following error: … | The checkpoint contains keys that weren't expected and therefore couldn't be converted. The converters are heavily tested, so this error message highlights an issue with the input checkpoint or the command being run, not the converter itself. |
| I am unable to use a converted checkpoint because I get the following errors: … | There is a discrepancy between the format of the converted checkpoint and the expected format that you're loading the model into. This is caused by a misspecified … |
| I have a sharded checkpoint. How do I use the checkpoint converter tool? | Starting with release 1.9, the checkpoint & config converter tool supports sharded Hugging Face checkpoints. To convert from a sharded HF checkpoint, download all shards (PyTorch …); see the sketch below for one way to download every shard. |
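For reference, one way to fetch every shard of a sharded Hugging Face checkpoint is with the huggingface_hub package. This is an illustrative sketch, not part of the converter tool; replace the placeholder repo id with your own:
from huggingface_hub import snapshot_download

# Downloads config.json, the shard index, and all *.bin shard files.
local_dir = snapshot_download(
    repo_id="<HF organization>/<model name>",  # placeholder; substitute your repo
    allow_patterns=["*.json", "*.bin"],
)
print("Checkpoint files downloaded to:", local_dir)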