# Evaluate your model during training
## Eval All Mode
This feature runs evaluation on multiple model checkpoints within a provided `model_dir`. It is intended for models that have already been trained; to evaluate a model throughout a training run, `train_and_eval` (Train and Eval Mode) may be more suitable.
### How to use this feature
Provide `eval_all` as the argument for the `--mode` flag and specify the directory containing model checkpoints with the `--model_dir` flag.
Example:

```bash
python run.py --mode=eval_all --model_dir=<path> ...<rest of the args>
```
## Train and Eval Mode
This feature allows users to evaluate models throughout long training runs. This helps surface issues with a model early, rather than after the training run finishes.
### How to use this feature
Within the `runconfig` portion of the config yaml:

- Either `num_epochs` or `num_steps` must be defined (but not both).
- If `num_epochs` is defined, `train_and_eval` trains for `num_epochs` epochs, with each epoch followed by an evaluation.
- If `num_steps` is defined, the user must also define `eval_frequency` in the config. `num_steps` governs the total number of steps the model will train for, and `eval_frequency` indicates how many steps pass between each evaluation. For example, if `num_steps` is 100 and `eval_frequency` is 20, the model trains for 100 steps and is evaluated after 20, 40, 60, 80, and 100 steps (see the sketch after this list).
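As a minimal sketch of the step-based variant described above, the relevant `runconfig` entries might look like the following. The exact placement of these fields and the commented-out epoch-based alternative are assumptions; only `num_epochs`, `num_steps`, `eval_frequency`, and the values 100 and 20 come from this page.

```yaml
runconfig:
  # Option A: epoch-based -- train for num_epochs epochs and
  # evaluate after each epoch (mutually exclusive with num_steps).
  # num_epochs: 5

  # Option B: step-based -- num_steps and eval_frequency must
  # both be defined.
  num_steps: 100       # train for 100 steps in total
  eval_frequency: 20   # evaluate after steps 20, 40, 60, 80, and 100
```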
When running your model, pass `train_and_eval` to the `--mode` flag.
Example:

```bash
python run.py --mode=train_and_eval --model_dir=<path> --params=<config_path> ...<rest of the args>
```