common.pytorch.optim package#
Submodules#
common.pytorch.optim.ASGD module#
Cerebras implementation of ASGD optimizer. Adapted from the torch.optim.ASGD implementation.
- class common.pytorch.optim.ASGD.ASGD[source]#
Bases:
modelzoo.common.pytorch.optim.CSOptimizer.CSOptimizer
ASGD optimizer implemented to conform to execution within the constraints of the Cerebras WSE, including pre-initializing optimizer state.
For more details, see https://dl.acm.org/citation.cfm?id=131098
- __init__(params, lr=0.01, lambd=0.0001, alpha=0.75, t0=1000000.0, weight_decay=0, maximize: bool = False)[source]#
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (Callable, optional) – A closure that reevaluates the model and returns the loss.
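A minimal usage sketch, assuming the class is importable from modelzoo.common.pytorch.optim.ASGD (the import path is inferred from the base-class path above, not confirmed here). The same construct/backward/step pattern applies to the other CSOptimizer subclasses on this page.

```python
import torch
from modelzoo.common.pytorch.optim.ASGD import ASGD  # assumed import path

model = torch.nn.Linear(16, 4)
loss_fn = torch.nn.MSELoss()

# Constructor arguments follow the __init__ signature documented above.
optimizer = ASGD(
    model.parameters(), lr=0.01, lambd=0.0001, alpha=0.75,
    t0=1000000.0, weight_decay=0, maximize=False,
)

x, y = torch.randn(8, 16), torch.randn(8, 4)
loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()  # optimizer state was already allocated by preinitialize()
```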
common.pytorch.optim.Adadelta module#
- class common.pytorch.optim.Adadelta.Adadelta[source]#
Bases:
modelzoo.common.pytorch.optim.CSOptimizer.CSOptimizer
Adadelta optimizer implemented to perform the required pre-initialization of the optimizer state.
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
common.pytorch.optim.Adafactor module#
- class common.pytorch.optim.Adafactor.Adafactor[source]#
Bases:
torch.optim.Optimizer
Adafactor optimizer implemented to conform to execution within the constraints of the Cerebras WSE.
- __init__(params, lr=None, eps=(1e-30, 0.001), clip_threshold=1.0, decay_rate=- 0.8, beta1=None, weight_decay=0.0, scale_parameter=True, relative_step=True, warmup_init=False)[source]#
- add_global_step(global_step)[source]#
Stores a global_step tensor which will be used in computation and shared by all params.
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (Callable, optional) – A closure that reevaluates the model and returns the loss.
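A sketch of how the shared global-step tensor might be registered before training, based only on the add_global_step description above; the dtype and initial value shown are assumptions.

```python
import torch
from modelzoo.common.pytorch.optim.Adafactor import Adafactor  # assumed import path

model = torch.nn.Linear(16, 4)
optimizer = Adafactor(
    model.parameters(), lr=None, scale_parameter=True,
    relative_step=True, warmup_init=False,
)

# One global_step tensor shared by all params (dtype and initial value are assumptions).
optimizer.add_global_step(torch.tensor(0, dtype=torch.int64))
```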
common.pytorch.optim.Adagrad module#
- class common.pytorch.optim.Adagrad.Adagrad[source]#
Bases:
modelzoo.common.pytorch.optim.CSOptimizer.CSOptimizer
Adagrad optimizer implemented to conform to execution within the constraints of the Cerebras WSE.
- Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-2)
lr_decay (float, optional) – learning rate decay (default: 0)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-10)
maximize (bool, optional) – maximize the params based on the objective, instead of minimizing (default: False)
Adaptive Subgradient Methods for Online Learning and Stochastic Optimization: http://jmlr.org/papers/v12/duchi11a.html
- __init__(params, lr=0.01, lr_decay=0, weight_decay=0, initial_accumulator_value=0, eps=1e-06, maximize: bool = False)[source]#
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
common.pytorch.optim.AdamBase module#
- class common.pytorch.optim.AdamBase.Adam[source]#
- class common.pytorch.optim.AdamBase.AdamBase[source]#
Bases:
modelzoo.common.pytorch.optim.CSOptimizer.CSOptimizer
AdamW optimizer implemented to conform to execution within the constraints of the Cerebras WSE, including pre-initializing optimizer state and performing a gradual reduction of bias correction using exponential decay of beta1_power and beta2_power rather than recomputing beta1^step each step.
- __init__(params: Iterable[torch.nn.parameter.Parameter], lr: float = 0.001, betas: Tuple[float, float] = (0.9, 0.999), eps: float = 1e-06, weight_decay: float = 0.0, l2_regularization_rate: float = 0.0, correct_bias: bool = True, amsgrad: bool = False)[source]#
- convert_state_dict_for_checkpoint(state_dict)[source]#
Converts the state_dict for compatibility with AdamW from huggingface_common, which is the optimizer used by PyTorchBaseModel when not run on WSE and is otherwise API compatible.
- Parameters
state_dict (dict) – optimizer state. Should be an object returned from a call to
state_dict
.
Returns the modified state_dict.
- load_state_dict(state_dict)[source]#
Loads the optimizer state.
- Parameters
state_dict (dict) – optimizer state. Should be an object returned from a call to
state_dict
.
This overrides torch.optim.Optimizer to add checkpoint compatibility with the AdamW from huggingface_common, which is otherwise API compatible.
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (Callable, optional) – A closure that reevaluates the model and returns the loss.
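A sketch of moving optimizer state between this optimizer and the API-compatible AdamW from huggingface_common, using the two checkpoint methods documented above. The AdamW constructor arguments and the surrounding file handling are illustrative assumptions.

```python
import torch
from modelzoo.common.pytorch.optim.AdamBase import AdamW  # assumed import path

model = torch.nn.Linear(16, 4)
# Assumed: AdamW accepts the AdamBase constructor arguments documented above.
optimizer = AdamW(model.parameters(), lr=0.001)

# Export state in the layout expected by the huggingface_common AdamW.
ckpt = optimizer.convert_state_dict_for_checkpoint(optimizer.state_dict())
torch.save(ckpt, "optimizer_state.pt")

# load_state_dict performs the compatibility handling internally, so a
# checkpoint produced by either optimizer can be restored here.
optimizer.load_state_dict(torch.load("optimizer_state.pt"))
```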
- class common.pytorch.optim.AdamBase.AdamW[source]#
common.pytorch.optim.Adamax module#
- class common.pytorch.optim.Adamax.Adamax[source]#
Bases:
modelzoo.common.pytorch.optim.CSOptimizer.CSOptimizer
- __init__(params: Iterable[torch.nn.parameter.Parameter], lr: float = 0.001, betas: Tuple[float, float] = (0.9, 0.999), eps: float = 1e-06, weight_decay: float = 0.0, maximize: bool = False)[source]#
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (
Callable
, optional) – A closure that reevaluates the model and returns the loss.
common.pytorch.optim.CSOptimizer module#
Abstract base class for Cerebras Optimizers.
- class common.pytorch.optim.CSOptimizer.CSOptimizer[source]#
Bases:
torch.optim.Optimizer, abc.ABC
Cerebras Base Optimizer class
Cerebras Base Optimizer class handles preinitialization of optimizer states for non-CS runs, making the implementation of the optimizer compatible with both CS and non-CS runs. It also preinitializes global steps tensor and provides a method to retrieve the global steps.
- __init__(params, defaults, enable_global_step=False)[source]#
Cerebras Base Optimizer class handles preinitialization of optimizer states for non-CS runs, making the implementation of the optimizer compatible with both CS and non-CS runs. It also preinitializes global steps tensor and provides a method to retrieve the global steps.
- post_load_state_dict()[source]#
Actions to perform after initializing state and loading the state dict
- abstract preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- abstract state_names_to_sparsify()[source]#
Return the names of per-parameter states that need to be sparsified when applying sparsity to the underlying parameters.
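A minimal sketch of a hypothetical CSOptimizer subclass, showing how the abstract interface above fits together. The optimizer math, the state layout, and the assumption that the base class invokes preinitialize() during construction are all illustrative, not taken from this page.

```python
import torch
from modelzoo.common.pytorch.optim.CSOptimizer import CSOptimizer  # assumed import path


class MomentumSGD(CSOptimizer):
    """Hypothetical SGD-with-momentum optimizer built on CSOptimizer."""

    def __init__(self, params, lr=0.01, momentum=0.9):
        defaults = dict(lr=lr, momentum=momentum)
        # Signature per the CSOptimizer.__init__ documented above.
        super().__init__(params, defaults, enable_global_step=False)

    def preinitialize(self):
        # Allocate all state tensors up front so the model can be compiled
        # before the first call to step(). Assumed to be called by the base
        # class during construction.
        for group in self.param_groups:
            for p in group["params"]:
                self.state[p]["momentum_buffer"] = torch.zeros_like(p)

    def state_names_to_sparsify(self):
        # Per-parameter states to sparsify along with their parameters.
        return ["momentum_buffer"]

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                buf = self.state[p]["momentum_buffer"]
                buf.mul_(group["momentum"]).add_(p.grad)
                p.add_(buf, alpha=-group["lr"])
        return loss
```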
common.pytorch.optim.Lamb module#
Copyright cybertonai and Cerebras, see LICENSE_LambOptimizer
- class common.pytorch.optim.Lamb.Lamb[source]#
Bases:
modelzoo.common.pytorch.optim.CSOptimizer.CSOptimizer
Implements the Lamb algorithm, as proposed in Large Batch Optimization for Deep Learning: Training BERT in 76 minutes.
- Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-3)
betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
adam (bool, optional) – always use trust ratio = 1, which turns this into Adam. Useful for comparison purposes.
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
common.pytorch.optim.Lion module#
- class common.pytorch.optim.Lion.Lion[source]#
Bases:
modelzoo.common.pytorch.optim.CSOptimizer.CSOptimizer
Implements the Lion algorithm, as proposed in Symbolic Discovery of Optimization Algorithms.
- Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-4)
betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.99))
weight_decay (float, optional) – weight decay coefficient (default: 0)
- __init__(params: Iterable[torch.nn.parameter.Parameter], lr: float = 0.0001, betas: Tuple[float, float] = (0.9, 0.99), weight_decay: float = 0.0)[source]#
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
common.pytorch.optim.NAdam module#
Cerebras implementation of NAdam optimizer. Adapted from the torch.optim.NAdam implementation.
- class common.pytorch.optim.NAdam.NAdam[source]#
Bases:
modelzoo.common.pytorch.optim.CSOptimizer
Implements the NAdam algorithm to execute within the constraints of the Cerebras WSE, including pre-initializing optimizer state.
- Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 2e-3)
betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
momentum_decay (float, optional) – momentum decay (default: 4e-3)
foreach (bool, optional) – whether foreach implementation of optimizer is used (default: None)
For further details regarding the algorithm refer to Incorporating Nesterov Momentum into Adam: https://openreview.net/forum?id=OM0jvwB8jIp57ZJjtNEZ
- __init__(params: Iterable[torch.nn.parameter.Parameter], lr: float = 0.002, betas: Tuple[float, float] = (0.9, 0.999), eps: float = 1e-08, weight_decay: float = 0, momentum_decay: float = 0.004)[source]#
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
common.pytorch.optim.RAdam module#
- class common.pytorch.optim.RAdam.RAdam[source]#
Bases:
modelzoo.common.pytorch.optim.CSOptimizer.CSOptimizer
RAdam optimizer implemented to conform to execution within the constraints of the Cerebras WSE.
- Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-3)
betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-6)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
- __init__(params: Iterable[torch.nn.parameter.Parameter], lr: float = 0.001, betas: Tuple[float, float] = (0.9, 0.999), eps: float = 1e-06, weight_decay: float = 0.0)[source]#
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
common.pytorch.optim.RMSprop module#
- class common.pytorch.optim.RMSprop.RMSprop[source]#
Bases:
modelzoo.common.pytorch.optim.CSOptimizer.CSOptimizer
RMSprop optimizer implemented to perform the required pre-initialization of the optimizer state.
- __init__(params, lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False)[source]#
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
common.pytorch.optim.Rprop module#
- class common.pytorch.optim.Rprop.Rprop[source]#
Bases:
modelzoo.common.pytorch.optim.CSOptimizer.CSOptimizer
Rprop optimizer implemented to conform to execution within the constraints of the Cerebras WSE, including pre-initializing optimizer state
- Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-3)
etas (Tuple[float, float], optional) – pair of (etaminus, etaplus) multiplicative factors by which the step size is decreased or increased (default: (0.5, 1.2))
step_sizes (Tuple[float, float], optional) – Tuple of (min, max) step size values. The step size is clamped to be between these values (default: (1e-6, 50.0)).
- __init__(params: Iterable[torch.nn.parameter.Parameter], lr: float = 0.001, etas: Tuple[float, float] = (0.5, 1.2), step_sizes: Tuple[float, float] = (1e-06, 50.0))[source]#
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
common.pytorch.optim.SGD module#
- class common.pytorch.optim.SGD.SGD[source]#
Bases:
modelzoo.common.pytorch.optim.CSOptimizer.CSOptimizer
SGD optimizer implemented to conform to execution within the constraints of the Cerebras WSE, including pre-initializing optimizer state
- __init__(params, lr, momentum=0, dampening=0, weight_decay=0, nesterov=False, maximize=False)[source]#
- preinitialize()[source]#
Allocates tensors for the optimizer state to allow direct compilation of the model before the first step.
- step(closure=None)#
Performs a single optimization step.
- Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
common.pytorch.optim.lr_scheduler module#
- class common.pytorch.optim.lr_scheduler.ChainedScheduler[source]#
Bases:
torch.optim.lr_scheduler.ChainedScheduler
Chains a list of learning rate schedulers: it takes a list of chainable learning rate schedulers and performs their consecutive step() calls with a single call.
- class common.pytorch.optim.lr_scheduler.ConstantLR[source]#
Bases:
common.pytorch.optim.lr_scheduler.LRScheduler
Maintains a constant learning rate for each parameter group (no decaying).
- Parameters
optimizer – The optimizer to schedule
val – The learning_rate value to maintain
decay_steps – The number of steps to decay for
- class common.pytorch.optim.lr_scheduler.CosineAnnealingLR[source]#
Bases:
common.pytorch.optim.lr_scheduler.LRScheduler
Set the learning rate of each parameter group using a cosine annealing schedule, where \(\eta_{max}\) is set to the initial lr and \(T_{cur}\) is the number of steps since the last restart in SGDR:
\[\begin{split}\begin{aligned} \eta_t & = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right), & T_{cur} \neq (2k+1)T_{max}; \\ \eta_{t+1} & = \eta_{t} + \frac{1}{2}(\eta_{max} - \eta_{min}) \left(1 - \cos\left(\frac{1}{T_{max}}\pi\right)\right), & T_{cur} = (2k+1)T_{max}. \end{aligned}\end{split}\]
Notice that because the schedule is defined recursively, the learning rate can be simultaneously modified outside this scheduler by other operators. If the learning rate is set solely by this scheduler, the learning rate at each step becomes:
\[\eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right)\]
It has been proposed in SGDR: Stochastic Gradient Descent with Warm Restarts. Note that this only implements the cosine annealing part of SGDR, and not the restarts.
This class is similar to the Pytorch CosineAnnealingLR LRS.
- Parameters
optimizer – The optimizer to schedule
initial_learning_rate – The initial learning rate.
T_max – Maximum number of iterations.
eta_min – Minimum learning rate.
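For reference, the closed-form expression above can be evaluated directly. The standalone helper below (not part of the API) transcribes it; clamping \(T_{cur}\) at T_max is an assumption, since behavior past T_max is not specified on this page.

```python
import math

def cosine_annealing_lr(step, initial_learning_rate, T_max, eta_min=0.0):
    # eta_t = eta_min + 0.5 * (eta_max - eta_min) * (1 + cos(pi * T_cur / T_max))
    t_cur = min(step, T_max)  # assumption: no restarts, clamp at T_max
    return eta_min + 0.5 * (initial_learning_rate - eta_min) * (
        1 + math.cos(math.pi * t_cur / T_max)
    )
```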
- class common.pytorch.optim.lr_scheduler.CosineAnnealingWarmRestarts[source]#
Bases:
common.pytorch.optim.lr_scheduler.LRScheduler
Set the learning rate of each parameter group using a cosine annealing schedule, where \(\eta_{max}\) is set to the initial lr, \(T_{cur}\) is the number of steps since the last restart and \(T_{i}\) is the number of steps between two warm restarts in SGDR:
\[\eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{i}}\pi\right)\right)\]
When \(T_{cur}=T_{i}\), set \(\eta_t = \eta_{min}\). When \(T_{cur}=0\) after restart, set \(\eta_t=\eta_{max}\).
It has been proposed in SGDR: Stochastic Gradient Descent with Warm Restarts.
This class is similar to the Pytorch CosineAnnealingWarmRestarts LRS.
- Parameters
optimizer – The optimizer to schedule
initial_learning_rate – The initial learning rate.
T_0 – Number of iterations for the first restart.
T_mult – A factor by which \(T_{i}\) increases after a restart. Currently T_mult must be set to 1.0.
eta_min – Minimum learning rate.
- class common.pytorch.optim.lr_scheduler.CosineDecayLR[source]#
Bases:
common.pytorch.optim.lr_scheduler.LRScheduler
Applies the cosine decay schedule as described in the Keras CosineDecay class.
- Parameters
optimizer – The optimizer to schedule
initial_learning_rate – The initial learning rate.
end_learning_rate – The final learning rate
decay_steps – Number of steps to perform the decay
- class common.pytorch.optim.lr_scheduler.CyclicLR[source]#
Bases:
common.pytorch.optim.lr_scheduler.LRScheduler
Sets the learning rate of each parameter group according to cyclical learning rate policy (CLR). The policy cycles the learning rate between two boundaries with a constant frequency, as detailed in the paper Cyclical Learning Rates for Training Neural Networks. The distance between the two boundaries can be scaled on a per-iteration or per-cycle basis.
Cyclical learning rate policy changes the learning rate after every batch. step should be called after a batch has been used for training.
This class has three built-in policies, as put forth in the paper:
“triangular”: A basic triangular cycle without amplitude scaling.
“triangular2”: A basic triangular cycle that scales initial amplitude by half each cycle.
“exp_range”: A cycle that scales initial amplitude by \(\text{gamma}^{\text{cycle iterations}}\) at each cycle iteration.
This class is similar to the Pytorch CyclicLR LRS.
- Parameters
optimizer – The optimizer to schedule.
base_lr – Initial learning rate which is the lower boundary in the cycle.
max_lr – Upper learning rate boundaries in the cycle.
step_size_up – Number of training iterations in the increasing half of a cycle.
step_size_down – Number of training iterations in the decreasing half of a cycle.
mode – One of {‘triangular’, ‘triangular2’, ‘exp_range’}.
gamma – Constant in ‘exp_range’ scaling function: gamma**(cycle iterations).
scale_mode – {‘cycle’, ‘iterations’} Defines whether scale_fn is evaluated on cycle number or cycle iterations.
- __init__(optimizer: torch.optim.Optimizer, base_lr: float, max_lr: float, step_size_up: int, step_size_down: int, mode: str, gamma: float, scale_mode: str, disable_lr_steps_reset: bool = False)[source]#
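The three built-in policies listed above differ only in how the cycle amplitude is scaled. The standalone functions below sketch those scale factors; they mirror the textual descriptions and the usual CLR convention, and are not code from this module.

```python
def triangular_scale(cycle):
    # No amplitude scaling: every cycle spans the full [base_lr, max_lr] range.
    return 1.0

def triangular2_scale(cycle):
    # Halve the initial amplitude on each successive cycle (evaluated per cycle).
    return 1.0 / (2.0 ** (cycle - 1))

def exp_range_scale(iterations, gamma):
    # Scale the initial amplitude by gamma**(cycle iterations) (evaluated per iteration).
    return gamma ** iterations
```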
- class common.pytorch.optim.lr_scheduler.ExponentialLR[source]#
Bases:
common.pytorch.optim.lr_scheduler.LRScheduler
Decays the learning rate of each parameter group by decay_rate every step.
This class is similar to the Pytorch ExponentialLR LRS.
- Parameters
optimizer – The optimizer to schedule
initial_learning_rate – The initial learning rate.
decay_steps – Number of steps to perform the decay
decay_rate – The decay rate
staircase – If True decay the learning rate at discrete intervals
- class common.pytorch.optim.lr_scheduler.InverseExponentialTimeDecayLR[source]#
Bases:
common.pytorch.optim.lr_scheduler.LRScheduler
Decays the learning rate inverse-exponentially over time, as described in the Keras InverseTimeDecay class.
- Parameters
optimizer – The optimizer to schedule
initial_learning_rate – The initial learning rate.
step_exponent – Exponential value.
decay_steps – Number of steps to perform the decay.
decay_rate – The decay rate.
staircase – If True decay the learning rate at discrete intervals.
- class common.pytorch.optim.lr_scheduler.InverseSquareRootDecayLR[source]#
Bases:
common.pytorch.optim.lr_scheduler.LRScheduler
Decays the learning rate with the inverse square root of the step count, as described in the following equation:
\[\begin{aligned} lr_t & = \frac{\text{scale}}{\sqrt{\max\{t, \text{warmup_steps}\}}}. \end{aligned}\]
- Parameters
optimizer – The optimizer to schedule
initial_learning_rate – The initial learning rate.
scale – Multiplicative factor to scale the result.
warmup_steps – use initial_learning_rate for the first warmup_steps.
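A direct transcription of the equation above into a standalone helper (the function name is illustrative; how initial_learning_rate interacts with scale during the warmup period is not fully specified on this page, so only the displayed equation is transcribed).

```python
import math

def inverse_square_root_decay_lr(step, scale, warmup_steps):
    # lr_t = scale / sqrt(max(t, warmup_steps))
    return scale / math.sqrt(max(step, warmup_steps))
```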
- class common.pytorch.optim.lr_scheduler.LRScheduler[source]#
Bases:
torch.optim.lr_scheduler.LambdaLR, abc.ABC
Cerebras specific learning rate scheduler base class.
The learning rate schedulers implemented in this file are specifically designed to be run on a Cerebras system. This means that there are certain caveats to these custom schedulers that differ from a typical LR scheduler found in core PyTorch.
The learning rate schedulers here are intended to be stepped at every iteration: lr_scheduler.step() should be called after every optimizer.step(), so the schedulers operate on a step-by-step basis. Some variables, such as last_epoch, might suggest otherwise; they are named that way only to match core PyTorch and do not mean the schedulers operate epoch-by-epoch.
Also, note that the above means that our LR schedulers are incompatible with the LR schedulers found in core PyTorch. The state cannot simply be transferred between the two. So, one of the LR schedulers defined here must be used in order to have LR scheduling on the Cerebras system.
- __init__(optimizer, decay_steps: Optional[int] = None, disable_lr_steps_reset: bool = False)[source]#
- global_start_step = 0#
- initial_epoch = 0#
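A sketch of the per-iteration stepping convention described above, pairing an optimizer and scheduler from this page. Import paths and keyword-argument names follow the listings here but are otherwise assumptions.

```python
import torch
from modelzoo.common.pytorch.optim.SGD import SGD                     # assumed import path
from modelzoo.common.pytorch.optim.lr_scheduler import ExponentialLR  # assumed import path

model = torch.nn.Linear(16, 4)
loss_fn = torch.nn.MSELoss()
optimizer = SGD(model.parameters(), lr=0.1, momentum=0.9)
lr_scheduler = ExponentialLR(
    optimizer, initial_learning_rate=0.1,
    decay_steps=1000, decay_rate=0.9, staircase=False,
)

for _ in range(10):
    x, y = torch.randn(8, 16), torch.randn(8, 4)
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    lr_scheduler.step()  # stepped after every optimizer.step(), not per epoch
```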
- class common.pytorch.optim.lr_scheduler.LambdaLR[source]#
Bases:
common.pytorch.optim.lr_scheduler.LRScheduler
Sets the learning rate of each parameter group to the initial lr times a given function (which is specified by overriding set_lr_lambda).
- Parameters
optimizer – The optimizer to schedule
initial_learning_rate – The initial learning rate.
- class common.pytorch.optim.lr_scheduler.MultiStepLR[source]#
Bases:
common.pytorch.optim.lr_scheduler.LRScheduler
Decays the learning rate of each parameter group by gamma once the number of steps reaches one of the milestones. Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler.
This class is similar to the Pytorch MultiStepLR LRS.
- Parameters
optimizer – The optimizer to schedule
initial_learning_rate – The initial learning rate.
gamma – Multiplicative factor of learning rate decay.
milestones – List of step indices. Must be increasing.
- class common.pytorch.optim.lr_scheduler.MultiplicativeLR[source]#
Bases:
common.pytorch.optim.lr_scheduler.LRScheduler
Multiply the learning rate of each parameter group by the supplied coefficient.
- Parameters
optimizer – The optimizer to schedule
initial_learning_rate – The initial learning rate.
coefficient – Multiplicative factor of learning rate.
- class common.pytorch.optim.lr_scheduler.OneCycleLR[source]#
Bases:
common.pytorch.optim.lr_scheduler.LRScheduler
Sets the learning rate of each parameter group according to the 1cycle learning rate policy. The 1cycle policy anneals the learning rate from an initial learning rate to some maximum learning rate and then from that maximum learning rate to some minimum learning rate much lower than the initial learning rate. This policy was initially described in the paper Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates.
This scheduler is not chainable.
This class is similar to the Pytorch OneCycleLR LRS.
- Parameters
optimizer – The optimizer to schedule
initial_learning_rate – Initial learning rate. Compared with PyTorch, this is equivalent to max_lr / div_factor.
max_lr – Upper learning rate boundaries in the cycle.
total_steps – The total number of steps in the cycle.
pct_start – The percentage of the cycle (in number of steps) spent increasing the learning rate.
final_div_factor – Determines the minimum learning rate via min_lr = initial_lr/final_div_factor.
three_phase – If True, use a third phase of the schedule to annihilate the learning rate
anneal_strategy – Specifies the annealing strategy: “cos” for cosine annealing, “linear” for linear annealing.
- __init__(optimizer: torch.optim.Optimizer, initial_learning_rate: float, max_lr: float, total_steps: int, pct_start: float, final_div_factor: float, three_phase: bool, anneal_strategy: str, disable_lr_steps_reset: bool = False)[source]#
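A construction sketch showing how initial_learning_rate and final_div_factor relate to the PyTorch parameters mentioned above. The import path and the wrapped optimizer are assumptions.

```python
import torch
from modelzoo.common.pytorch.optim.lr_scheduler import OneCycleLR  # assumed import path

optimizer = torch.optim.SGD(torch.nn.Linear(4, 4).parameters(), lr=4e-4)

scheduler = OneCycleLR(
    optimizer,
    initial_learning_rate=4e-4,  # equivalent to max_lr / div_factor with div_factor = 25
    max_lr=1e-2,
    total_steps=10_000,
    pct_start=0.3,               # 30% of the steps spent increasing the learning rate
    final_div_factor=1e4,        # min_lr = initial_learning_rate / final_div_factor
    three_phase=False,
    anneal_strategy="cos",
)
```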
- class common.pytorch.optim.lr_scheduler.PiecewiseConstantLR[source]#
Bases:
common.pytorch.optim.lr_scheduler.SequentialLR
Adjusts the learning rate to a predefined constant at each milestone and holds this value until the next milestone. Notice that such adjustment can happen simultaneously with other changes to the learning rate from outside this scheduler.
- Parameters
optimizer – The optimizer to schedule
learning_rates – List of learning rates to maintain before/during each milestone.
milestones – List of step indices. Must be increasing.
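A construction sketch, under the assumption that learning_rates has one more entry than milestones so that the final rate holds after the last milestone; the import path is also assumed.

```python
import torch
from modelzoo.common.pytorch.optim.lr_scheduler import PiecewiseConstantLR  # assumed import path

optimizer = torch.optim.SGD(torch.nn.Linear(4, 4).parameters(), lr=0.1)

# Assumed convention: 0.1 until step 1000, 0.05 until step 5000, 0.01 afterwards.
scheduler = PiecewiseConstantLR(
    optimizer,
    learning_rates=[0.1, 0.05, 0.01],
    milestones=[1000, 5000],
)
```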
- class common.pytorch.optim.lr_scheduler.PolynomialLR[source]#
Bases:
common.pytorch.optim.lr_scheduler.LRScheduler
Decays the learning rate of each parameter group using a polynomial function in the given decay_steps.
This class is similar to the Pytorch PolynomialLR LRS.
- Parameters
optimizer – The optimizer to schedule
initial_learning_rate – The initial learning rate.
end_learning_rate – The final learning rate
decay_steps – Number of steps to perform the decay
power – Exponent applied to the ratio of step completion (1.0 for linear decay). Default: 1.0 (only linear decay is supported at the moment).
cycle – Whether to cycle
- class common.pytorch.optim.lr_scheduler.ScalePerParamLR[source]#
Bases:
common.pytorch.optim.lr_scheduler.LRScheduler
Wraps a scheduler (a torch.optim.lr_scheduler.LambdaLR or torch.optim.lr_scheduler.SequentialLR object) and scales the learning rate of each param_group based on its corresponding adjust_learning_rate factor.
- Parameters
optimizer – The optimizer to schedule
scheduler – The scheduler that provides the updated lr via its _lr_function method
decay_steps – Number of steps to perform the decay
- class common.pytorch.optim.lr_scheduler.SequentialLR[source]#
Bases:
torch.optim.lr_scheduler.SequentialLR
Receives a list of schedulers that are expected to be called sequentially during the optimization process, together with milestone points that specify which scheduler is supposed to be active at a given step.
This class is a wrapper around the Pytorch SequentialLR LRS.
- Parameters
optimizer – Wrapped optimizer
schedulers (list) – List of chained schedulers.
milestones (list) – List of integers that reflects milestone points.
last_epoch (int) – The index of last epoch. Default: -1.
- class common.pytorch.optim.lr_scheduler.StepLR[source]#
Bases:
common.pytorch.optim.lr_scheduler.LRScheduler
Decays the learning rate of each parameter group by gamma every step_size. Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler.
This class is similar to the Pytorch StepLR LRS.
- Parameters
optimizer – The optimizer to schedule
initial_learning_rate – The initial learning rate.
step_size – Period of learning rate decay.
gamma – Multiplicative factor of learning rate decay.