CyclicLR
class modelzoo.common.pytorch.optim.lr_scheduler.CyclicLR(optimizer: torch.optim.optimizer.Optimizer, base_lr: float, max_lr: float, step_size_up: int, step_size_down: int, mode: str, gamma: float, scale_mode: str, disable_lr_steps_reset: bool = False)
Sets the learning rate of each parameter group according to the cyclical learning rate policy (CLR). The policy cycles the learning rate between two boundaries with a constant frequency, as detailed in the paper Cyclical Learning Rates for Training Neural Networks. The distance between the two boundaries can be scaled on a per-iteration or per-cycle basis.
The cyclical learning rate policy changes the learning rate after every batch, so step should be called after each batch has been used for training.
This class has three built-in policies, as put forth in the paper:
“triangular”: A basic triangular cycle without amplitude scaling.
“triangular2”: A basic triangular cycle that scales initial amplitude by half each cycle.
“exp_range”: A cycle that scales the initial amplitude by gamma**(cycle iterations) at each cycle iteration, as sketched below.
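The three policies differ only in how they scale the cycle amplitude. As a rough Python sketch of the scaling behavior described above (these helper names are illustrative, paraphrased from the CLR paper, and are not part of this class's API):

```python
# Illustrative scaling functions for the three built-in policies;
# a paraphrase of the CLR paper, not the modelzoo implementation itself.
def triangular_scale(x):
    return 1.0  # constant amplitude, no scaling

def triangular2_scale(cycle):
    return 1.0 / (2.0 ** (cycle - 1))  # amplitude halves every cycle

def exp_range_scale(iterations, gamma):
    return gamma ** iterations  # amplitude decays by gamma each iteration
```

With scale_mode set to ‘cycle’, the scaling function is evaluated on the cycle number; with ‘iterations’, it is evaluated on the cycle iteration count.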
- Parameters:
optimizer – The optimizer to schedule.
base_lr – Initial learning rate which is the lower boundary in the cycle.
max_lr – Upper learning rate boundary in the cycle.
step_size_up – Number of training iterations in the increasing half of a cycle.
step_size_down – Number of training iterations in the decreasing half of a cycle.
mode – One of {‘triangular’, ‘triangular2’, ‘exp_range’}.
gamma – Constant in ‘exp_range’ scaling function: gamma**(cycle iterations).
scale_mode – One of {‘cycle’, ‘iterations’}. Defines whether the scaling function is evaluated on the cycle number or on cycle iterations.
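A minimal usage sketch, assuming the constructor signature above and a standard PyTorch training loop; the model, data, and hyperparameter values here are placeholders, not recommendations:

```python
import torch
from modelzoo.common.pytorch.optim.lr_scheduler import CyclicLR

# Placeholder model and synthetic data standing in for a real pipeline.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
data = torch.randn(64, 10)
target = torch.randint(0, 2, (64,))

scheduler = CyclicLR(
    optimizer,
    base_lr=0.001,        # lower boundary of the cycle
    max_lr=0.01,          # upper boundary of the cycle
    step_size_up=2000,    # iterations in the increasing half
    step_size_down=2000,  # iterations in the decreasing half
    mode="triangular",    # basic triangular cycle, no amplitude scaling
    gamma=1.0,            # only used by "exp_range"
    scale_mode="cycle",
)

for iteration in range(100):
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(data), target)
    loss.backward()
    optimizer.step()
    scheduler.step()  # call after every batch, per the CLR policy
```

Because the policy adjusts the learning rate per batch, scheduler.step() is called inside the batch loop rather than once per epoch.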