FastaiLRFinder

class ignite.contrib.handlers.lr_finder.FastaiLRFinder[source]

Learning rate finder handler for supervised trainers.

While attached, the handler increases the learning rate between two boundaries in a linear or exponential manner. It provides valuable information on how well the network can be trained over a range of learning rates and what an optimal learning rate might be.

Examples:

from ignite.contrib.handlers import FastaiLRFinder

trainer = ...
model = ...
optimizer = ...

lr_finder = FastaiLRFinder()
to_save = {"model": model, "optimizer": optimizer}

with lr_finder.attach(trainer, to_save=to_save) as trainer_with_lr_finder:
    trainer_with_lr_finder.run(dataloader)

# Get lr_finder results
lr_finder.get_results()

# Plot lr_finder results (requires matplotlib)
lr_finder.plot()

# get lr_finder suggestion for lr
lr_finder.lr_suggestion()

Note

When the context manager is exited, all of the LR finder’s handlers are removed.

Note

Please also keep in mind that all other handlers attached to the trainer will be executed during the LR finder’s run.

Note

This class may require the matplotlib package to be installed to plot the learning rate range test:

pip install matplotlib

References

Cyclical Learning Rates for Training Neural Networks: https://arxiv.org/abs/1506.01186

fastai/lr_find: https://github.com/fastai/fastai

Methods

attach – Attaches lr_finder to a given trainer.

get_results – Returns a dictionary with loss and lr logs from the previous run.

lr_suggestion – Returns the learning rate at the minimum numerical gradient.

plot – Plots the learning rate range test.

attach(trainer, to_save, output_transform=<function FastaiLRFinder.<lambda>>, num_iter=None, end_lr=10.0, step_mode='exp', smooth_f=0.05, diverge_th=5.0)[source]

Attaches lr_finder to a given trainer. It also resets the model and optimizer at the end of the run.

Usage:

to_save = {"model": model, "optimizer": optimizer}
with lr_finder.attach(trainer, to_save=to_save) as trainer_with_lr_finder:
    trainer_with_lr_finder.run(dataloader)
Parameters
  • trainer (ignite.engine.engine.Engine) – lr_finder is attached to this trainer. Please keep in mind that all attached handlers will be executed.

  • to_save (Mapping) – dictionary with the optimizer and other objects that need to be restored after running the LR finder. For example, to_save={'optimizer': optimizer, 'model': model}. All objects should implement state_dict and load_state_dict methods.

  • output_transform (Callable) – function that transforms the trainer’s state.output after each iteration. It must return the loss of that iteration.

  • num_iter (Optional[int]) – number of iterations for the lr schedule between the base lr and end_lr. By default, it runs for trainer.state.epoch_length * trainer.state.max_epochs iterations.

  • end_lr (float) – upper bound for the lr search. Default, 10.0.

  • step_mode (str) – "exp" or "linear", i.e. how the lr should be increased from the optimizer’s initial lr to end_lr. Default, "exp".

  • smooth_f (float) – loss smoothing factor in the range [0, 1). Default, 0.05.

  • diverge_th (float) – used for stopping the search when current loss > diverge_th * best_loss. Default, 5.0.

Returns

trainer_with_lr_finder (trainer used for finding the lr)

Return type

Any

Note

lr_finder cannot be attached to more than one trainer at a time.
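For illustration, a minimal sketch of attaching the finder with non-default arguments; the trainer, model, optimizer and dataloader are assumed to be defined as in the example above, and the argument values are chosen only to illustrate the documented parameters:

to_save = {"model": model, "optimizer": optimizer}
with lr_finder.attach(
    trainer,
    to_save=to_save,
    num_iter=100,        # run the range test for exactly 100 iterations
    end_lr=1.0,          # stop increasing the lr at 1.0
    step_mode="linear",  # increase the lr linearly instead of exponentially
    smooth_f=0.1,        # apply heavier loss smoothing
    diverge_th=3.0,      # stop earlier if the loss diverges
) as trainer_with_lr_finder:
    trainer_with_lr_finder.run(dataloader)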

get_results()[source]

Returns: dictionary with loss and lr logs from the previous run

Return type

Dict[str, List[Any]]
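For illustration, a minimal sketch of inspecting the returned logs, assuming the dictionary stores them under the "lr" and "loss" keys:

results = lr_finder.get_results()
# "lr" and "loss" are assumed key names; check the returned dictionary's keys
for lr, loss in zip(results["lr"], results["loss"]):
    print(f"lr={lr:.2e}  loss={loss:.4f}")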

lr_suggestion()[source]

Returns: learning rate at the minimum numerical gradient

Return type

Any
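For illustration, one possible way to apply the suggested learning rate to the optimizer before the actual training run (optimizer is the same PyTorch optimizer that was passed in to_save):

suggested_lr = lr_finder.lr_suggestion()
# Set the suggested lr on every parameter group of the optimizer
for param_group in optimizer.param_groups:
    param_group["lr"] = suggested_lr
trainer.run(dataloader, max_epochs=10)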

plot(skip_start=10, skip_end=5, log_lr=True)[source]

Plots the learning rate range test.

This method requires the matplotlib package to be installed:

pip install matplotlib
Parameters
  • skip_start (int) – number of batches to trim from the start. Default: 10.

  • skip_end (int) – number of batches to trim from the end. Default: 5.

  • log_lr (bool) – True to plot the learning rate in a logarithmic scale; otherwise, plotted in a linear scale. Default: True.

Return type

None
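For illustration, a minimal sketch of saving the plot to a file; this assumes plot() draws onto the current matplotlib figure so that it can be saved with pyplot afterwards (the file name is purely illustrative):

import matplotlib.pyplot as plt

lr_finder.plot(skip_start=10, skip_end=5, log_lr=True)
plt.savefig("lr_finder_plot.png")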