auto_optim

ignite.distributed.auto.auto_optim(optimizer)[source]

Helper method to adapt an optimizer for non-distributed and distributed configurations (supporting all available backends from available_backends()).

Internally, this method is a no-op for non-distributed and native torch distributed configurations.
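
For example, in a script launched without any distributed backend, the optimizer comes back unchanged. A minimal, self-contained sketch (the linear model here is a stand-in):

import torch
import ignite.distributed as idist

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# No distributed backend is active, so auto_optim is a no-op and
# returns the same optimizer object
assert idist.auto_optim(optimizer) is optimizer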

For XLA distributed configuration, we create a new class that inherits from the provided optimizer's class. The goal is to override the step() method with a specific xm.optimizer_step() implementation.
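
The effect corresponds roughly to the sketch below. This is illustrative only, not Ignite's internals: Ignite dynamically subclasses the provided optimizer's class, whereas this hypothetical wrapper (made-up names, assuming torch_xla is installed) shows the same idea via delegation:

import torch
import torch_xla.core.xla_model as xm

class _XLAOptimizerSketch:
    # Hypothetical wrapper, not Ignite's actual class. step() routes
    # through xm.optimizer_step, which all-reduces gradients across
    # TPU replicas before the wrapped optimizer applies its update.
    def __init__(self, optimizer: torch.optim.Optimizer):
        self.wrapped_optimizer = optimizer

    def step(self, closure=None):
        xm.optimizer_step(self.wrapped_optimizer, barrier=True)

    def __getattr__(self, name):
        # Delegate everything else (zero_grad, state_dict, ...) to
        # the wrapped optimizer
        return getattr(self.wrapped_optimizer, name)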

For Horovod distributed configuration, the optimizer is wrapped with Horovod's DistributedOptimizer and its state is broadcast from rank 0 to all other processes.
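
In plain Horovod calls, this corresponds roughly to the following (a sketch of the equivalent manual setup, not Ignite's exact internals; assumes Horovod is built with PyTorch support):

import torch
import horovod.torch as hvd

hvd.init()
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Wrap: gradients are averaged across workers during step()
optimizer = hvd.DistributedOptimizer(optimizer)
# Synchronize the optimizer state from rank 0 to all other processes
hvd.broadcast_optimizer_state(optimizer, root_rank=0)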

Examples:

import ignite.distributed as idist

optimizer = idist.auto_optim(optimizer)
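
In a complete script this call typically follows ignite.distributed.auto.auto_model, so that both the model and the optimizer are adapted to whichever backend is active:

model = idist.auto_model(model)
optimizer = idist.auto_optim(optimizer)
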
Parameters

optimizer (torch.optim.optimizer.Optimizer) – input torch optimizer

Returns

Optimizer

Return type

torch.optim.optimizer.Optimizer

Changed in version 0.4.2: Added Horovod distributed optimizer.