ignite.utils

Module with helper methods.

- convert_tensor: Move tensors to relevant device.
- apply_to_tensor: Apply a function on a tensor or mapping, or sequence of tensors.
- apply_to_type: Apply a function on an object of input_type or mapping, or sequence of objects of input_type.
- to_onehot: Convert a tensor of indices of any shape (N, …) to a tensor of one-hot indicators of shape (N, num_classes, …) and of type uint8.
- setup_logger: Sets up a logger: name, level, format, etc.
- manual_seed: Set up the random state from a seed for torch, random and optionally numpy (if it can be imported).
ignite.utils.apply_to_tensor(x, func)

Apply a function on a tensor or mapping, or sequence of tensors.

- Parameters
  - x (Union[torch.Tensor, collections.abc.Sequence, collections.abc.Mapping, str, bytes]) – input tensor or mapping, or sequence of tensors.
  - func (Callable) – the function to apply on x.
- Return type
  Union[torch.Tensor, collections.abc.Sequence, collections.abc.Mapping, str, bytes]
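A minimal sketch of one way apply_to_tensor can be used; the batch dictionary below is purely illustrative:

    import torch
    from ignite.utils import apply_to_tensor

    # apply_to_tensor walks the mapping and applies the function to every
    # tensor it finds, returning a structure of the same shape.
    batch = {"image": torch.rand(2, 3), "mask": torch.zeros(2, 3)}
    detached = apply_to_tensor(batch, torch.Tensor.detach)
    halved = apply_to_tensor(batch, lambda t: t * 0.5)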
ignite.utils.apply_to_type(x, input_type, func)

Apply a function on an object of input_type or mapping, or sequence of objects of input_type.

- Parameters
  - x (Union[Any, collections.abc.Sequence, collections.abc.Mapping, str, bytes]) – object or mapping or sequence.
  - input_type (Union[Type, Tuple[Type[Any], Any]]) – data type of x.
  - func (Callable) – the function to apply on x.
- Return type
  Union[Any, collections.abc.Sequence, collections.abc.Mapping, str, bytes]
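An illustrative sketch (assuming numpy is available): convert every numpy array found in a nested container into a torch tensor while keeping the container structure:

    import numpy as np
    import torch
    from ignite.utils import apply_to_type

    # Every object matching input_type (np.ndarray here) is passed to func;
    # mappings and sequences are traversed recursively.
    data = {"inputs": [np.ones((2, 2)), np.zeros((2, 2))], "label": np.array([1])}
    as_tensors = apply_to_type(data, np.ndarray, torch.from_numpy)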
ignite.utils.convert_tensor(x, device=None, non_blocking=False)

Move tensors to relevant device.

- Parameters
  - x (Union[torch.Tensor, collections.abc.Sequence, collections.abc.Mapping, str, bytes]) – input tensor or mapping, or sequence of tensors.
  - device (Optional[Union[str, torch.device]]) – device type to move x to.
  - non_blocking (bool) – convert a CPU Tensor with pinned memory to a CUDA Tensor asynchronously with respect to the host if possible.
- Return type
  Union[torch.Tensor, collections.abc.Sequence, collections.abc.Mapping, str, bytes]
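A minimal sketch of moving a batch to the selected device; the batch contents are illustrative:

    import torch
    from ignite.utils import convert_tensor

    # Tensors inside the mapping are moved to the device; non_blocking=True
    # is only effective for CPU tensors allocated in pinned memory.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    batch = {"x": torch.rand(4, 3), "y": torch.tensor([0, 1, 0, 1])}
    batch = convert_tensor(batch, device=device, non_blocking=True)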
ignite.utils.manual_seed(seed)

Set up the random state from a seed for torch, random and optionally numpy (if it can be imported).

Changed in version 0.4.3: Added torch.cuda.manual_seed_all(seed).
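A small sketch showing that re-seeding reproduces the same torch samples:

    import torch
    from ignite.utils import manual_seed

    # Seeding with the same value before each draw yields identical samples.
    manual_seed(42)
    a = torch.rand(3)
    manual_seed(42)
    b = torch.rand(3)
    assert torch.equal(a, b)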
ignite.utils.setup_logger(name=None, level=20, stream=None, format='%(asctime)s %(name)s %(levelname)s: %(message)s', filepath=None, distributed_rank=None)

Sets up a logger: name, level, format, etc.

- Parameters
  - name (Optional[str]) – new name for the logger. If None, the standard logger is used.
  - level (int) – logging level, e.g. CRITICAL, ERROR, WARNING, INFO, DEBUG.
  - stream (Optional[TextIO]) – logging stream. If None, the standard stream is used (sys.stderr).
  - format (str) – logging format. By default, %(asctime)s %(name)s %(levelname)s: %(message)s.
  - filepath (Optional[str]) – optional logging file path. If not None, logs are written to the file.
  - distributed_rank (Optional[int]) – optional rank in distributed configuration to avoid logger setup for workers. If None, distributed_rank is initialized to the rank of the process.
- Returns
  logging.Logger
- Return type
  logging.Logger

For example, to improve the readability of logs when training with a trainer and evaluator:

    from ignite.utils import setup_logger

    trainer = ...
    evaluator = ...

    trainer.logger = setup_logger("trainer")
    evaluator.logger = setup_logger("evaluator")

    trainer.run(data, max_epochs=10)

    # Logs will look like
    # 2020-01-21 12:46:07,356 trainer INFO: Engine run starting with max_epochs=5.
    # 2020-01-21 12:46:07,358 trainer INFO: Epoch[1] Complete. Time taken: 00:5:23
    # 2020-01-21 12:46:07,358 evaluator INFO: Engine run starting with max_epochs=1.
    # 2020-01-21 12:46:07,358 evaluator INFO: Epoch[1] Complete. Time taken: 00:01:02
    # ...

Changed in version 0.4.3: Added stream parameter.
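A hedged sketch of also writing logs to a file; the file name "train.log" is illustrative:

    import logging
    from ignite.utils import setup_logger

    # With filepath given, records go to the file in addition to the stream.
    logger = setup_logger("trainer", level=logging.DEBUG, filepath="train.log")
    logger.debug("Configuration loaded")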
ignite.utils.to_onehot(indices, num_classes)

Convert a tensor of indices of any shape (N, …) to a tensor of one-hot indicators of shape (N, num_classes, …) and of type uint8. The output's device is equal to the input's device.

- Parameters
  - indices (torch.Tensor) – input tensor to convert.
  - num_classes (int) – number of classes for the one-hot tensor.
- Return type
  torch.Tensor

Changed in version 0.4.3: This function is now torchscriptable.
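A minimal sketch: a (N,) tensor of class indices becomes a (N, num_classes) uint8 tensor:

    import torch
    from ignite.utils import to_onehot

    indices = torch.tensor([0, 2, 1])
    onehot = to_onehot(indices, num_classes=3)
    # tensor([[1, 0, 0],
    #         [0, 0, 1],
    #         [0, 1, 0]], dtype=torch.uint8)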