vision.pytorch.input package#

Submodules#

vision.pytorch.input.transforms module#

vision.pytorch.input.transforms.create_transform(transform_spec)[source]#

Create the specified transform. For each transform, the parameter names and default values follow those in torchvision 0.12 (https://pytorch.org/vision/0.12/transforms.html).

Parameters
  • name (str) – name of the transform.

  • args (dict, optional) – dictionary of parameters used to initialize the transform. Default: None.
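
For illustration, a call might look like the sketch below. The spec keys name and args are inferred from the parameter list above, and the transform name and arguments shown are assumptions to be checked against the source.

```python
from vision.pytorch.input.transforms import create_transform

# Hypothetical spec: the "name"/"args" keys mirror the parameter list above,
# and the RandomResizedCrop arguments follow torchvision 0.12. The exact
# naming convention for transforms is an assumption.
transform_spec = {
    "name": "random_resized_crop",
    "args": {"size": 224, "scale": (0.08, 1.0)},
}
transform = create_transform(transform_spec)
```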

vision.pytorch.input.utils module#

class vision.pytorch.input.utils.FastDataLoader[source]#

Bases: torch.utils.data.DataLoader

__init__(*args, **kwargs)[source]#
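
Since __init__ forwards *args and **kwargs, a reasonable assumption is that FastDataLoader accepts the standard torch.utils.data.DataLoader arguments. A minimal sketch under that assumption:

```python
import torch
from torch.utils.data import TensorDataset

from vision.pytorch.input.utils import FastDataLoader

# Illustrative in-memory dataset; any torch Dataset should work.
dataset = TensorDataset(torch.randn(100, 3, 224, 224), torch.randint(0, 10, (100,)))

# Assumed to accept the usual DataLoader arguments, forwarded via *args/**kwargs.
loader = FastDataLoader(dataset, batch_size=16, shuffle=True, num_workers=4)
```
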
class vision.pytorch.input.utils.ShardedSampler[source]#

Bases: torch.utils.data.Sampler

Sampler that restricts data loading to a subset of the dataset. Modified from torch.utils.data.distributed.DistributedSampler (https://pytorch.org/docs/stable/_modules/torch/utils/data/distributed.html#DistributedSampler).

Dataset is assumed to be of constant size.

Parameters
  • dataset (torch.utils.data.Dataset) – Dataset used for sampling.

  • mode (modes) – Instance of modes indicating train or eval mode.

  • shuffle (bool, optional) – If True (default), the sampler shuffles the indices.

  • seed (int, optional) – Random seed used to shuffle the sampler if shuffle=True. This number should be identical across all processes in the distributed group. Default: 0.

  • drop_last (bool, optional) – If True, then the sampler will drop the tail of the data to make it evenly divisible across the number of replicas. If False, the sampler will add extra indices to make the data evenly divisible across the replicas. Default: False.

__init__(dataset, shuffle=True, seed=None, drop_last=False)[source]#
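
A minimal sketch, assuming ShardedSampler is used like torch's DistributedSampler. The dataset below is illustrative, and the mode parameter listed above is omitted because it does not appear in the __init__ signature shown.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from vision.pytorch.input.utils import ShardedSampler

dataset = TensorDataset(torch.randn(1000, 8))  # constant-size dataset, as assumed above

# Pass seed=0 explicitly so every process in the distributed group shuffles identically.
sampler = ShardedSampler(dataset, shuffle=True, seed=0, drop_last=False)

# Shuffling is delegated to the sampler, so DataLoader's own shuffle flag stays off.
loader = DataLoader(dataset, batch_size=32, sampler=sampler)
```
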
vision.pytorch.input.utils.create_worker_cache(src_dir: str, force_overwrite: bool = False)[source]#

Checks for the directory on the worker cache (SSD) of the worker node corresponding to src_dir. If the directory exists and matches src_dir, returns the directory's path on the worker cache; otherwise, writes the directory to the worker cache and returns that path. Writing to the cache can take a while, depending on the size of src_dir; a progress bar showing caching progress is displayed in the worker logs. When force_overwrite is True, the cache is overwritten regardless of a cache hit.
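
A usage sketch; the source path is illustrative:

```python
from vision.pytorch.input.utils import create_worker_cache

# Mirror the source directory onto the worker-node SSD cache (a progress bar
# appears in the worker logs) and read from the returned path.
cached_dir = create_worker_cache("/data/imagenet/train")  # illustrative path

# Overwrite the cached copy even on a cache hit.
cached_dir = create_worker_cache("/data/imagenet/train", force_overwrite=True)
```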

vision.pytorch.input.utils.is_gpu_distributed()[source]#

Returns True if distributed data parallel (DDP) is enabled.
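
One plausible use, sketched under the assumption that ShardedSampler (above) should only be applied when running under DDP:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from vision.pytorch.input.utils import ShardedSampler, is_gpu_distributed

dataset = TensorDataset(torch.randn(1000, 8))  # illustrative dataset

# Shard only when DDP is enabled; otherwise let the DataLoader shuffle locally.
sampler = ShardedSampler(dataset) if is_gpu_distributed() else None
loader = DataLoader(dataset, batch_size=32, sampler=sampler, shuffle=sampler is None)
```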

vision.pytorch.input.utils.num_tasks()[source]#

vision.pytorch.input.utils.task_id()[source]#
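
num_tasks and task_id are undocumented here; a plausible reading, consistent with the sharding utilities above, is that they return the total number of data-loading shards and this worker's shard index. A hedged sketch of manual file sharding under that assumption:

```python
from vision.pytorch.input.utils import num_tasks, task_id

# Assumed semantics (not stated in the docs above): num_tasks() is the total
# shard count, task_id() this worker's index in [0, num_tasks()).
files = [f"shard-{i:05d}.tar" for i in range(64)]  # illustrative shard list

# Round-robin assignment of files to the current task.
my_files = files[task_id()::num_tasks()]
```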

Module contents#