cerebras.modelzoo.data.common.h5_map_dataset.dataset.HDF5DatasetConfig#
- class cerebras.modelzoo.data.common.h5_map_dataset.dataset.HDF5DatasetConfig(*args, **kwargs)[source]#
Bases:
cerebras.modelzoo.config.base_config.BaseConfig
Methods
check_for_deprecated_fields
check_mutual_exclusivity
copy
get_orig_class
get_orig_class_args
model_copy
model_post_init
post_init
Attributes
batch_size: The batch size
data_dir: The path to the HDF5 files.
data_subset: An optional specification to only consider a subset of the full dataset, useful for sequence length scheduling and multi-epoch testing.
drop_last: Similar to the PyTorch drop_last setting except that, when set to True, samples that would have been dropped at the end of one epoch are yielded at the start of the next epoch so that there is no data loss.
max_sequence_length: The sequence length of samples produced by the dataloader.
mixture: An optional specification of multiple datasets to mix over to create one single weighted combination.
model_config
num_samples: The number of samples to shuffle over (if shuffling is enabled).
pad_last: Flag to enable padding of the last batch so that the last batch has the same batch size as the rest of the batches.
shuffle: Whether or not to shuffle the dataset.
shuffle_seed: The seed used for deterministic shuffling.
sort_files: Whether or not the reader should sort the input files.
use_vsl: Flag to enable variable sequence length training.
use_worker_cache: Whether or not to copy data to storage that is directly attached to each individual worker node.
- data_dir = None#
The path to the HDF5 files. Exactly one of “data_dir” or “mixture” must be specified.
- batch_size = Ellipsis#
The batch size
- shuffle = False#
Whether or not to shuffle the dataset.
- shuffle_seed = 0#
The seed used for deterministic shuffling.
- use_worker_cache = False#
Whether or not to copy data to storage that is directly attached to each individual worker node. Useful when your network storage is unusually slow, but otherwise discouraged.
- max_sequence_length = None#
The sequence length of samples produced by the dataloader. When using the ‘corpus’ data format, the same preprocessed data will work with any max sequence length, so this may be set at runtime. When using the ‘sample’ format this must be set to None.
- data_subset = None#
An optional specification to only consider a subset of the full dataset, useful for sequence length scheduling and multi-epoch testing. Expected to be a comma separated list of ranges, e.g. ‘0.0-0.5’ or ‘0.1-0.3,0.7-1.0’. Specifying ‘0.0-0.5’ creates a dataset from the first half of the data on disk and disregards the second half.
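As an illustration, a minimal sketch of selecting only the first half of the data on disk (field names are taken from this page, but keyword-argument construction and the path are assumptions):

    from cerebras.modelzoo.data.common.h5_map_dataset.dataset import HDF5DatasetConfig

    # Hypothetical sketch: restrict training to the first half of the data.
    config = HDF5DatasetConfig(
        data_dir="/path/to/hdf5_files",  # placeholder path
        batch_size=64,
        data_subset="0.0-0.5",
    )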
- mixture = None#
An optional specification of multiple datasets to mix over to create one single weighted combination. Each element must be a dictionary containing keys data_dir and weight. data_dir serves the same purpose as mentioned above. weight defines the probability with which this dataset should be sampled from. Weights are normalized to sum to 1. Optionally, the dictionary may also contain a data_subset field which functions the same as the data_subset argument above.
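A hedged sketch of a two-dataset mixture using the keys described above (paths and weights are placeholders; keyword-argument construction is an assumption):

    from cerebras.modelzoo.data.common.h5_map_dataset.dataset import HDF5DatasetConfig

    # Hypothetical mixture: weights are normalized to sum to 1, and the second
    # entry only draws from the first half of its data via data_subset.
    config = HDF5DatasetConfig(
        batch_size=64,
        mixture=[
            {"data_dir": "/path/to/dataset_a", "weight": 0.7},
            {"data_dir": "/path/to/dataset_b", "weight": 0.3, "data_subset": "0.0-0.5"},
        ],
    )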
- drop_last = True#
Similar to the PyTorch drop_last setting except that, when set to True, samples that would have been dropped at the end of one epoch are yielded at the start of the next epoch so that there is no data loss. This is necessary for a data ordering that is independent of the distributed setup being used.
- num_samples = None#
The number of samples to shuffle over (if shuffling is enabled). In multi-epoch training, it is common to set this to the total number of samples that you plan to train on so that epochs are not sequential but instead shuffled together for potentially improved convergence.
- sort_files = True#
Whether or not the reader should sort the input files. This is included for backwards compatibility and should almost always be set to True.
- use_vsl = False#
Flag to enable variable sequence length training. It requires the dataset to have two extra features: the attention_span of keys and the position_ids of tokens. Defaults to False.
- pad_last = False#
Flag to enable padding of the last batch so that the last batch has the same batch size as the rest of the batches.
- __call__(**kwargs)#
Construct the original class with the current config.
By original class, we mean the class that this config class is associated with.
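Putting this together, a minimal, hypothetical usage sketch: construct the config and then call it to build the class it is associated with (keyword-argument construction and the path are assumptions not shown on this page):

    from cerebras.modelzoo.data.common.h5_map_dataset.dataset import HDF5DatasetConfig

    config = HDF5DatasetConfig(
        data_dir="/path/to/hdf5_files",  # exactly one of data_dir or mixture
        batch_size=64,
        shuffle=True,
        shuffle_seed=0,
    )
    # __call__ constructs the original class that this config is associated with.
    dataset = config()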