cerebras.modelzoo.data.nlp.bert.BertHDF5DataProcessor.BertHDF5DataProcessorConfig#

class cerebras.modelzoo.data.nlp.bert.BertHDF5DataProcessor.BertHDF5DataProcessorConfig(*args, **kwargs)[source]#

Bases: cerebras.modelzoo.data.common.HDF5DataProcessor.HDF5DataProcessorConfig

Methods

check_for_deprecated_fields

check_literal_discriminator_field

check_mutual_exclusivity

copy

get_orig_class

get_orig_class_args

model_copy

model_post_init

post_init

Attributes

batch_size

The batch size.

data_dir

The path to the HDF5 files.

data_subset

An optional specification to only consider a subset of the full dataset, useful for sequence length scheduling and multi-epoch testing.

dataset_map_fn

discriminator

discriminator_value

drop_last

Similar to the PyTorch drop_last setting, except that when set to True, samples that would have been dropped at the end of one epoch are yielded at the start of the next epoch so that there is no data loss.

max_sequence_length

The sequence length of samples produced by the dataloader.

mixture

An optional specification of multiple datasets to mix into a single weighted combination.

model_config

num_samples

The number of samples to shuffle over (if shuffling is enabled).

num_workers

The number of PyTorch processes used in the dataloader.

pad_last

Flag to pad the last batch so that it has the same batch size as the other batches.

persistent_workers

Whether or not to keep workers persistent between epochs.

prefetch_factor

The number of batches to prefetch in the dataloader.

shuffle

Whether or not to shuffle the dataset.

shuffle_seed

The seed used for deterministic shuffling.

sort_files

Whether or not the reader should sort the input files.

use_vsl

Flag to enable variable sequence length training.

use_worker_cache

Whether or not to copy data to storage that is directly attached to each individual worker node.

vocab_size

data_processor

num_workers = 0#

The number of PyTorch processes used in the dataloader.

prefetch_factor = 10#

The number of batches to prefetch in the dataloader.

persistent_workers = True#

Whether or not to keep workers persistent between epochs.
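The documented defaults (num_workers=0, prefetch_factor=10, persistent_workers=True) and the attribute list above can be illustrated with a minimal sketch. Note this is a simplified stand-in dataclass, not the real config: the actual BertHDF5DataProcessorConfig lives in cerebras.modelzoo and performs validation (e.g. check_mutual_exclusivity, post_init hooks) that this sketch omits. All values passed in below are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified mirror of the documented fields; the real
# class is cerebras.modelzoo.data.nlp.bert.BertHDF5DataProcessor
# .BertHDF5DataProcessorConfig and validates far more than this does.
@dataclass
class HDF5DataProcessorConfigSketch:
    data_dir: str                       # the path to the HDF5 files
    batch_size: int                     # the batch size
    max_sequence_length: Optional[int] = None  # sequence length of produced samples
    shuffle: bool = False               # whether to shuffle the dataset
    shuffle_seed: Optional[int] = None  # seed for deterministic shuffling
    num_workers: int = 0                # documented default
    prefetch_factor: int = 10           # documented default
    persistent_workers: bool = True     # documented default
    drop_last: bool = True              # leftover samples roll into the next epoch
    pad_last: bool = False              # pad the last batch to full batch size
    use_vsl: bool = False               # variable sequence length training
    sort_files: bool = True             # whether the reader sorts input files

# Illustrative instantiation; only data_dir and batch_size are required here.
cfg = HDF5DataProcessorConfigSketch(
    data_dir="./train_hdf5",
    batch_size=256,
    max_sequence_length=128,
    shuffle=True,
    shuffle_seed=1,
)
```

In the real ModelZoo these fields are typically supplied through the `train_input` section of a params YAML rather than constructed directly in Python.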