cerebras.modelzoo.data.multimodal.llava.MultimodalSimpleHDF5MapDataProcessor.MultimodalSimpleHDF5MapDataProcessorConfig#
- class cerebras.modelzoo.data.multimodal.llava.MultimodalSimpleHDF5MapDataProcessor.MultimodalSimpleHDF5MapDataProcessorConfig(*args, **kwargs)[source]#
Bases: cerebras.modelzoo.data.common.h5_map_dataset.dataset.MultimodalSimpleHDF5DatasetConfig, cerebras.modelzoo.config.data_config.DataConfig
Methods
check_for_deprecated_fields
check_literal_discriminator_field
check_mutual_exclusivity
copy
get_orig_class
get_orig_class_args
model_copy
model_post_init
post_init
Attributes
batch_size
The batch size
bos_token_id
data_dir
The path to the HDF5 files.
data_subset
An optional specification to only consider a subset of the full dataset, useful for sequence length scheduling and multi-epoch testing.
dataset_map_fn
discriminator
discriminator_value
drop_last
Similar to the PyTorch drop_last setting, except that when set to True, samples that would have been dropped at the end of one epoch are yielded at the start of the next epoch so that there is no data loss.
fp16_type
image_data_size
The final C x H x W shape of the image.
img_data_dir
The path to the directory containing the images.
max_num_img
The maximum number of images.
max_sequence_length
The sequence length of samples produced by the dataloader.
micro_batch_size
mixed_precision
mixture
An optional specification of multiple datasets to mix over to create one single weighted combination.
model_config
noaugment
num_patches
The number of patches.
num_samples
The number of samples to shuffle over (if shuffling is enabled).
num_workers
The number of PyTorch processes used in the dataloader.
pad_last
Flag to enable padding of the last batch so that it has the same batch size as the rest of the batches.
pad_token_id
persistent_workers
Whether or not to keep workers persistent between epochs.
pos_token_id
prefetch_factor
The number of batches to prefetch in the dataloader.
shuffle
Whether or not to shuffle the dataset.
shuffle_seed
The seed used for deterministic shuffling.
sort_files
Whether or not the reader should sort the input files.
transforms
A specification of the torchvision transforms.
use_vsl
Flag to enable variable sequence length training.
use_worker_cache
Whether or not to copy data to storage that is directly attached to each individual worker node.
vocab_size
data_processor
- num_workers = 0#
The number of PyTorch processes used in the dataloader.
- prefetch_factor = 10#
The number of batches to prefetch in the dataloader.
- persistent_workers = True#
Whether or not to keep workers persistent between epochs.
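As an illustrative sketch only (the exact YAML schema and field set can vary between Model Zoo releases, and all paths and size values below are placeholders, not defaults from this class), a `train_input` section that selects this data processor might look like:

```yaml
train_input:
  data_processor: "MultimodalSimpleHDF5MapDataProcessor"
  data_dir: "/path/to/hdf5_dataset"    # the path to the HDF5 files
  img_data_dir: "/path/to/images"      # the directory containing the images
  image_data_size: [3, 336, 336]       # final C x H x W image shape (placeholder values)
  max_sequence_length: 2048            # sequence length of produced samples
  batch_size: 32
  shuffle: true
  shuffle_seed: 1                      # seed for deterministic shuffling
  num_workers: 0                       # class default
  prefetch_factor: 10                  # class default
  persistent_workers: true             # class default
```

Fields not shown (e.g. `mixture`, `data_subset`, `use_vsl`, `transforms`) are optional and follow the descriptions in the attribute list above.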