cerebras.modelzoo.data.multimodal.llava.LlavaHDF5MapDataProcessor.LlavaHDF5MapDataProcessorConfig

class cerebras.modelzoo.data.multimodal.llava.LlavaHDF5MapDataProcessor.LlavaHDF5MapDataProcessorConfig(*args, **kwargs)

Bases: cerebras.modelzoo.data.common.h5_map_dataset.dataset.MultiModalHDF5DatasetConfig, cerebras.modelzoo.config.data_config.DataConfig
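
In practice this config is typically populated from the train_input or eval_input section of a model params file, with data_processor set to "LlavaHDF5MapDataProcessor". As a minimal sketch, assuming the config accepts its documented attributes as keyword arguments (it is a pydantic-style DataConfig), construction might look like the following; all paths and sizes are placeholder assumptions, not defaults:

    from cerebras.modelzoo.data.multimodal.llava.LlavaHDF5MapDataProcessor import (
        LlavaHDF5MapDataProcessorConfig,
    )

    # Hypothetical values for illustration only; data_dir and img_data_dir
    # must point at real preprocessed HDF5 files and their image directory.
    config = LlavaHDF5MapDataProcessorConfig(
        data_processor="LlavaHDF5MapDataProcessor",
        data_dir="/path/to/preprocessed/h5",
        img_data_dir="/path/to/images",
        image_data_size=[3, 336, 336],  # final C x H x W shape of each image
        batch_size=32,
        max_sequence_length=2048,
        shuffle=True,
        shuffle_seed=42,
        num_workers=0,                  # documented default
        prefetch_factor=10,             # documented default
        persistent_workers=True,        # documented default
    )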

Methods

check_for_deprecated_fields

check_literal_discriminator_field

check_mutual_exclusivity

copy

get_orig_class

get_orig_class_args

model_copy

model_post_init

post_init

Attributes

batch_size

The batch size.

bos_token_id

data_dir

The path to the HDF5 files.

data_subset

An optional specification to only consider a subset of the full dataset, useful for sequence length scheduling and multi-epoch testing.

dataset_map_fn

discriminator

discriminator_value

drop_last

Similar to the PyTorch drop_last setting, except that when set to True, samples that would have been dropped at the end of one epoch are yielded at the start of the next epoch so that there is no data loss.

image_data_size

The final C x H x W shape of the image.

img_data_dir

The path to the directory containing the images.

max_sequence_length

The sequence length of samples produced by the dataloader.

mixed_precision

mixture

An optional specification of multiple datasets to mix over to create a single weighted combination.

model_config

num_samples

The number of samples to shuffle over (if shuffling is enabled).

num_workers

The number of PyTorch processes used in the dataloader.

pad_last

Flag to enable padding of the last batch so that it has the same batch size as the rest of the batches.

persistent_workers

Whether or not to keep workers persistent between epochs.

pos_token_id

prefetch_factor

The number of batches to prefetch in the dataloader.

shuffle

Whether or not to shuffle the dataset.

shuffle_seed

The seed used for deterministic shuffling.

sort_files

Whether or not the reader should sort the input files.

transforms

A specification of the torchvision transforms.

use_vsl

Flag to enable variable sequence length training.

use_worker_cache

Whether or not to copy data to storage that is directly attached to each individual worker node.

vocab_size

data_processor

num_workers = 0

The number of PyTorch processes used in the dataloader.

prefetch_factor = 10

The number of batches to prefetch in the dataloader.

persistent_workers = True

Whether or not to keep workers persistent between epochs.
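
These three attributes mirror the torch.utils.data.DataLoader arguments of the same names. For reference, the plain-PyTorch sketch below shows what they correspond to; the dataset here is a stand-in, not the actual HDF5 map dataset:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Stand-in dataset; the real processor reads samples from HDF5 files.
    dataset = TensorDataset(torch.arange(1024))

    # In plain PyTorch, prefetch_factor and persistent_workers require
    # num_workers > 0 (num_workers=0 loads data in the main process and
    # rejects these options), so the sketch uses num_workers=2 even though
    # the config's documented default is 0.
    loader = DataLoader(
        dataset,
        batch_size=32,
        shuffle=True,
        num_workers=2,
        prefetch_factor=10,       # batches prefetched per worker
        persistent_workers=True,  # keep workers alive between epochs
    )

Note that PyTorch's prefetch_factor counts batches per worker, so the total number of prefetched batches is prefetch_factor * num_workers.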