data_processing.huggingface package#
Submodules#
data_processing.huggingface.CSDataCollatorForLanguageModeling module#
data_processing.huggingface.HF_converter_example_BookCorpus module#
Example script to convert the HF BookCorpus dataset to an HDF5 dataset
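As a rough illustration of this kind of conversion (a sketch under assumptions, not the packaged script: the GPT-2 tokenizer, dataset slice, sequence length, and output file name are all placeholders), one might tokenize the text column and write the token IDs to HDF5 with h5py:

    import h5py
    import numpy as np
    from datasets import load_dataset
    from transformers import AutoTokenizer

    # Placeholder dataset slice, tokenizer, sequence length, and output path.
    dataset = load_dataset("bookcorpus", split="train[:1000]")
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

    # Tokenize to fixed-length sequences as NumPy arrays.
    encoded = tokenizer(
        dataset["text"],
        truncation=True,
        padding="max_length",
        max_length=128,
        return_tensors="np",
    )

    # Write token IDs and attention masks into an HDF5 file.
    with h5py.File("bookcorpus_train.h5", "w") as f:
        f.create_dataset("input_ids", data=encoded["input_ids"].astype(np.int32))
        f.create_dataset("attention_mask", data=encoded["attention_mask"].astype(np.int32))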
data_processing.huggingface.HF_converter_example_Eli5 module#
Example script to convert the HF Eli5 dataset to an HDF5 dataset
data_processing.huggingface.HuggingFaceDataProcessor module#
PyTorch HuggingFace DataLoader
class data_processing.huggingface.HuggingFaceDataProcessor.HuggingFaceDataProcessor
Bases: object
A HuggingFace map-style Data Processor.

Parameters:
params (dict): dict containing training input parameters for creating the dataset. Expects the following fields:
- "batch_size" (int): Batch size.
- "shuffle" (bool): Flag to enable data shuffling.
- "shuffle_seed" (int): Shuffle seed.
- "shuffle_buffer" (int): Size of the shuffle buffer in samples.
- "num_workers" (int): How many subprocesses to use for data loading.
- "drop_last" (bool): If True and the dataset size is not divisible by the batch size, the last incomplete batch will be dropped.
- "prefetch_factor" (int): Number of batches loaded in advance by each worker.
- "persistent_workers" (bool): If True, the data loader will not shut down the worker processes after the dataset has been consumed once.
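Most of these fields correspond directly to arguments of torch.utils.data.DataLoader. The following is a minimal sketch of how such a params dict might be assembled and applied; the TensorDataset stand-in, the generator-based seeding, and all values are illustrative assumptions, not the processor's actual implementation.

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Illustrative values only; in practice these come from the training input params.
    params = {
        "batch_size": 8,
        "shuffle": True,
        "shuffle_seed": 42,
        "shuffle_buffer": 10000,
        "num_workers": 2,
        "drop_last": True,
        "prefetch_factor": 2,
        "persistent_workers": True,
    }

    # Stand-in map-style dataset; the real processor wraps a HuggingFace dataset.
    dataset = TensorDataset(torch.arange(1000).unsqueeze(1))

    # One plausible way to honor "shuffle_seed": seed the sampler's RNG.
    generator = torch.Generator()
    generator.manual_seed(params["shuffle_seed"])

    dataloader = DataLoader(
        dataset,
        batch_size=params["batch_size"],
        shuffle=params["shuffle"],
        num_workers=params["num_workers"],
        drop_last=params["drop_last"],
        prefetch_factor=params["prefetch_factor"],
        persistent_workers=params["persistent_workers"],
        generator=generator,
    )
    # "shuffle_buffer" has no DataLoader counterpart; presumably the processor
    # applies it itself when shuffling the underlying data.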
data_processing.huggingface.HuggingFace_BookCorpus module#
HuggingFace BookCorpus Dataset
data_processing.huggingface.HuggingFace_Eli5 module#
HuggingFace Eli5 Dataset