data_processing.huggingface package#

Submodules#

data_processing.huggingface.CSDataCollatorForLanguageModeling module#

class data_processing.huggingface.CSDataCollatorForLanguageModeling.CSDataCollatorForLanguageModeling[source]#

Bases: transformers.DataCollatorForLanguageModeling

Overrides DataCollatorForLanguageModeling from HF to shift the inputs/labels in the dataloader

torch_call(examples: List[Union[List[int], Any, Dict[str, Any]]]) → Dict[str, Any][source]#
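
A minimal usage sketch, not taken from the source: it assumes the constructor mirrors transformers.DataCollatorForLanguageModeling (tokenizer, mlm, …) and that torch_call accepts pre-tokenized examples as documented above.

from transformers import AutoTokenizer

from data_processing.huggingface.CSDataCollatorForLanguageModeling import (
    CSDataCollatorForLanguageModeling,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# mlm=False selects causal language modeling; this subclass additionally shifts
# the inputs/labels so the model does not have to (constructor args assumed to
# match the parent class).
collator = CSDataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

examples = [
    tokenizer("The sky is blue.")["input_ids"],
    tokenizer("Gradients flow backwards through the graph.")["input_ids"],
]

batch = collator.torch_call(examples)  # dict of padded, shifted tensors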

data_processing.huggingface.HF_converter_example_BookCorpus module#

Example script to convert the HF BookCorpus dataset to an HDF5 dataset

data_processing.huggingface.HF_converter_example_BookCorpus.main()[source]#
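
A hypothetical sketch of the kind of conversion such a script performs, assuming the datasets, h5py, and transformers libraries; the dataset identifier, HDF5 layout, and tokenization settings are illustrative only and may differ from the actual script.

import h5py
import numpy as np
from datasets import load_dataset
from transformers import AutoTokenizer


def convert_bookcorpus_to_hdf5(out_path="bookcorpus.h5", max_length=128, limit=1000):
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token
    dataset = load_dataset("bookcorpus", split="train", streaming=True)

    samples = []
    for i, record in enumerate(dataset):
        if i >= limit:  # keep the sketch small
            break
        ids = tokenizer(
            record["text"],
            truncation=True,
            padding="max_length",
            max_length=max_length,
        )["input_ids"]
        samples.append(ids)

    # One fixed-length token sequence per row.
    with h5py.File(out_path, "w") as f:
        f.create_dataset("input_ids", data=np.asarray(samples, dtype=np.int32))


if __name__ == "__main__":
    convert_bookcorpus_to_hdf5()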

data_processing.huggingface.HF_converter_example_Eli5 module#

Example script to convert the HF Eli5 dataset to an HDF5 dataset

data_processing.huggingface.HF_converter_example_Eli5.main()[source]#
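
Eli5 records nest their answers under an “answers” field, so conversion typically flattens that structure before tokenization. A toy, hypothetical illustration of that step (the actual script may organize it differently):

from datasets import Dataset

# Toy stand-in for an Eli5 record: answers are nested under "answers".
raw = Dataset.from_dict({
    "title": ["Why is the sky blue?"],
    "answers": [{"text": ["Rayleigh scattering of sunlight."], "score": [42]}],
})

flat = raw.flatten()  # nested struct becomes "answers.text" / "answers.score"
joined = flat.map(lambda ex: {"text": " ".join(ex["answers.text"])})
print(joined[0]["text"])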

data_processing.huggingface.HuggingFaceDataProcessor module#

PyTorch HuggingFace DataLoader

class data_processing.huggingface.HuggingFaceDataProcessor.HuggingFaceDataProcessor[source]#

Bases: object

A HuggingFace map-style Data Processor.

Parameters:

    params (dict) – dict containing training input parameters for creating the dataset.

Expects the following fields:

  • “batch_size” (int): Batch size.
  • “shuffle” (bool): Flag to enable data shuffling.
  • “shuffle_seed” (int): Shuffle seed.
  • “shuffle_buffer” (int): Size of the shuffle buffer in samples.
  • “num_workers” (int): How many subprocesses to use for data loading.
  • “drop_last” (bool): If True and the dataset size is not divisible by the batch size, the last incomplete batch will be dropped.
  • “prefetch_factor” (int): Number of batches loaded in advance by each worker.
  • “persistent_workers” (bool): If True, the data loader will not shut down the worker processes after the dataset has been consumed once.

__init__(params)[source]#
create_dataloader()[source]#

Creates the dataloader object.
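
A minimal sketch of how the documented params fields fit together; it assumes the class can be constructed directly, whereas in practice it may be meant to be subclassed with a concrete dataset and collator.

from data_processing.huggingface.HuggingFaceDataProcessor import HuggingFaceDataProcessor

params = {
    "batch_size": 16,
    "shuffle": True,
    "shuffle_seed": 1337,
    "shuffle_buffer": 10000,
    "num_workers": 8,
    "drop_last": True,
    "prefetch_factor": 10,
    "persistent_workers": True,
}

processor = HuggingFaceDataProcessor(params)  # direct construction assumed
dataloader = processor.create_dataloader()

for batch in dataloader:
    # each batch is a dict of tensors assembled by the underlying collator
    break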

data_processing.huggingface.HuggingFace_BookCorpus module#

HuggingFace BookCorpus Dataset

data_processing.huggingface.HuggingFace_BookCorpus.HuggingFace_BookCorpus(sequence_length, num_workers=8)[source]#
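
A hypothetical call matching the signature above; what the function returns (e.g. a tokenized dataset and a matching collator) is not documented here and is assumed.

from data_processing.huggingface.HuggingFace_BookCorpus import HuggingFace_BookCorpus

# Prepare BookCorpus samples of a fixed sequence length using 8 worker processes.
bookcorpus = HuggingFace_BookCorpus(sequence_length=2048, num_workers=8)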

data_processing.huggingface.HuggingFace_Eli5 module#

HuggingFace Eli5 Dataset

data_processing.huggingface.HuggingFace_Eli5.HuggingFace_Eli5(split='train', num_workers=8)[source]#

Module contents#