Create a dataset loading script
A dataset script is likely not needed if your dataset is in one of the following formats: CSV, JSON, JSON Lines, text, or Parquet. With those formats, you should be able to load your dataset automatically with load_dataset(), as long as your dataset repository has a supported structure.
Write a dataset script to load and share datasets that consist of data files in unsupported formats or require more complex data preparation. This is a more advanced way to define a dataset than using YAML metadata in the dataset card. A dataset script is a Python file that defines the different configurations and splits of your dataset, as well as how to download and process the data.
The script can download data files from any website, or from the same dataset repository.
A dataset loading script should have the same name as the dataset repository or directory. For example, a repository named my_dataset should contain a my_dataset.py script. This way it can be loaded with:
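For example, a minimal sketch of the load call, where "path/to/my_dataset" is a placeholder for a local folder (or Hub repository) named my_dataset that contains my_dataset.py:

```python
from datasets import load_dataset

# "path/to/my_dataset" is a placeholder: the folder or repository name
# must match the name of the .py script inside it.
dataset = load_dataset("path/to/my_dataset")
```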
The following guide includes instructions for dataset scripts on how to:
Add dataset metadata.
Download data files.
Generate samples.
Generate dataset metadata.
Upload a dataset to the Hub.
The first step is to add some information, or attributes, about your dataset in DatasetBuilder._info(). The most important attributes you should specify are:
DatasetInfo.description provides a concise description of your dataset. The description informs the user what's in the dataset, how it was collected, and how it can be used for an NLP task.
DatasetInfo.homepage contains the URL to the dataset homepage, so users can find more details about the dataset.
DatasetInfo.citation contains a BibTeX citation for the dataset.
After you've filled out all these fields in the template, it should look like the following example from the SQuAD loading script:
Create instances of your config class to specify the values of the attributes of each configuration. This gives you the flexibility to specify the name and description of each configuration. These subclass instances should be listed under DatasetBuilder.BUILDER_CONFIGS:
Now, users can load a specific configuration of the dataset with the configuration name:
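A hedged usage sketch; the repository and configuration names are placeholders:

```python
from datasets import load_dataset

# Load only the "boolq" configuration of a multi-configuration dataset.
dataset = load_dataset("super_glue", "boolq")
```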
Users must specify a configuration name when they load a dataset with multiple configurations. Otherwise, 🤗 Datasets will raise a ValueError and prompt the user to select a configuration name. You can avoid this by setting a default dataset configuration with the DEFAULT_CONFIG_NAME attribute:
Only use a default configuration when it makes sense. Don't set one just because it may be more convenient for the user to not specify a configuration when they load your dataset. For example, multilingual datasets often have a separate configuration for each language. An appropriate default may be an aggregated configuration that loads all the languages of the dataset if the user doesn't request a particular one.
After you've defined the attributes of your dataset, the next step is to download the data files and organize them according to their splits.
Create a dictionary of URLs in the loading script that point to the original SQuAD data files:
If the data files live in the same folder or repository as the dataset script, you can just pass the relative paths to the files instead of URLs.
Use SplitGenerator to organize each split in the dataset. This is a simple class that contains:

The name of each split. You should use the standard split names: Split.TRAIN, Split.TEST, and Split.VALIDATION.

gen_kwargs, which provides the file paths to the data files to load for each split.
Your DatasetBuilder._split_generators() should look like this now:
At this point, you have:
Added the dataset attributes.
Provided instructions for how to download the data files.
Organized the splits.
The next step is to actually generate the samples in each split.
DatasetBuilder._generate_examples takes the file path provided by gen_kwargs to read and parse the data files. You need to write a function that loads the data files and extracts the columns. Your function should yield a tuple of an id_ and an example from the dataset.
Adding dataset metadata is a great way to include information about your dataset. The metadata is stored in the dataset card README.md in YAML. It includes information like the number of examples required to confirm the dataset was correctly generated, and information about the dataset like its features.
Run the following command to generate your dataset metadata in README.md and make sure your new dataset loading script works correctly:
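The command is datasets-cli test; the path below is a placeholder for your own script folder:

```shell
datasets-cli test path/to/my_dataset --save_info --all_configs
```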
If your dataset loading script passed the test, you should now have a README.md file in your dataset folder containing a dataset_info field with some metadata.
Congratulations, you can now load your dataset from the Hub! 🥳
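A usage sketch; the repository id is a placeholder for your own namespace and dataset name:

```python
from datasets import load_dataset

dataset = load_dataset("username/my_dataset", split="train")
```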
To make this work, 🤗 Datasets considers lists of files in gen_kwargs to be shards. It can therefore automatically spawn several workers to run _generate_examples in parallel, and each worker is given a subset of shards to process.
Users can also pass num_proc= to load_dataset() to set the number of processes to use as workers.
For some datasets, it can be much faster to yield batches of data rather than examples one by one. You can speed up dataset generation by yielding Arrow tables directly instead of examples. This is especially useful if your data comes from Pandas DataFrames, since the conversion from Pandas to Arrow is as simple as:
Don't forget to keep your script memory efficient, in case users run it on machines with a low amount of RAM.
Open the template to follow along on how to share a dataset.
To help you get started, try beginning with the dataset loading script template!
DatasetInfo.features defines the name and type of each column in your dataset. This also provides the structure for each example, so it is possible to create nested subfields in a column if you want. Take a look at the Features documentation for a full list of feature types you can use.
In some cases, your dataset may have multiple configurations. For example, the SuperGLUE dataset is a collection of 5 datasets designed to evaluate language understanding tasks. 🤗 Datasets provides BuilderConfig, which allows you to create different configurations for the user to select from.
Let's study the SuperGLUE loading script to see how you can define several configurations.
Create a BuilderConfig subclass with attributes about your dataset. These attributes can be the features of your dataset, label classes, and a URL to the data files.
Additionally, users can instantiate a custom builder configuration by passing the builder configuration arguments to load_dataset():
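A usage sketch; the repository name and the configuration argument are placeholders for whatever attributes your BuilderConfig subclass defines:

```python
from datasets import load_dataset

# Keyword arguments matching the BuilderConfig attributes create a
# custom configuration on the fly.
dataset = load_dataset("username/my_dataset", data_url="https://example.com/custom.zip")
```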
DownloadManager.download_and_extract() takes this dictionary and downloads the data files. Once the files are downloaded, use SplitGenerator to organize each split in the dataset: a simple class that contains the name of the split and the gen_kwargs for its data files.
Once your script is ready, create a dataset card and upload it to the Hub.
If your dataset is made of many big files, 🤗 Datasets automatically runs your script in parallel to make it super fast! It can help if you have hundreds or thousands of TAR archives or JSONL files, for example.
To yield Arrow tables instead of single examples, make your dataset builder inherit from ArrowBasedBuilder instead of GeneratorBasedBuilder, and use _generate_tables instead of _generate_examples: