Loading methods
Methods for listing and loading datasets and metrics:
datasets.list_datasets
( with_community_datasets = True, with_details = False )
Parameters
with_community_datasets (bool, optional, defaults to True) — Include the community provided datasets.
with_details (bool, optional, defaults to False) — Return the full details on the datasets instead of only the short name.
List all the dataset scripts available on the BOINC AI Hub.
Example:
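A minimal sketch (the names and count depend on the Hub's current contents):

```python
from datasets import list_datasets

datasets_list = list_datasets()
print(len(datasets_list))
print(datasets_list[:3])  # e.g. ['acronym_identification', 'ade_corpus_v2', ...]
```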
datasets.load_dataset
( path: str, name: typing.Optional[str] = None, data_dir: typing.Optional[str] = None, data_files: typing.Union[str, typing.Sequence[str], typing.Mapping[str, typing.Union[str, typing.Sequence[str]]], NoneType] = None, split: typing.Union[str, datasets.splits.Split, NoneType] = None, cache_dir: typing.Optional[str] = None, features: typing.Optional[datasets.features.features.Features] = None, download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None, download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None, verification_mode: typing.Union[datasets.utils.info_utils.VerificationMode, str, NoneType] = None, ignore_verifications = 'deprecated', keep_in_memory: typing.Optional[bool] = None, save_infos: bool = False, revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None, token: typing.Union[bool, str, NoneType] = None, use_auth_token = 'deprecated', task = 'deprecated', streaming: bool = False, num_proc: typing.Optional[int] = None, storage_options: typing.Optional[typing.Dict] = None, **config_kwargs ) → Dataset or DatasetDict
Parameters
path (str) — Path or name of the dataset. Depending on path, the dataset builder that is used comes from a generic dataset script (JSON, CSV, Parquet, text etc.) or from the dataset script (a python file) inside the dataset directory.
For local datasets:
if path is a local directory (containing data files only) -> load a generic dataset builder (csv, json, text etc.) based on the content of the directory e.g. './path/to/directory/with/my/csv/data'.
if path is a local dataset script or a directory containing a local dataset script (if the script has the same name as the directory) -> load the dataset builder from the dataset script e.g. './dataset/squad' or './dataset/squad/squad.py'.
For datasets on the BOINC AI Hub (list all available datasets with boincai_hub.list_datasets):
if path is a dataset repository on the Hub (containing data files only) -> load a generic dataset builder (csv, text etc.) based on the content of the repository e.g. 'username/dataset_name', a dataset repository on the Hub containing your data files.
if path is a dataset repository on the Hub with a dataset script (if the script has the same name as the directory) -> load the dataset builder from the dataset script in the dataset repository e.g. glue, squad or 'username/dataset_name', a dataset repository on the Hub containing a dataset script 'dataset_name.py'.
name (str, optional) — Defining the name of the dataset configuration.
data_dir (str, optional) — Defining the data_dir of the dataset configuration. If specified for the generic builders (csv, text etc.) or the Hub datasets and data_files is None, the behavior is equal to passing os.path.join(data_dir, **) as data_files to reference all the files in a directory.
data_files (str or Sequence or Mapping, optional) — Path(s) to source data file(s).
split (Split or str) — Which split of the data to load. If None, will return a dict with all splits (typically datasets.Split.TRAIN and datasets.Split.TEST). If given, will return a single Dataset. Splits can be combined and specified like in tensorflow-datasets.
cache_dir (str, optional) — Directory to read/write data. Defaults to "~/.cache/huggingface/datasets".
features (Features, optional) — Set the features type to use for this dataset.
download_config (DownloadConfig, optional) — Specific download configuration parameters.
download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.
verification_mode (VerificationMode or str, defaults to BASIC_CHECKS) — Verification mode determining the checks to run on the downloaded/processed dataset information (checksums/size/splits/…).
Added in 2.9.1
ignore_verifications (bool, defaults to False) — Ignore the verifications of the downloaded/processed dataset information (checksums/size/splits/…).
Deprecated in 2.9.1
ignore_verifications was deprecated in version 2.9.1 and will be removed in 3.0.0. Please use verification_mode instead.
keep_in_memory (bool, defaults to None) — Whether to copy the dataset in-memory. If None, the dataset will not be copied in-memory unless explicitly enabled by setting datasets.config.IN_MEMORY_MAX_SIZE to nonzero. See more details in the improve performance section.
save_infos (bool, defaults to False) — Save the dataset information (checksums/size/splits/…).
revision (Version or str, optional) — Version of the dataset script to load. As datasets have their own git repository on the Datasets Hub, the default version “main” corresponds to their “main” branch. You can specify a different version than the default “main” by using a commit SHA or a git tag of the dataset repository.
token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".
use_auth_token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".
Deprecated in 2.14.0
use_auth_token was deprecated in favor of token in version 2.14.0 and will be removed in 3.0.0.
task (str) — The task to prepare the dataset for during training and evaluation. Casts the dataset’s Features to standardized column names and types as detailed in datasets.tasks.
Deprecated in 2.13.0
task was deprecated in version 2.13.0 and will be removed in 3.0.0.
streaming (bool, defaults to False) — If set to True, don’t download the data files. Instead, it streams the data progressively while iterating on the dataset. An IterableDataset or IterableDatasetDict is returned instead in this case.
Note that streaming works for datasets that use data formats which support being iterated over, such as txt, csv and jsonl. JSON files may be downloaded completely. Streaming from remote zip or gzip files is supported, but other compressed formats like rar and xz are not yet supported. The tgz format doesn’t allow streaming.
num_proc (int, optional, defaults to None) — Number of processes when downloading and generating the dataset locally. Multiprocessing is disabled by default.
Added in 2.7.0
storage_options (dict, optional, defaults to None) — Experimental. Key/value pairs to be passed on to the dataset file-system backend, if any.
Added in 2.11.0
**config_kwargs (additional keyword arguments) — Keyword arguments to be passed to the BuilderConfig and used in the DatasetBuilder.
Returns
Dataset or DatasetDict — if split is not None: the dataset requested; if split is None, a DatasetDict with each split.
IterableDataset or IterableDatasetDict — if streaming=True: if split is not None, the dataset requested; if split is None, a ~datasets.streaming.IterableDatasetDict with each split.
Load a dataset from the BOINC AI Hub, or a local dataset.
A dataset is a directory that contains:
some data files in generic formats (JSON, CSV, Parquet, text, etc.).
and optionally a dataset script, if it requires some code to read the data files. This is used to load any kind of formats or structures.
Note that dataset scripts can also download and read data files from anywhere - in case your data files already exist online.
You can find the list of datasets on the Hub or with boincai_hub.list_datasets.
This function does the following under the hood:
Download and import in the library the dataset script from path if it’s not already cached inside the library.
If the dataset has no dataset script, then a generic dataset script is imported instead (JSON, CSV, Parquet, text, etc.)
Dataset scripts are small python scripts that define dataset builders. They define the citation, info and format of the dataset, contain the path or URL to the original data files and the code to load examples from the original data files.
Run the dataset script which will:
Download the dataset file from the original URL (see the script) if it’s not already available locally or cached.
Process and cache the dataset in typed Arrow tables.
Arrow tables are arbitrarily long, typed tables which can store nested objects and be mapped to numpy/pandas/python generic types. They can be directly accessed from disk, loaded in RAM or even streamed over the web.
Return a dataset built from the requested splits in split (default: all).
It also allows loading a dataset from a local directory or a dataset repository on the BOINC AI Hub without a dataset script. In this case, it automatically loads all the data files from the directory or the dataset repository.
Example:
Load a dataset from the BOINC AI Hub:
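A minimal sketch; the 'rotten_tomatoes' repository name is illustrative:

```python
from datasets import load_dataset

ds = load_dataset('rotten_tomatoes', split='train')
```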
Load a local dataset:
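A sketch with illustrative local paths, covering both a loading script and plain data files:

```python
from datasets import load_dataset

# From a local loading script (illustrative path):
ds = load_dataset('path/to/local/loading_script/loading_script.py', split='train')

# From local data files, using the generic CSV builder (illustrative path):
ds = load_dataset('csv', data_files='path/to/local/my_dataset.csv')
```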
Load an IterableDataset:
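A minimal streaming sketch (the dataset name is illustrative):

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset instead of downloading the data files
ds = load_dataset('rotten_tomatoes', split='train', streaming=True)
print(next(iter(ds)))
```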
Load an image dataset with the ImageFolder dataset builder:
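A sketch assuming an illustrative local image directory:

```python
from datasets import load_dataset

ds = load_dataset('imagefolder', data_dir='/path/to/pokemon')
```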
datasets.load_from_disk
( dataset_path: str, fs = 'deprecated', keep_in_memory: typing.Optional[bool] = None, storage_options: typing.Optional[dict] = None ) → Dataset or DatasetDict
Parameters
dataset_path (str) — Path (e.g. "dataset/train") or remote URI (e.g. "s3://my-bucket/dataset/train") of the Dataset or DatasetDict directory where the dataset will be loaded from.
fs (~filesystems.S3FileSystem or fsspec.spec.AbstractFileSystem, optional) — Instance of the remote filesystem used to download the files from.
Deprecated in 2.9.0
fs was deprecated in version 2.9.0 and will be removed in 3.0.0. Please use storage_options instead, e.g. storage_options=fs.storage_options.
keep_in_memory (bool, defaults to None) — Whether to copy the dataset in-memory. If None, the dataset will not be copied in-memory unless explicitly enabled by setting datasets.config.IN_MEMORY_MAX_SIZE to nonzero. See more details in the improve performance section.
storage_options (dict, optional) — Key/value pairs to be passed on to the file-system backend, if any.
Added in 2.9.0
Returns
Dataset or DatasetDict — If dataset_path is a path of a dataset directory: the dataset requested. If dataset_path is a path of a dataset dict directory, a DatasetDict with each split.
Loads a dataset that was previously saved using save_to_disk() from a dataset directory, or from a filesystem using any implementation of fsspec.spec.AbstractFileSystem.
Example:
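A sketch assuming a dataset previously saved with save_to_disk (illustrative path):

```python
from datasets import load_from_disk

ds = load_from_disk('path/to/dataset/directory')
```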
datasets.load_dataset_builder
( path: str, name: typing.Optional[str] = None, data_dir: typing.Optional[str] = None, data_files: typing.Union[str, typing.Sequence[str], typing.Mapping[str, typing.Union[str, typing.Sequence[str]]], NoneType] = None, cache_dir: typing.Optional[str] = None, features: typing.Optional[datasets.features.features.Features] = None, download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None, download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None, revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None, token: typing.Union[bool, str, NoneType] = None, use_auth_token = 'deprecated', storage_options: typing.Optional[typing.Dict] = None, **config_kwargs )
Parameters
path (str) — Path or name of the dataset. Depending on path, the dataset builder that is used comes from a generic dataset script (JSON, CSV, Parquet, text etc.) or from the dataset script (a python file) inside the dataset directory.
For local datasets:
if path is a local directory (containing data files only) -> load a generic dataset builder (csv, json, text etc.) based on the content of the directory e.g. './path/to/directory/with/my/csv/data'.
if path is a local dataset script or a directory containing a local dataset script (if the script has the same name as the directory) -> load the dataset builder from the dataset script e.g. './dataset/squad' or './dataset/squad/squad.py'.
For datasets on the BOINC AI Hub (list all available datasets with boincai_hub.list_datasets):
if path is a dataset repository on the Hub (containing data files only) -> load a generic dataset builder (csv, text etc.) based on the content of the repository e.g. 'username/dataset_name', a dataset repository on the Hub containing your data files.
if path is a dataset repository on the Hub with a dataset script (if the script has the same name as the directory) -> load the dataset builder from the dataset script in the dataset repository e.g. glue, squad or 'username/dataset_name', a dataset repository on the Hub containing a dataset script 'dataset_name.py'.
name (str, optional) — Defining the name of the dataset configuration.
data_dir (str, optional) — Defining the data_dir of the dataset configuration. If specified for the generic builders (csv, text etc.) or the Hub datasets and data_files is None, the behavior is equal to passing os.path.join(data_dir, **) as data_files to reference all the files in a directory.
data_files (str or Sequence or Mapping, optional) — Path(s) to source data file(s).
cache_dir (str, optional) — Directory to read/write data. Defaults to "~/.cache/huggingface/datasets".
features (Features, optional) — Set the features type to use for this dataset.
download_config (DownloadConfig, optional) — Specific download configuration parameters.
download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.
revision (Version or str, optional) — Version of the dataset script to load. As datasets have their own git repository on the Datasets Hub, the default version “main” corresponds to their “main” branch. You can specify a different version than the default “main” by using a commit SHA or a git tag of the dataset repository.
token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".
use_auth_token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".
Deprecated in 2.14.0
use_auth_token was deprecated in favor of token in version 2.14.0 and will be removed in 3.0.0.
storage_options (dict, optional, defaults to None) — Experimental. Key/value pairs to be passed on to the dataset file-system backend, if any.
Added in 2.11.0
**config_kwargs (additional keyword arguments) — Keyword arguments to be passed to the BuilderConfig and used in the DatasetBuilder.
Load a dataset builder from the BOINC AI Hub, or a local dataset. A dataset builder can be used to inspect general information that is required to build a dataset (cache directory, config, dataset info, etc.) without downloading the dataset itself.
A dataset is a directory that contains:
some data files in generic formats (JSON, CSV, Parquet, text, etc.)
and optionally a dataset script, if it requires some code to read the data files. This is used to load any kind of formats or structures.
Note that dataset scripts can also download and read data files from anywhere - in case your data files already exist online.
You can find the list of datasets on the Hub or with boincai_hub.list_datasets.
Example:
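A sketch of inspecting a builder without downloading the data; the repository name is illustrative:

```python
from datasets import load_dataset_builder

builder = load_dataset_builder('rotten_tomatoes')
print(builder.info.description)
print(builder.info.features)
```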
datasets.get_dataset_config_names
( path: str, revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None, download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None, download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None, dynamic_modules_path: typing.Optional[str] = None, data_files: typing.Union[str, typing.List, typing.Dict, NoneType] = None, **download_kwargs )
Parameters
path (str) — path to the dataset processing script with the dataset builder. Can be either:
a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './dataset/squad' or './dataset/squad/squad.py'
a dataset identifier on the BOINC AI Hub (list all available datasets and ids with datasets.list_datasets()) e.g. 'squad', 'glue' or 'openai/webtext'
revision (Union[str, datasets.Version], optional) — If specified, the dataset module will be loaded from the datasets repository at this version. By default:
it is set to the local version of the lib.
it will also try to load it from the main branch if it’s not available at the local version of the lib. Specifying a version that is different from your local version of the lib might cause compatibility issues.
download_config (DownloadConfig, optional) — Specific download configuration parameters.
download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.
dynamic_modules_path (str, defaults to ~/.cache/huggingface/modules/datasets_modules) — Optional path to the directory in which the dynamic modules are saved. It must have been initialized with init_dynamic_modules. By default the datasets and metrics are stored inside the datasets_modules module.
data_files (Union[Dict, List, str], optional) — Defining the data_files of the dataset configuration.
**download_kwargs (additional keyword arguments) — Optional attributes for DownloadConfig which will override the attributes in download_config if supplied, for example token.
Get the list of available config names for a particular dataset.
Example:
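A sketch using 'glue', a dataset with multiple configurations:

```python
from datasets import get_dataset_config_names

configs = get_dataset_config_names('glue')
print(configs)  # e.g. ['cola', 'sst2', 'mrpc', ...]
```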
datasets.get_dataset_infos
( path: str, data_files: typing.Union[str, typing.List, typing.Dict, NoneType] = None, download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None, download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None, revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None, token: typing.Union[bool, str, NoneType] = None, use_auth_token = 'deprecated', **config_kwargs )
Parameters
path (str) — path to the dataset processing script with the dataset builder. Can be either:
a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './dataset/squad' or './dataset/squad/squad.py'
a dataset identifier on the BOINC AI Hub (list all available datasets and ids with datasets.list_datasets()) e.g. 'squad', 'glue' or 'openai/webtext'
revision (Union[str, datasets.Version], optional) — If specified, the dataset module will be loaded from the datasets repository at this version. By default:
it is set to the local version of the lib.
it will also try to load it from the main branch if it’s not available at the local version of the lib. Specifying a version that is different from your local version of the lib might cause compatibility issues.
download_config (DownloadConfig, optional) — Specific download configuration parameters.
download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.
data_files (Union[Dict, List, str], optional) — Defining the data_files of the dataset configuration.
token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".
use_auth_token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".
Deprecated in 2.14.0
use_auth_token was deprecated in favor of token in version 2.14.0 and will be removed in 3.0.0.
**config_kwargs (additional keyword arguments) — Optional attributes for builder class which will override the attributes if supplied.
Get the meta information about a dataset, returned as a dict mapping config name to DatasetInfoDict.
Example:
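A sketch; the dataset name is illustrative:

```python
from datasets import get_dataset_infos

infos = get_dataset_infos('rotten_tomatoes')
print(list(infos))  # config names, e.g. ['default']
print(infos['default'].description)
```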
datasets.get_dataset_split_names
( path: str, config_name: typing.Optional[str] = None, data_files: typing.Union[str, typing.Sequence[str], typing.Mapping[str, typing.Union[str, typing.Sequence[str]]], NoneType] = None, download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None, download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None, revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None, token: typing.Union[bool, str, NoneType] = None, use_auth_token = 'deprecated', **config_kwargs )
Parameters
path (str) — path to the dataset processing script with the dataset builder. Can be either:
a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './dataset/squad' or './dataset/squad/squad.py'
a dataset identifier on the BOINC AI Hub (list all available datasets and ids with datasets.list_datasets()) e.g. 'squad', 'glue' or 'openai/webtext'
config_name (str, optional) — Defining the name of the dataset configuration.
data_files (str or Sequence or Mapping, optional) — Path(s) to source data file(s).
download_config (DownloadConfig, optional) — Specific download configuration parameters.
download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.
revision (Version or str, optional) — Version of the dataset script to load. As datasets have their own git repository on the Datasets Hub, the default version “main” corresponds to their “main” branch. You can specify a different version than the default “main” by using a commit SHA or a git tag of the dataset repository.
token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".
use_auth_token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".
Deprecated in 2.14.0
use_auth_token was deprecated in favor of token in version 2.14.0 and will be removed in 3.0.0.
**config_kwargs (additional keyword arguments) — Optional attributes for builder class which will override the attributes if supplied.
Get the list of available splits for a particular config and dataset.
Example:
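A sketch; the dataset name is illustrative:

```python
from datasets import get_dataset_split_names

print(get_dataset_split_names('rotten_tomatoes'))  # e.g. ['train', 'validation', 'test']
```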
datasets.inspect_dataset
( path: str, local_path: str, download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None, **download_kwargs )
Parameters
path (str) — Path to the dataset processing script with the dataset builder. Can be either:
a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './dataset/squad' or './dataset/squad/squad.py'
a dataset identifier on the BOINC AI Hub (list all available datasets and ids with datasets.list_datasets()) e.g. 'squad', 'glue' or 'openai/webtext'
local_path (str) — Path to the local folder to copy the dataset script to.
download_config (DownloadConfig, optional) — Specific download configuration parameters.
**download_kwargs (additional keyword arguments) — Optional arguments for DownloadConfig which will override the attributes of download_config if supplied.
Allow inspection/modification of a dataset script by copying it on the local drive at local_path.
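For example (a sketch; the arguments are illustrative):

```python
from datasets import inspect_dataset

inspect_dataset('squad', local_path='path/to/local_folder')
```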
datasets.list_metrics
( with_community_metrics = True, with_details = False )
Parameters
with_community_metrics (bool, optional, default True) — Include the community provided metrics.
with_details (bool, optional, default False) — Return the full details on the metrics instead of only the short name.
List all the metric scripts available on the BOINC AI Hub.
Deprecated in 2.5.0
Metrics is deprecated in 🌍 Datasets. To learn more about how to use metrics, take a look at the 🌍 Evaluate library! In addition to metrics, you can find more tools for evaluating models and datasets.
Use evaluate.list_evaluation_modules instead, from the new library 🌍 Evaluate:
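For instance (a sketch; the returned names depend on the Hub):

```python
import evaluate

print(evaluate.list_evaluation_modules()[:3])  # names of available evaluation modules
```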
Example:
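A sketch of the deprecated call:

```python
from datasets import list_metrics

metrics_list = list_metrics()
print(len(metrics_list))
print(metrics_list[:3])  # e.g. ['accuracy', 'bertscore', 'bleu']
```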
datasets.load_metric
( path: str, config_name: typing.Optional[str] = None, process_id: int = 0, num_process: int = 1, cache_dir: typing.Optional[str] = None, experiment_id: typing.Optional[str] = None, keep_in_memory: bool = False, download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None, download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None, revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None, **metric_init_kwargs )
Parameters
path (str) — path to the metric processing script with the metric builder. Can be either:
a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './metrics/rouge' or './metrics/rouge/rouge.py'
a metric identifier on the BOINC AI datasets repo (list all available metrics with datasets.list_metrics()) e.g. 'rouge' or 'bleu'
config_name (str, optional) — selecting a configuration for the metric (e.g. the GLUE metric has a configuration for each subset)
process_id (int, optional) — for distributed evaluation: id of the process
num_process (int, optional) — for distributed evaluation: total number of processes
cache_dir (Optional str) — path to store the temporary predictions and references (default to ~/.cache/huggingface/metrics/)
experiment_id (str) — A specific experiment id. This is used if several distributed evaluations share the same file system. This is useful to compute metrics in distributed setups (in particular non-additive metrics like F1).
keep_in_memory (bool) — Whether to store the temporary results in memory (defaults to False)
download_config (Optional datasets.DownloadConfig) — specific download configuration parameters.
download_mode (DownloadMode or str, default REUSE_DATASET_IF_EXISTS) — Download/generate mode.
revision (Optional Union[str, datasets.Version]) — if specified, the module will be loaded from the datasets repository at this version. By default, it is set to the local version of the lib. Specifying a version that is different from your local version of the lib might cause compatibility issues.
Load a datasets.Metric.
Deprecated in 2.5.0
Metrics is deprecated in 🌍 Datasets. To learn more about how to use metrics, take a look at the 🌍 Evaluate library! In addition to metrics, you can find more tools for evaluating models and datasets.
Use evaluate.load instead, from the new library 🌍 Evaluate:
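For instance (a sketch):

```python
import evaluate

metric = evaluate.load('accuracy')
```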
Example:
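A sketch of the deprecated call:

```python
from datasets import load_metric

metric = load_metric('accuracy')
print(metric.compute(references=[0, 1], predictions=[0, 1]))  # {'accuracy': 1.0}
```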
datasets.inspect_metric
( path: str, local_path: str, download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None, **download_kwargs )
Parameters
path (str) — path to the metric processing script with the metric builder. Can be either:
a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './metrics/rouge' or './metrics/rouge/rouge.py'
a metric identifier on the BOINC AI Hub (list all available metrics with datasets.list_metrics()) e.g. 'rouge' or 'bleu'
local_path (str) — path to the local folder to copy the metric script to.
download_config (Optional datasets.DownloadConfig) — specific download configuration parameters.
**download_kwargs (additional keyword arguments) — optional attributes for DownloadConfig() which will override the attributes in download_config if supplied.
Allow inspection/modification of a metric script by copying it on the local drive at local_path.
Deprecated in 2.5.0
Metrics is deprecated in 🌍 Datasets. To learn more about how to use metrics, take a look at the 🌍 Evaluate library! In addition to metrics, you can find more tools for evaluating models and datasets.
Use evaluate.inspect_evaluation_module instead, from the new library 🌍 Evaluate:
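For instance (a sketch with illustrative arguments):

```python
import evaluate

evaluate.inspect_evaluation_module('accuracy', local_path='path/to/local_folder')
```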
Configurations used to load data files. They are used when loading local files or a dataset repository:
local files: load_dataset("parquet", data_dir="path/to/data/dir")
dataset repository: load_dataset("allenai/c4")
You can pass arguments to load_dataset to configure data loading. For example you can specify the sep parameter to define the delimiter that is used to load the data:
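A sketch with an illustrative path:

```python
from datasets import load_dataset

# sep is forwarded to the CSV builder config
ds = load_dataset('csv', data_files='path/to/data.csv', sep='\t')
```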
datasets.packaged_modules.text.TextConfig
( name: str = 'default', version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0, data_dir: typing.Optional[str] = None, data_files: typing.Optional[datasets.data_files.DataFilesDict] = None, description: typing.Optional[str] = None, features: typing.Optional[datasets.features.features.Features] = None, encoding: str = 'utf-8', errors: dataclasses.InitVar[typing.Optional[str]] = 'deprecated', encoding_errors: typing.Optional[str] = None, chunksize: int = 10485760, keep_linebreaks: bool = False, sample_by: str = 'line' )
BuilderConfig for text files.
datasets.packaged_modules.text.Text
( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, use_auth_token = 'deprecated', repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, name = 'deprecated', **config_kwargs )
datasets.packaged_modules.csv.CsvConfig
( name: str = 'default', version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0, data_dir: typing.Optional[str] = None, data_files: typing.Optional[datasets.data_files.DataFilesDict] = None, description: typing.Optional[str] = None, sep: str = ',', delimiter: typing.Optional[str] = None, header: typing.Union[int, typing.List[int], str, NoneType] = 'infer', names: typing.Optional[typing.List[str]] = None, column_names: typing.Optional[typing.List[str]] = None, index_col: typing.Union[int, str, typing.List[int], typing.List[str], NoneType] = None, usecols: typing.Union[typing.List[int], typing.List[str], NoneType] = None, prefix: typing.Optional[str] = None, mangle_dupe_cols: bool = True, engine: typing.Union[typing.Literal['c', 'python', 'pyarrow'], NoneType] = None, converters: typing.Dict[typing.Union[int, str], typing.Callable[[typing.Any], typing.Any]] = None, true_values: typing.Optional[list] = None, false_values: typing.Optional[list] = None, skipinitialspace: bool = False, skiprows: typing.Union[int, typing.List[int], NoneType] = None, nrows: typing.Optional[int] = None, na_values: typing.Union[str, typing.List[str], NoneType] = None, keep_default_na: bool = True, na_filter: bool = True, verbose: bool = False, skip_blank_lines: bool = True, thousands: typing.Optional[str] = None, decimal: str = '.', lineterminator: typing.Optional[str] = None, quotechar: str = '"', quoting: int = 0, escapechar: typing.Optional[str] = None, comment: typing.Optional[str] = None, encoding: typing.Optional[str] = None, dialect: typing.Optional[str] = None, error_bad_lines: bool = True, warn_bad_lines: bool = True, skipfooter: int = 0, doublequote: bool = True, memory_map: bool = False, float_precision: typing.Optional[str] = None, chunksize: int = 10000, features: typing.Optional[datasets.features.features.Features] = None, encoding_errors: typing.Optional[str] = 'strict', on_bad_lines: typing.Literal['error', 'warn', 'skip'] = 'error', date_format: typing.Optional[str] = None )
BuilderConfig for CSV.
datasets.packaged_modules.csv.Csv
( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, use_auth_token = 'deprecated', repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, name = 'deprecated', **config_kwargs )
datasets.packaged_modules.json.JsonConfig
( name: str = 'default', version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0, data_dir: typing.Optional[str] = None, data_files: typing.Optional[datasets.data_files.DataFilesDict] = None, description: typing.Optional[str] = None, features: typing.Optional[datasets.features.features.Features] = None, encoding: str = 'utf-8', encoding_errors: typing.Optional[str] = None, field: typing.Optional[str] = None, use_threads: bool = True, block_size: typing.Optional[int] = None, chunksize: int = 10485760, newlines_in_values: typing.Optional[bool] = None )
BuilderConfig for JSON.
datasets.packaged_modules.json.Json
( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, use_auth_token = 'deprecated', repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, name = 'deprecated', **config_kwargs )
datasets.packaged_modules.parquet.ParquetConfig
( name: str = 'default', version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0, data_dir: typing.Optional[str] = None, data_files: typing.Optional[datasets.data_files.DataFilesDict] = None, description: typing.Optional[str] = None, batch_size: int = 10000, columns: typing.Optional[typing.List[str]] = None, features: typing.Optional[datasets.features.features.Features] = None )
BuilderConfig for Parquet.
datasets.packaged_modules.parquet.Parquet
( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, use_auth_token = 'deprecated', repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, name = 'deprecated', **config_kwargs )
datasets.packaged_modules.arrow.ArrowConfig
( name: str = 'default', version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0, data_dir: typing.Optional[str] = None, data_files: typing.Optional[datasets.data_files.DataFilesDict] = None, description: typing.Optional[str] = None, features: typing.Optional[datasets.features.features.Features] = None )
BuilderConfig for Arrow.
datasets.packaged_modules.arrow.Arrow
( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, use_auth_token = 'deprecated', repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, name = 'deprecated', **config_kwargs )
datasets.packaged_modules.sql.SqlConfig
( name: str = 'default', version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0, data_dir: typing.Optional[str] = None, data_files: typing.Optional[datasets.data_files.DataFilesDict] = None, description: typing.Optional[str] = None, sql: typing.Union[str, ForwardRef('sqlalchemy.sql.Selectable')] = None, con: typing.Union[str, ForwardRef('sqlalchemy.engine.Connection'), ForwardRef('sqlalchemy.engine.Engine'), ForwardRef('sqlite3.Connection')] = None, index_col: typing.Union[str, typing.List[str], NoneType] = None, coerce_float: bool = True, params: typing.Union[typing.List, typing.Tuple, typing.Dict, NoneType] = None, parse_dates: typing.Union[typing.List, typing.Dict, NoneType] = None, columns: typing.Optional[typing.List[str]] = None, chunksize: typing.Optional[int] = 10000, features: typing.Optional[datasets.features.features.Features] = None )
BuilderConfig for SQL.
datasets.packaged_modules.sql.Sql
( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, use_auth_token = 'deprecated', repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, name = 'deprecated', **config_kwargs )
datasets.packaged_modules.imagefolder.ImageFolderConfig
( name: str = 'default', version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0, data_dir: typing.Optional[str] = None, data_files: typing.Optional[datasets.data_files.DataFilesDict] = None, description: typing.Optional[str] = None, features: typing.Optional[datasets.features.features.Features] = None, drop_labels: bool = None, drop_metadata: bool = None )
BuilderConfig for ImageFolder.
datasets.packaged_modules.imagefolder.ImageFolder
( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, use_auth_token = 'deprecated', repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, name = 'deprecated', **config_kwargs )
datasets.packaged_modules.audiofolder.AudioFolderConfig
( name: str = 'default', version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0, data_dir: typing.Optional[str] = None, data_files: typing.Optional[datasets.data_files.DataFilesDict] = None, description: typing.Optional[str] = None, features: typing.Optional[datasets.features.features.Features] = None, drop_labels: bool = None, drop_metadata: bool = None )
BuilderConfig for AudioFolder.
datasets.packaged_modules.audiofolder.AudioFolder
( cache_dir: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, hash: typing.Optional[str] = None, base_path: typing.Optional[str] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, features: typing.Optional[datasets.features.features.Features] = None, token: typing.Union[bool, str, NoneType] = None, use_auth_token = 'deprecated', repo_id: typing.Optional[str] = None, data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None, data_dir: typing.Optional[str] = None, storage_options: typing.Optional[dict] = None, writer_batch_size: typing.Optional[int] = None, name = 'deprecated', **config_kwargs )