Main classes
DatasetInfo
class datasets.DatasetInfo
( description: str = <factory>, citation: str = <factory>, homepage: str = <factory>, license: str = <factory>, features: typing.Optional[datasets.features.features.Features] = None, post_processed: typing.Optional[datasets.info.PostProcessedInfo] = None, supervised_keys: typing.Optional[datasets.info.SupervisedKeysData] = None, task_templates: typing.Optional[typing.List[datasets.tasks.base.TaskTemplate]] = None, builder_name: typing.Optional[str] = None, dataset_name: typing.Optional[str] = None, config_name: typing.Optional[str] = None, version: typing.Union[str, datasets.utils.version.Version, NoneType] = None, splits: typing.Optional[dict] = None, download_checksums: typing.Optional[dict] = None, download_size: typing.Optional[int] = None, post_processing_size: typing.Optional[int] = None, dataset_size: typing.Optional[int] = None, size_in_bytes: typing.Optional[int] = None )
Parameters
description (str): A description of the dataset.
citation (str): A BibTeX citation of the dataset.
homepage (str): A URL to the official homepage for the dataset.
license (str): The dataset's license. It can be the name of the license or a paragraph containing the terms of the license.
features (Features, optional): The features used to specify the dataset's column types.
post_processed (PostProcessedInfo, optional): Information regarding the resources of a possible post-processing of a dataset. For example, it can contain the information of an index.
supervised_keys (SupervisedKeysData, optional): Specifies the input feature and the label for supervised learning if applicable for the dataset (legacy from TFDS).
builder_name (str, optional): The name of the GeneratorBasedBuilder subclass used to create the dataset. Usually matched to the corresponding script name. It is also the snake_case version of the dataset builder class name.
config_name (str, optional): The name of the configuration derived from BuilderConfig.
version (str or Version, optional): The version of the dataset.
splits (dict, optional): The mapping between split name and metadata.
download_checksums (dict, optional): The mapping between the URL to download the dataset's checksums and corresponding metadata.
download_size (int, optional): The size of the files to download to generate the dataset, in bytes.
post_processing_size (int, optional): Size of the dataset in bytes after post-processing, if any.
dataset_size (int, optional): The combined size in bytes of the Arrow tables for all splits.
size_in_bytes (int, optional): The combined size in bytes of all files associated with the dataset (downloaded files + Arrow files).
task_templates (List[TaskTemplate], optional): The task templates to prepare the dataset for during training and evaluation. Each template casts the dataset's Features to standardized column names and types as detailed in datasets.tasks.
**config_kwargs (additional keyword arguments): Keyword arguments to be passed to the BuilderConfig and used in the DatasetBuilder.
Information about a dataset.
DatasetInfo documents a dataset, including its name, version, and features. See the constructor arguments and properties for a full list.
Not all fields are known on construction and may be updated later.
from_directory
( dataset_info_dir: str, fs = 'deprecated', storage_options: typing.Optional[dict] = None )
Parameters
dataset_info_dir (str): The directory containing the metadata file. This should be the root directory of a specific dataset version.
fs (fsspec.spec.AbstractFileSystem, optional): Instance of the remote filesystem used to download the files from. Deprecated in 2.9.0: fs was deprecated in version 2.9.0 and will be removed in 3.0.0. Please use storage_options instead, e.g. storage_options=fs.storage_options.
storage_options (dict, optional): Key/value pairs to be passed on to the file-system backend, if any. Added in 2.9.0.
Create DatasetInfo from the JSON file in dataset_info_dir.
This function updates all the dynamically generated fields (num_examples, hash, time of creation, ...) of the DatasetInfo.
This will overwrite all previous metadata.
Example:
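A minimal sketch; the directory name is a placeholder and is assumed to contain a dataset_info.json written earlier with write_to_directory():
>>> from datasets import DatasetInfo
>>> ds_info = DatasetInfo.from_directory("./my_dataset_info")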
write_to_directory
( dataset_info_dir, pretty_print = False, fs = 'deprecated', storage_options: typing.Optional[dict] = None )
Parameters
dataset_info_dir (str): Destination directory.
pretty_print (bool, defaults to False): If True, the JSON will be pretty-printed with an indent level of 4.
fs (fsspec.spec.AbstractFileSystem, optional): Instance of the remote filesystem used to download the files from. Deprecated in 2.9.0: fs was deprecated in version 2.9.0 and will be removed in 3.0.0. Please use storage_options instead, e.g. storage_options=fs.storage_options.
storage_options (dict, optional): Key/value pairs to be passed on to the file-system backend, if any. Added in 2.9.0.
Write DatasetInfo and license (if present) as JSON files to dataset_info_dir.
Example:
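A minimal sketch with a toy in-memory dataset (the directory name is a placeholder):
>>> from datasets import Dataset, DatasetInfo
>>> ds = Dataset.from_dict({"text": ["hello", "world"]})
>>> ds.info.write_to_directory("./my_dataset_info", pretty_print=True)
>>> reloaded = DatasetInfo.from_directory("./my_dataset_info")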
Dataset
The base class Dataset implements a Dataset backed by an Apache Arrow table.
class datasets.Dataset
( arrow_table: Table, info: typing.Optional[datasets.info.DatasetInfo] = None, split: typing.Optional[datasets.splits.NamedSplit] = None, indices_table: typing.Optional[datasets.table.Table] = None, fingerprint: typing.Optional[str] = None )
A Dataset backed by an Arrow table.
add_column
( name: str, column: typing.Union[list, <built-in function array>], new_fingerprint: str )
Parameters
name (str): Column name.
column (list or np.array): Column data to be added.
Add column to Dataset.
Added in 1.7
Example:
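A minimal sketch with toy data:
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"text": ["foo", "bar", "baz"]})
>>> ds = ds.add_column("label", [0, 1, 0])
>>> ds.column_names
['text', 'label']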
add_item
( item: dict, new_fingerprint: str )
Parameters
item (dict): Item data to be added.
Add item to Dataset.
Added in 1.7
from_file
( filename: str, info: typing.Optional[datasets.info.DatasetInfo] = None, split: typing.Optional[datasets.splits.NamedSplit] = None, indices_filename: typing.Optional[str] = None, in_memory: bool = False )
Parameters
filename (str): File name of the dataset.
info (DatasetInfo, optional): Dataset information, like description, citation, etc.
split (NamedSplit, optional): Name of the dataset split.
indices_filename (str, optional): File name of the indices.
in_memory (bool, defaults to False): Whether to copy the data in-memory.
Instantiate a Dataset backed by an Arrow table at filename.
from_buffer
( buffer: Buffer, info: typing.Optional[datasets.info.DatasetInfo] = None, split: typing.Optional[datasets.splits.NamedSplit] = None, indices_buffer: typing.Optional[pyarrow.lib.Buffer] = None )
Parameters
buffer (pyarrow.Buffer): Arrow buffer.
info (DatasetInfo, optional): Dataset information, like description, citation, etc.
split (NamedSplit, optional): Name of the dataset split.
indices_buffer (pyarrow.Buffer, optional): Indices Arrow buffer.
Instantiate a Dataset backed by an Arrow buffer.
from_pandas
( df: DataFrame, features: typing.Optional[datasets.features.features.Features] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, split: typing.Optional[datasets.splits.NamedSplit] = None, preserve_index: typing.Optional[bool] = None )
Parameters
df (pandas.DataFrame): Dataframe that contains the dataset.
features (Features, optional): Dataset features.
info (DatasetInfo, optional): Dataset information, like description, citation, etc.
split (NamedSplit, optional): Name of the dataset split.
preserve_index (bool, optional): Whether to store the index as an additional column in the resulting Dataset. The default of None will store the index as a column, except for a RangeIndex which is stored as metadata only. Use preserve_index=True to force it to be stored as a column.
Convert pandas.DataFrame to a pyarrow.Table to create a Dataset.
The column types in the resulting Arrow Table are inferred from the dtypes of the pandas.Series in the DataFrame. In the case of non-object Series, the NumPy dtype is translated to its Arrow equivalent. In the case of object, we need to guess the datatype by looking at the Python objects in this Series.
Be aware that Series of the object dtype don't carry enough information to always lead to a meaningful Arrow type. In the case that we cannot infer a type, e.g. because the DataFrame is of length 0 or the Series only contains None/nan objects, the type is set to null. This behavior can be avoided by constructing explicit features and passing it to this function.
Example:
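A minimal sketch with a toy DataFrame:
>>> import pandas as pd
>>> from datasets import Dataset
>>> df = pd.DataFrame({"text": ["foo", "bar"], "label": [0, 1]})
>>> ds = Dataset.from_pandas(df)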
from_dict
( mapping: dict, features: typing.Optional[datasets.features.features.Features] = None, info: typing.Optional[datasets.info.DatasetInfo] = None, split: typing.Optional[datasets.splits.NamedSplit] = None )
Parameters
mapping (Mapping): Mapping of strings to Arrays or Python lists.
features (Features, optional): Dataset features.
info (DatasetInfo, optional): Dataset information, like description, citation, etc.
split (NamedSplit, optional): Name of the dataset split.
Convert dict to a pyarrow.Table to create a Dataset.
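For instance, a quick illustrative sketch with toy data:
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"text": ["foo", "bar"], "label": [0, 1]})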
from_generator
( generator: typing.Callable, features: typing.Optional[datasets.features.features.Features] = None, cache_dir: str = None, keep_in_memory: bool = False, gen_kwargs: typing.Optional[dict] = None, num_proc: typing.Optional[int] = None, **kwargs )
Parameters
generator (Callable): A generator function that yields examples.
features (Features, optional): Dataset features.
cache_dir (str, optional, defaults to "~/.cache/boincai/datasets"): Directory to cache data.
keep_in_memory (bool, defaults to False): Whether to copy the data in-memory.
gen_kwargs (dict, optional): Keyword arguments to be passed to the generator callable. You can define a sharded dataset by passing the list of shards in gen_kwargs.
num_proc (int, optional, defaults to None): Number of processes when downloading and generating the dataset locally. This is helpful if the dataset is made of multiple files. Multiprocessing is disabled by default. Added in 2.7.0.
**kwargs (additional keyword arguments): Keyword arguments to be passed to GeneratorConfig.
Create a Dataset from a generator.
Example:
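A sketch; the generator functions and shard names below are made up for illustration:
>>> from datasets import Dataset
>>> def gen():
...     for i in range(3):
...         yield {"id": i, "text": f"example {i}"}
>>> ds = Dataset.from_generator(gen)
>>> # sharded generation: pass a list in gen_kwargs and optionally use num_proc
>>> def gen_from_shards(shards):
...     for shard in shards:
...         yield {"shard": shard}
>>> ds = Dataset.from_generator(gen_from_shards, gen_kwargs={"shards": [f"shard_{i}" for i in range(4)]}, num_proc=2)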
data
( )
The Apache Arrow table backing the dataset.
cache_files
( )
The cache files containing the Apache Arrow table backing the dataset.
num_columns
( )
Number of columns in the dataset.
num_rows
( )
Number of rows in the dataset (same as Dataset.__len__()).
column_names
( )
Names of the columns in the dataset.
shape
( )
Shape of the dataset (number of rows, number of columns).
unique
( column: str ) → list
Parameters
column (str): Column name (list all the column names with column_names).
Returns
list
List of unique elements in the given column.
Return a list of the unique elements in a column.
This is implemented in the low-level backend and is therefore very fast.
Example:
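A quick sketch with toy data:
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"label": [0, 1, 0, 1, 1]})
>>> ds.unique("label")
[0, 1]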
flatten
( new_fingerprint: typing.Optional[str] = None, max_depth = 16 ) → Dataset
Parameters
new_fingerprint (str, optional): The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
Returns
A copy of the dataset with flattened columns.
Flatten the table. Each column with a struct type is flattened into one column per struct field. Other columns are left unchanged.
Example:
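A sketch with a toy struct column (the column and field names are made up):
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"answers": [{"text": "yes", "score": 1.0}, {"text": "no", "score": 0.5}]})
>>> ds.flatten().column_names
['answers.text', 'answers.score']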
cast
( features: Features, batch_size: typing.Optional[int] = 1000, keep_in_memory: bool = False, load_from_cache_file: typing.Optional[bool] = None, cache_file_name: typing.Optional[str] = None, writer_batch_size: typing.Optional[int] = 1000, num_proc: typing.Optional[int] = None ) → Dataset
Parameters
features (Features): New features to cast the dataset to. The name of the fields in the features must match the current column names. The type of the data must also be convertible from one type to the other. For non-trivial conversion, e.g. str <-> ClassLabel, you should use map() to update the Dataset.
batch_size (int, defaults to 1000): Number of examples per batch provided to cast. If batch_size <= 0 or batch_size == None, then provide the full dataset as a single batch to cast.
keep_in_memory (bool, defaults to False): Whether to copy the data in-memory.
load_from_cache_file (bool, defaults to True if caching is enabled): If a cache file storing the current computation from function can be identified, use it instead of recomputing.
cache_file_name (str, optional, defaults to None): Provide the name of a path for the cache file. It is used to store the results of the computation instead of the automatically generated cache file name.
writer_batch_size (int, defaults to 1000): Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing and processing speed. A higher value makes the processing do fewer lookups, a lower value consumes less temporary memory while running map().
num_proc (int, optional, defaults to None): Number of processes for multiprocessing. By default it doesn't use multiprocessing.
Returns
A copy of the dataset with casted features.
Cast the dataset to a new set of features.
Example:
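A sketch with toy data; the feature names and values are illustrative:
>>> from datasets import Dataset, ClassLabel, Value
>>> ds = Dataset.from_dict({"text": ["good", "bad"], "label": [1, 0]})
>>> new_features = ds.features.copy()
>>> new_features["label"] = ClassLabel(names=["neg", "pos"])
>>> new_features["text"] = Value("large_string")
>>> ds = ds.cast(new_features)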
cast_column
( column: str, feature: typing.Union[dict, list, tuple, datasets.features.features.Value, datasets.features.features.ClassLabel, datasets.features.translation.Translation, datasets.features.translation.TranslationVariableLanguages, datasets.features.features.Sequence, datasets.features.features.Array2D, datasets.features.features.Array3D, datasets.features.features.Array4D, datasets.features.features.Array5D, datasets.features.audio.Audio, datasets.features.image.Image], new_fingerprint: typing.Optional[str] = None )
Parameters
column (str): Column name.
feature (FeatureType): Target feature.
new_fingerprint (str, optional): The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
Cast column to feature for decoding.
Example:
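A sketch, assuming ds has an "audio" column containing audio file paths:
>>> from datasets import Audio
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))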
remove_columns
( column_names: typing.Union[str, typing.List[str]], new_fingerprint: typing.Optional[str] = None ) → Dataset
Parameters
column_names (Union[str, List[str]]): Name of the column(s) to remove.
new_fingerprint (str, optional): The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
Returns
A copy of the dataset object without the columns to remove.
Remove one or several column(s) in the dataset and the features associated to them.
You can also remove a column using map() with remove_columns, but the present method is in-place (doesn't copy the data to a new dataset) and is thus faster.
Example:
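A quick sketch with toy data:
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})
>>> ds = ds.remove_columns("label")
>>> ds.column_names
['text']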
rename_column
( original_column_name: str, new_column_name: str, new_fingerprint: typing.Optional[str] = None ) → Dataset
Parameters
original_column_name (str): Name of the column to rename.
new_column_name (str): New name for the column.
new_fingerprint (str, optional): The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
Returns
A copy of the dataset with a renamed column.
Rename a column in the dataset, and move the features associated to the original column under the new column name.
Example:
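A quick sketch, assuming ds has a "label" column:
>>> ds = ds.rename_column("label", "labels")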
rename_columns
( column_mapping: typing.Dict[str, str], new_fingerprint: typing.Optional[str] = None ) → Dataset
Parameters
column_mapping (Dict[str, str]): A mapping of columns to rename to their new names.
new_fingerprint (str, optional): The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
Returns
A copy of the dataset with renamed columns
Rename several columns in the dataset, and move the features associated to the original columns under the new column names.
select_columns
( column_names: typing.Union[str, typing.List[str]], new_fingerprint: typing.Optional[str] = None ) → Dataset
Parameters
column_names (Union[str, List[str]]): Name of the column(s) to keep.
new_fingerprint (str, optional): The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
Returns
A copy of the dataset object which only consists of selected columns.
Select one or several column(s) in the dataset and the features associated to them.
class_encode_column
( column: str, include_nulls: bool = False )
Parameters
column (str): The name of the column to cast (list all the column names with column_names).
include_nulls (bool, defaults to False): Whether to include null values in the class labels. If True, the null values will be encoded as the "None" class label. Added in 1.14.2.
Casts the given column as ClassLabel and updates the table.
Example:
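A sketch with toy string labels:
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"label": ["pos", "neg", "pos"]})
>>> ds = ds.class_encode_column("label")   # "label" is now a ClassLabel feature with integer values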
__len__
( )
Number of rows in the dataset.
__iter__
( )
Iterate through the examples.
If a formatting is set with Dataset.set_format() rows will be returned with the selected format.
iter
( batch_size: int, drop_last_batch: bool = False )
Parameters
batch_size (int): Size of each batch to yield.
drop_last_batch (bool, defaults to False): Whether a last batch smaller than batch_size should be dropped.
Iterate through the batches of size batch_size.
If a formatting is set with Dataset.set_format(), rows will be returned with the selected format.
formatted_as
( type: typing.Optional[str] = None, columns: typing.Optional[typing.List] = None, output_all_columns: bool = False, **format_kwargs )
Parameters
type (str, optional): Output type selected in [None, 'numpy', 'torch', 'tensorflow', 'pandas', 'arrow', 'jax']. None means __getitem__ returns python objects (default).
columns (List[str], optional): Columns to format in the output. None means __getitem__ returns all columns (default).
output_all_columns (bool, defaults to False): Keep un-formatted columns as well in the output (as python objects).
**format_kwargs (additional keyword arguments): Keyword arguments passed to the convert function like np.array, torch.tensor or tensorflow.ragged.constant.
To be used in a with statement. Set __getitem__ return format (type and columns).
set_format
( type: typing.Optional[str] = None, columns: typing.Optional[typing.List] = None, output_all_columns: bool = False, **format_kwargs )
Parameters
type (str, optional): Either output type selected in [None, 'numpy', 'torch', 'tensorflow', 'pandas', 'arrow', 'jax']. None means __getitem__ returns python objects (default).
columns (List[str], optional): Columns to format in the output. None means __getitem__ returns all columns (default).
output_all_columns (bool, defaults to False): Keep un-formatted columns as well in the output (as python objects).
**format_kwargs (additional keyword arguments): Keyword arguments passed to the convert function like np.array, torch.tensor or tensorflow.ragged.constant.
Set __getitem__ return format (type and columns). The data formatting is applied on-the-fly. The format type (for example 'numpy') is used to format batches when using __getitem__. It's also possible to use custom transforms for formatting using set_transform().
It is possible to call map() after calling set_format. Since map() may add new columns, the list of formatted columns gets updated. In this case, if you apply map() on a dataset to add a new column, this new column is formatted as well: the formatted columns become all columns except those that were previously left unformatted.
Example:
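A sketch with toy data:
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"a": [1, 2, 3], "b": [0.5, 1.5, 2.5]})
>>> ds.set_format(type="numpy", columns=["a", "b"])
>>> ds[0]["a"]   # returned as a NumPy object instead of a plain Python int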
set_transform
( transform: typing.Optional[typing.Callable], columns: typing.Optional[typing.List] = None, output_all_columns: bool = False )
Parameters
transform (Callable, optional): User-defined formatting transform, replaces the format defined by set_format(). A formatting function is a callable that takes a batch (as a dict) as input and returns a batch. This function is applied right before returning the objects in __getitem__.
columns (List[str], optional): Columns to format in the output. If specified, then the input batch of the transform only contains those columns.
output_all_columns (bool, defaults to False): Keep un-formatted columns as well in the output (as python objects). If set to True, then the other un-formatted columns are kept with the output of the transform.
Set __getitem__ return format using this transform. The transform is applied on-the-fly on batches when __getitem__ is called. As set_format(), this can be reset using reset_format().
Example:
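A sketch with a simple user-defined transform (the function here is made up):
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"text": ["hello", "world"]})
>>> def upper_case(batch):
...     return {"text_upper": [t.upper() for t in batch["text"]]}
>>> ds.set_transform(upper_case)
>>> ds[0]   # the transform is applied on-the-fly when rows are accessed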
reset_format
( )
Reset __getitem__ return format to python objects and all columns.
Same as self.set_format()
with_format
( type: typing.Optional[str] = None, columns: typing.Optional[typing.List] = None, output_all_columns: bool = False, **format_kwargs )
Parameters
type (str, optional): Either output type selected in [None, 'numpy', 'torch', 'tensorflow', 'pandas', 'arrow', 'jax']. None means __getitem__ returns python objects (default).
columns (List[str], optional): Columns to format in the output. None means __getitem__ returns all columns (default).
output_all_columns (bool, defaults to False): Keep un-formatted columns as well in the output (as python objects).
**format_kwargs (additional keyword arguments): Keyword arguments passed to the convert function like np.array, torch.tensor or tensorflow.ragged.constant.
Set __getitem__ return format (type and columns). The data formatting is applied on-the-fly. The format type (for example 'numpy') is used to format batches when using __getitem__.
It's also possible to use custom transforms for formatting using with_transform().
Contrary to set_format(), with_format returns a new Dataset object.
Example:
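A sketch, assuming ds is an existing Dataset (the formats shown require the corresponding framework to be installed):
>>> ds_torch = ds.with_format("torch")        # returns a new Dataset object
>>> ds_numpy = ds.with_format("numpy", columns=["label"], output_all_columns=True)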
with_transform
( transform: typing.Optional[typing.Callable], columns: typing.Optional[typing.List] = None, output_all_columns: bool = False )
Parameters
transform (Callable, optional): User-defined formatting transform, replaces the format defined by set_format(). A formatting function is a callable that takes a batch (as a dict) as input and returns a batch. This function is applied right before returning the objects in __getitem__.
columns (List[str], optional): Columns to format in the output. If specified, then the input batch of the transform only contains those columns.
output_all_columns (bool, defaults to False): Keep un-formatted columns as well in the output (as python objects). If set to True, then the other un-formatted columns are kept with the output of the transform.
Set __getitem__ return format using this transform. The transform is applied on-the-fly on batches when __getitem__ is called.
As set_format(), this can be reset using reset_format().
Contrary to set_transform(), with_transform returns a new Dataset object.
__getitem__
( key )
Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).
cleanup_cache_files
( ) β int
Returns
int
Number of removed files.
Clean up all cache files in the dataset cache directory, except the currently used cache file if there is one.
Be careful when running this command that no other process is currently using other cache files.
map
( function: typing.Optional[typing.Callable] = None, with_indices: bool = False, with_rank: bool = False, input_columns: typing.Union[str, typing.List[str], NoneType] = None, batched: bool = False, batch_size: typing.Optional[int] = 1000, drop_last_batch: bool = False, remove_columns: typing.Union[str, typing.List[str], NoneType] = None, keep_in_memory: bool = False, load_from_cache_file: typing.Optional[bool] = None, cache_file_name: typing.Optional[str] = None, writer_batch_size: typing.Optional[int] = 1000, features: typing.Optional[datasets.features.features.Features] = None, disable_nullable: bool = False, fn_kwargs: typing.Optional[dict] = None, num_proc: typing.Optional[int] = None, suffix_template: str = '_{rank:05d}_of_{num_proc:05d}', new_fingerprint: typing.Optional[str] = None, desc: typing.Optional[str] = None )
Parameters
function (Callable): Function with one of the following signatures:
  function(example: Dict[str, Any]) -> Dict[str, Any] if batched=False and with_indices=False and with_rank=False
  function(example: Dict[str, Any], *extra_args) -> Dict[str, Any] if batched=False and with_indices=True and/or with_rank=True (one extra arg for each)
  function(batch: Dict[str, List]) -> Dict[str, List] if batched=True and with_indices=False and with_rank=False
  function(batch: Dict[str, List], *extra_args) -> Dict[str, List] if batched=True and with_indices=True and/or with_rank=True (one extra arg for each)
  For advanced usage, the function can also return a pyarrow.Table. Moreover, if your function returns nothing (None), then map will run your function and return the dataset unchanged. If no function is provided, defaults to the identity function: lambda x: x.
with_indices (bool, defaults to False): Provide example indices to function. Note that in this case the signature of function should be def function(example, idx[, rank]): ....
with_rank (bool, defaults to False): Provide process rank to function. Note that in this case the signature of function should be def function(example[, idx], rank): ....
input_columns (Optional[Union[str, List[str]]], defaults to None): The columns to be passed into function as positional arguments. If None, a dict mapping to all formatted columns is passed as one argument.
batched (bool, defaults to False): Provide batch of examples to function.
batch_size (int, optional, defaults to 1000): Number of examples per batch provided to function if batched=True. If batch_size <= 0 or batch_size == None, provide the full dataset as a single batch to function.
drop_last_batch (bool, defaults to False): Whether a last batch smaller than batch_size should be dropped instead of being processed by the function.
remove_columns (Optional[Union[str, List[str]]], defaults to None): Remove a selection of columns while doing the mapping. Columns will be removed before updating the examples with the output of function, i.e. if function is adding columns with names in remove_columns, these columns will be kept.
keep_in_memory (bool, defaults to False): Keep the dataset in memory instead of writing it to a cache file.
load_from_cache_file (Optional[bool], defaults to True if caching is enabled): If a cache file storing the current computation from function can be identified, use it instead of recomputing.
cache_file_name (str, optional, defaults to None): Provide the name of a path for the cache file. It is used to store the results of the computation instead of the automatically generated cache file name.
writer_batch_size (int, defaults to 1000): Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing and processing speed. A higher value makes the processing do fewer lookups, a lower value consumes less temporary memory while running map.
features (Optional[datasets.Features], defaults to None): Use a specific Features to store the cache file instead of the automatically generated one.
disable_nullable (bool, defaults to False): Disallow null values in the table.
fn_kwargs (Dict, optional, defaults to None): Keyword arguments to be passed to function.
num_proc (int, optional, defaults to None): Max number of processes when generating the cache. Already cached shards are loaded sequentially.
suffix_template (str): If cache_file_name is specified, then this suffix will be added at the end of the base name of each shard's cache file. Defaults to "_{rank:05d}_of_{num_proc:05d}". For example, if cache_file_name is "processed.arrow", then for rank=1 and num_proc=4, the resulting file would be "processed_00001_of_00004.arrow" for the default suffix.
new_fingerprint (str, optional, defaults to None): The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
desc (str, optional, defaults to None): Meaningful description to be displayed alongside the progress bar while mapping examples.
Apply a function to all the examples in the table (individually or in batches) and update the table. If your function returns a column that already exists, then it overwrites it.
You can specify whether the function should be batched or not with the batched parameter:
If batched is False, then the function takes 1 example in and should return 1 example. An example is a dictionary, e.g. {"text": "Hello there !"}.
If batched is True and batch_size is 1, then the function takes a batch of 1 example as input and can return a batch with 1 or more examples. A batch is a dictionary, e.g. a batch of 1 example is {"text": ["Hello there !"]}.
If batched is True and batch_size is n > 1, then the function takes a batch of n examples as input and can return a batch with n examples, or with an arbitrary number of examples. Note that the last batch may have less than n examples. A batch is a dictionary, e.g. a batch of n examples is {"text": ["Hello there !"] * n}.
Example:
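A sketch with toy data showing the non-batched and batched forms:
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"text": ["hello there", "general kenobi"]})
>>> ds = ds.map(lambda example: {"length": len(example["text"])})                               # one example at a time
>>> ds = ds.map(lambda batch: {"text": [t.upper() for t in batch["text"]]}, batched=True)       # batches of examples
>>> ds.column_names
['text', 'length']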
filter
( function: typing.Optional[typing.Callable] = None, with_indices = False, input_columns: typing.Union[str, typing.List[str], NoneType] = None, batched: bool = False, batch_size: typing.Optional[int] = 1000, keep_in_memory: bool = False, load_from_cache_file: typing.Optional[bool] = None, cache_file_name: typing.Optional[str] = None, writer_batch_size: typing.Optional[int] = 1000, fn_kwargs: typing.Optional[dict] = None, num_proc: typing.Optional[int] = None, suffix_template: str = '_{rank:05d}_of_{num_proc:05d}', new_fingerprint: typing.Optional[str] = None, desc: typing.Optional[str] = None )
Parameters
function (Callable): Callable with one of the following signatures:
  function(example: Dict[str, Any]) -> bool if with_indices=False, batched=False
  function(example: Dict[str, Any], indices: int) -> bool if with_indices=True, batched=False
  function(example: Dict[str, List]) -> List[bool] if with_indices=False, batched=True
  function(example: Dict[str, List], indices: List[int]) -> List[bool] if with_indices=True, batched=True
  If no function is provided, defaults to an always-True function: lambda x: True.
with_indices (bool, defaults to False): Provide example indices to function. Note that in this case the signature of function should be def function(example, idx): ....
input_columns (str or List[str], optional): The columns to be passed into function as positional arguments. If None, a dict mapping to all formatted columns is passed as one argument.
batched (bool, defaults to False): Provide batch of examples to function.
batch_size (int, optional, defaults to 1000): Number of examples per batch provided to function if batched=True. If batched=False, one example per batch is passed to function. If batch_size <= 0 or batch_size == None, provide the full dataset as a single batch to function.
keep_in_memory (bool, defaults to False): Keep the dataset in memory instead of writing it to a cache file.
load_from_cache_file (Optional[bool], defaults to True if caching is enabled): If a cache file storing the current computation from function can be identified, use it instead of recomputing.
cache_file_name (str, optional): Provide the name of a path for the cache file. It is used to store the results of the computation instead of the automatically generated cache file name.
writer_batch_size (int, defaults to 1000): Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing and processing speed. A higher value makes the processing do fewer lookups, a lower value consumes less temporary memory while running map.
fn_kwargs (dict, optional): Keyword arguments to be passed to function.
num_proc (int, optional): Number of processes for multiprocessing. By default it doesn't use multiprocessing.
suffix_template (str): If cache_file_name is specified, then this suffix will be added at the end of the base name of each shard's cache file. For example, if cache_file_name is "processed.arrow", then for rank = 1 and num_proc = 4, the resulting file would be "processed_00001_of_00004.arrow" for the default suffix (default _{rank:05d}_of_{num_proc:05d}).
new_fingerprint (str, optional): The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
desc (str, optional, defaults to None): Meaningful description to be displayed alongside the progress bar while filtering examples.
Apply a filter function to all the elements in the table in batches and update the table so that the dataset only includes examples according to the filter function.
Example:
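A sketch with toy data:
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"text": ["short", "a much longer sentence"]})
>>> ds = ds.filter(lambda example: len(example["text"]) > 10)
>>> ds.num_rows
1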
select
( indices: typing.Iterable, keep_in_memory: bool = False, indices_cache_file_name: typing.Optional[str] = None, writer_batch_size: typing.Optional[int] = 1000, new_fingerprint: typing.Optional[str] = None )
Parameters
indices (range, list, iterable, ndarray or Series): Range, list or 1D-array of integer indices for indexing. If the indices correspond to a contiguous range, the Arrow table is simply sliced. However, passing a list of indices that are not contiguous creates an indices mapping, which is much less efficient, but still faster than recreating an Arrow table made of the requested rows.
keep_in_memory (bool, defaults to False): Keep the indices mapping in memory instead of writing it to a cache file.
indices_cache_file_name (str, optional, defaults to None): Provide the name of a path for the cache file. It is used to store the indices mapping instead of the automatically generated cache file name.
writer_batch_size (int, defaults to 1000): Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing and processing speed. A higher value makes the processing do fewer lookups, a lower value consumes less temporary memory while running map.
new_fingerprint (str, optional, defaults to None): The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
Create a new dataset with rows selected following the list/array of indices.
Example:
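A sketch with toy data:
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"x": list(range(10))})
>>> ds.select([0, 2, 4])["x"]
[0, 2, 4]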
sort
( column_names: typing.Union[str, typing.Sequence[str]], reverse: typing.Union[bool, typing.Sequence[bool]] = False, kind = 'deprecated', null_placement: str = 'at_end', keep_in_memory: bool = False, load_from_cache_file: typing.Optional[bool] = None, indices_cache_file_name: typing.Optional[str] = None, writer_batch_size: typing.Optional[int] = 1000, new_fingerprint: typing.Optional[str] = None )
Parameters
column_names (Union[str, Sequence[str]]): Column name(s) to sort by.
reverse (Union[bool, Sequence[bool]], defaults to False): If True, sort by descending order rather than ascending. If a single bool is provided, the value is applied to the sorting of all column names. Otherwise a list of bools with the same length and order as column_names must be provided.
kind (str, optional): Pandas algorithm for sorting selected in {quicksort, mergesort, heapsort, stable}. The default is quicksort. Note that both stable and mergesort use timsort under the covers and, in general, the actual implementation will vary with data type. The mergesort option is retained for backwards compatibility. Deprecated in 2.10.0: kind was deprecated in version 2.10.0 and will be removed in 3.0.0.
null_placement (str, defaults to at_end): Put None values at the beginning if at_start or first, or at the end if at_end or last. Added in 1.14.2.
keep_in_memory (bool, defaults to False): Keep the sorted indices in memory instead of writing them to a cache file.
load_from_cache_file (Optional[bool], defaults to True if caching is enabled): If a cache file storing the sorted indices can be identified, use it instead of recomputing.
indices_cache_file_name (str, optional, defaults to None): Provide the name of a path for the cache file. It is used to store the sorted indices instead of the automatically generated cache file name.
writer_batch_size (int, defaults to 1000): Number of rows per write operation for the cache file writer. A higher value gives smaller cache files, a lower value consumes less temporary memory.
new_fingerprint (str, optional, defaults to None): The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
Create a new dataset sorted according to a single or multiple columns.
Example:
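A sketch with toy data:
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"score": [3, 1, 2], "text": ["c", "a", "b"]})
>>> ds.sort("score")["score"]
[1, 2, 3]
>>> ds.sort(["score", "text"], reverse=[True, False])["score"]    # multi-column sort
[3, 2, 1]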
shuffle
( seed: typing.Optional[int] = None, generator: typing.Optional[numpy.random._generator.Generator] = None, keep_in_memory: bool = False, load_from_cache_file: typing.Optional[bool] = None, indices_cache_file_name: typing.Optional[str] = None, writer_batch_size: typing.Optional[int] = 1000, new_fingerprint: typing.Optional[str] = None )
Parameters
seed (int, optional): A seed to initialize the default BitGenerator if generator=None. If None, then fresh, unpredictable entropy will be pulled from the OS. If an int or array_like[ints] is passed, then it will be passed to SeedSequence to derive the initial BitGenerator state.
generator (numpy.random.Generator, optional): Numpy random Generator to use to compute the permutation of the dataset rows. If generator=None (default), uses np.random.default_rng (the default BitGenerator (PCG64) of NumPy).
keep_in_memory (bool, defaults to False): Keep the shuffled indices in memory instead of writing them to a cache file.
load_from_cache_file (Optional[bool], defaults to True if caching is enabled): If a cache file storing the shuffled indices can be identified, use it instead of recomputing.
indices_cache_file_name (str, optional): Provide the name of a path for the cache file. It is used to store the shuffled indices instead of the automatically generated cache file name.
writer_batch_size (int, defaults to 1000): Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing and processing speed. A higher value makes the processing do fewer lookups, a lower value consumes less temporary memory while running map.
new_fingerprint (str, optional, defaults to None): The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
Create a new Dataset where the rows are shuffled.
Currently shuffling uses numpy random generators. You can either supply a NumPy BitGenerator to use, or a seed to initiate NumPy's default random generator (PCG64).
Shuffling takes the list of indices [0:len(my_dataset)] and shuffles it to create an indices mapping. However, as soon as your Dataset has an indices mapping, the speed can become 10x slower. This is because there is an extra step to get the row index to read using the indices mapping, and most importantly, you aren't reading contiguous chunks of data anymore. To restore the speed, you'd need to rewrite the entire dataset on your disk again using Dataset.flatten_indices(), which removes the indices mapping.
This may take a lot of time depending on the size of your dataset, though.
In this case, we recommend switching to an IterableDataset and leveraging its fast approximate shuffling method IterableDataset.shuffle().
It only shuffles the shards order and adds a shuffle buffer to your dataset, which keeps the speed of your dataset optimal:
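A sketch of the idea (the shard count and buffer size below are arbitrary):
>>> iterable_ds = ds.to_iterable_dataset(num_shards=128)
>>> iterable_ds = iterable_ds.shuffle(seed=42, buffer_size=1000)   # approximate shuffle: shard order + a buffer of examples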
Example:
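A quick sketch with toy data:
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"x": list(range(5))})
>>> shuffled = ds.shuffle(seed=42)
>>> sorted(shuffled["x"])
[0, 1, 2, 3, 4]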
train_test_split
( test_size: typing.Union[float, int, NoneType] = None, train_size: typing.Union[float, int, NoneType] = None, shuffle: bool = True, stratify_by_column: typing.Optional[str] = None, seed: typing.Optional[int] = None, generator: typing.Optional[numpy.random._generator.Generator] = None, keep_in_memory: bool = False, load_from_cache_file: typing.Optional[bool] = None, train_indices_cache_file_name: typing.Optional[str] = None, test_indices_cache_file_name: typing.Optional[str] = None, writer_batch_size: typing.Optional[int] = 1000, train_new_fingerprint: typing.Optional[str] = None, test_new_fingerprint: typing.Optional[str] = None )
Parameters
test_size (float or int, optional): Size of the test split. If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the test split. If int, represents the absolute number of test samples. If None, the value is set to the complement of the train size. If train_size is also None, it will be set to 0.25.
train_size (float or int, optional): Size of the train split. If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the train split. If int, represents the absolute number of train samples. If None, the value is automatically set to the complement of the test size.
shuffle (bool, optional, defaults to True): Whether or not to shuffle the data before splitting.
stratify_by_column (str, optional, defaults to None): The column name of labels to be used to perform a stratified split of the data.
seed (int, optional): A seed to initialize the default BitGenerator if generator=None. If None, then fresh, unpredictable entropy will be pulled from the OS. If an int or array_like[ints] is passed, then it will be passed to SeedSequence to derive the initial BitGenerator state.
generator (numpy.random.Generator, optional): Numpy random Generator to use to compute the permutation of the dataset rows. If generator=None (default), uses np.random.default_rng (the default BitGenerator (PCG64) of NumPy).
keep_in_memory (bool, defaults to False): Keep the split indices in memory instead of writing them to a cache file.
load_from_cache_file (Optional[bool], defaults to True if caching is enabled): If a cache file storing the split indices can be identified, use it instead of recomputing.
train_indices_cache_file_name (str, optional): Provide the name of a path for the cache file. It is used to store the train split indices instead of the automatically generated cache file name.
test_indices_cache_file_name (str, optional): Provide the name of a path for the cache file. It is used to store the test split indices instead of the automatically generated cache file name.
writer_batch_size (int, defaults to 1000): Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing and processing speed. A higher value makes the processing do fewer lookups, a lower value consumes less temporary memory while running map.
train_new_fingerprint (str, optional, defaults to None): The new fingerprint of the train set after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
test_new_fingerprint (str, optional, defaults to None): The new fingerprint of the test set after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
Return a dictionary (datasets.DatasetDict) with two random train and test subsets (train and test Dataset splits). Splits are created from the dataset according to test_size, train_size and shuffle.
This method is similar to scikit-learn train_test_split.
Example:
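A sketch with 10 toy rows; the split sizes shown are what these arguments produce:
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"x": list(range(10))})
>>> splits = ds.train_test_split(test_size=0.2, seed=42)
>>> splits["train"].num_rows, splits["test"].num_rows
(8, 2)
>>> # a stratified split requires the target column to be a ClassLabel feature:
>>> # splits = ds.train_test_split(test_size=0.2, stratify_by_column="label")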
shard
( num_shards: int, index: int, contiguous: bool = False, keep_in_memory: bool = False, indices_cache_file_name: typing.Optional[str] = None, writer_batch_size: typing.Optional[int] = 1000 )
Parameters
num_shards (int): How many shards to split the dataset into.
index (int): Which shard to select and return.
contiguous (bool, defaults to False): Whether to select contiguous blocks of indices for shards.
keep_in_memory (bool, defaults to False): Keep the dataset in memory instead of writing it to a cache file.
indices_cache_file_name (str, optional): Provide the name of a path for the cache file. It is used to store the indices of each shard instead of the automatically generated cache file name.
writer_batch_size (int, defaults to 1000): Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing and processing speed. A higher value makes the processing do fewer lookups, a lower value consumes less temporary memory while running map.
Return the index-nth shard from dataset split into num_shards pieces.
This shards deterministically. dset.shard(n, i) will contain all elements of dset whose index mod n = i.
dset.shard(n, i, contiguous=True) will instead split dset into contiguous chunks, so it can be easily concatenated back together after processing. If len(dset) % n == l, then the first l shards will have length (len(dset) // n) + 1, and the remaining shards will have length (len(dset) // n). datasets.concatenate_datasets([dset.shard(n, i, contiguous=True) for i in range(n)]) will return a dataset with the same order as the original.
Be sure to shard before using any randomizing operator (such as shuffle). It is best if the shard operator is used early in the dataset pipeline.
Example:
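A sketch with 10 toy rows split into 4 shards:
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"x": list(range(10))})
>>> ds.shard(num_shards=4, index=0)["x"]                      # every 4th row starting at index 0
[0, 4, 8]
>>> ds.shard(num_shards=4, index=0, contiguous=True)["x"]     # first contiguous chunk
[0, 1, 2]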
to_tf_dataset
( batch_size: typing.Optional[int] = None, columns: typing.Union[str, typing.List[str], NoneType] = None, shuffle: bool = False, collate_fn: typing.Optional[typing.Callable] = None, drop_remainder: bool = False, collate_fn_args: typing.Union[typing.Dict[str, typing.Any], NoneType] = None, label_cols: typing.Union[str, typing.List[str], NoneType] = None, prefetch: bool = True, num_workers: int = 0, num_test_batches: int = 20 )
Parameters
batch_size (int, optional): Size of batches to load from the dataset. Defaults to None, which implies that the dataset won't be batched, but the returned dataset can be batched later with tf_dataset.batch(batch_size).
columns (List[str] or str, optional): Dataset column(s) to load in the tf.data.Dataset. Column names that are created by the collate_fn and that do not exist in the original dataset can be used.
shuffle (bool, defaults to False): Shuffle the dataset order when loading. Recommended True for training, False for validation/evaluation.
drop_remainder (bool, defaults to False): Drop the last incomplete batch when loading. Ensures that all batches yielded by the dataset will have the same length on the batch dimension.
collate_fn (Callable, optional): A function or callable object (such as a DataCollator) that will collate lists of samples into a batch.
collate_fn_args (Dict, optional): An optional dict of keyword arguments to be passed to the collate_fn.
label_cols (List[str] or str, defaults to None): Dataset column(s) to load as labels. Note that many models compute loss internally rather than letting Keras do it, in which case passing the labels here is optional, as long as they're in the input columns.
prefetch (bool, defaults to True): Whether to run the dataloader in a separate thread and maintain a small buffer of batches for training. Improves performance by allowing data to be loaded in the background while the model is training.
num_workers (int, defaults to 0): Number of workers to use for loading the dataset. Only supported on Python versions >= 3.8.
num_test_batches (int, defaults to 20): Number of batches to use to infer the output signature of the dataset. The higher this number, the more accurate the signature will be, but the longer it will take to create the dataset.
Create a tf.data.Dataset from the underlying Dataset. This tf.data.Dataset will load and collate batches from the Dataset, and is suitable for passing to methods like model.fit() or model.predict(). The dataset will yield dicts for both inputs and labels unless the dict would contain only a single key, in which case a raw tf.Tensor is yielded instead.
Example:
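A hedged sketch; the column names and data_collator are assumptions (e.g. a tokenized dataset and a collation function from another library):
>>> tf_ds = ds.to_tf_dataset(
...     columns=["input_ids", "attention_mask"],
...     label_cols=["labels"],
...     batch_size=16,
...     shuffle=True,
...     collate_fn=data_collator,
... )
>>> # tf_ds can then be passed to model.fit(tf_ds, epochs=3)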
push_to_hub
( repo_id: str, config_name: str = 'default', split: typing.Optional[str] = None, private: typing.Optional[bool] = False, token: typing.Optional[str] = None, branch: typing.Optional[str] = None, max_shard_size: typing.Union[str, int, NoneType] = None, num_shards: typing.Optional[int] = None, embed_external_files: bool = True )
Parameters
repo_id (str): The ID of the repository to push to in the following format: <user>/<dataset_name> or <org>/<dataset_name>. Also accepts <dataset_name>, which will default to the namespace of the logged-in user.
config_name (str, defaults to "default"): The configuration name of a dataset.
split (str, optional): The name of the split that will be given to that dataset. Defaults to self.split.
private (bool, optional, defaults to False): Whether the dataset repository should be set to private or not. Only affects repository creation: a repository that already exists will not be affected by that parameter.
token (str, optional): An optional authentication token for the BOINC AI Hub. If no token is passed, will default to the token saved locally when logging in with boincai-cli login. Will raise an error if no token is passed and the user is not logged-in.
branch (str, optional): The git branch on which to push the dataset. This defaults to the default branch as specified in your repository, which defaults to "main".
max_shard_size (int or str, optional, defaults to "500MB"): The maximum size of the dataset shards to be uploaded to the hub. If expressed as a string, needs to be digits followed by a unit (like "5MB").
num_shards (int, optional): Number of shards to write. By default the number of shards depends on max_shard_size. Added in 2.8.0.
Pushes the dataset to the hub as a Parquet dataset. The dataset is pushed using HTTP requests and does not require git or git-lfs to be installed.
The resulting Parquet files are self-contained by default. If your dataset contains Image or Audio data, the Parquet files will store the bytes of your images or audio files. You can disable this by setting embed_external_files to False.
Example:
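A sketch; the repository ID is a placeholder and pushing requires being logged in:
>>> ds.push_to_hub("username/my_dataset")
>>> ds.push_to_hub("username/my_dataset", private=True, max_shard_size="500MB")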
save_to_disk
( dataset_path: typing.Union[str, bytes, os.PathLike], fs = 'deprecated', max_shard_size: typing.Union[str, int, NoneType] = None, num_shards: typing.Optional[int] = None, num_proc: typing.Optional[int] = None, storage_options: typing.Optional[dict] = None )
Parameters
dataset_path (str): Path (e.g. dataset/train) or remote URI (e.g. s3://my-bucket/dataset/train) of the dataset directory where the dataset will be saved to.
fs (fsspec.spec.AbstractFileSystem, optional): Instance of the remote filesystem where the dataset will be saved to. Deprecated in 2.8.0: fs was deprecated in version 2.8.0 and will be removed in 3.0.0. Please use storage_options instead, e.g. storage_options=fs.storage_options.
max_shard_size (int or str, optional, defaults to "500MB"): The maximum size of the dataset shards to be uploaded to the hub. If expressed as a string, needs to be digits followed by a unit (like "50MB").
num_shards (int, optional): Number of shards to write. By default the number of shards depends on max_shard_size and num_proc. Added in 2.8.0.
num_proc (int, optional): Number of processes when downloading and generating the dataset locally. Multiprocessing is disabled by default. Added in 2.8.0.
storage_options (dict, optional): Key/value pairs to be passed on to the file-system backend, if any. Added in 2.8.0.
Saves a dataset to a dataset directory, or in a filesystem using any implementation of fsspec.spec.AbstractFileSystem.
All the Image() and Audio() data are stored in the Arrow files. If you want to store paths or URLs, please use the Value("string") type.
Example:
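A sketch; the paths are placeholders:
>>> ds.save_to_disk("path/to/dataset/directory")
>>> from datasets import load_from_disk
>>> reloaded = load_from_disk("path/to/dataset/directory")
>>> # remote filesystems work too, e.g. ds.save_to_disk("s3://my-bucket/dataset", storage_options=storage_options)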
load_from_disk
( dataset_path: str, fs = 'deprecated', keep_in_memory: typing.Optional[bool] = None, storage_options: typing.Optional[dict] = None ) → Dataset or DatasetDict
Parameters
dataset_path (str): Path (e.g. "dataset/train") or remote URI (e.g. "s3://my-bucket/dataset/train") of the dataset directory where the dataset will be loaded from.
fs (fsspec.spec.AbstractFileSystem, optional): Instance of the remote filesystem where the dataset will be saved to. Deprecated in 2.8.0: fs was deprecated in version 2.8.0 and will be removed in 3.0.0. Please use storage_options instead, e.g. storage_options=fs.storage_options.
keep_in_memory (bool, defaults to None): Whether to copy the dataset in-memory. If None, the dataset will not be copied in-memory unless explicitly enabled by setting datasets.config.IN_MEMORY_MAX_SIZE to nonzero. See more details in the improve performance section.
storage_options (dict, optional): Key/value pairs to be passed on to the file-system backend, if any. Added in 2.8.0.
Returns
If dataset_path is a path of a dataset directory, the dataset requested.
If dataset_path is a path of a dataset dict directory, a datasets.DatasetDict with each split.
Loads a dataset that was previously saved using save_to_disk from a dataset directory, or from a filesystem using any implementation of fsspec.spec.AbstractFileSystem.
flatten_indices
( keep_in_memory: bool = False, cache_file_name: typing.Optional[str] = None, writer_batch_size: typing.Optional[int] = 1000, features: typing.Optional[datasets.features.features.Features] = None, disable_nullable: bool = False, num_proc: typing.Optional[int] = None, new_fingerprint: typing.Optional[str] = None )
Parameters
keep_in_memory (bool, defaults to False): Keep the dataset in memory instead of writing it to a cache file.
cache_file_name (str, optional, default None): Provide the name of a path for the cache file. It is used to store the results of the computation instead of the automatically generated cache file name.
writer_batch_size (int, defaults to 1000): Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing and processing speed. A higher value makes the processing do fewer lookups, a lower value consumes less temporary memory while running map.
features (Optional[datasets.Features], defaults to None): Use a specific Features to store the cache file instead of the automatically generated one.
disable_nullable (bool, defaults to False): Disallow null values in the table.
num_proc (int, optional, default None): Max number of processes when generating the cache. Already cached shards are loaded sequentially.
new_fingerprint (str, optional, defaults to None): The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
Create and cache a new Dataset by flattening the indices mapping.
to_csv
( path_or_buf: typing.Union[str, bytes, os.PathLike, typing.BinaryIO], batch_size: typing.Optional[int] = None, num_proc: typing.Optional[int] = None, **to_csv_kwargs ) → int
Parameters
path_or_buf (PathLike or FileOrBuffer): Either a path to a file or a BinaryIO.
batch_size (int, optional): Size of the batch to load in memory and write at once. Defaults to datasets.config.DEFAULT_MAX_BATCH_SIZE.
num_proc (int, optional): Number of processes for multiprocessing. By default it doesn't use multiprocessing. batch_size in this case defaults to datasets.config.DEFAULT_MAX_BATCH_SIZE but feel free to make it 5x or 10x of the default value if you have sufficient compute power.
**to_csv_kwargs (additional keyword arguments): Parameters to pass to pandas's pandas.DataFrame.to_csv. Changed in 2.10.0: now, index defaults to False if not specified. If you would like to write the index, pass index=True and also set a name for the index column by passing index_label.
Returns
int
The number of characters or bytes written.
Exports the dataset to CSV.
Example:
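A sketch; the path is a placeholder:
>>> ds.to_csv("path/to/dataset.csv")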
to_pandas
( batch_size: typing.Optional[int] = None, batched: bool = False )
Parameters
batched (bool): Set to True to return a generator that yields the dataset as batches of batch_size rows. Defaults to False (returns the whole dataset once).
batch_size (int, optional): The size (number of rows) of the batches if batched is True. Defaults to datasets.config.DEFAULT_MAX_BATCH_SIZE.
Returns the dataset as a pandas.DataFrame. Can also return a generator for large datasets.
to_dict
( batch_size: typing.Optional[int] = None, batched = 'deprecated' )
Parameters
batched (bool): Set to True to return a generator that yields the dataset as batches of batch_size rows. Defaults to False (returns the whole dataset once). Deprecated in 2.11.0: use .iter(batch_size=batch_size) followed by .to_dict() on the individual batches instead.
batch_size (int, optional): The size (number of rows) of the batches if batched is True. Defaults to datasets.config.DEFAULT_MAX_BATCH_SIZE.
Returns the dataset as a Python dict. Can also return a generator for large datasets.
to_json
( path_or_buf: typing.Union[str, bytes, os.PathLike, typing.BinaryIO], batch_size: typing.Optional[int] = None, num_proc: typing.Optional[int] = None, **to_json_kwargs ) → int
Parameters
path_or_buf (PathLike or FileOrBuffer): Either a path to a file or a BinaryIO.
batch_size (int, optional): Size of the batch to load in memory and write at once. Defaults to datasets.config.DEFAULT_MAX_BATCH_SIZE.
num_proc (int, optional): Number of processes for multiprocessing. By default it doesn't use multiprocessing. batch_size in this case defaults to datasets.config.DEFAULT_MAX_BATCH_SIZE but feel free to make it 5x or 10x of the default value if you have sufficient compute power.
**to_json_kwargs (additional keyword arguments): Parameters to pass to pandas's pandas.DataFrame.to_json. Changed in 2.11.0: now, index defaults to False if orient is "split" or "table". If you would like to write the index, pass index=True.
Returns
int
The number of characters or bytes written.
Export the dataset to JSON Lines or JSON.
Example:
Copied
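A minimal illustrative sketch (file names are placeholders; the second call relies on standard pandas.DataFrame.to_json keyword arguments):

from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})
ds.to_json("my_data.jsonl")  # JSON Lines by default
ds.to_json("my_data.json", lines=False, orient="records")  # a single JSON array instead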
to_parquet
( path_or_buf: typing.Union[str, bytes, os.PathLike, typing.BinaryIO]batch_size: typing.Optional[int] = None**parquet_writer_kwargs ) β int
Parameters
path_or_buf (
PathLikeorFileOrBuffer) β Either a path to a file or a BinaryIO.batch_size (
int, optional) β Size of the batch to load in memory and write at once. Defaults todatasets.config.DEFAULT_MAX_BATCH_SIZE.**parquet_writer_kwargs (additional keyword arguments) β Parameters to pass to PyArrowβs
pyarrow.parquet.ParquetWriter.
Returns
int
The number of characters or bytes written.
Exports the dataset to Parquet.
Example:
Copied
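A minimal illustrative sketch (the file name is a placeholder):

from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})
# Write the dataset as a single Parquet file
ds.to_parquet("my_data.parquet")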
to_sql
( name: strcon: typing.Union[str, ForwardRef('sqlalchemy.engine.Connection'), ForwardRef('sqlalchemy.engine.Engine'), ForwardRef('sqlite3.Connection')]batch_size: typing.Optional[int] = None**sql_writer_kwargs ) β int
Parameters
name (
str) β Name of SQL table.con (
strorsqlite3.Connectionorsqlalchemy.engine.Connectionorsqlalchemy.engine.Engine) β A URI string or a SQLite3/SQLAlchemy connection object used to write to a database.batch_size (
int, optional) β Size of the batch to load in memory and write at once. Defaults todatasets.config.DEFAULT_MAX_BATCH_SIZE.**sql_writer_kwargs (additional keyword arguments) β Parameters to pass to pandasβs
pandas.DataFrame.to_sql.Changed in 2.11.0
Now,
indexdefaults toFalseif not specified.If you would like to write the index, pass
index=Trueand also set a name for the index column by passingindex_label.
Returns
int
The number of records written.
Exports the dataset to a SQL database.
Example:
Copied
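A minimal illustrative sketch using a local SQLite database (the database and table names are placeholders):

import sqlite3

from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})
con = sqlite3.connect("my_data.db")
# Write the dataset to a table named "my_table"; returns the number of records written
ds.to_sql("my_table", con)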
to_iterable_dataset
( num_shards: typing.Optional[int] = 1 )
Parameters
num_shards (
int, default to1) β Number of shards to define when instantiating the iterable dataset. This is especially useful for big datasets to be able to shuffle properly, and also to enable fast parallel loading using a PyTorch DataLoader or in distributed setups for example. Shards are defined using datasets.Dataset.shard(): it simply slices the data without writing anything on disk.
Get a datasets.IterableDataset from a map-style datasets.Dataset. This is equivalent to loading a dataset in streaming mode with datasets.load_dataset(), but much faster since the data is streamed from local files.
Contrary to map-style datasets, iterable datasets are lazy and can only be iterated over (e.g. using a for loop). Since they are read sequentially in training loops, iterable datasets are much faster than map-style datasets. All the transformations applied to iterable datasets like filtering or processing are done on-the-fly when you start iterating over the dataset.
Still, it is possible to shuffle an iterable dataset using datasets.IterableDataset.shuffle(). This is a fast approximate shuffling that works best if you have multiple shards and if you specify a buffer size that is big enough.
To get the best speed performance, make sure your dataset doesnβt have an indices mapping. If this is the case, the data are not read contiguously, which can be slow sometimes. You can use ds = ds.flatten_indices() to write your dataset in contiguous chunks of data and have optimal speed before switching to an iterable dataset.
Example:
Basic usage:
Copied
With lazy filtering and processing:
Copied
With sharding to enable efficient shuffling:
Copied
With a PyTorch DataLoader:
Copied
With a PyTorch DataLoader and shuffling:
Copied
In a distributed setup like PyTorch DDP with a PyTorch DataLoader and shuffling:
Copied
With shuffling and multiple epochs:
Copied
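A sketch combining the scenarios above (assumes PyTorch is installed; the shard count, buffer size, and batch size are illustrative):

from datasets import Dataset
from torch.utils.data import DataLoader

ds = Dataset.from_dict({"text": [f"example {i}" for i in range(1000)]})
# Shard the dataset so that shuffling and parallel loading work well
iterable_ds = ds.to_iterable_dataset(num_shards=64)
iterable_ds = iterable_ds.shuffle(buffer_size=100, seed=42)  # fast approximate shuffling
dataloader = DataLoader(iterable_ds, batch_size=32, num_workers=4)
for epoch in range(3):
    iterable_ds.set_epoch(epoch)  # reshuffle the shards at every epoch
    for batch in dataloader:
        pass  # training step goes here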
Feel free to also use `IterableDataset.set_epoch()` when using a PyTorch DataLoader or in distributed setups.
add_faiss_index
( column: strindex_name: typing.Optional[str] = Nonedevice: typing.Optional[int] = Nonestring_factory: typing.Optional[str] = Nonemetric_type: typing.Optional[int] = Nonecustom_index: typing.Optional[ForwardRef('faiss.Index')] = Nonebatch_size: int = 1000train_size: typing.Optional[int] = Nonefaiss_verbose: bool = Falsedtype = <class 'numpy.float32'> )
Parameters
column (
str) β The column of the vectors to add to the index.index_name (
str, optional) β Theindex_name/identifier of the index. This is theindex_namethat is used to call get_nearest_examples() or search(). By default it corresponds tocolumn.device (
Union[int, List[int]], optional) β If positive integer, this is the index of the GPU to use. If negative integer, use all GPUs. If a list of positive integers is passed in, run only on those GPUs. By default it uses the CPU.string_factory (
str, optional) β This is passed to the index factory of Faiss to create the index. Default index class isIndexFlat.metric_type (
int, optional) β Type of metric. Ex:faiss.METRIC_INNER_PRODUCTorfaiss.METRIC_L2.custom_index (
faiss.Index, optional) β Custom Faiss index that you already have instantiated and configured for your needs.batch_size (
int) β Size of the batch to use while adding vectors to theFaissIndex. Default value is1000.Added in 2.4.0
train_size (
int, optional) β If the index needs a training step, specifies how many vectors will be used to train the index.faiss_verbose (
bool, defaults toFalse) β Enable the verbosity of the Faiss index.dtype (
data-type) β The dtype of the numpy arrays that are indexed. Default isnp.float32.
Add a dense index using Faiss for fast retrieval. By default the index is done over the vectors of the specified column. You can specify device if you want to run it on GPU (device must be the GPU index). You can find more information about Faiss, including the string factory syntax, in the Faiss documentation.
Example:
Copied
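A minimal illustrative sketch (assumes the faiss library, e.g. faiss-cpu, is installed; the toy embeddings are placeholders):

import numpy as np

from datasets import Dataset

ds = Dataset.from_dict({
    "text": ["first", "second", "third"],
    "embeddings": [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]],
})
ds.add_faiss_index(column="embeddings")
query = np.array([0.9, 0.1], dtype=np.float32)
# Retrieve the 2 nearest examples to the query vector
scores, examples = ds.get_nearest_examples("embeddings", query, k=2)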
add_faiss_index_from_external_arrays
( external_arrays: arrayindex_name: strdevice: typing.Optional[int] = Nonestring_factory: typing.Optional[str] = Nonemetric_type: typing.Optional[int] = Nonecustom_index: typing.Optional[ForwardRef('faiss.Index')] = Nonebatch_size: int = 1000train_size: typing.Optional[int] = Nonefaiss_verbose: bool = Falsedtype = <class 'numpy.float32'> )
Parameters
external_arrays (
np.array) β If you want to use arrays from outside the lib for the index, you can setexternal_arrays. It will useexternal_arraysto create the Faiss index instead of the arrays in the givencolumn.index_name (
str) β Theindex_name/identifier of the index. This is theindex_namethat is used to call get_nearest_examples() or search().device (Optional
Union[int, List[int]], optional) β If positive integer, this is the index of the GPU to use. If negative integer, use all GPUs. If a list of positive integers is passed in, run only on those GPUs. By default it uses the CPU.string_factory (
str, optional) β This is passed to the index factory of Faiss to create the index. Default index class isIndexFlat.metric_type (
int, optional) β Type of metric. Ex:faiss.METRIC_INNER_PRODUCTorfaiss.METRIC_L2.custom_index (
faiss.Index, optional) β Custom Faiss index that you already have instantiated and configured for your needs.batch_size (
int, optional) β Size of the batch to use while adding vectors to the FaissIndex. Default value is 1000.Added in 2.4.0
train_size (
int, optional) β If the index needs a training step, specifies how many vectors will be used to train the index.faiss_verbose (
bool, defaults to False) β Enable the verbosity of the Faiss index.dtype (
numpy.dtype) β The dtype of the numpy arrays that are indexed. Default is np.float32.
Add a dense index using Faiss for fast retrieval. The index is created using the vectors of external_arrays. You can specify device if you want to run it on GPU (device must be the GPU index). You can find more information about Faiss, including the string factory syntax, in the Faiss documentation.
save_faiss_index
( index_name: strfile: typing.Union[str, pathlib.PurePath]storage_options: typing.Optional[typing.Dict] = None )
Parameters
index_name (
str) β The index_name/identifier of the index. This is the index_name that is used to call.get_nearestor.search.file (
str) β The path to the serialized faiss index on disk or remote URI (e.g."s3://my-bucket/index.faiss").storage_options (
dict, optional) β Key/value pairs to be passed on to the file-system backend, if any.Added in 2.11.0
Save a FaissIndex on disk.
load_faiss_index
( index_name: strfile: typing.Union[str, pathlib.PurePath]device: typing.Union[int, typing.List[int], NoneType] = Nonestorage_options: typing.Optional[typing.Dict] = None )
Parameters
index_name (
str) β The index_name/identifier of the index. This is the index_name that is used to call.get_nearestor.search.file (
str) β The path to the serialized faiss index on disk or remote URI (e.g."s3://my-bucket/index.faiss").device (Optional
Union[int, List[int]]) β If positive integer, this is the index of the GPU to use. If negative integer, use all GPUs. If a list of positive integers is passed in, run only on those GPUs. By default it uses the CPU.storage_options (
dict, optional) β Key/value pairs to be passed on to the file-system backend, if any.Added in 2.11.0
Load a FaissIndex from disk.
If you want to do additional configuration, you can access the faiss index object with .get_index(index_name).faiss_index and adjust it to your needs.
add_elasticsearch_index
( column: strindex_name: typing.Optional[str] = Nonehost: typing.Optional[str] = Noneport: typing.Optional[int] = Nonees_client: typing.Optional[ForwardRef('elasticsearch.Elasticsearch')] = Nonees_index_name: typing.Optional[str] = Nonees_index_config: typing.Optional[dict] = None )
Parameters
column (
str) β The column of the documents to add to the index.index_name (
str, optional) β Theindex_name/identifier of the index. This is the index name that is used to call get_nearest_examples() or Dataset.search(). By default it corresponds tocolumn.host (
str, optional, defaults tolocalhost) β Host of where ElasticSearch is running.port (
str, optional, defaults to9200) β Port of where ElasticSearch is running.es_client (
elasticsearch.Elasticsearch, optional) β The elasticsearch client used to create the index if host and port areNone.es_index_name (
str, optional) β The elasticsearch index name used to create the index.es_index_config (
dict, optional) β The configuration of the elasticsearch index. Default config is:
Add a text index using ElasticSearch for fast retrieval. This is done in-place.
Example:
Copied
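A minimal illustrative sketch (assumes an Elasticsearch server is running locally on port 9200 and the elasticsearch client is installed):

from datasets import Dataset

ds = Dataset.from_dict({"text": ["the quick brown fox", "lazy dogs sleep all day"]})
ds.add_elasticsearch_index("text", host="localhost", port="9200")
# Full-text search over the indexed column
scores, examples = ds.get_nearest_examples("text", "quick fox", k=1)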
load_elasticsearch_index
( index_name: stres_index_name: strhost: typing.Optional[str] = Noneport: typing.Optional[int] = Nonees_client: typing.Optional[ForwardRef('Elasticsearch')] = Nonees_index_config: typing.Optional[dict] = None )
Parameters
index_name (
str) β Theindex_name/identifier of the index. This is the index name that is used to callget_nearestorsearch.es_index_name (
str) β The name of elasticsearch index to load.host (
str, optional, defaults tolocalhost) β Host of where ElasticSearch is running.port (
str, optional, defaults to9200) β Port of where ElasticSearch is running.es_client (
elasticsearch.Elasticsearch, optional) β The elasticsearch client used to create the index if host and port areNone.es_index_config (
dict, optional) β The configuration of the elasticsearch index. Default config is:
Load an existing text index using ElasticSearch for fast retrieval.
list_indexes
( )
List the index_name/identifiers of all the attached indexes.
get_index
( index_name: str )
Parameters
index_name (
str) β Index name.
Return the index object attached under the given index_name.
drop_index
( index_name: str )
Parameters
index_name (
str) β Theindex_name/identifier of the index.
Drop the index with the specified index_name.
search
( index_name: strquery: typing.Union[str, <built-in function array>]k: int = 10**kwargs ) β (scores, indices)
Parameters
index_name (
str) β The name/identifier of the index.query (
Union[str, np.ndarray]) β The query as a string ifindex_nameis a text index or as a numpy array ifindex_nameis a vector index.k (
int) β The number of examples to retrieve.
Returns
(scores, indices)
A tuple of (scores, indices) where:
scores (
List[List[float]]): the retrieval scores from either FAISS (IndexFlatL2by default) or ElasticSearch of the retrieved examplesindices (
List[List[int]]): the indices of the retrieved examples
Find the indices of the nearest examples in the dataset to the query.
search_batch
( index_name: strqueries: typing.Union[typing.List[str], <built-in function array>]k: int = 10**kwargs ) β (total_scores, total_indices)
Parameters
index_name (
str) β Theindex_name/identifier of the index.queries (
Union[List[str], np.ndarray]) β The queries as a list of strings ifindex_nameis a text index or as a numpy array ifindex_nameis a vector index.k (
int) β The number of examples to retrieve per query.
Returns
(total_scores, total_indices)
A tuple of (total_scores, total_indices) where:
total_scores (
List[List[float]]): the retrieval scores from either FAISS (IndexFlatL2by default) or ElasticSearch of the retrieved examples per querytotal_indices (
List[List[int]]): the indices of the retrieved examples per query
Find the indices of the nearest examples in the dataset to the queries.
get_nearest_examples
( index_name: strquery: typing.Union[str, <built-in function array>]k: int = 10**kwargs ) β (scores, examples)
Parameters
index_name (
str) β The index_name/identifier of the index.query (
Union[str, np.ndarray]) β The query as a string ifindex_nameis a text index or as a numpy array ifindex_nameis a vector index.k (
int) β The number of examples to retrieve.
Returns
(scores, examples)
A tuple of (scores, examples) where:
scores (
List[float]): the retrieval scores from either FAISS (IndexFlatL2by default) or ElasticSearch of the retrieved examplesexamples (
dict): the retrieved examples
Find the nearest examples in the dataset to the query.
get_nearest_examples_batch
( index_name: strqueries: typing.Union[typing.List[str], <built-in function array>]k: int = 10**kwargs ) β (total_scores, total_examples)
Parameters
index_name (
str) β Theindex_name/identifier of the index.queries (
Union[List[str], np.ndarray]) β The queries as a list of strings ifindex_nameis a text index or as a numpy array ifindex_nameis a vector index.k (
int) β The number of examples to retrieve per query.
Returns
(total_scores, total_examples)
A tuple of (total_scores, total_examples) where:
total_scores (
List[List[float]): the retrieval scores from either FAISS (IndexFlatL2by default) or ElasticSearch of the retrieved examples per querytotal_examples (
List[dict]): the retrieved examples per query
Find the nearest examples in the dataset to the query.
info
( )
DatasetInfo object containing all the metadata in the dataset.
split
( )
NamedSplit object corresponding to a named dataset split.
builder_name
( )
citation
( )
config_name
( )
dataset_size
( )
description
( )
download_checksums
( )
download_size
( )
features
( )
homepage
( )
license
( )
size_in_bytes
( )
supervised_keys
( )
version
( )
from_csv
( path_or_paths: typing.Union[str, bytes, os.PathLike, typing.List[typing.Union[str, bytes, os.PathLike]]]split: typing.Optional[datasets.splits.NamedSplit] = Nonefeatures: typing.Optional[datasets.features.features.Features] = Nonecache_dir: str = Nonekeep_in_memory: bool = Falsenum_proc: typing.Optional[int] = None**kwargs )
Parameters
path_or_paths (
path-likeor list ofpath-like) β Path(s) of the CSV file(s).split (NamedSplit, optional) β Split name to be assigned to the dataset.
features (Features, optional) β Dataset features.
cache_dir (
str, optional, defaults to"~/.cache/boincai/datasets") β Directory to cache data.keep_in_memory (
bool, defaults toFalse) β Whether to copy the data in-memory.num_proc (
int, optional, defaults toNone) β Number of processes when downloading and generating the dataset locally. This is helpful if the dataset is made of multiple files. Multiprocessing is disabled by default.Added in 2.8.0
**kwargs (additional keyword arguments) β Keyword arguments to be passed to
pandas.read_csv.
Create Dataset from CSV file(s).
Example:
Copied
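A minimal illustrative sketch (the file paths are placeholders):

from datasets import Dataset

ds = Dataset.from_csv("my_data.csv")
# Several files can be combined into a single dataset
ds = Dataset.from_csv(["train_part1.csv", "train_part2.csv"])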
from_json
( path_or_paths: typing.Union[str, bytes, os.PathLike, typing.List[typing.Union[str, bytes, os.PathLike]]]split: typing.Optional[datasets.splits.NamedSplit] = Nonefeatures: typing.Optional[datasets.features.features.Features] = Nonecache_dir: str = Nonekeep_in_memory: bool = Falsefield: typing.Optional[str] = Nonenum_proc: typing.Optional[int] = None**kwargs )
Parameters
path_or_paths (
path-likeor list ofpath-like) β Path(s) of the JSON or JSON Lines file(s).split (NamedSplit, optional) β Split name to be assigned to the dataset.
features (Features, optional) β Dataset features.
cache_dir (
str, optional, defaults to"~/.cache/boincai/datasets") β Directory to cache data.keep_in_memory (
bool, defaults toFalse) β Whether to copy the data in-memory.field (
str, optional) β Field name of the JSON file where the dataset is contained in.num_proc (
int, optional defaults toNone) β Number of processes when downloading and generating the dataset locally. This is helpful if the dataset is made of multiple files. Multiprocessing is disabled by default.Added in 2.8.0
**kwargs (additional keyword arguments) β Keyword arguments to be passed to
JsonConfig.
Create Dataset from JSON or JSON Lines file(s).
Example:
Copied
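A minimal illustrative sketch (the file paths and the field name are placeholders):

from datasets import Dataset

ds = Dataset.from_json("my_data.jsonl")
# For a nested JSON file where the records live under a top-level field
ds = Dataset.from_json("my_data.json", field="data")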
from_parquet
( path_or_paths: typing.Union[str, bytes, os.PathLike, typing.List[typing.Union[str, bytes, os.PathLike]]]split: typing.Optional[datasets.splits.NamedSplit] = Nonefeatures: typing.Optional[datasets.features.features.Features] = Nonecache_dir: str = Nonekeep_in_memory: bool = Falsecolumns: typing.Optional[typing.List[str]] = Nonenum_proc: typing.Optional[int] = None**kwargs )
Parameters
path_or_paths (
path-likeor list ofpath-like) β Path(s) of the Parquet file(s).split (
NamedSplit, optional) β Split name to be assigned to the dataset.features (
Features, optional) β Dataset features.cache_dir (
str, optional, defaults to"~/.cache/boincai/datasets") β Directory to cache data.keep_in_memory (
bool, defaults toFalse) β Whether to copy the data in-memory.columns (
List[str], optional) β If notNone, only these columns will be read from the file. A column name may be a prefix of a nested field, e.g. βaβ will select βa.bβ, βa.cβ, and βa.d.eβ.num_proc (
int, optional, defaults toNone) β Number of processes when downloading and generating the dataset locally. This is helpful if the dataset is made of multiple files. Multiprocessing is disabled by default.Added in 2.8.0
**kwargs (additional keyword arguments) β Keyword arguments to be passed to
ParquetConfig.
Create Dataset from Parquet file(s).
Example:
Copied
from_text
( path_or_paths: typing.Union[str, bytes, os.PathLike, typing.List[typing.Union[str, bytes, os.PathLike]]]split: typing.Optional[datasets.splits.NamedSplit] = Nonefeatures: typing.Optional[datasets.features.features.Features] = Nonecache_dir: str = Nonekeep_in_memory: bool = Falsenum_proc: typing.Optional[int] = None**kwargs )
Parameters
path_or_paths (
path-likeor list ofpath-like) β Path(s) of the text file(s).split (
NamedSplit, optional) β Split name to be assigned to the dataset.features (
Features, optional) β Dataset features.cache_dir (
str, optional, defaults to"~/.cache/boincai/datasets") β Directory to cache data.keep_in_memory (
bool, defaults toFalse) β Whether to copy the data in-memory.num_proc (
int, optional, defaults toNone) β Number of processes when downloading and generating the dataset locally. This is helpful if the dataset is made of multiple files. Multiprocessing is disabled by default.Added in 2.8.0
**kwargs (additional keyword arguments) β Keyword arguments to be passed to
TextConfig.
Create Dataset from text file(s).
Example:
Copied
from_sql
( sql: typing.Union[str, ForwardRef('sqlalchemy.sql.Selectable')]con: typing.Union[str, ForwardRef('sqlalchemy.engine.Connection'), ForwardRef('sqlalchemy.engine.Engine'), ForwardRef('sqlite3.Connection')]features: typing.Optional[datasets.features.features.Features] = Nonecache_dir: str = Nonekeep_in_memory: bool = False**kwargs )
Parameters
sql (
strorsqlalchemy.sql.Selectable) β SQL query to be executed or a table name.con (
strorsqlite3.Connectionorsqlalchemy.engine.Connectionorsqlalchemy.engine.Engine) β A URI string used to instantiate a database connection or a SQLite3/SQLAlchemy connection object.features (Features, optional) β Dataset features.
cache_dir (
str, optional, defaults to"~/.cache/boincai/datasets") β Directory to cache data.keep_in_memory (
bool, defaults toFalse) β Whether to copy the data in-memory.**kwargs (additional keyword arguments) β Keyword arguments to be passed to
SqlConfig.
Create Dataset from SQL query or database table.
Example:
Copied
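A minimal illustrative sketch (assumes sqlalchemy is installed for the URI form; the database and table names are placeholders):

from datasets import Dataset

ds = Dataset.from_sql("my_table", con="sqlite:///my_data.db")
# Or load the result of an arbitrary query
ds = Dataset.from_sql("SELECT text, label FROM my_table", con="sqlite:///my_data.db")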
The returned dataset can only be cached if con is specified as a URI string.
prepare_for_task
( task: typing.Union[str, datasets.tasks.base.TaskTemplate]id: int = 0 )
Parameters
task (
Union[str, TaskTemplate]) β The task to prepare the dataset for during training and evaluation. Ifstr, supported tasks include:"text-classification""question-answering"
If
TaskTemplate, must be one of the task templates indatasets.tasks.id (
int, defaults to0) β The id required to unambiguously identify the task template when multiple task templates of the same type are supported.
Prepare a dataset for the given task by casting the datasetβs Features to standardized column names and types as detailed in datasets.tasks.
Casts datasets.DatasetInfo.features according to a task-specific schema. Intended for single-use only, so all task templates are removed from datasets.DatasetInfo.task_templates after casting.
align_labels_with_mapping
( label2id: typing.Dictlabel_column: str )
Parameters
label2id (
dict) β The label name to ID mapping to align the dataset with.label_column (
str) β The column name of labels to align on.
Align the datasetβs label ID and label name mapping to match an input label2id mapping. This is useful when you want to ensure that a modelβs predicted labels are aligned with the dataset. The alignment is done using the lowercase label names.
Example:
Copied
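A minimal illustrative sketch (the toy dataset and the model_label2id mapping are placeholders):

from datasets import ClassLabel, Dataset, Features, Value

features = Features({"text": Value("string"), "label": ClassLabel(names=["positive", "negative"])})
ds = Dataset.from_dict({"text": ["great", "awful"], "label": [0, 1]}, features=features)
# Mapping used by a hypothetical model whose label IDs differ from the dataset's
model_label2id = {"negative": 0, "positive": 1}
ds = ds.align_labels_with_mapping(model_label2id, "label")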
datasets.concatenate_datasets
( dsets: typing.List[~DatasetType]info: typing.Optional[datasets.info.DatasetInfo] = Nonesplit: typing.Optional[datasets.splits.NamedSplit] = Noneaxis: int = 0 )
Parameters
dsets (
List[datasets.Dataset]) β List of Datasets to concatenate.info (
DatasetInfo, optional) β Dataset information, like description, citation, etc.split (
NamedSplit, optional) β Name of the dataset split.axis (
{0, 1}, defaults to0) β Axis to concatenate over, where0means over rows (vertically) and1means over columns (horizontally).Added in 1.6.0
Converts a list of Dataset with the same schema into a single Dataset.
Example:
Copied
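A minimal illustrative sketch with two toy datasets sharing the same schema:

from datasets import Dataset, concatenate_datasets

ds1 = Dataset.from_dict({"a": [1, 2, 3]})
ds2 = Dataset.from_dict({"a": [4, 5, 6]})
ds = concatenate_datasets([ds1, ds2])  # 6 rows, stacked over rows (axis=0)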
datasets.interleave_datasets
( datasets: typing.List[~DatasetType]probabilities: typing.Optional[typing.List[float]] = Noneseed: typing.Optional[int] = Noneinfo: typing.Optional[datasets.info.DatasetInfo] = Nonesplit: typing.Optional[datasets.splits.NamedSplit] = Nonestopping_strategy: typing.Literal['first_exhausted', 'all_exhausted'] = 'first_exhausted' ) β Dataset or IterableDataset
Parameters
datasets (
List[Dataset]orList[IterableDataset]) β List of datasets to interleave.probabilities (
List[float], optional, defaults toNone) β If specified, the new dataset is constructed by sampling examples from one source at a time according to these probabilities.seed (
int, optional, defaults toNone) β The random seed used to choose a source for each example.info (DatasetInfo, optional) β Dataset information, like description, citation, etc.
Added in 2.4.0
split (NamedSplit, optional) β Name of the dataset split.
Added in 2.4.0
stopping_strategy (
str, defaults tofirst_exhausted) β Two strategies are proposed right now,first_exhaustedandall_exhausted. By default,first_exhaustedis an undersampling strategy, i.e. the dataset construction is stopped as soon as one dataset has run out of samples. If the strategy isall_exhausted, we use an oversampling strategy, i.e. the dataset construction is stopped as soon as every sample of every dataset has been added at least once. Note that if the strategy isall_exhausted, the interleaved dataset size can get enormous:with no probabilities, the resulting dataset will have
max_length_datasets * nb_dataset samples.with given probabilities, the resulting dataset will have more samples if some datasets have a really low probability of being visited.
Returns
Return type depends on the input datasets parameter. Dataset if the input is a list of Dataset, IterableDataset if the input is a list of IterableDataset.
Interleave several datasets (sources) into a single dataset. The new dataset is constructed by alternating between the sources to get the examples.
You can use this function on a list of Dataset objects, or on a list of IterableDataset objects.
If
probabilitiesisNone(default) the new dataset is constructed by cycling between each source to get the examples.If
probabilitiesis notNone, the new dataset is constructed by getting examples from a random source at a time according to the provided probabilities.
The resulting dataset ends when one of the source datasets runs out of examples, except with the all_exhausted (oversampling) strategy, in which case the resulting dataset ends when all datasets have run out of examples at least once.
Note for iterable datasets:
In a distributed setup or in PyTorch DataLoader workers, the stopping strategy is applied per process. Therefore the βfirst_exhaustedβ strategy on a sharded iterable dataset can generate fewer samples in total (up to 1 missing sample per subdataset per worker).
Example:
For regular datasets (map-style):
Copied
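A minimal illustrative sketch with two toy sources and sampling probabilities:

from datasets import Dataset, interleave_datasets

d1 = Dataset.from_dict({"a": [0, 1, 2]})
d2 = Dataset.from_dict({"a": [10, 11, 12]})
# Sample from d1 with probability 0.7 and from d2 with probability 0.3
ds = interleave_datasets([d1, d2], probabilities=[0.7, 0.3], seed=42)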
datasets.distributed.split_dataset_by_node
( dataset: DatasetTyperank: intworld_size: int ) β Dataset or IterableDataset
Parameters
dataset (Dataset or IterableDataset) β The dataset to split by node.
rank (
int) β Rank of the current node.world_size (
int) β Total number of nodes.
Returns
The dataset to be used on the node at rank rank.
Split a dataset for the node at rank rank in a pool of nodes of size world_size.
For map-style datasets:
Each node is assigned a chunk of data, e.g. rank 0 is given the first chunk of the dataset. To maximize data loading throughput, chunks are made of contiguous data on disk if possible.
For iterable datasets:
If the dataset has a number of shards that is a factor of world_size (i.e. if dataset.n_shards % world_size == 0), then the shards are evenly assigned across the nodes, which is the most optimized. Otherwise, each node keeps 1 example out of world_size, skipping the other examples.
datasets.enable_caching
( )
When applying transforms on a dataset, the data are stored in cache files. The caching mechanism allows to reload an existing cache file if itβs already been computed.
Reloading a dataset is possible since the cache files are named using the dataset fingerprint, which is updated after each transform.
If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets. More precisely, if the caching is disabled:
cache files are always recreated
cache files are written to a temporary directory that is deleted when session closes
cache files are named using a random hash instead of the dataset fingerprint
use save_to_disk() to save a transformed dataset or it will be deleted when session closes
caching doesnβt affect load_dataset(). If you want to regenerate a dataset from scratch you should use the
download_modeparameter in load_dataset().
datasets.disable_caching
( )
When applying transforms on a dataset, the data are stored in cache files. The caching mechanism allows to reload an existing cache file if itβs already been computed.
Reloading a dataset is possible since the cache files are named using the dataset fingerprint, which is updated after each transform.
If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets. More precisely, if the caching is disabled:
cache files are always recreated
cache files are written to a temporary directory that is deleted when session closes
cache files are named using a random hash instead of the dataset fingerprint
use save_to_disk() to save a transformed dataset or it will be deleted when session closes
caching doesnβt affect load_dataset(). If you want to regenerate a dataset from scratch you should use the
download_modeparameter in load_dataset().
datasets.is_caching_enabled
( )
When applying transforms on a dataset, the data are stored in cache files. The caching mechanism allows to reload an existing cache file if itβs already been computed.
Reloading a dataset is possible since the cache files are named using the dataset fingerprint, which is updated after each transform.
If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets. More precisely, if the caching is disabled:
cache files are always recreated
cache files are written to a temporary directory that is deleted when session closes
cache files are named using a random hash instead of the dataset fingerprint
use save_to_disk() to save a transformed dataset or it will be deleted when session closes
caching doesnβt affect load_dataset(). If you want to regenerate a dataset from scratch you should use the
download_modeparameter in load_dataset().
DatasetDict
Dictionary with split names as keys (βtrainβ, βtestβ for example), and Dataset objects as values. It also has dataset transform methods like map or filter, to process all the splits at once.
class datasets.DatasetDict
( )
A dictionary (dict of str: datasets.Dataset) with dataset transforms methods (map, filter, etc.)
data
( )
The Apache Arrow tables backing each split.
Example:
Copied
cache_files
( )
The cache files containing the Apache Arrow table backing each split.
Example:
Copied
num_columns
( )
Number of columns in each split of the dataset.
Example:
Copied
num_rows
( )
Number of rows in each split of the dataset (same as datasets.Dataset.__len__()).
Example:
Copied
column_names
( )
Names of the columns in each split of the dataset.
Example:
Copied
shape
( )
Shape of each split of the dataset (number of columns, number of rows).
Example:
Copied
unique
( column: str ) β Dict[str, list]
Parameters
column (
str) β column name (list all the column names with column_names)
Returns
Dict[str, list]
Dictionary of unique elements in the given column.
Return a list of the unique elements in a column for each split.
This is implemented in the low-level backend and as such, very fast.
Example:
Copied
cleanup_cache_files
( )
Clean up all cache files in the dataset cache directory, except the currently used cache file if there is one. Be careful when running this command that no other process is currently using other cache files.
Example:
Copied
map
( function: typing.Optional[typing.Callable] = Nonewith_indices: bool = Falsewith_rank: bool = Falseinput_columns: typing.Union[str, typing.List[str], NoneType] = Nonebatched: bool = Falsebatch_size: typing.Optional[int] = 1000drop_last_batch: bool = Falseremove_columns: typing.Union[str, typing.List[str], NoneType] = Nonekeep_in_memory: bool = Falseload_from_cache_file: typing.Optional[bool] = Nonecache_file_names: typing.Union[typing.Dict[str, typing.Optional[str]], NoneType] = Nonewriter_batch_size: typing.Optional[int] = 1000features: typing.Optional[datasets.features.features.Features] = Nonedisable_nullable: bool = Falsefn_kwargs: typing.Optional[dict] = Nonenum_proc: typing.Optional[int] = Nonedesc: typing.Optional[str] = None )
Parameters
function (
callable) β with one of the following signature:function(example: Dict[str, Any]) -> Dict[str, Any]ifbatched=Falseandwith_indices=Falsefunction(example: Dict[str, Any], indices: int) -> Dict[str, Any]ifbatched=Falseandwith_indices=Truefunction(batch: Dict[str, List]) -> Dict[str, List]ifbatched=Trueandwith_indices=Falsefunction(batch: Dict[str, List], indices: List[int]) -> Dict[str, List]ifbatched=Trueandwith_indices=True
For advanced usage, the function can also return a
pyarrow.Table. Moreover if your function returns nothing (None), thenmapwill run your function and return the dataset unchanged.with_indices (
bool, defaults toFalse) β Provide example indices tofunction. Note that in this case the signature offunctionshould bedef function(example, idx): ....with_rank (
bool, defaults toFalse) β Provide process rank tofunction. Note that in this case the signature offunctionshould bedef function(example[, idx], rank): ....input_columns (
[Union[str, List[str]]], optional, defaults toNone) β The columns to be passed intofunctionas positional arguments. IfNone, a dict mapping to all formatted columns is passed as one argument.batched (
bool, defaults toFalse) β Provide batch of examples tofunction.batch_size (
int, optional, defaults to1000) β Number of examples per batch provided tofunctionifbatched=True,batch_size <= 0orbatch_size == Nonethen provide the full dataset as a single batch tofunction.drop_last_batch (
bool, defaults toFalse) β Whether a last batch smaller than the batch_size should be dropped instead of being processed by the function.remove_columns (
[Union[str, List[str]]], optional, defaults toNone) β Remove a selection of columns while doing the mapping. Columns will be removed before updating the examples with the output offunction, i.e. iffunctionis adding columns with names inremove_columns, these columns will be kept.keep_in_memory (
bool, defaults toFalse) β Keep the dataset in memory instead of writing it to a cache file.load_from_cache_file (
Optional[bool], defaults toTrueif caching is enabled) β If a cache file storing the current computation fromfunctioncan be identified, use it instead of recomputing.cache_file_names (
[Dict[str, str]], optional, defaults toNone) β Provide the name of a path for the cache file. It is used to store the results of the computation instead of the automatically generated cache file name. You have to provide onecache_file_nameper dataset in the dataset dictionary.writer_batch_size (
int, default1000) β Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while runningmap.features (
[datasets.Features], optional, defaults toNone) β Use a specific Features to store the cache file instead of the automatically generated one.disable_nullable (
bool, defaults toFalse) β Disallow null values in the table.fn_kwargs (
Dict, optional, defaults toNone) β Keyword arguments to be passed tofunctionnum_proc (
int, optional, defaults toNone) β Number of processes for multiprocessing. By default it doesnβt use multiprocessing.desc (
str, optional, defaults toNone) β Meaningful description to be displayed alongside with the progress bar while mapping examples.
Apply a function to all the elements in the table (individually or in batches) and update the table (if the function does update examples). The transformation is applied to all the datasets of the dataset dictionary.
Example:
Copied
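A minimal illustrative sketch (toy splits; the added column name is arbitrary):

from datasets import Dataset, DatasetDict

dd = DatasetDict({
    "train": Dataset.from_dict({"text": ["hello world", "hi"]}),
    "test": Dataset.from_dict({"text": ["good morning"]}),
})
# The function is applied to every split of the dictionary
dd = dd.map(lambda example: {"n_chars": len(example["text"])})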
filter
( functionwith_indices = Falseinput_columns: typing.Union[str, typing.List[str], NoneType] = Nonebatched: bool = Falsebatch_size: typing.Optional[int] = 1000keep_in_memory: bool = Falseload_from_cache_file: typing.Optional[bool] = Nonecache_file_names: typing.Union[typing.Dict[str, typing.Optional[str]], NoneType] = Nonewriter_batch_size: typing.Optional[int] = 1000fn_kwargs: typing.Optional[dict] = Nonenum_proc: typing.Optional[int] = Nonedesc: typing.Optional[str] = None )
Parameters
function (
callable) β With one of the following signatures:function(example: Dict[str, Any]) -> boolifwith_indices=False, batched=Falsefunction(example: Dict[str, Any], indices: int) -> boolifwith_indices=True, batched=Falsefunction(example: Dict[str, List]) -> List[bool]ifwith_indices=False, batched=Truefunction(example: Dict[str, List], indices: List[int]) -> List[bool]ifwith_indices=True, batched=True
with_indices (
bool, defaults toFalse) β Provide example indices tofunction. Note that in this case the signature offunctionshould bedef function(example, idx): ....input_columns (
[Union[str, List[str]]], optional, defaults toNone) β The columns to be passed intofunctionas positional arguments. IfNone, a dict mapping to all formatted columns is passed as one argument.batched (
bool, defaults toFalse) β Provide batch of examples tofunction.batch_size (
int, optional, defaults to1000) β Number of examples per batch provided tofunctionifbatched=Truebatch_size <= 0orbatch_size == Nonethen provide the full dataset as a single batch tofunction.keep_in_memory (
bool, defaults toFalse) β Keep the dataset in memory instead of writing it to a cache file.load_from_cache_file (
Optional[bool], defaults toTrueif caching is enabled) β If a cache file storing the current computation fromfunctioncan be identified, use it instead of recomputing.cache_file_names (
[Dict[str, str]], optional, defaults toNone) β Provide the name of a path for the cache file. It is used to store the results of the computation instead of the automatically generated cache file name. You have to provide onecache_file_nameper dataset in the dataset dictionary.writer_batch_size (
int, defaults to1000) β Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while runningmap.fn_kwargs (
Dict, optional, defaults toNone) β Keyword arguments to be passed tofunctionnum_proc (
int, optional, defaults toNone) β Number of processes for multiprocessing. By default it doesnβt use multiprocessing.desc (
str, optional, defaults toNone) β Meaningful description to be displayed alongside with the progress bar while filtering examples.
Apply a filter function to all the elements in the table in batches and update the table so that the dataset only includes examples according to the filter function. The transformation is applied to all the datasets of the dataset dictionary.
Example:
Copied
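A minimal illustrative sketch (toy splits; the predicate is arbitrary):

from datasets import Dataset, DatasetDict

dd = DatasetDict({
    "train": Dataset.from_dict({"text": ["hello world", "hi"]}),
    "test": Dataset.from_dict({"text": ["good morning"]}),
})
# The filter is applied to every split of the dictionary
dd = dd.filter(lambda example: len(example["text"]) > 5)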
sort
( column_names: typing.Union[str, typing.Sequence[str]]reverse: typing.Union[bool, typing.Sequence[bool]] = Falsekind = 'deprecated'null_placement: str = 'at_end'keep_in_memory: bool = Falseload_from_cache_file: typing.Optional[bool] = Noneindices_cache_file_names: typing.Union[typing.Dict[str, typing.Optional[str]], NoneType] = Nonewriter_batch_size: typing.Optional[int] = 1000 )
Parameters
column_names (
Union[str, Sequence[str]]) β Column name(s) to sort by.reverse (
Union[bool, Sequence[bool]], defaults toFalse) β IfTrue, sort by descending order rather than ascending. If a single bool is provided, the value is applied to the sorting of all column names. Otherwise a list of bools with the same length and order as column_names must be provided.kind (
str, optional) β Pandas algorithm for sorting selected in{quicksort, mergesort, heapsort, stable}. The default isquicksort. Note that bothstableandmergesortuse timsort under the covers and, in general, the actual implementation will vary with data type. Themergesortoption is retained for backwards compatibility.Deprecated in 2.10.0
kindwas deprecated in version 2.10.0 and will be removed in 3.0.0.null_placement (
str, defaults toat_end) β PutNonevalues at the beginning ifat_startorfirstor at the end ifat_endorlastkeep_in_memory (
bool, defaults toFalse) β Keep the sorted indices in memory instead of writing it to a cache file.load_from_cache_file (
Optional[bool], defaults toTrueif caching is enabled) β If a cache file storing the sorted indices can be identified, use it instead of recomputing.indices_cache_file_names (
[Dict[str, str]], optional, defaults toNone) β Provide the name of a path for the cache file. It is used to store the indices mapping instead of the automatically generated cache file name. You have to provide onecache_file_nameper dataset in the dataset dictionary.writer_batch_size (
int, defaults to1000) β Number of rows per write operation for the cache file writer. Higher value gives smaller cache files, lower value consume less temporary memory.
Create a new dataset sorted according to a single or multiple columns.
Example:
Copied
shuffle
( seeds: typing.Union[int, typing.Dict[str, typing.Optional[int]], NoneType] = Noneseed: typing.Optional[int] = Nonegenerators: typing.Union[typing.Dict[str, numpy.random._generator.Generator], NoneType] = Nonekeep_in_memory: bool = Falseload_from_cache_file: typing.Optional[bool] = Noneindices_cache_file_names: typing.Union[typing.Dict[str, typing.Optional[str]], NoneType] = Nonewriter_batch_size: typing.Optional[int] = 1000 )
Parameters
seeds (
Dict[str, int]orint, optional) β A seed to initialize the default BitGenerator ifgenerator=None. IfNone, then fresh, unpredictable entropy will be pulled from the OS. If anintorarray_like[ints]is passed, then it will be passed to SeedSequence to derive the initial BitGenerator state. You can provide oneseedper dataset in the dataset dictionary.seed (
int, optional) β A seed to initialize the default BitGenerator ifgenerator=None. Alias for seeds (aValueErroris raised if both are provided).generators (
Dict[str, np.random.Generator], optional) β NumPy random Generator to use to compute the permutation of the dataset rows. Ifgenerator=None(default), usesnp.random.default_rng(the default BitGenerator (PCG64) of NumPy). You have to provide onegeneratorper dataset in the dataset dictionary.keep_in_memory (
bool, defaults toFalse) β Keep the dataset in memory instead of writing it to a cache file.load_from_cache_file (
Optional[bool], defaults toTrueif caching is enabled) β If a cache file storing the current computation fromfunctioncan be identified, use it instead of recomputing.indices_cache_file_names (
Dict[str, str], optional) β Provide the name of a path for the cache file. It is used to store the indices mappings instead of the automatically generated cache file name. You have to provide onecache_file_nameper dataset in the dataset dictionary.writer_batch_size (
int, defaults to1000) β Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while runningmap.
Create a new Dataset where the rows are shuffled.
The transformation is applied to all the datasets of the dataset dictionary.
Currently shuffling uses numpy random generators. You can either supply a NumPy BitGenerator to use, or a seed to initiate NumPyβs default random generator (PCG64).
Example:
Copied
set_format
( type: typing.Optional[str] = Nonecolumns: typing.Optional[typing.List] = Noneoutput_all_columns: bool = False**format_kwargs )
Parameters
type (
str, optional) β Output type selected in[None, 'numpy', 'torch', 'tensorflow', 'pandas', 'arrow', 'jax'].Nonemeans__getitem__returns python objects (default).columns (
List[str], optional) β Columns to format in the output.Nonemeans__getitem__returns all columns (default).output_all_columns (
bool, defaults to False) β Keep un-formatted columns as well in the output (as python objects),**format_kwargs (additional keyword arguments) β Keywords arguments passed to the convert function like
np.array,torch.tensorortensorflow.ragged.constant.
Set __getitem__ return format (type and columns). The format is set for every dataset in the dataset dictionary.
It is possible to call map after calling set_format. Since map may add new columns, then the list of formatted columns gets updated. In this case, if you apply map on a dataset to add a new column, then this column will be formatted:
new formatted columns = (all columns - previously unformatted columns)
Example:
Copied
reset_format
( )
Reset __getitem__ return format to python objects and all columns. The transformation is applied to all the datasets of the dataset dictionary.
Same as self.set_format()
Example:
Copied
formatted_as
( type: typing.Optional[str] = Nonecolumns: typing.Optional[typing.List] = Noneoutput_all_columns: bool = False**format_kwargs )
Parameters
type (
str, optional) β Output type selected in[None, 'numpy', 'torch', 'tensorflow', 'pandas', 'arrow', 'jax'].Nonemeans__getitem__returns python objects (default).columns (
List[str], optional) β Columns to format in the output.Nonemeans__getitem__returns all columns (default).output_all_columns (
bool, defaults to False) β Keep un-formatted columns as well in the output (as python objects).**format_kwargs (additional keyword arguments) β Keywords arguments passed to the convert function like
np.array,torch.tensorortensorflow.ragged.constant.
To be used in a with statement. Set __getitem__ return format (type and columns). The transformation is applied to all the datasets of the dataset dictionary.
with_format
( type: typing.Optional[str] = Nonecolumns: typing.Optional[typing.List] = Noneoutput_all_columns: bool = False**format_kwargs )
Parameters
type (
str, optional) β Output type selected in[None, 'numpy', 'torch', 'tensorflow', 'pandas', 'arrow', 'jax'].Nonemeans__getitem__returns python objects (default).columns (
List[str], optional) β Columns to format in the output.Nonemeans__getitem__returns all columns (default).output_all_columns (
bool, defaults toFalse) β Keep un-formatted columns as well in the output (as python objects).**format_kwargs (additional keyword arguments) β Keywords arguments passed to the convert function like
np.array,torch.tensorortensorflow.ragged.constant.
Set __getitem__ return format (type and columns). The data formatting is applied on-the-fly. The format type (for example βnumpyβ) is used to format batches when using __getitem__. The format is set for every dataset in the dataset dictionary.
Itβs also possible to use custom transforms for formatting using with_transform().
Contrary to set_format(), with_format returns a new DatasetDict object with new Dataset objects.
Example:
Copied
with_transform
( transform: typing.Optional[typing.Callable]columns: typing.Optional[typing.List] = Noneoutput_all_columns: bool = False )
Parameters
transform (
Callable, optional) β User-defined formatting transform, replaces the format defined by set_format(). A formatting function is a callable that takes a batch (as a dict) as input and returns a batch. This function is applied right before returning the objects in__getitem__.columns (
List[str], optional) β Columns to format in the output. If specified, then the input batch of the transform only contains those columns.output_all_columns (
bool, defaults to False) β Keep un-formatted columns as well in the output (as python objects). If set toTrue, then the other un-formatted columns are kept with the output of the transform.
Set __getitem__ return format using this transform. The transform is applied on-the-fly on batches when __getitem__ is called. The transform is set for every dataset in the dataset dictionary
As set_format(), this can be reset using reset_format().
Contrary to set_transform(), with_transform returns a new DatasetDict object with new Dataset objects.
Example:
Copied
flatten
( max_depth = 16 )
Flatten the Apache Arrow Table of each split (nested features are flattened). Each column with a struct type is flattened into one column per struct field. Other columns are left unchanged.
Example:
Copied
cast
( features: Features )
Parameters
features (Features) β New features to cast the dataset to. The name and order of the fields in the features must match the current column names. The type of the data must also be convertible from one type to the other. For non-trivial conversion, e.g.
string<->ClassLabelyou should use map() to update the Dataset.
Cast the dataset to a new set of features. The transformation is applied to all the datasets of the dataset dictionary.
You can also change the column types using Dataset.map() with features, but cast() is in-place (doesnβt copy the data to a new dataset) and is thus faster.
Example:
Copied
cast_column
( column: strfeature )
Parameters
column (
str) β Column name.feature (
Feature) β Target feature.
Cast column to feature for decoding.
Example:
Copied
remove_columns
( column_names: typing.Union[str, typing.List[str]] )
Parameters
column_names (
Union[str, List[str]]) β Name of the column(s) to remove.
Remove one or several column(s) from each split in the dataset and the features associated to the column(s).
The transformation is applied to all the splits of the dataset dictionary.
You can also remove a column using Dataset.map() with remove_columns but the present method is in-place (doesnβt copy the data to a new dataset) and is thus faster.
Example:
Copied
rename_column
( original_column_name: strnew_column_name: str )
Parameters
original_column_name (
str) β Name of the column to rename.new_column_name (
str) β New name for the column.
Rename a column in the dataset and move the features associated to the original column under the new column name. The transformation is applied to all the datasets of the dataset dictionary.
You can also rename a column using map() with remove_columns but the present method:
takes care of moving the original features under the new column name.
doesnβt copy the data to a new dataset and is thus much faster.
Example:
Copied
rename_columns
( column_mapping: typing.Dict[str, str] ) β DatasetDict
Parameters
column_mapping (
Dict[str, str]) β A mapping of columns to rename to their new names.
Returns
A copy of the dataset with renamed columns.
Rename several columns in the dataset, and move the features associated to the original columns under the new column names. The transformation is applied to all the datasets of the dataset dictionary.
Example:
Copied
select_columns
( column_names: typing.Union[str, typing.List[str]] )
Parameters
column_names (
Union[str, List[str]]) β Name of the column(s) to keep.
Select one or several column(s) from each split in the dataset and the features associated to the column(s).
The transformation is applied to all the splits of the dataset dictionary.
Example:
Copied
class_encode_column
( column: strinclude_nulls: bool = False )
Parameters
column (
str) β The name of the column to cast.include_nulls (
bool, defaults toFalse) β Whether to include null values in the class labels. IfTrue, the null values will be encoded as the"None"class label.Added in 1.14.2
Casts the given column as ClassLabel and updates the tables.
Example:
Copied
push_to_hub
( repo_idconfig_name: str = 'default'private: typing.Optional[bool] = Falsetoken: typing.Optional[str] = Nonebranch: NoneType = Nonemax_shard_size: typing.Union[str, int, NoneType] = Nonenum_shards: typing.Union[typing.Dict[str, int], NoneType] = Noneembed_external_files: bool = True )
Parameters
repo_id (
str) β The ID of the repository to push to in the following format:<user>/<dataset_name>or<org>/<dataset_name>. Also accepts<dataset_name>, which will default to the namespace of the logged-in user.private (
bool, optional) β Whether the dataset repository should be set to private or not. Only affects repository creation: a repository that already exists will not be affected by that parameter.config_name (
str) β Configuration name of a dataset. Defaults to βdefaultβ.token (
str, optional) β An optional authentication token for the BOINC AI Hub. If no token is passed, will default to the token saved locally when logging in withboincai-cli login. Will raise an error if no token is passed and the user is not logged-in.branch (
str, optional) β The git branch on which to push the dataset.max_shard_size (
intorstr, optional, defaults to"500MB") β The maximum size of the dataset shards to be uploaded to the hub. If expressed as a string, needs to be digits followed by a unit (like"500MB"or"1GB").num_shards (
Dict[str, int], optional) β Number of shards to write. By default the number of shards depends onmax_shard_size. Use a dictionary to define a different num_shards for each split.Added in 2.8.0
Pushes the DatasetDict to the hub as a Parquet dataset. The DatasetDict is pushed using HTTP requests and does not require git or git-lfs to be installed.
Each dataset split will be pushed independently. The pushed dataset will keep the original split names.
The resulting Parquet files are self-contained by default: if your dataset contains Image or Audio data, the Parquet files will store the bytes of your images or audio files. You can disable this by setting embed_external_files to False.
Example:
Copied
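A minimal illustrative sketch (the repository ID "username/my_dataset" is a placeholder; an authentication token or prior login is assumed):

from datasets import Dataset, DatasetDict

dd = DatasetDict({"train": Dataset.from_dict({"text": ["hello", "world"]})})
dd.push_to_hub("username/my_dataset")
# Optionally control the number of shards per split
dd.push_to_hub("username/my_dataset", num_shards={"train": 2})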
save_to_disk
( dataset_dict_path: typing.Union[str, bytes, os.PathLike]fs = 'deprecated'max_shard_size: typing.Union[str, int, NoneType] = Nonenum_shards: typing.Union[typing.Dict[str, int], NoneType] = Nonenum_proc: typing.Optional[int] = Nonestorage_options: typing.Optional[dict] = None )
Parameters
dataset_dict_path (
str) β Path (e.g.dataset/train) or remote URI (e.g.s3://my-bucket/dataset/train) of the dataset dict directory where the dataset dict will be saved to.fs (
fsspec.spec.AbstractFileSystem, optional) β Instance of the remote filesystem where the dataset will be saved to.Deprecated in 2.8.0
fswas deprecated in version 2.8.0 and will be removed in 3.0.0. Please usestorage_optionsinstead, e.g.storage_options=fs.storage_optionsmax_shard_size (
intorstr, optional, defaults to"500MB") β The maximum size of the dataset shards to be uploaded to the hub. If expressed as a string, needs to be digits followed by a unit (like"50MB").num_shards (
Dict[str, int], optional) β Number of shards to write. By default the number of shards depends onmax_shard_sizeandnum_proc. You need to provide the number of shards for each dataset in the dataset dictionary. Use a dictionary to define a different num_shards for each split.Added in 2.8.0
num_proc (
int, optional, defaultNone) β Number of processes when downloading and generating the dataset locally. Multiprocessing is disabled by default.Added in 2.8.0
storage_options (
dict, optional) β Key/value pairs to be passed on to the file-system backend, if any.Added in 2.8.0
Saves a dataset dict to a filesystem using fsspec.spec.AbstractFileSystem.
All the Image() and Audio() data are stored in the Arrow files. If you want to store paths or URLs, please use the Value("string") type.
Example:
Copied
load_from_disk
( dataset_dict_path: typing.Union[str, bytes, os.PathLike]fs = 'deprecated'keep_in_memory: typing.Optional[bool] = Nonestorage_options: typing.Optional[dict] = None )
Parameters
dataset_dict_path (
str) β Path (e.g."dataset/train") or remote URI (e.g."s3://my-bucket/dataset/train") of the dataset dict directory where the dataset dict will be loaded from.fs (
fsspec.spec.AbstractFileSystem, optional) β Instance of the remote filesystem where the dataset will be saved to.Deprecated in 2.8.0
fswas deprecated in version 2.8.0 and will be removed in 3.0.0. Please usestorage_optionsinstead, e.g.storage_options=fs.storage_optionskeep_in_memory (
bool, defaults toNone) β Whether to copy the dataset in-memory. IfNone, the dataset will not be copied in-memory unless explicitly enabled by settingdatasets.config.IN_MEMORY_MAX_SIZEto nonzero. See more details in the improve performance section.storage_options (
dict, optional) β Key/value pairs to be passed on to the file-system backend, if any.Added in 2.8.0
Load a dataset that was previously saved using save_to_disk from a filesystem using fsspec.spec.AbstractFileSystem.
Example:
Copied
from_csv
( path_or_paths: typing.Dict[str, typing.Union[str, bytes, os.PathLike]]features: typing.Optional[datasets.features.features.Features] = Nonecache_dir: str = Nonekeep_in_memory: bool = False**kwargs )
Parameters
path_or_paths (
dictof path-like) β Path(s) of the CSV file(s).features (Features, optional) β Dataset features.
cache_dir (str, optional, defaults to
"~/.cache/boincai/datasets") β Directory to cache data.keep_in_memory (
bool, defaults toFalse) β Whether to copy the data in-memory.**kwargs (additional keyword arguments) β Keyword arguments to be passed to
pandas.read_csv.
Create DatasetDict from CSV file(s).
Example:
Copied
from_json
( path_or_paths: typing.Dict[str, typing.Union[str, bytes, os.PathLike]]features: typing.Optional[datasets.features.features.Features] = Nonecache_dir: str = Nonekeep_in_memory: bool = False**kwargs )
Parameters
path_or_paths (
path-likeor list ofpath-like) β Path(s) of the JSON Lines file(s).features (Features, optional) β Dataset features.
cache_dir (str, optional, defaults to
"~/.cache/boincai/datasets") β Directory to cache data.keep_in_memory (
bool, defaults toFalse) β Whether to copy the data in-memory.**kwargs (additional keyword arguments) β Keyword arguments to be passed to
JsonConfig.
Create DatasetDict from JSON Lines file(s).
Example:
Copied
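Similarly for JSON Lines input, with one file per split (file names illustrative):

from datasets import DatasetDict

ds = DatasetDict.from_json({"train": "train.jsonl", "test": "test.jsonl"})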
from_parquet
( path_or_paths: typing.Dict[str, typing.Union[str, bytes, os.PathLike]]features: typing.Optional[datasets.features.features.Features] = Nonecache_dir: str = Nonekeep_in_memory: bool = Falsecolumns: typing.Optional[typing.List[str]] = None**kwargs )
Parameters
path_or_paths (
dictof path-like) β Path(s) of the Parquet file(s).features (Features, optional) β Dataset features.
cache_dir (
str, optional, defaults to"~/.cache/boincai/datasets") β Directory to cache data.keep_in_memory (
bool, defaults toFalse) β Whether to copy the data in-memory.columns (
List[str], optional) β If notNone, only these columns will be read from the file. A column name may be a prefix of a nested field, e.g. βaβ will select βa.bβ, βa.cβ, and βa.d.eβ.**kwargs (additional keyword arguments) β Keyword arguments to be passed to
ParquetConfig.
Create DatasetDict from Parquet file(s).
Example:
Copied
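A sketch that reads only two columns from Parquet files; the paths and column names are illustrative:

from datasets import DatasetDict

ds = DatasetDict.from_parquet(
    {"train": "train.parquet", "test": "test.parquet"},
    columns=["text", "label"],
)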
from_text
( path_or_paths: typing.Dict[str, typing.Union[str, bytes, os.PathLike]]features: typing.Optional[datasets.features.features.Features] = Nonecache_dir: str = Nonekeep_in_memory: bool = False**kwargs )
Parameters
path_or_paths (
dictof path-like) β Path(s) of the text file(s).features (Features, optional) β Dataset features.
cache_dir (
str, optional, defaults to"~/.cache/boincai/datasets") β Directory to cache data.keep_in_memory (
bool, defaults toFalse) β Whether to copy the data in-memory.**kwargs (additional keyword arguments) β Keyword arguments to be passed to
TextConfig.
Create DatasetDict from text file(s).
Example:
Copied
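A sketch with one plain-text file per split (file names illustrative); each line of the files becomes one example in a "text" column:

from datasets import DatasetDict

ds = DatasetDict.from_text({"train": "train.txt", "test": "test.txt"})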
prepare_for_task
( task: typing.Union[str, datasets.tasks.base.TaskTemplate]id: int = 0 )
Parameters
task (
Union[str, TaskTemplate]) β The task to prepare the dataset for during training and evaluation. Ifstr, supported tasks include:"text-classification""question-answering"
If
TaskTemplate, must be one of the task templates indatasets.tasks.id (
int, defaults to0) β The id required to unambiguously identify the task template when multiple task templates of the same type are supported.
Prepare a dataset for the given task by casting the datasetβs Features to standardized column names and types as detailed in datasets.tasks.
Casts datasets.DatasetInfo.features according to a task-specific schema. Intended for single-use only, so all task templates are removed from datasets.DatasetInfo.task_templates after casting.
IterableDataset
The base class IterableDataset implements an iterable Dataset backed by python generators.
class datasets.IterableDataset
( ex_iterable: _BaseExamplesIterableinfo: typing.Optional[datasets.info.DatasetInfo] = Nonesplit: typing.Optional[datasets.splits.NamedSplit] = Noneformatting: typing.Optional[datasets.iterable_dataset.FormattingConfig] = Noneshuffling: typing.Optional[datasets.iterable_dataset.ShufflingConfig] = Nonedistributed: typing.Optional[datasets.iterable_dataset.DistributedConfig] = Nonetoken_per_repo_id: typing.Union[typing.Dict[str, typing.Union[str, bool, NoneType]], NoneType] = Noneformat_type = 'deprecated' )
A Dataset backed by an iterable.
from_generator
( generator: typing.Callablefeatures: typing.Optional[datasets.features.features.Features] = Nonegen_kwargs: typing.Optional[dict] = None ) β IterableDataset
Parameters
generator (
Callable) β A generator function thatyieldsexamples.features (
Features, optional) β Dataset features.gen_kwargs(
dict, optional) β Keyword arguments to be passed to thegeneratorcallable. You can define a sharded iterable dataset by passing the list of shards ingen_kwargs. This can be used to improve shuffling and when iterating over the dataset with multiple workers.
Returns
IterableDataset
Create an Iterable Dataset from a generator.
Example:
Copied
Copied
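A minimal sketch, where passing a list in gen_kwargs defines shards (the shard values below are just placeholders):

from datasets import IterableDataset

def gen(shards):
    # Yields plain dict examples, one shard at a time
    for shard in shards:
        for i in range(3):
            yield {"shard": shard, "i": i}

ds = IterableDataset.from_generator(gen, gen_kwargs={"shards": ["a", "b", "c"]})
for example in ds:
    print(example)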
remove_columns
( column_names: typing.Union[str, typing.List[str]] ) β IterableDataset
Parameters
column_names (
Union[str, List[str]]) β Name of the column(s) to remove.
Returns
IterableDataset
A copy of the dataset object without the columns to remove.
Remove one or several column(s) in the dataset and the features associated to them. The removal is done on-the-fly on the examples when iterating over the dataset.
Example:
Copied
select_columns
( column_names: typing.Union[str, typing.List[str]] ) β IterableDataset
Parameters
column_names (
Union[str, List[str]]) β Name of the column(s) to select.
Returns
IterableDataset
A copy of the dataset object with selected columns.
Select one or several column(s) in the dataset and the features associated to them. The selection is done on-the-fly on the examples when iterating over the dataset.
Example:
Copied
cast_column
( column: strfeature: typing.Union[dict, list, tuple, datasets.features.features.Value, datasets.features.features.ClassLabel, datasets.features.translation.Translation, datasets.features.translation.TranslationVariableLanguages, datasets.features.features.Sequence, datasets.features.features.Array2D, datasets.features.features.Array3D, datasets.features.features.Array4D, datasets.features.features.Array5D, datasets.features.audio.Audio, datasets.features.image.Image] ) β IterableDataset
Parameters
column (
str) β Column name.feature (
Feature) β Target feature.
Returns
IterableDataset
Cast column to feature for decoding.
Example:
Copied
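A minimal sketch re-typing a column of an iterable dataset; in practice this is most often used to re-decode Audio or Image columns. The column names and values below are illustrative:

from datasets import IterableDataset, Features, Value

def gen():
    yield {"id": 0, "text": "hello"}

features = Features({"id": Value("int64"), "text": Value("string")})
ds = IterableDataset.from_generator(gen, features=features)
ds = ds.cast_column("id", Value("int32"))  # applied on the fly while iterating
print(ds.features["id"])  # Value(dtype='int32', id=None)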
cast
( features: Features ) β IterableDataset
Parameters
features (Features) β New features to cast the dataset to. The name of the fields in the features must match the current column names. The type of the data must also be convertible from one type to the other. For non-trivial conversion, e.g.
string<->ClassLabelyou should use map() to update the Dataset.
Returns
IterableDataset
A copy of the dataset with casted features.
Cast the dataset to a new set of features.
Example:
Copied
__iter__
( )
iter
( batch_size: intdrop_last_batch: bool = False )
Parameters
batch_size (
int) β size of each batch to yield.drop_last_batch (
bool, defaults toFalse) β Whether a last batch smaller than the batch_size should be dropped.
Iterate through the batches of size batch_size.
map
( function: typing.Optional[typing.Callable] = Nonewith_indices: bool = Falseinput_columns: typing.Union[str, typing.List[str], NoneType] = Nonebatched: bool = Falsebatch_size: typing.Optional[int] = 1000drop_last_batch: bool = Falseremove_columns: typing.Union[str, typing.List[str], NoneType] = Nonefeatures: typing.Optional[datasets.features.features.Features] = Nonefn_kwargs: typing.Optional[dict] = None )
Parameters
function (
Callable, optional, defaults toNone) β Function applied on-the-fly on the examples when you iterate on the dataset. It must have one of the following signatures:function(example: Dict[str, Any]) -> Dict[str, Any]ifbatched=Falseandwith_indices=Falsefunction(example: Dict[str, Any], idx: int) -> Dict[str, Any]ifbatched=Falseandwith_indices=Truefunction(batch: Dict[str, List]) -> Dict[str, List]ifbatched=Trueandwith_indices=Falsefunction(batch: Dict[str, List], indices: List[int]) -> Dict[str, List]ifbatched=Trueandwith_indices=True
For advanced usage, the function can also return a
pyarrow.Table. Moreover if your function returns nothing (None), thenmapwill run your function and return the dataset unchanged. If no function is provided, default to identity function:lambda x: x.with_indices (
bool, defaults toFalse) β Provide example indices tofunction. Note that in this case the signature offunctionshould bedef function(example, idx[, rank]): ....input_columns (
Optional[Union[str, List[str]]], defaults toNone) β The columns to be passed intofunctionas positional arguments. IfNone, a dict mapping to all formatted columns is passed as one argument.batched (
bool, defaults toFalse) β Provide batch of examples tofunction.batch_size (
int, optional, defaults to1000) β Number of examples per batch provided tofunctionifbatched=True.batch_size <= 0orbatch_size == Nonethen provide the full dataset as a single batch tofunction.drop_last_batch (
bool, defaults toFalse) β Whether a last batch smaller than the batch_size should be dropped instead of being processed by the function.remove_columns (
[List[str]], optional, defaults toNone) β Remove a selection of columns while doing the mapping. Columns will be removed before updating the examples with the output offunction, i.e. iffunctionis adding columns with names inremove_columns, these columns will be kept.features (
[Features], optional, defaults toNone) β Feature types of the resulting dataset.fn_kwargs (
Dict, optional, defaultNone) β Keyword arguments to be passed tofunction.
Apply a function to all the examples in the iterable dataset (individually or in batches) and update them. If your function returns a column that already exists, then it overwrites it. The function is applied on-the-fly on the examples when iterating over the dataset.
You can specify whether the function should be batched or not with the batched parameter:
If batched is
False, then the function takes 1 example in and should return 1 example. An example is a dictionary, e.g.{"text": "Hello there !"}.If batched is
Trueandbatch_sizeis 1, then the function takes a batch of 1 example as input and can return a batch with 1 or more examples. A batch is a dictionary, e.g. a batch of 1 example is {βtextβ: [βHello there !β]}.If batched is
Trueandbatch_sizeisn> 1, then the function takes a batch ofnexamples as input and can return a batch withnexamples, or with an arbitrary number of examples. Note that the last batch may have less thannexamples. A batch is a dictionary, e.g. a batch ofnexamples is{"text": ["Hello there !"] * n}.
Example:
Copied
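A minimal sketch with a non-batched function applied lazily (column name illustrative):

from datasets import IterableDataset

def gen():
    yield {"text": "hello"}
    yield {"text": "world"}

def add_prefix(example):
    # Runs on the fly, one example at a time, while iterating
    example["text"] = "My text: " + example["text"]
    return example

ds = IterableDataset.from_generator(gen).map(add_prefix)
print(list(ds))  # [{'text': 'My text: hello'}, {'text': 'My text: world'}]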
rename_column
( original_column_name: strnew_column_name: str ) β IterableDataset
Parameters
original_column_name (
str) β Name of the column to rename.new_column_name (
str) β New name for the column.
Returns
IterableDataset
A copy of the dataset with a renamed column.
Rename a column in the dataset, and move the features associated to the original column under the new column name.
Example:
Copied
filter
( function: typing.Optional[typing.Callable] = Nonewith_indices = Falseinput_columns: typing.Union[str, typing.List[str], NoneType] = Nonebatched: bool = Falsebatch_size: typing.Optional[int] = 1000fn_kwargs: typing.Optional[dict] = None )
Parameters
function (
Callable) β Callable with one of the following signatures:function(example: Dict[str, Any]) -> boolifwith_indices=False, batched=Falsefunction(example: Dict[str, Any], indices: int) -> boolifwith_indices=True, batched=Falsefunction(example: Dict[str, List]) -> List[bool]ifwith_indices=False, batched=Truefunction(example: Dict[str, List], indices: List[int]) -> List[bool]ifwith_indices=True, batched=True
If no function is provided, defaults to an always True function:
lambda x: True.with_indices (
bool, defaults toFalse) β Provide example indices tofunction. Note that in this case the signature offunctionshould bedef function(example, idx): ....input_columns (
strorList[str], optional) β The columns to be passed intofunctionas positional arguments. IfNone, a dict mapping to all formatted columns is passed as one argument.batched (
bool, defaults toFalse) β Provide batch of examples tofunction.batch_size (
int, optional, default1000) β Number of examples per batch provided tofunctionifbatched=True.fn_kwargs (
Dict, optional, defaultNone) β Keyword arguments to be passed tofunction.
Apply a filter function to all the elements so that the dataset only includes examples according to the filter function. The filtering is done on-the-fly when iterating over the dataset.
Example:
Copied
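A minimal sketch keeping only even values of an illustrative integer column:

from datasets import IterableDataset

def gen():
    for i in range(6):
        yield {"i": i}

# The predicate runs lazily while iterating over the dataset
even = IterableDataset.from_generator(gen).filter(lambda x: x["i"] % 2 == 0)
print([x["i"] for x in even])  # [0, 2, 4]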
shuffle
( seed = Nonegenerator: typing.Optional[numpy.random._generator.Generator] = Nonebuffer_size: int = 1000 )
Parameters
seed (
int, optional, defaults toNone) β Random seed that will be used to shuffle the dataset. It is used to sample from the shuffle buffer and also to shuffle the data shards.generator (
numpy.random.Generator, optional) β Numpy random Generator to use to compute the permutation of the dataset rows. Ifgenerator=None(default), usesnp.random.default_rng(the default BitGenerator (PCG64) of NumPy).buffer_size (
int, defaults to1000) β Size of the buffer.
Randomly shuffles the elements of this dataset.
This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required.
For instance, if your dataset contains 10,000 elements but buffer_size is set to 1000, then shuffle will initially select a random element from only the first 1000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1000 element buffer.
If the dataset is made of several shards, it also shuffles the order of the shards. However, if the order has been fixed by using skip() or take(), then the order of the shards is kept unchanged.
Example:
Copied
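A minimal sketch with a small buffer; the resulting order depends on the seed, so it is not shown here:

from datasets import IterableDataset

def gen():
    for i in range(10):
        yield {"i": i}

# A 5-element buffer is filled first, then sampled from at random
ds = IterableDataset.from_generator(gen).shuffle(seed=42, buffer_size=5)
print([x["i"] for x in ds])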
skip
( n )
Parameters
n (
int) β Number of elements to skip.
Create a new IterableDataset that skips the first n elements.
Example:
Copied
take
( n )
Parameters
n (
int) β Number of elements to take.
Create a new IterableDataset with only the first n elements.
Example:
Copied
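A minimal sketch showing skip and take side by side on the same illustrative dataset:

from datasets import IterableDataset

def gen():
    for i in range(10):
        yield {"i": i}

ds = IterableDataset.from_generator(gen)
print([x["i"] for x in ds.take(3)])  # [0, 1, 2]
print([x["i"] for x in ds.skip(8)])  # [8, 9]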
info
( )
DatasetInfo object containing all the metadata in the dataset.
split
( )
NamedSplit object corresponding to a named dataset split.
builder_name
( )
citation
( )
config_name
( )
dataset_size
( )
description
( )
download_checksums
( )
download_size
( )
features
( )
homepage
( )
license
( )
size_in_bytes
( )
supervised_keys
( )
version
( )
IterableDatasetDict
Dictionary with split names as keys (βtrainβ, βtestβ for example), and IterableDataset objects as values.
class datasets.IterableDatasetDict
( )
map
( function: typing.Optional[typing.Callable] = Nonewith_indices: bool = Falseinput_columns: typing.Union[str, typing.List[str], NoneType] = Nonebatched: bool = Falsebatch_size: int = 1000drop_last_batch: bool = Falseremove_columns: typing.Union[str, typing.List[str], NoneType] = Nonefn_kwargs: typing.Optional[dict] = None )
Parameters
function (
Callable, optional, defaults toNone) β Function applied on-the-fly on the examples when you iterate on the dataset. It must have one of the following signatures:function(example: Dict[str, Any]) -> Dict[str, Any]ifbatched=Falseandwith_indices=Falsefunction(example: Dict[str, Any], idx: int) -> Dict[str, Any]ifbatched=Falseandwith_indices=Truefunction(batch: Dict[str, List]) -> Dict[str, List]ifbatched=Trueandwith_indices=Falsefunction(batch: Dict[str, List], indices: List[int]) -> Dict[str, List]ifbatched=Trueandwith_indices=True
For advanced usage, the function can also return a
pyarrow.Table. Moreover if your function returns nothing (None), thenmapwill run your function and return the dataset unchanged. If no function is provided, default to identity function:lambda x: x.with_indices (
bool, defaults toFalse) β Provide example indices tofunction. Note that in this case the signature offunctionshould bedef function(example, idx[, rank]): ....input_columns (
[Union[str, List[str]]], optional, defaults toNone) β The columns to be passed intofunctionas positional arguments. IfNone, a dict mapping to all formatted columns is passed as one argument.batched (
bool, defaults toFalse) β Provide batch of examples tofunction.batch_size (
int, optional, defaults to1000) β Number of examples per batch provided tofunctionifbatched=True.drop_last_batch (
bool, defaults toFalse) β Whether a last batch smaller than thebatch_sizeshould be dropped instead of being processed by the function.remove_columns (
[List[str]], optional, defaults toNone) β Remove a selection of columns while doing the mapping. Columns will be removed before updating the examples with the output offunction, i.e. iffunctionis adding columns with names inremove_columns, these columns will be kept.fn_kwargs (
Dict, optional, defaults toNone) β Keyword arguments to be passed tofunction
Apply a function to all the examples in the iterable dataset (individually or in batches) and update them. If your function returns a column that already exists, then it overwrites it. The function is applied on-the-fly on the examples when iterating over the dataset. The transformation is applied to all the datasets of the dataset dictionary.
You can specify whether the function should be batched or not with the batched parameter:
If batched is
False, then the function takes 1 example in and should return 1 example. An example is a dictionary, e.g.{"text": "Hello there !"}.If batched is
Trueandbatch_sizeis 1, then the function takes a batch of 1 example as input and can return a batch with 1 or more examples. A batch is a dictionary, e.g. a batch of 1 example is{"text": ["Hello there !"]}.If batched is
Trueandbatch_sizeisn> 1, then the function takes a batch ofnexamples as input and can return a batch withnexamples, or with an arbitrary number of examples. Note that the last batch may have less thannexamples. A batch is a dictionary, e.g. a batch ofnexamples is{"text": ["Hello there !"] * n}.
Example:
Copied
filter
( function: typing.Optional[typing.Callable] = Nonewith_indices = Falseinput_columns: typing.Union[str, typing.List[str], NoneType] = Nonebatched: bool = Falsebatch_size: typing.Optional[int] = 1000fn_kwargs: typing.Optional[dict] = None )
Parameters
function (
Callable) β Callable with one of the following signatures:function(example: Dict[str, Any]) -> boolifwith_indices=False, batched=Falsefunction(example: Dict[str, Any], indices: int) -> boolifwith_indices=True, batched=Falsefunction(example: Dict[str, List]) -> List[bool]ifwith_indices=False, batched=Truefunction(example: Dict[str, List], indices: List[int]) -> List[bool]ifwith_indices=True, batched=True
If no function is provided, defaults to an always True function:
lambda x: True.with_indices (
bool, defaults toFalse) β Provide example indices tofunction. Note that in this case the signature offunctionshould bedef function(example, idx): ....input_columns (
strorList[str], optional) β The columns to be passed intofunctionas positional arguments. IfNone, a dict mapping to all formatted columns is passed as one argument.batched (
bool, defaults toFalse) β Provide batch of examples tofunctionbatch_size (
int, optional, defaults to1000) β Number of examples per batch provided tofunctionifbatched=True.fn_kwargs (
Dict, optional, defaults toNone) β Keyword arguments to be passed tofunction
Apply a filter function to all the elements so that the dataset only includes examples according to the filter function. The filtering is done on-the-fly when iterating over the dataset. The filtering is applied to all the datasets of the dataset dictionary.
Example:
Copied
shuffle
( seed = Nonegenerator: typing.Optional[numpy.random._generator.Generator] = Nonebuffer_size: int = 1000 )
Parameters
seed (
int, optional, defaults toNone) β Random seed that will be used to shuffle the dataset. It is used to sample from the shuffle buffer and also to shuffle the data shards.generator (
numpy.random.Generator, optional) β Numpy random Generator to use to compute the permutation of the dataset rows. Ifgenerator=None(default), usesnp.random.default_rng(the default BitGenerator (PCG64) of NumPy).buffer_size (
int, defaults to1000) β Size of the buffer.
Randomly shuffles the elements of this dataset. The shuffling is applied to all the datasets of the dataset dictionary.
This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required.
For instance, if your dataset contains 10,000 elements but buffer_size is set to 1000, then shuffle will initially select a random element from only the first 1000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1000 element buffer.
If the dataset is made of several shards, it also shuffles the order of the shards. However, if the order has been fixed by using skip() or take(), then the order of the shards is kept unchanged.
Example:
Copied
with_format
( type: typing.Optional[str] = None )
Parameters
type (
str, optional, defaults toNone) β If set to βtorchβ, the returned dataset will be a subclass oftorch.utils.data.IterableDatasetto be used in aDataLoader.
Return a dataset with the specified format. This method only supports the βtorchβ format for now. The format is set to all the datasets of the dataset dictionary.
Example:
Copied
cast
( features: Features ) β IterableDatasetDict
Parameters
features (
Features) β New features to cast the dataset to. The name of the fields in the features must match the current column names. The type of the data must also be convertible from one type to the other. For non-trivial conversion, e.g.string<->ClassLabelyou should usemapto update the Dataset.
Returns
A copy of the dataset with casted features.
Cast the dataset to a new set of features. The type casting is applied to all the datasets of the dataset dictionary.
Example:
Copied
cast_column
( column: strfeature: typing.Union[dict, list, tuple, datasets.features.features.Value, datasets.features.features.ClassLabel, datasets.features.translation.Translation, datasets.features.translation.TranslationVariableLanguages, datasets.features.features.Sequence, datasets.features.features.Array2D, datasets.features.features.Array3D, datasets.features.features.Array4D, datasets.features.features.Array5D, datasets.features.audio.Audio, datasets.features.image.Image] )
Parameters
column (
str) β Column name.feature (
Feature) β Target feature.
Cast column to feature for decoding. The type casting is applied to all the datasets of the dataset dictionary.
Example:
Copied
remove_columns
( column_names: typing.Union[str, typing.List[str]] ) β IterableDatasetDict
Parameters
column_names (
Union[str, List[str]]) β Name of the column(s) to remove.
Returns
A copy of the dataset object without the columns to remove.
Remove one or several column(s) in the dataset and the features associated to them. The removal is done on-the-fly on the examples when iterating over the dataset. The removal is applied to all the datasets of the dataset dictionary.
Example:
Copied
rename_column
( original_column_name: strnew_column_name: str ) β IterableDatasetDict
Parameters
original_column_name (
str) β Name of the column to rename.new_column_name (
str) β New name for the column.
Returns
A copy of the dataset with a renamed column.
Rename a column in the dataset, and move the features associated to the original column under the new column name. The renaming is applied to all the datasets of the dataset dictionary.
Example:
Copied
rename_columns
( column_mapping: typing.Dict[str, str] ) β IterableDatasetDict
Parameters
column_mapping (
Dict[str, str]) β A mapping of columns to rename to their new names.
Returns
A copy of the dataset with renamed columns
Rename several columns in the dataset, and move the features associated to the original columns under the new column names. The renaming is applied to all the datasets of the dataset dictionary.
Example:
Copied
select_columns
( column_names: typing.Union[str, typing.List[str]] ) β IterableDatasetDict
Parameters
column_names (
Union[str, List[str]]) β Name of the column(s) to keep.
Returns
A copy of the dataset object with only selected columns.
Select one or several column(s) in the dataset and the features associated to them. The selection is done on-the-fly on the examples when iterating over the dataset. The selection is applied to all the datasets of the dataset dictionary.
Example:
Copied
Features
class datasets.Features
( *args**kwargs )
A special dictionary that defines the internal structure of a dataset.
Instantiated with a dictionary of type dict[str, FieldType], where keys are the desired column names, and values are the type of that column.
FieldType can be one of the following:
a Value feature specifies a single typed value, e.g.
int64orstring.a ClassLabel feature specifies a field with a predefined set of classes which can have labels associated to them and will be stored as integers in the dataset.
a python
dictwhich specifies that the field is a nested field containing a mapping of sub-fields to sub-fields features. Itβs possible to have nested fields of nested fields in an arbitrary manner.a python
listor a Sequence specifies that the field contains a list of objects. The pythonlistor Sequence should be provided with a single sub-feature as an example of the feature type hosted in this list.A Sequence with an internal dictionary feature will be automatically converted into a dictionary of lists. This behavior is implemented to have a compatibility layer with the TensorFlow Datasets library but may be unwanted in some cases. If you donβt want this behavior, you can use a python
listinstead of the Sequence.an Audio feature to store the absolute path to an audio file or a dictionary with the relative path to an audio file (βpathβ key) and its bytes content (βbytesβ key). This feature extracts the audio data.
an Image feature to store the absolute path to an image file, an
np.ndarrayobject, aPIL.Image.Imageobject or a dictionary with the relative path to an image file (βpathβ key) and its bytes content (βbytesβ key). This feature extracts the image data.Translation and TranslationVariableLanguages, the two features specific to Machine Translation.
copy
( )
Make a deep copy of Features.
Example:
Copied
decode_batch
( batch: dicttoken_per_repo_id: typing.Union[typing.Dict[str, typing.Union[str, bool, NoneType]], NoneType] = None )
Parameters
batch (
dict[str, list[Any]]) β Dataset batch data.token_per_repo_id (
dict, optional) β To access and decode audio or image files from private repositories on the Hub, you can pass a dictionary repo_id (str) -> token (bool or str)
Decode batch with custom feature decoding.
decode_column
( column: listcolumn_name: str )
Parameters
column (
list[Any]) β Dataset column data.column_name (
str) β Dataset column name.
Decode column with custom feature decoding.
decode_example
( example: dicttoken_per_repo_id: typing.Union[typing.Dict[str, typing.Union[str, bool, NoneType]], NoneType] = None )
Parameters
example (
dict[str, Any]) β Dataset row data.token_per_repo_id (
dict, optional) β To access and decode audio or image files from private repositories on the Hub, you can pass a dictionaryrepo_id (str) -> token (bool or str).
Decode example with custom feature decoding.
encode_batch
( batch )
Parameters
batch (
dict[str, list[Any]]) β Data in a Dataset batch.
Encode batch into a format for Arrow.
encode_column
( columncolumn_name: str )
Parameters
column (
list[Any]) β Data in a Dataset column.column_name (
str) β Dataset column name.
Encode column into a format for Arrow.
encode_example
( example )
Parameters
example (
dict[str, Any]) β Data in a Dataset row.
Encode example into a format for Arrow.
flatten
( max_depth = 16 ) β Features
Returns
The flattened features.
Flatten the features. Every dictionary column is removed and is replaced by all the subfields it contains. The new fields are named by concatenating the name of the original column and the subfield name like this: <original>.<subfield>.
If a column contains nested dictionaries, then all the lower-level subfields names are also concatenated to form new columns: <original>.<subfield>.<subsubfield>, etc.
Example:
Copied
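A minimal sketch with a nested dictionary column (field names are illustrative):

from datasets import Features, Value

features = Features({"answers": {"text": Value("string"), "answer_start": Value("int32")}})
print(features.flatten())
# {'answers.text': Value(dtype='string', id=None),
#  'answers.answer_start': Value(dtype='int32', id=None)}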
from_arrow_schema
( pa_schema: Schema )
Parameters
pa_schema (
pyarrow.Schema) β Arrow Schema.
Construct Features from Arrow Schema. It also checks the schema metadata for BOINC AI Datasets features. Non-nullable fields are not supported and are set to nullable.
from_dict
( dic ) β Features
Parameters
dic (dict[str, Any]) β Python dictionary.
Returns
Features
Construct [Features] from dict.
Regenerate the nested feature object from a deserialized dict. We use the _type key to infer the dataclass name of the feature FieldType.
It allows for a convenient constructor syntax to define features from deserialized JSON dictionaries. This function is used in particular when deserializing a [DatasetInfo] that was dumped to a JSON object. This acts as an analogue to [Features.from_arrow_schema] and handles the recursive field-by-field instantiation, but doesnβt require any mapping to/from pyarrow, except for the fact that it takes advantage of the mapping of pyarrow primitive dtypes that [Value] automatically performs.
Example:
Copied
reorder_fields_as
( other: Features )
Parameters
other ([Features]) β The other [Features] to align with.
Reorder Features fields to match the field order of other [Features].
The order of the fields is important since it matters for the underlying arrow data. Re-ordering the fields allows to make the underlying arrow data type match.
Example:
Copied
class datasets.Sequence
( feature: typing.Anylength: int = -1id: typing.Optional[str] = None )
Parameters
length (
int) β Length of the sequence.
Construct a list of features from a single type or a dict of types. Mostly here for compatibility with tfds.
Example:
Copied
class datasets.ClassLabel
( num_classes: dataclasses.InitVar[typing.Optional[int]] = Nonenames: typing.List[str] = Nonenames_file: dataclasses.InitVar[typing.Optional[str]] = Noneid: typing.Optional[str] = None )
Parameters
num_classes (
int, optional) β Number of classes. All labels must be <num_classes.names (
listofstr, optional) β String names for the integer classes. The order in which the names are provided is kept.names_file (
str, optional) β Path to a file with names for the integer classes, one per line.
Feature type for integer class labels.
There are 3 ways to define a ClassLabel, which correspond to the 3 arguments:
num_classes: Create 0 to (num_classes-1) labels.names: List of label strings.names_file: File containing the list of labels.
Under the hood the labels are stored as integers. You can use negative integers to represent unknown/missing labels.
Example:
Copied
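A minimal sketch defining labels from names and converting in both directions:

from datasets import ClassLabel

label = ClassLabel(names=["negative", "positive"])
print(label.num_classes)          # 2
print(label.str2int("positive"))  # 1
print(label.int2str(0))           # 'negative'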
cast_storage
( storage: typing.Union[pyarrow.lib.StringArray, pyarrow.lib.IntegerArray] ) β pa.Int64Array
Parameters
storage (
Union[pa.StringArray, pa.IntegerArray]) β PyArrow array to cast.
Returns
pa.Int64Array
Array in the ClassLabel arrow storage type.
Cast an Arrow array to the ClassLabel arrow storage type. The Arrow types that can be converted to the ClassLabel pyarrow storage type are:
pa.string()pa.int()
int2str
( values: typing.Union[int, collections.abc.Iterable] )
Conversion integer => class name string.
Regarding unknown/missing labels: passing negative integers raises ValueError.
Example:
Copied
str2int
( values: typing.Union[str, collections.abc.Iterable] )
Conversion class name string => integer.
Example:
Copied
class datasets.Value
( dtype: strid: typing.Optional[str] = None )
The Value dtypes are as follows:
nullboolint8int16int32int64uint8uint16uint32uint64float16float32(alias float)float64(alias double)time32[(s|ms)]time64[(us|ns)]timestamp[(s|ms|us|ns)]timestamp[(s|ms|us|ns), tz=(tzstring)]date32date64duration[(s|ms|us|ns)]decimal128(precision, scale)decimal256(precision, scale)binarylarge_binarystringlarge_string
Example:
Copied
class datasets.Translation
( languages: typing.List[str]id: typing.Optional[str] = None )
Parameters
languages (
dict) β A dictionary for each example mapping string language codes to string translations.
FeatureConnector for translations with fixed languages per example. Here for compatibility with tfds.
Example:
Copied
flatten
( )
Flatten the Translation feature into a dictionary.
class datasets.TranslationVariableLanguages
( languages: typing.Optional[typing.List] = Nonenum_languages: typing.Optional[int] = Noneid: typing.Optional[str] = None ) β
languageortranslation(variable-length 1Dtf.Tensoroftf.string)
Parameters
languages (
dict) β A dictionary for each example mapping string language codes to one or more string translations. The languages present may vary from example to example.
Returns
languageortranslation(variable-length 1Dtf.Tensoroftf.string)
Language codes sorted in ascending order or plain text translations, sorted to align with language codes.
FeatureConnector for translations with variable languages per example. Here for compatibility with tfds.
Example:
Copied
flatten
( )
Flatten the TranslationVariableLanguages feature into a dictionary.
class datasets.Array2D
( shape: tupledtype: strid: typing.Optional[str] = None )
Parameters
shape (
tuple) β The size of each dimension.dtype (
str) β The value of the data type.
Create a two-dimensional array.
Example:
Copied
class datasets.Array3D
( shape: tupledtype: strid: typing.Optional[str] = None )
Parameters
shape (
tuple) β The size of each dimension.dtype (
str) β The value of the data type.
Create a three-dimensional array.
Example:
Copied
class datasets.Array4D
( shape: tupledtype: strid: typing.Optional[str] = None )
Parameters
shape (
tuple) β The size of each dimension.dtype (
str) β The value of the data type.
Create a four-dimensional array.
Example:
Copied
class datasets.Array5D
( shape: tupledtype: strid: typing.Optional[str] = None )
Parameters
shape (
tuple) β The size of each dimension.dtype (
str) β The value of the data type.
Create a five-dimensional array.
Example:
Copied
class datasets.Audio
( sampling_rate: typing.Optional[int] = Nonemono: bool = Truedecode: bool = Trueid: typing.Optional[str] = None )
Parameters
sampling_rate (
int, optional) β Target sampling rate. IfNone, the native sampling rate is used.mono (
bool, defaults toTrue) β Whether to convert the audio signal to mono by averaging samples across channels.decode (
bool, defaults toTrue) β Whether to decode the audio data. IfFalse, returns the underlying dictionary in the format{"path": audio_path, "bytes": audio_bytes}.
Audio Feature to extract audio data from an audio file.
Input: The Audio feature accepts as input:
A
str: Absolute path to the audio file (i.e. random access is allowed).A
dictwith the keys:path: String with relative path of the audio file to the archive file.bytes: Bytes content of the audio file.
This is useful for archived files with sequential access.
A
dictwith the keys:path: String with relative path of the audio file to the archive file.array: Array containing the audio samplesampling_rate: Integer corresponding to the sampling rate of the audio sample.
This is useful for archived files with sequential access.
Example:
Copied
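A minimal sketch casting a column of audio file paths; the path below is illustrative and assumed to exist locally:

from datasets import Dataset, Audio

ds = Dataset.from_dict({"audio": ["path/to/audio.wav"]})
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
# Accessing ds[0]["audio"] would decode the file into
# {"path": ..., "array": <numpy array>, "sampling_rate": 16000}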
cast_storage
( storage: typing.Union[pyarrow.lib.StringArray, pyarrow.lib.StructArray] ) β pa.StructArray
Parameters
storage (
Union[pa.StringArray, pa.StructArray]) β PyArrow array to cast.
Returns
pa.StructArray
Array in the Audio arrow storage type, that is pa.struct({"bytes": pa.binary(), "path": pa.string()})
Cast an Arrow array to the Audio arrow storage type. The Arrow types that can be converted to the Audio pyarrow storage type are:
pa.string()- it must contain the βpathβ datapa.binary()- it must contain the audio bytespa.struct({"bytes": pa.binary()})pa.struct({"path": pa.string()})pa.struct({"bytes": pa.binary(), "path": pa.string()})- order doesnβt matter
decode_example
( value: dicttoken_per_repo_id: typing.Union[typing.Dict[str, typing.Union[str, bool, NoneType]], NoneType] = None ) β dict
Parameters
value (
dict) β A dictionary with keys:path: String with relative audio file path.bytes: Bytes of the audio file.
token_per_repo_id (
dict, optional) β To access and decode audio files from private repositories on the Hub, you can pass a dictionary repo_id (str) -> token (boolorstr)
Returns
dict
Decode example audio file into audio data.
embed_storage
( storage: StructArray ) β pa.StructArray
Parameters
storage (
pa.StructArray) β PyArrow array to embed.
Returns
pa.StructArray
Array in the Audio arrow storage type, that is pa.struct({"bytes": pa.binary(), "path": pa.string()}).
Embed audio files into the Arrow array.
encode_example
( value: typing.Union[str, bytes, dict] ) β dict
Parameters
value (
strordict) β Data passed as input to Audio feature.
Returns
dict
Encode example into a format for Arrow.
flatten
( )
If in the decodable state, raise an error, otherwise flatten the feature into a dictionary.
class datasets.Image
( decode: bool = Trueid: typing.Optional[str] = None )
Parameters
decode (
bool, defaults toTrue) β Whether to decode the image data. IfFalse, returns the underlying dictionary in the format{"path": image_path, "bytes": image_bytes}.
Image Feature to read image data from an image file.
Input: The Image feature accepts as input:
A
str: Absolute path to the image file (i.e. random access is allowed).A
dictwith the keys:path: String with relative path of the image file to the archive file.bytes: Bytes of the image file.
This is useful for archived files with sequential access.
An
np.ndarray: NumPy array representing an image.A
PIL.Image.Image: PIL image object.
Examples:
Copied
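A minimal sketch casting a column of image file paths; the path below is illustrative and assumed to exist locally:

from datasets import Dataset, Image

ds = Dataset.from_dict({"image": ["path/to/image.png"]})
ds = ds.cast_column("image", Image())
# Accessing ds[0]["image"] would decode the file into a PIL.Image.Image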
cast_storage
( storage: typing.Union[pyarrow.lib.StringArray, pyarrow.lib.StructArray, pyarrow.lib.ListArray] ) β pa.StructArray
Parameters
storage (
Union[pa.StringArray, pa.StructArray, pa.ListArray]) β PyArrow array to cast.
Returns
pa.StructArray
Array in the Image arrow storage type, that is pa.struct({"bytes": pa.binary(), "path": pa.string()}).
Cast an Arrow array to the Image arrow storage type. The Arrow types that can be converted to the Image pyarrow storage type are:
pa.string()- it must contain the βpathβ datapa.binary()- it must contain the image bytespa.struct({"bytes": pa.binary()})pa.struct({"path": pa.string()})pa.struct({"bytes": pa.binary(), "path": pa.string()})- order doesnβt matterpa.list(*)- it must contain the image array data
decode_example
( value: dicttoken_per_repo_id = None )
Parameters
value (
strordict) β A string with the absolute image file path, a dictionary with keys:path: String with absolute or relative image file path.bytes: The bytes of the image file.
token_per_repo_id (
dict, optional) β To access and decode image files from private repositories on the Hub, you can pass a dictionary repo_id (str) -> token (boolorstr).
Decode example image file into image data.
embed_storage
( storage: StructArray ) β pa.StructArray
Parameters
storage (
pa.StructArray) β PyArrow array to embed.
Returns
pa.StructArray
Array in the Image arrow storage type, that is pa.struct({"bytes": pa.binary(), "path": pa.string()}).
Embed image files into the Arrow array.
encode_example
( value: typing.Union[str, bytes, dict, numpy.ndarray, ForwardRef('PIL.Image.Image')] )
Parameters
value (
str,np.ndarray,PIL.Image.Imageordict) β Data passed as input to Image feature.
Encode example into a format for Arrow.
flatten
( )
If in the decodable state, return the feature itself, otherwise flatten the feature into a dictionary.
MetricInfo
class datasets.MetricInfo
( description: strcitation: strfeatures: Featuresinputs_description: str = <factory>homepage: str = <factory>license: str = <factory>codebase_urls: typing.List[str] = <factory>reference_urls: typing.List[str] = <factory>streamable: bool = Falseformat: typing.Optional[str] = Nonemetric_name: typing.Optional[str] = Noneconfig_name: typing.Optional[str] = Noneexperiment_id: typing.Optional[str] = None )
Information about a metric.
MetricInfo documents a metric, including its name, version, and features. See the constructor arguments and properties for a full list.
Note: Not all fields are known on construction and may be updated later.
from_directory
( metric_info_dir )
Create MetricInfo from the JSON file in metric_info_dir.
Example:
Copied
write_to_directory
( metric_info_dirpretty_print = False )
Write MetricInfo as JSON to metric_info_dir. Also save the license separately in LICENCE. If pretty_print is True, the JSON will be pretty-printed with the indent level of 4.
Example:
Copied
Metric
The base class Metric implements a Metric backed by one or several Dataset.
class datasets.Metric
( config_name: typing.Optional[str] = Nonekeep_in_memory: bool = Falsecache_dir: typing.Optional[str] = Nonenum_process: int = 1process_id: int = 0seed: typing.Optional[int] = Noneexperiment_id: typing.Optional[str] = Nonemax_concurrent_cache_files: int = 10000timeout: typing.Union[int, float] = 100**kwargs )
Parameters
config_name (
str) β This is used to define a hash specific to a metrics computation script and prevents the metricβs data to be overridden when the metric loading script is modified.keep_in_memory (
bool) β keep all predictions and references in memory. Not possible in distributed settings.cache_dir (
str) β Path to a directory in which temporary prediction/references data will be stored. The data directory should be located on a shared file-system in distributed setups.num_process (
int) β specify the total number of nodes in a distributed setting. This is useful to compute metrics in distributed setups (in particular non-additive metrics like F1).process_id (
int) β specify the id of the current process in a distributed setup (between 0 and num_process-1) This is useful to compute metrics in distributed setups (in particular non-additive metrics like F1).seed (
int, optional) β If specified, this will temporarily set numpyβs random seed when datasets.Metric.compute() is run.experiment_id (
str) β A specific experiment id. This is used if several distributed evaluations share the same file system. This is useful to compute metrics in distributed setups (in particular non-additive metrics like F1).max_concurrent_cache_files (
int) β Max number of concurrent metrics cache files (default 10000).timeout (
Union[int, float]) β Timeout in seconds for distributed setting synchronization.
A Metric is the base class and common API for all metrics.
Deprecated in 2.5.0
Use the new library π Evaluate instead: https://boincai.com/docs/evaluate
add
( prediction = Nonereference = None**kwargs )
Parameters
prediction (list/array/tensor, optional) β Predictions.
reference (list/array/tensor, optional) β References.
Add one prediction and reference for the metricβs stack.
Example:
Copied
add_batch
( predictions = Nonereferences = None**kwargs )
Parameters
predictions (list/array/tensor, optional) β Predictions.
references (list/array/tensor, optional) β References.
Add a batch of predictions and references for the metricβs stack.
Example:
Copied
compute
( predictions = Nonereferences = None**kwargs )
Parameters
predictions (list/array/tensor, optional) β Predictions.
references (list/array/tensor, optional) β References.
**kwargs (optional) β Keyword arguments that will be forwarded to the metrics
_computemethod (see details in the docstring).
Compute the metrics.
Usage of positional arguments is not allowed to prevent mistakes.
Example:
Copied
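A minimal sketch using the accuracy metric; loading it emits a deprecation warning pointing to the separate Evaluate library:

import datasets

metric = datasets.load_metric("accuracy")
result = metric.compute(predictions=[0, 1, 1], references=[0, 1, 0])
print(result)  # {'accuracy': 0.666...}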
download_and_prepare
( download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = Nonedl_manager: typing.Optional[datasets.download.download_manager.DownloadManager] = None )
Parameters
download_config (DownloadConfig, optional) β Specific download configuration parameters.
dl_manager (DownloadManager, optional) β Specific download manager to use.
Downloads and prepares dataset for reading.
Filesystems
class datasets.filesystems.S3FileSystem
( *args**kwargs )
Parameters
anon (
bool, defaults toFalse) β Whether to use anonymous connection (public buckets only). IfFalse, uses the key/secret given, or botoβs credential resolver (client_kwargs, environment variables, config files, EC2 IAM server, in that order).key (
str) β If not anonymous, use this access key ID, if specified.secret (
str) β If not anonymous, use this secret access key, if specified.token (
str) β If not anonymous, use this security token, if specified.use_ssl (
bool, defaults toTrue) β Whether to use SSL in connections to S3; may be faster without, but insecure. Ifuse_sslis also set inclient_kwargs, the value set inclient_kwargswill take priority.s3_additional_kwargs (
dict) β Parameters that are used when calling S3 API methods. Typically used for things like ServerSideEncryption.client_kwargs (
dict) β Parameters for the botocore client.requester_pays (
bool, defaults toFalse) β WhetherRequesterPaysbuckets are supported.default_block_size (
int) β If given, the default block size value used foropen(), if no specific value is given at all time. The built-in default is 5MB.default_fill_cache (
bool, defaults toTrue) β Whether to use cache filling with open by default. Refer toS3File.open.default_cache_type (
str, defaults tobytes) β If given, the defaultcache_typevalue used foropen(). Set tononeif no caching is desired. See fsspecβs documentation for other availablecache_typevalues.version_aware (
bool, defaults toFalse) β Whether to support bucket versioning. If enabled, this will require the user to have the necessary IAM permissions for dealing with versioned objects.cache_regions (
bool, defaults toFalse) β Whether to cache bucket regions. Whenever a new bucket is used, it will first find out which region it belongs to and then use the client for that region.asynchronous (
bool, defaults toFalse) β Whether this instance is to be used from inside coroutines.config_kwargs (
dict) β Parameters passed tobotocore.client.Config. **kwargs β Other parameters for core session.session (
aiobotocore.session.AioSession) β Session to be used for all connections. This session will be used in place of creating a new session inside S3FileSystem. For example:aiobotocore.session.AioSession(profile='test_user').skip_instance_cache (
bool) β Control reuse of instances. Passed on tofsspec.use_listings_cache (
bool) β Control reuse of directory listings. Passed on tofsspec.listings_expiry_time (
intorfloat) β Control reuse of directory listings. Passed on tofsspec.max_paths (
int) β Control reuse of directory listings. Passed on tofsspec.
datasets.filesystems.S3FileSystem is a subclass of s3fs.S3FileSystem.
Users can use this class to access S3 as if it were a file system. It exposes a filesystem-like API (ls, cp, open, etc.) on top of S3 storage. Provide credentials either explicitly (key=, secret=) or with botoβs credential methods. See botocore documentation for more information. If no credentials are available, use anon=True.
Examples:
Listing files from public S3 bucket.
Copied
Listing files from private S3 bucket using aws_access_key_id and aws_secret_access_key.
Copied
Using S3Filesystem with botocore.session.Session and custom aws_profile.
Copied
Loading dataset from S3 using S3Filesystem and load_from_disk().
Copied
Saving dataset to S3 using S3Filesystem and Dataset.save_to_disk().
Copied
datasets.filesystems.extract_path_from_uri
( dataset_path: str )
Parameters
dataset_path (
str) β Path (e.g.dataset/train) or remote uri (e.g.s3://my-bucket/dataset/train) of the dataset directory.
Preprocesses dataset_path and removes remote filesystem (e.g. removing s3://).
datasets.filesystems.is_remote_filesystem
( fs: AbstractFileSystem )
Parameters
fs (
fsspec.spec.AbstractFileSystem) β An abstract super-class for pythonic file-systems, e.g.fsspec.filesystem('file')or datasets.filesystems.S3FileSystem.
Validates if filesystem has remote protocol.
Fingerprint
class datasets.fingerprint.Hasher
( )
Hasher that accepts python objects as inputs.
Last updated