🤗 Datasets supports access to cloud storage providers through fsspec FileSystem implementations. You can save and load datasets from any cloud storage in a Pythonic way. Take a look at the following table for some examples of supported cloud storage providers:
Storage provider        Filesystem implementation
Amazon S3               s3fs
Google Cloud Storage    gcsfs
Azure Blob/DataLake     adlfs
Dropbox                 dropboxdrivefs
Google Drive            gdrivefs
Oracle Cloud Storage    ocifs
This guide will show you how to save and load datasets with any cloud storage. Here are examples for S3, Google Cloud Storage, Azure Blob Storage, and Oracle Cloud Object Storage.
Set up your cloud storage FileSystem
Amazon S3
Install the S3 FileSystem implementation:
>>> pip install s3fs
Define your credentials
To use an anonymous connection, use anon=True. Otherwise, include your aws_access_key_id and aws_secret_access_key whenever you are interacting with a private S3 bucket.
>>> storage_options = {"anon": True} # for anonymous connection
# or use your credentials
>>> storage_options = {"key": aws_access_key_id, "secret": aws_secret_access_key} # for private buckets
# or use a botocore session
>>> import aiobotocore.session
>>> s3_session = aiobotocore.session.AioSession(profile="my_profile_name")
>>> storage_options = {"session": s3_session}
Google Cloud Storage
Install the Google Cloud Storage implementation:
>>> conda install -c conda-forge gcsfs
# or install with pip
>>> pip install gcsfs
Define your credentials
>>> storage_options={"token": "anon"} # for anonymous connection
# or use your default gcloud credentials or credentials from the google metadata service
>>> storage_options={"project": "my-google-project"}
# or use your credentials from elsewhere, see the documentation at https://gcsfs.readthedocs.io/
>>> storage_options={"project": "my-google-project", "token": TOKEN}
Azure Blob Storage
Install the Azure Blob Storage implementation:
>>> conda install -c conda-forge adlfs
# or install with pip
>>> pip install adlfs
Define your credentials
>>> storage_options = {"anon": True} # for anonymous connection
# or use your credentials
>>> storage_options = {"account_name": ACCOUNT_NAME, "account_key": ACCOUNT_KEY} # gen 2 filesystem
# or use your credentials with the gen 1 filesystem
>>> storage_options={"tenant_id": TENANT_ID, "client_id": CLIENT_ID, "client_secret": CLIENT_SECRET}
Load and Save your datasets using your cloud storage FileSystem
Download and prepare a dataset into cloud storage
You can download and prepare a dataset into your cloud storage by specifying a remote output_dir in download_and_prepare. Don't forget to use the previously defined storage_options containing your credentials to write to private cloud storage.
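For example, a minimal sketch that writes the imdb dataset to an S3 bucket as Parquet (the bucket name my-bucket is a placeholder):
>>> from datasets import load_dataset_builder
>>> output_dir = "s3://my-bucket/imdb"  # placeholder bucket
>>> builder = load_dataset_builder("imdb")
>>> builder.download_and_prepare(output_dir, storage_options=storage_options, file_format="parquet")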
The download_and_prepare method works in two steps:
1. it first downloads the raw data files (if any) into your local cache,
2. then it generates the dataset in Arrow or Parquet format in your cloud storage by iterating over the raw data files.
It is highly recommended to save the files as compressed Parquet files to optimize I/O by specifying file_format="parquet". Otherwise the dataset is saved as an uncompressed Arrow file.
You can also specify the size of the shards using max_shard_size (default is 500MB):
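# a sketch continuing the example above (same builder and output_dir);
# max_shard_size caps the size of each Parquet shard
>>> builder.download_and_prepare(output_dir, storage_options=storage_options, file_format="parquet", max_shard_size="1GB")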
Dask is a parallel computing library with a pandas-like API for working with larger-than-memory Parquet datasets in parallel. Dask can use multiple threads or processes on a single machine, or a cluster of machines, to process data in parallel. Dask supports local data as well as data from cloud storage.
Therefore you can load a dataset saved as sharded Parquet files in Dask with:
import dask.dataframe as dd
df = dd.read_parquet(output_dir, storage_options=storage_options)
# or if your dataset is split into train/valid/test
df_train = dd.read_parquet(output_dir + f"/{builder.name}-train-*.parquet", storage_options=storage_options)
df_valid = dd.read_parquet(output_dir + f"/{builder.name}-validation-*.parquet", storage_options=storage_options)
df_test = dd.read_parquet(output_dir + f"/{builder.name}-test-*.parquet", storage_options=storage_options)
Saving serialized datasets
After you have processed your dataset, you can save it to your cloud storage with Dataset.save_to_disk:
# saves encoded_dataset to amazon s3
>>> encoded_dataset.save_to_disk("s3://my-private-datasets/imdb/train", storage_options=storage_options)
# saves encoded_dataset to google cloud storage
>>> encoded_dataset.save_to_disk("gcs://my-private-datasets/imdb/train", storage_options=storage_options)
# saves encoded_dataset to microsoft azure blob/datalake
>>> encoded_dataset.save_to_disk("adl://my-private-datasets/imdb/train", storage_options=storage_options)
Listing serialized datasets
List files from cloud storage with your FileSystem instance fs, using fs.ls:
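>>> import fsspec
# a sketch: create the FileSystem instance with fsspec; "s3" and the path below are
# placeholders for your provider and dataset path
>>> fs = fsspec.filesystem("s3", **storage_options)
>>> fs.ls("my-private-datasets/imdb/train", detail=False)  # returns a list of file names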