
Cloud storage

🌍 Datasets supports access to cloud storage providers through fsspec FileSystem implementations. You can save and load datasets from any cloud storage in a Pythonic way. Take a look at the following table for some examples of supported cloud storage providers:

Storage provider         Filesystem implementation
Amazon S3                s3fs
Google Cloud Storage     gcsfs
Azure Blob/DataLake      adlfs
Dropbox                  dropboxdrivefs
Google Drive             gdrivefs
Oracle Cloud Storage     ocifs

This guide will show you how to save and load datasets with any cloud storage. Here are examples for S3, Google Cloud Storage, Azure Blob Storage, and Oracle Cloud Object Storage.

Set up your cloud storage FileSystem

Amazon S3

  1. Install the S3 FileSystem implementation:

>>> pip install s3fs

  2. Define your credentials

To use an anonymous connection, use anon=True. Otherwise, include your aws_access_key_id and aws_secret_access_key whenever you are interacting with a private S3 bucket.

>>> storage_options = {"anon": True}  # for anonymous connection
# or use your credentials
>>> storage_options = {"key": aws_access_key_id, "secret": aws_secret_access_key}  # for private buckets
# or use a botocore session
>>> import aiobotocore.session
>>> s3_session = aiobotocore.session.AioSession(profile="my_profile_name")
>>> storage_options = {"session": s3_session}

  3. Create your FileSystem instance

>>> import s3fs
>>> fs = s3fs.S3FileSystem(**storage_options)

Google Cloud Storage

  1. Install the Google Cloud Storage implementation:

>>> conda install -c conda-forge gcsfs
# or install with pip
>>> pip install gcsfs

  2. Define your credentials

>>> storage_options={"token": "anon"}  # for anonymous connection
# or use your credentials of your default gcloud credentials or from the google metadata service
>>> storage_options={"project": "my-google-project"}
# or use your credentials from elsewhere, see the documentation at https://gcsfs.readthedocs.io/
>>> storage_options={"project": "my-google-project", "token": TOKEN}

  3. Create your FileSystem instance

>>> import gcsfs
>>> fs = gcsfs.GCSFileSystem(**storage_options)

Azure Blob Storage

  1. Install the Azure Blob Storage implementation:

>>> conda install -c conda-forge adlfs
# or install with pip
>>> pip install adlfs

  2. Define your credentials

>>> storage_options = {"anon": True}  # for anonymous connection
# or use your credentials
>>> storage_options = {"account_name": ACCOUNT_NAME, "account_key": ACCOUNT_KEY}  # gen 2 filesystem
# or use your credentials with the gen 1 filesystem
>>> storage_options={"tenant_id": TENANT_ID, "client_id": CLIENT_ID, "client_secret": CLIENT_SECRET}

  3. Create your FileSystem instance

>>> import adlfs
>>> fs = adlfs.AzureBlobFileSystem(**storage_options)

Oracle Cloud Object Storage

  1. Install the OCI FileSystem implementation:

>>> pip install ocifs

  2. Define your credentials

>>> storage_options = {"config": "~/.oci/config", "region": "us-ashburn-1"} 

  3. Create your FileSystem instance

>>> import ocifs
>>> fs = ocifs.OCIFileSystem(**storage_options)
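
Whichever provider you use, the resulting fs object implements the standard fsspec interface, so the rest of this guide works the same way with any of them. As an optional sanity check (a minimal sketch; "my-bucket" is a placeholder for your own bucket or container name), you can list its contents before going further:

>>> fs.ls("my-bucket", detail=False)  # returns a list of object paths, or raises an error if the credentials are wrong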

Load and Save your datasets using your cloud storage FileSystem

Download and prepare a dataset into a cloud storage

You can download and prepare a dataset into your cloud storage by specifying a remote output_dir in download_and_prepare. Don’t forget to use the previously defined storage_options containing your credentials to write into a private cloud storage.

The download_and_prepare method works in two steps:

  1. it first downloads the raw data files (if any) in your local cache. You can set your cache directory by passing cache_dir to load_dataset_builder()
  2. then it generates the dataset in Arrow or Parquet format in your cloud storage by iterating over the raw data files.

Load a dataset builder from the BOINC AI Hub (see how to load from the BOINC AI Hub):

>>> from datasets import load_dataset_builder
>>> output_dir = "s3://my-bucket/imdb"
>>> builder = load_dataset_builder("imdb")
>>> builder.download_and_prepare(output_dir, storage_options=storage_options, file_format="parquet")

Load a dataset builder using a loading script (see how to load a local loading script):

>>> output_dir = "s3://my-bucket/imdb"
>>> builder = load_dataset_builder("path/to/local/loading_script/loading_script.py")
>>> builder.download_and_prepare(output_dir, storage_options=storage_options, file_format="parquet")

Use your own data files (see how to load local and remote files):

>>> data_files = {"train": ["path/to/train.csv"]}
>>> output_dir = "s3://my-bucket/imdb"
>>> builder = load_dataset_builder("csv", data_files=data_files)
>>> builder.download_and_prepare(output_dir, storage_options=storage_options, file_format="parquet")

It is highly recommended to save the files as compressed Parquet files to optimize I/O by specifying file_format="parquet". Otherwise the dataset is saved as an uncompressed Arrow file.

You can also specify the size of the shards using max_shard_size (default is 500MB):

>>> builder.download_and_prepare(output_dir, storage_options=storage_options, file_format="parquet", max_shard_size="1GB")

Dask

Dask is a parallel computing library with a pandas-like API for working with larger-than-memory Parquet datasets in parallel. Dask can use multiple threads or processes on a single machine, or a cluster of machines, to process data in parallel. Dask supports local data as well as data from cloud storage.

Therefore you can load a dataset saved as sharded Parquet files in Dask with:

import dask.dataframe as dd

df = dd.read_parquet(output_dir, storage_options=storage_options)

# or if your dataset is split into train/valid/test
df_train = dd.read_parquet(output_dir + f"/{builder.name}-train-*.parquet", storage_options=storage_options)
df_valid = dd.read_parquet(output_dir + f"/{builder.name}-validation-*.parquet", storage_options=storage_options)
df_test = dd.read_parquet(output_dir + f"/{builder.name}-test-*.parquet", storage_options=storage_options)
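
Dask dataframes are evaluated lazily, so dd.read_parquet above only records which Parquet shards to read; the data is fetched when you trigger a computation. A minimal sketch of such a follow-up computation (the "label" column is an assumption based on the IMDB dataset):

# nothing is read from the cloud storage until a computation is triggered
num_train_rows = len(df_train)
label_counts = df_train["label"].value_counts().compute()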

You can find more about Dask dataframes in their documentation.

Saving serialized datasets

After you have processed your dataset, you can save it to your cloud storage with Dataset.save_to_disk():

# saves encoded_dataset to amazon s3
>>> encoded_dataset.save_to_disk("s3://my-private-datasets/imdb/train", storage_options=storage_options)
# saves encoded_dataset to google cloud storage
>>> encoded_dataset.save_to_disk("gcs://my-private-datasets/imdb/train", storage_options=storage_options)
# saves encoded_dataset to microsoft azure blob/datalake
>>> encoded_dataset.save_to_disk("adl://my-private-datasets/imdb/train", storage_options=storage_options)

Remember to define your credentials in your FileSystem instance fs whenever you are interacting with a private cloud storage.

Listing serialized datasets

List files from a cloud storage with your FileSystem instance fs, using fs.ls:

>>> fs.ls("my-private-datasets/imdb/train", detail=False)
["dataset_info.json.json","dataset.arrow","state.json"]

Load serialized datasets

When you are ready to use your dataset again, reload it with Dataset.load_from_disk():

>>> from datasets import load_from_disk
# load encoded_dataset from cloud storage
>>> dataset = load_from_disk("s3://my-private-datasets/imdb/train", storage_options=storage_options)
>>> print(len(dataset))
25000