Installation

Before you start, you’ll need to set up your environment and install the appropriate packages. 🌍 Datasets is tested on Python 3.7+.

If you want to use 🌍 Datasets with TensorFlow or PyTorch, you’ll need to install them separately. Refer to the TensorFlow installation page or the PyTorch installation page for the specific install command for your framework.
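
For example, the default PyPI builds of either framework can be installed with pip; treat these commands as illustrative, since the right command depends on your platform and GPU setup:

pip install torch
pip install tensorflow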

Virtual environment

You should install 🌍 Datasets in a virtual environment to keep things tidy and avoid dependency conflicts.

  1. Create and navigate to your project directory:

    mkdir ~/my-project
    cd ~/my-project
  2. Start a virtual environment inside your directory:

    python -m venv .env
  3. Activate and deactivate the virtual environment with the following commands:

    # Activate the virtual environment
    source .env/bin/activate

    # Deactivate the virtual environment
    deactivate
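
    On Windows, the activation script lives in a Scripts folder instead of bin; assuming the same .env environment name, the equivalent activation command in a Command Prompt is:

    .env\Scripts\activate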

Once you’ve created your virtual environment, you can install 🌍 Datasets in it.

pip

The most straightforward way to install 🌍 Datasets is with pip:

pip install datasets

Run the following command to check if 🌍 Datasets has been properly installed:

python -c "from datasets import load_dataset; print(load_dataset('squad', split='train')[0])"

This command downloads version 1 of the Stanford Question Answering Dataset (SQuAD), loads the training split, and prints the first training example. You should see:

{'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']}, 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.', 'id': '5733be284776f41900661182', 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?', 'title': 'University_of_Notre_Dame'}

Audio

To work with audio datasets, you need to install the Audio feature as an extra dependency:

pip install datasets[audio]

To decode mp3 files, you need to have at least version 1.1.0 of the libsndfile system library. Usually, it’s bundled with the python soundfile package, which is installed as an extra audio dependency for 🌍 Datasets. For Linux, the required version of libsndfile is bundled with soundfile starting from version 0.12.0. You can run the following command to determine which version of libsndfile is being used by soundfile:

python -c "import soundfile; print(soundfile.__libsndfile_version__)"

Vision

To work with image datasets, you need to install the Image feature as an extra dependency:

pip install datasets[vision]
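
The vision extra relies on Pillow for image decoding; if you’d like a quick sanity check that it is available, you can print its version:

python -c "import PIL; print(PIL.__version__)"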

source

Building 🌍 Datasets from source lets you make changes to the code base. To install from source, clone the repository and install with the following commands:

git clone https://github.com/boincai/datasets.git
cd datasets
pip install -e .

Again, you can check if 🌍 Datasets was properly installed with the following command:

python -c "from datasets import load_dataset; print(load_dataset('squad', split='train')[0])"

conda

🌍 Datasets can also be installed from conda, a package management system:

conda install -c boincai -c conda-forge datasets
