Sample Factory


Using sample-factory at BOINC AI

Sample Factory is a codebase for high-throughput asynchronous reinforcement learning. It integrates with the BOINC AI Hub so you can share models together with their evaluation results and training metrics.

Exploring sample-factory in the Hub

You can find sample-factory models by filtering at the left of the models page.

All models on the Hub come with useful features:

  1. An automatically generated model card with a description, a training configuration, and more.

  2. Metadata tags that help with discoverability.

  3. Evaluation results to compare with other models.

  4. A video widget where you can watch your agent performing.

Install the library

To use sample-factory, install the package with pip:

pip install sample-factory

Sample Factory is known to work on Linux and macOS. There is no Windows support at this time.
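
As an optional sanity check (a smoke test, not part of the official instructions), you can confirm the package imports cleanly after installation:

python -c "import sample_factory"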

Loading models from the Hub

Using load_from_hub

To download a model from the BOINC AI Hub to use with Sample-Factory, use the load_from_hub script:

python -m sample_factory.huggingface.load_from_hub -r <HuggingFace_repo_id> -d <train_dir_path>

The command line arguments are:

  • -r: The repo ID for the HF repository to download from. The repo ID should be in the format <username>/<repo_name>

  • -d: An optional argument to specify the directory to save the experiment to. Defaults to ./train_dir which will save the repo to ./train_dir/<repo_name>
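
Putting this together, a download invocation might look like the following; the repository name here is purely illustrative:

python -m sample_factory.huggingface.load_from_hub -r <username>/mujoco_ant_appo -d ./train_dir

With the default -d value, this would place the repository under ./train_dir/mujoco_ant_appo.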

Download Model Repository Directly

BOINC AI repositories can be downloaded directly using git clone:

git clone git@hf.co:<Name of HuggingFace Repo> # example: git clone git@hf.co:bigscience/bloom
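
Model repositories on the Hub typically store large checkpoint files with Git LFS, so if you clone directly you will likely want Git LFS set up first (an assumption about your local environment rather than a Sample-Factory requirement):

git lfs install
git clone git@hf.co:<username>/<repo_name>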

Using Downloaded Models with Sample-Factory

After downloading a model, you can run the models in the repo with the enjoy script corresponding to your environment. For example, if you downloaded a mujoco-ant model, you can run it with:

python -m sf_examples.mujoco.enjoy_mujoco --algo=APPO --env=mujoco_ant --experiment=<repo_name> --train_dir=./train_dir

Note that you may have to specify --train_dir if your local train_dir has a different path from the one in cfg.json.
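
For instance, if the repository was downloaded somewhere other than ./train_dir, you can point the enjoy script at that location explicitly (the path below is only a placeholder):

python -m sf_examples.mujoco.enjoy_mujoco --algo=APPO --env=mujoco_ant --experiment=<repo_name> --train_dir=/path/to/your/train_dir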

Sharing your models

Using push_to_hub

If you want to upload without generating evaluation metrics or a replay video, you can use the push_to_hub script:

python -m sample_factory.huggingface.push_to_hub -r <hf_username>/<hf_repo_name> -d <experiment_dir_path>

The command line arguments are:

  • -r: The repo_id to save on HF Hub. This is the same as hf_repository in the enjoy script and must be in the form <hf_username>/<hf_repo_name>

  • -d: The full path to your experiment directory to upload
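
As a concrete sketch with placeholder names, an upload might look like:

python -m sample_factory.huggingface.push_to_hub -r <hf_username>/mujoco_ant_appo -d ./train_dir/mujoco_ant_appo

Note that pushing requires that you are authenticated with the Hub (for example with a user access token that has write access).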

Using enjoy.py

You can upload your models to the Hub using your environment’s enjoy script with the --push_to_hub flag. Uploading using enjoy can also generate evaluation metrics and a replay video.

The evaluation metrics are generated by running your model on the specified environment for a number of episodes and reporting the mean and std reward of those runs.
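
In other words, the reported numbers are simply the mean and standard deviation of the per-episode returns. A minimal sketch of that calculation, using made-up reward values, is:

import numpy as np

# hypothetical per-episode returns collected during evaluation
episode_rewards = [5200.0, 4800.0, 5100.0, 4950.0]
print(f"{np.mean(episode_rewards):.2f} +/- {np.std(episode_rewards):.2f}")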

Other relevant command line arguments are:

  • --hf_repository: The repository to push to. Must be of the form <username>/<repo_name>. The model will be saved to https://huggingface.co/<username>/<repo_name>

  • --max_num_episodes: Number of episodes to evaluate on before uploading. Used to generate evaluation metrics. It is recommended to use multiple episodes to generate an accurate mean and std.

  • --max_num_frames: Number of frames to evaluate on before uploading. An alternative to max_num_episodes

  • --no_render: A flag that disables rendering and showing the environment steps. It is recommended to set this flag to speed up the evaluation process.

You can also save a video of the model during evaluation and upload it to the Hub with the --save_video flag. The related arguments are:

  • --video_frames: The number of frames to be rendered in the video. Defaults to -1 which renders an entire episode

  • --video_name: The name of the video to save as. If None, will save to replay.mp4 in your experiment directory

For example:

python -m sf_examples.mujoco.enjoy_mujoco --algo=APPO --env=mujoco_ant --experiment=<repo_name> --train_dir=./train_dir --max_num_episodes=10 --push_to_hub --hf_username=<username> --hf_repository=<hf_repo_name> --save_video --no_render
