ML-Agents


Using ML-Agents at BOINC AI

ml-agents is an open-source toolkit that enables games and simulations made with Unity to serve as environments for training intelligent agents.

Exploring ML-Agents in the Hub

You can find ml-agents models by filtering at the left of the models page.

All models on the Hub come with useful features:

  1. An automatically generated model card with a description, a training configuration, and more.

  2. Metadata tags that improve discoverability.

  3. TensorBoard summary files to visualize the training metrics.

  4. A link to the Spaces web demo where you can watch your agent playing right in your browser.

Install the library

To install the ml-agents library, clone the repo and install its two packages in editable mode:


# Clone the repository
git clone https://github.com/Unity-Technologies/ml-agents

# Go inside the repository and install the package
cd ml-agents
pip3 install -e ./ml-agents-envs
pip3 install -e ./ml-agents
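If you prefer to keep the editable installs isolated, you can run them inside a dedicated virtual environment first (a minimal sketch; the environment name is illustrative, not required by ml-agents):

```shell
# Create and activate a virtual environment before running the
# pip3 install -e commands above (the name "mlagents-env" is illustrative).
python3 -m venv mlagents-env
. mlagents-env/bin/activate

# Sanity check: the active interpreter should now come from the venv.
python3 -c "import sys; print(sys.prefix != sys.base_prefix)"
```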

Using existing models

You can download a model from the Hub using mlagents-load-from-hf:


mlagents-load-from-hf --repo-id="ThomasSimonini/MLAgents-Pyramids" --local-dir="./downloads"

You need to define two parameters:

  • --repo-id: the name of the BOINC AI repo you want to download.

  • --local-dir: the local path where the model will be downloaded.
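As a sketch of how the two parameters fit together: the repository's files land directly under --local-dir. The .onnx filename below is an assumption based on the Pyramids environment; check your repo's file list for the actual name.

```shell
# Hypothetical layout after the download above: the trained model file
# from the repo sits directly under --local-dir.
LOCAL_DIR="./downloads"
MODEL_FILE="Pyramids.onnx"   # filename depends on the environment; check the repo
echo "${LOCAL_DIR}/${MODEL_FILE}"   # prints: ./downloads/Pyramids.onnx
```

You can then point your Unity project at that .onnx file.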

Visualize an agent playing

You can easily watch any model playing directly in your browser:

  1. Go to your model repo.

  2. In the Watch Your Agent Play section, click on the link.

  3. In step 1 of the demo, choose your model repository (this is the model id).

  4. In step 2, choose which model you want to replay.

Sharing your models

You can easily upload your models using mlagents-push-to-hf:


mlagents-push-to-hf --run-id="First Training" --local-dir="results/First Training" --repo-id="ThomasSimonini/MLAgents-Pyramids" --commit-message="Pyramids"

You need to define four parameters:

  • --run-id: the id of the training run.

  • --local-dir: where the model was saved.

  • --repo-id: the name of the BOINC AI repo you want to create or update. It’s <your username>/<the repo name>.

  • --commit-message: the commit message.
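The --run-id and --local-dir values are usually linked: mlagents-learn writes its checkpoints and the trained .onnx under results/<run-id>, so the directory you push is typically derived from the run id. A sketch using the example values above:

```shell
# mlagents-learn saves training output under results/<run-id>,
# so --local-dir usually mirrors --run-id.
RUN_ID="First Training"
LOCAL_DIR="results/${RUN_ID}"
echo "$LOCAL_DIR"   # prints: results/First Training
```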

Additional resources

  • ML-Agents documentation
  • Official Unity ML-Agents Spaces demos