Aim on Spaces

Aim is an easy-to-use, supercharged open-source experiment tracker. Aim logs your training runs, provides a beautiful UI to compare them, and offers an API to query them programmatically. ML engineers and researchers use Aim explorers to compare thousands of training runs in a few clicks.

Check out the Aim docs to learn more about Aim. If you have an idea for a new feature or have noticed a bug, feel free to open a feature request or report a bug.

In the following sections, you’ll learn how to deploy Aim on Hugging Face Spaces and explore your training runs directly from the Hub.

Deploy Aim on Spaces

You can deploy Aim on Spaces with a single click!

Once you have created the Space, you’ll see the Building status, and once it becomes Running, your Space is ready to go!

Now, when you navigate to your Space’s App section, you can access the Aim UI.
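If you prefer to set the Space up by hand rather than using the one-click deploy, a Docker Space is configured through the YAML front matter of its README.md. The sketch below is illustrative only: the title and emoji are placeholders, and app_port assumes Aim's default UI port (43800) — check the template Space for the actual values.

```yaml
---
title: Aim
emoji: 📊
sdk: docker
app_port: 43800
---
```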

Compare your experiments with Aim on Spaces

Let’s use a quick example of a PyTorch CNN trained on MNIST to demonstrate end-to-end Aim on Spaces deployment. The full example is in the Aim repo examples folder.


from aim import Run
from aim.pytorch import track_gradients_dists, track_params_dists

# Initialize a new Run
aim_run = Run()
...
# Inside the training loop: track metrics for each epoch
items = {'accuracy': acc, 'loss': loss}
aim_run.track(items, epoch=epoch, context={'subset': 'train'})

# Track weights and gradients distributions
track_params_dists(model, aim_run)
track_gradients_dists(model, aim_run)
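The track call above hands Aim a plain dictionary of metric values plus an epoch number and a context. The sketch below builds that same payload on its own, without requiring aim or a training run, just to make the structure explicit; the metric values are placeholders.

```python
# Placeholder values standing in for real training results.
acc, loss = 0.92, 0.31
epoch = 3

# The payload shape passed to aim_run.track(...) in the snippet above:
items = {'accuracy': acc, 'loss': loss}
context = {'subset': 'train'}  # separates e.g. train vs. validation logs

print(items, epoch, context)
```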

The experiments tracked by Aim are stored in the .aim folder. To display the logs with the Aim UI in your Space, you need to compress the .aim folder to a tar.gz file and upload it to your Space using git or the Files and Versions tab of your Space.

Here’s a bash command for that:


tar -czvf aim_repo.tar.gz .aim
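Before uploading, you can sanity-check the archive by listing its contents. The sketch below uses a throwaway .aim directory with a dummy file, so it runs anywhere:

```shell
# Create a dummy .aim directory with one placeholder file.
mkdir -p .aim
echo "placeholder" > .aim/run.log

# Compress it, then list the archive contents to verify.
tar -czvf aim_repo.tar.gz .aim
tar -tzf aim_repo.tar.gz
```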

That’s it! Now open the App section of your Space and the Aim UI is available with your logs. Here is what to expect:

[Screenshot: Aim UI on HF Hub Spaces]

Filter your runs using Aim’s Pythonic search. You can write Pythonic queries against EVERYTHING you have tracked - metrics, hyperparams, etc. Check out some examples on HF Hub Spaces.

Note that if your logs are in TensorBoard format, you can easily convert them to Aim with one command and use the many advanced and high-performance training run comparison features available.

More on HF Spaces

  • HF Docker Spaces
  • HF Docker Space examples

Feedback and Support

If you have improvement suggestions or need support, please open an issue on the Aim GitHub repo.

The Aim community Discord is also available for community discussions.