Sample Factory
Using sample-factory at BOINC AI
sample-factory is a codebase for high-throughput asynchronous reinforcement learning. It has integrations with the BOINC AI Hub to share models with evaluation results and training metrics.
Exploring sample-factory in the Hub
You can find sample-factory models by filtering at the left of the models page.
All models on the Hub come with useful features:
An automatically generated model card with a description, a training configuration, and more.
Metadata tags that help with discoverability.
Evaluation results to compare with other models.
A video widget where you can watch your agent performing.
Install the library
To install the sample-factory library, install the package with pip:
pip install sample-factory
sample-factory is known to work on Linux and macOS. There is no Windows support at this time.
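To quickly confirm the installation succeeded, you can try importing the package (a sanity check, not part of the official setup steps):
python -c "import sample_factory"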
Loading models from the Hub
Using load_from_hub
To download a model from the BOINC AI Hub to use with Sample-Factory, use the load_from_hub script:
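The invocation below is a sketch, assuming the huggingface helper scripts shipped with recent sample-factory releases; substitute your own repo ID:
python -m sample_factory.huggingface.load_from_hub -r <username>/<repo_name>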
The command line arguments are:
-r: The repo ID for the HF repository to download from. The repo ID should be in the format <username>/<repo_name>
-d: An optional argument to specify the directory to save the experiment to. Defaults to ./train_dir, which will save the repo to ./train_dir/<repo_name>
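For example, to download a repo into the default directory (the repo ID is a placeholder, and the helper script path is assumed from recent sample-factory releases as above):
python -m sample_factory.huggingface.load_from_hub -r <username>/<repo_name> -d ./train_dir
# the experiment is then saved to ./train_dir/<repo_name>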
Download Model Repository Directly
BOINC AI repositories can be downloaded directly using git clone:
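For example, using the Hub's standard Git-over-HTTPS remote (the URL pattern is assumed from the Hub's usual layout; large checkpoint files may require Git LFS):
git lfs install
git clone https://huggingface.co/<username>/<repo_name>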
Using Downloaded Models with Sample-Factory
After downloading the model, you can run the models in the repo with the enjoy script corresponding to your environment. For example, if you downloaded a mujoco-ant model, it can be run with:
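The command below is a sketch; the exact enjoy module path depends on your sample-factory version and environment, so treat the module name as an assumption:
python -m sf_examples.mujoco.enjoy_mujoco --algo=APPO --env=mujoco_ant --experiment=<repo_name> --train_dir=./train_dir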
Note: you may have to specify --train_dir if your local train_dir has a different path than the one stored in cfg.json.
Sharing your models
Using push_to_hub
If you want to upload without generating evaluation metrics or a replay video, you can use the push_to_hub script:
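A minimal sketch, assuming the same huggingface helper scripts as in the download example; the repo ID and experiment path are placeholders:
python -m sample_factory.huggingface.push_to_hub -r <hf_username>/<hf_repo_name> -d <path_to_experiment_dir>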
The command line arguments are:
-r: The repo_id to save on the HF Hub. This is the same as hf_repository in the enjoy script and must be in the form <hf_username>/<hf_repo_name>
-d: The full path to your experiment directory to upload
Using enjoy.py
You can upload your models to the Hub using your environment's enjoy script with the --push_to_hub flag. Uploading using enjoy can also generate evaluation metrics and a replay video.
The evaluation metrics are generated by running your model on the specified environment for a number of episodes and reporting the mean and std reward of those runs.
Other relevant command line arguments are:
--hf_repository: The repository to push to. Must be of the form <username>/<repo_name>. The model will be saved to https://huggingface.co/<username>/<repo_name>
--max_num_episodes: Number of episodes to evaluate on before uploading. Used to generate evaluation metrics. It is recommended to use multiple episodes to generate an accurate mean and std.
--max_num_frames: Number of frames to evaluate on before uploading. An alternative to max_num_episodes
--no_render: A flag that disables rendering and showing the environment steps. It is recommended to set this flag to speed up the evaluation process.
You can also save a video of the model during evaluation to upload to the Hub with the --save_video flag.
--video_frames: The number of frames to be rendered in the video. Defaults to -1, which renders an entire episode
--video_name: The name of the video to save as. If None, will save to replay.mp4 in your experiment directory
For example:
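The command below is a sketch of a full upload run for a mujoco-ant experiment; the enjoy module path is assumed as in the earlier example, while the flags match those documented above:
python -m sf_examples.mujoco.enjoy_mujoco --algo=APPO --env=mujoco_ant --experiment=<repo_name> --train_dir=./train_dir --max_num_episodes=10 --push_to_hub --hf_repository=<username>/<repo_name> --save_video --no_render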