Model Cards


What are Model Cards?

Model cards are files that accompany the models and provide handy information. Under the hood, model cards are simple Markdown files with additional metadata. Model cards are essential for discoverability, reproducibility, and sharing! You can find a model card as the README.md file in any model repo.

The model card should describe:

  • the model

  • its intended uses & potential limitations, including biases and ethical considerations as detailed in Mitchell, 2018

  • the training params and experimental info (you can embed or link to an experiment tracking platform for reference)

  • which datasets were used to train your model

  • your evaluation results

The model card template is available here.

New! Try our experimental Model Card Creator App.
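
Concretely, a bare-bones README.md that covers those points might look like the following sketch. Every name, value, and section title below is purely illustrative rather than a required template; the metadata keys at the top are explained in the next section.

---
language:
- en
license: mit
tags:
- text-classification
---

# my-demo-model

A short description of what the model does.

## Intended uses & limitations
Describe intended use cases, known limitations, and potential biases.

## Training data
Name the datasets used to train the model.

## Training procedure
Summarize the training params or link to your experiment tracking platform.

## Evaluation results
Report your metrics here.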

Model card metadata

A model repo will render its README.md as a model card. To control how the Hub displays the card, you should create a YAML section in the README file to define some metadata. Start by adding three --- at the top, then include all of the relevant metadata, and close the section with another group of --- like the example below:


---
language:
- "List of ISO 639-1 code for your language"
- lang1
- lang2
thumbnail: "url to a thumbnail used in social sharing"
tags:
- tag1
- tag2
license: "any valid license identifier"
datasets:
- dataset1
- dataset2
metrics:
- metric1
- metric2
---

The metadata that you add to the model card enables certain interactions on the Hub. For example:

  • Allow users to filter and discover models at https://boincai.com/models.

  • If you choose a license using the keywords listed in the right column of this table, the license will be displayed on the model page.

  • Adding datasets to the metadata will add a message reading Datasets used to train: to your model card and link the relevant datasets, if they’re available on the Hub.

Dataset, metric, and language identifiers are those listed on the Datasets, Metrics, and Languages pages and in the datasets repository.

See the detailed model card metadata specification here.

Specifying a library

You can also specify the supported libraries in the model card metadata section. Find more about our supported libraries here. The library can be specified with the following order of priority:

  1. Specifying library_name in the model card (recommended if your model is not a transformers model)


library_name: flair

  2. Having a tag with the name of a library that is supported


tags:
- flair

If the library is not specified explicitly, the Hub will try to detect it automatically, as described in the next two items. Unless your model is from transformers, this automatic detection is discouraged, and repo creators should set an explicit library_name whenever possible.

  3. By looking into the presence of files such as *.nemo or saved_model.pb, the Hub can determine if a model is from NeMo or Keras.

  4. If nothing is detected and there is a config.json file, it’s assumed the library is transformers.
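
Putting the explicit options together, a front-matter block that pins the library might look like the sketch below. flair mirrors the snippets above, and the second tag is just an illustrative extra:

---
library_name: flair
tags:
- flair
- token-classification
---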

Evaluation Results

You can even specify your model’s eval results in a structured way, which will allow the Hub to parse, display, and even link them to Papers With Code leaderboards. See how to format this data in the metadata spec.

Here is a partial example (omitting the eval results part):


---
language:
- ru
- en
tags:
- translation
license: apache-2.0
datasets:
- wmt19
metrics:
- bleu
- sacrebleu
---
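
The omitted eval results are declared under a model-index key inside the same metadata section. A minimal sketch that would complete the example above — the model name and BLEU score are purely illustrative, and the exact schema is described in the metadata spec:

model-index:
- name: my-ru-en-translation-model
  results:
  - task:
      type: translation
      name: Translation ru-en
    dataset:
      type: wmt19
      name: WMT19
    metrics:
    - type: bleu
      value: 26.6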

If a model includes valid eval results, they will be displayed prominently on the model page.

CO2 Emissions

The model card is also a great place to show information about the CO2 impact of your model. Visit our guide on tracking and reporting CO2 emissions to learn more.
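
If you also want this information to be machine-readable, the metadata section accepts a co2_eq_emissions field. A minimal sketch, assuming the value is reported in grams of CO2-eq — the number and source text are illustrative, and the full schema is in the metadata spec:

co2_eq_emissions:
  emissions: 1250
  source: "code carbon"
  training_type: "fine-tuning"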

Linking a Paper

If the model card includes a link to a paper on arXiv, the Hub will extract the arXiv ID and include it in the model tags with the format arxiv:<PAPER ID>. Clicking on the tag will let you:

  • Visit the Paper page

  • Filter for other models on the Hub that cite the same paper.

Read more about Paper pages here.
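
As a concrete illustration, a model card containing a Markdown link like the one below (with <PAPER ID> replaced by the real arXiv identifier) would receive the matching arxiv:<PAPER ID> tag:

This model is described in [our paper](https://arxiv.org/abs/<PAPER ID>).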

FAQ

How are model tags determined?

Each model page lists all the model’s tags in the page header, below the model name. These are primarily computed from the model card metadata, although some are added automatically, as described in Creating a Widget.

Can I write LaTeX in my model card?

Yes! The Hub uses the KaTeX math typesetting library to render math formulas server-side before parsing the Markdown.

You have to use the following delimiters:

  • $$ ... $$ for display mode

  • \\(...\\) for inline mode (no space between the slashes and the parentheses).
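
For instance, lines like the following in a model card (the formulas themselves are just illustrations) render in display mode and inline mode respectively:

$$ \mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2 $$

The model minimizes \\(\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2\\) during training.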
