Annotated Model Card

Annotated Model Card Template

Directions

Fully filling out a model card requires input from a few different roles. (One person may have more than one role.) We’ll refer to these roles as the developer, who writes the code and runs training; the sociotechnic, who is skilled at analyzing the interaction of technology and society long-term (this includes lawyers, ethicists, sociologists, or rights advocates); and the project organizer, who understands the overall scope and reach of the model, can roughly fill out each part of the card, and who serves as a contact person for model card updates.

  • The developer is necessary for filling out Training Procedure and Technical Specifications. They are also particularly useful for the “Limitations” section of Bias, Risks, and Limitations. They are responsible for providing Results for the Evaluation, and ideally work with the other roles to define the rest of the Evaluation: Testing Data, Factors & Metrics.

  • The sociotechnic is necessary for filling out “Bias” and “Risks” within Bias, Risks, and Limitations, and particularly useful for “Out of Scope Use” within Uses.

  • The project organizer is necessary for filling out Model Details and Uses. They might also fill out Training Data. Project organizers could also be in charge of Citation, Glossary, Model Card Contact, Model Card Authors, and More Information.

Instructions are provided below, in italics.

Template variable names appear in monospace.

Model Name

Section Overview: Provide the model name and a 1-2 sentence summary of what the model is.

model_id

model_summary

Table of Contents

Section Overview: Provide a table of contents with links to each section, so that people can easily jump around, reuse the file in other locations with the TOC preserved, print out the content, etc.

Model Details

Section Overview: This section provides basic information about what the model is, its current status, and where it came from. It should be useful for anyone who wants to reference the model.

Model Description

model_description

Provide basic details about the model. This includes the architecture, version, if it was introduced in a paper, if an original implementation is available, and the creators. Any copyright should be attributed here. General information about training procedures, parameters, and important disclaimers can also be mentioned in this section.

  • Developed by: developers

List (and ideally link to) the people who built the model.

  • Shared by [optional]: shared_by

List (and ideally link to) the people/organization making the model available online.

  • Model type: model_type

You can name the “type” as:

1. Supervision/Learning Method

2. Machine Learning Type

3. Modality

  • Language(s) [NLP]: language

Use this field when the system uses or processes natural (human) language.

  • License: license

Name and link to the license being used.

  • Finetuned From Model [optional]: finetuned_from

If this model has another model as its base, link to that model here.

Model Sources optional

  • Repository: repo

  • Paper [optional]: paper

  • Demo [optional]: demo

Provide sources for the user to directly see the model and its details. Additional kinds of resources – training logs, lessons learned, etc. – belong in the More Information section. If you include one thing for this section, link to the repository.

Uses

Section Overview: This section addresses questions around how the model is intended to be used in different applied contexts, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. Note this section is not intended to include the license usage details. For that, link directly to the license.

Direct Use

direct_use

Explain how the model can be used without fine-tuning, post-processing, or plugging into a pipeline. An example code snippet is recommended.
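
For illustration, a direct-use snippet for a Transformers-compatible text-generation model might look like the sketch below; the model name is a placeholder, not a real repository.

```python
# Minimal direct-use sketch, assuming a Transformers-compatible causal language model.
# "your-org/your-model" is a placeholder; substitute the actual model_id.
from transformers import pipeline

generator = pipeline("text-generation", model="your-org/your-model")
output = generator("The quick brown fox", max_new_tokens=20)
print(output[0]["generated_text"])
```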

Downstream Use optional

downstream_use

Explain how this model can be used when fine-tuned for a task or when plugged into a larger ecosystem or app. An example code snippet is recommended.
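
As one possible shape for such a snippet, the sketch below fine-tunes a placeholder base model for text classification with the Transformers Trainer API; the model name, dataset, and hyperparameters are illustrative assumptions, not values tied to any particular model.

```python
# Hypothetical downstream-use sketch: fine-tuning a placeholder base model for
# binary text classification. Model name, dataset, and hyperparameters are
# illustrative only.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "your-org/your-model"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# Example dataset; swap in the task the fine-tuning actually targets.
dataset = load_dataset("imdb")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=256),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),
)
trainer.train()
```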

Out-of-Scope Use

out_of_scope_use

List how the model may foreseeably be misused and address what users ought not do with the model.

Bias, Risks, and Limitations

Section Overview: This section identifies foreseeable harms, misunderstandings, and technical and sociotechnical limitations. It also provides information on warnings and potential mitigations.

bias_risks_limitations

What are the known or foreseeable issues stemming from this model?

Recommendations

bias_recommendations

What are recommendations with respect to the foreseeable issues? This can include everything from “downsample your image” to filtering explicit content.

Training Details

Section Overview: This section provides information to describe and replicate training, including the training data, the speed and size of training elements, and the environmental impact of training. This relates heavily to the Technical Specifications as well, and content here should link to that section when it is relevant to the training procedure. It is useful for people who want to learn more about the model inputs and training footprint. It is relevant for anyone who wants to know the basics of what the model is learning.

Training Data

training_data

Write 1-2 sentences on what the training data is. Ideally this links to a Dataset Card for further information. Links to documentation related to data pre-processing or additional filtering may go here as well as in More Information.

Training Procedure optional

Preprocessing

preprocessing

Detail tokenization, resizing/rewriting (depending on the modality), etc.
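
The snippet below is a sketch of the kind of concrete preprocessing detail worth recording here for a text model; the tokenizer name, maximum length, and padding strategy are hypothetical examples, not recommendations.

```python
# Illustrative only: record the exact tokenization settings used during training.
# The tokenizer name and settings below are placeholders.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-org/your-model")  # placeholder
encoded = tokenizer(
    "Example training sentence.",
    truncation=True,        # truncation policy applied during training
    max_length=512,         # maximum sequence length
    padding="max_length",   # padding strategy
)
```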

Speeds, Sizes, Times

speeds_sizes_times

Detail throughput, start/end time, checkpoint sizes, etc.
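
If these numbers were not logged during training, they can be reconstructed roughly as in the sketch below; the checkpoint path and example counter are placeholders, not values from any particular run.

```python
# Rough sketch for gathering throughput and checkpoint-size figures.
# The checkpoint path and example counter are placeholders.
import os
import time

def checkpoint_size_mb(path: str) -> float:
    """Total size of all files under a checkpoint directory, in megabytes."""
    return sum(
        os.path.getsize(os.path.join(root, name))
        for root, _, files in os.walk(path)
        for name in files
    ) / 1e6

start = time.time()
examples_processed = 0
# ... the training or inference loop being timed runs here,
#     incrementing examples_processed ...
elapsed = time.time() - start
print(f"Throughput: {examples_processed / max(elapsed, 1e-9):.1f} examples/sec")
print(f"Checkpoint size: {checkpoint_size_mb('finetuned-model/checkpoint-1000'):.1f} MB")
```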

Evaluation

Section Overview: This section describes the evaluation protocols, what is being measured in the evaluation, and provides the results. Evaluation is ideally constructed with factors, such as domain and demographic subgroup, and metrics, such as accuracy, which are prioritized in light of foreseeable error contexts and groups. Target fairness metrics should be decided based on which errors are more likely to be problematic in light of the model use.

Testing Data, Factors & Metrics

Testing Data

testing_data

Ideally this links to a Dataset Card for the testing data.

Factors

testing_factors

What are the foreseeable characteristics that will influence how the model behaves? This includes domain and context, as well as population subgroups. Evaluation should ideally be disaggregated across factors in order to uncover disparities in performance.
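
As a small illustration of disaggregation, the sketch below computes accuracy separately for each subgroup; the subgroup labels and predictions are made-up placeholder data.

```python
# Hypothetical sketch of disaggregated evaluation: compute a metric per subgroup
# so that performance disparities across factors become visible.
from collections import defaultdict

# (subgroup, prediction, label) triples -- placeholder data for illustration.
records = [
    ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 1), ("group_b", 1, 1),
]

correct, total = defaultdict(int), defaultdict(int)
for group, pred, label in records:
    correct[group] += int(pred == label)
    total[group] += 1

for group in sorted(total):
    print(f"{group}: accuracy = {correct[group] / total[group]:.2f}")
```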

Metrics

testing_metrics

What metrics will be used for evaluation in light of tradeoffs between different errors?

Results

results

Results should be based on the Factors and Metrics defined above.

Summary

results_summary

What do the results say? This can function as a kind of tl;dr for general audiences.

Model Examination optional

Section Overview: This is an experimental section some developers are beginning to add, where work on explainability/interpretability may go.

model_examination

Environmental Impact

Section Overview: Summarizes the information necessary to calculate environmental impacts such as electricity usage and carbon emissions.

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: hardware

  • Hours used: hours_used

  • Cloud Provider: cloud_provider

  • Compute Region: cloud_region

  • Carbon Emitted: co2_emitted
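
As a rough illustration of how these fields relate, the sketch below estimates emissions as hardware power draw × hours used × the carbon intensity of the compute region; every number is a made-up placeholder, and the Machine Learning Impact calculator mentioned above remains the recommended tool.

```python
# Back-of-the-envelope CO2 estimate from the fields above.
# All values are placeholders; use the ML Impact calculator (Lacoste et al., 2019)
# or your provider's reported figures for real numbers.
hardware_power_kw = 0.3   # approximate draw of the hardware, in kW (placeholder)
hours_used = 100.0        # "Hours used" field (placeholder)
carbon_intensity = 0.4    # kg CO2eq per kWh for the compute region (placeholder)

energy_kwh = hardware_power_kw * hours_used
co2_emitted_kg = energy_kwh * carbon_intensity
print(f"Estimated emissions: {co2_emitted_kg:.1f} kg CO2eq")
```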

Technical Specifications optional

Section Overview: This section includes details about the model objective and architecture, and the compute infrastructure. It is useful for people interested in model development. Writing this section usually requires the model developer to be directly involved.

Model Architecture and Objective

model_specs

Compute Infrastructure

compute_infrastructure

Hardware

hardware

Software

software

Citation optional

Section Overview: The developers’ preferred citation for this model. This is often a paper.

BibTeX

citation_bibtex

APA

citation_apa

Glossary optional

Section Overview: This section defines common terms and how metrics are calculated.

glossary

Clearly define terms in order to be accessible across audiences.

More Information optional

Section Overview: This section provides links to writing on dataset creation, technical specifications, lessons learned, and initial results.

more_information

Model Card Authors optional

Section Overview: This section lists the people who created the model card, providing recognition and accountability for the detailed work that goes into its construction.

model_card_authors

Model Card Contact

Section Overview: Provides a way for people who have updates to the Model Card, suggestions, or questions to contact the Model Card authors.

model_card_contact

How to Get Started with the Model

Section Overview: Provides a code snippet to show how to use the model.

get_started_code
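
For example, a minimal get-started snippet for a Transformers-compatible model could look like the sketch below; the model name is a placeholder.

```python
# Hypothetical get-started snippet; "your-org/your-model" is a placeholder model_id.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-org/your-model")
model = AutoModel.from_pretrained("your-org/your-model")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```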
