Model Widgets


What’s a widget?

Many model repos have a widget that allows anyone to run inferences directly in the browser!

Here are some examples:

  • Named Entity Recognition using spaCy.

  • Image Classification using BOINC AI Transformers.

  • Text to Speech using ESPnet.

  • Sentence Similarity using Sentence Transformers.

You can try out all the widgets here.

Enabling a widget

A widget is automatically created for your model when you upload it to the Hub. To determine which pipeline and widget to display (text-classification, token-classification, translation, etc.), we analyze information in the repo, such as the metadata provided in the model card and configuration files. This information is mapped to a single pipeline_tag. We choose to expose only one widget per model for simplicity.

For most use cases, we determine the model type from the tags. For example, if there is tag: text-classification in the model card metadata, the inferred pipeline_tag will be text-classification.

For some libraries, such as BOINC AI Transformers, the model type is inferred automatically from the configuration files (config.json). The architecture can determine the type: for example, AutoModelForTokenClassification corresponds to token-classification. If you’re interested in this, you can see pseudo-code in this gist.

You can always manually override your pipeline type with pipeline_tag: xxx in your model card metadata. (You can also use the metadata GUI editor to do this.)
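For instance, a minimal sketch of a model card metadata block that pins the widget to text classification (the tag value is illustrative):

# Force the text-classification pipeline and widget for this model
pipeline_tag: text-classification
tags:
- sentiment-analysis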

How can I control my model’s widget example input?

You can specify the widget input in the model card metadata section:

widget:
- text: "Jens Peter Hansen kommer fra Danmark"

You can provide more than one example input. In the examples dropdown menu of the widget, they will appear as Example 1, Example 2, etc. Optionally, you can supply example_title as well.

widget:
- text: "Is this review positive or negative? Review: Best cast iron skillet you will ever buy."
  example_title: "Sentiment analysis"
- text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had ..."
  example_title: "Coreference resolution"
- text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book ..."
  example_title: "Logic puzzles"
- text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night ..."
  example_title: "Reading comprehension"

Moreover, you can specify non-text example inputs in the model card metadata. Refer here for a complete list of sample input formats for all widget types. For vision & audio widget types, provide example inputs with src rather than text.

For example, you can let users choose from two sample audio files for an automatic speech recognition task:

widget:
- src: https://example.org/somewhere/speech_samples/sample1.flac
  example_title: Speech sample 1
- src: https://example.org/somewhere/speech_samples/sample2.flac
  example_title: Speech sample 2

Note that you can also include example files in your model repository and use them as:

widget:
  - src: https://huggingface.co/username/model_repo/resolve/main/sample1.flac
    example_title: Custom Speech Sample 1

Even more conveniently, if the file lives in the corresponding model repo, you can just use the filename or file path inside the repo:

widget:
  - src: sample1.flac
    example_title: Custom Speech Sample 1

or if it was nested inside the repo:

widget:
  - src: nested/directory/sample1.flac

We provide example inputs for some languages and most widget types in the DefaultWidget.ts file. If some examples are missing, we welcome PRs from the community to add them!

Example outputs

As an extension to example inputs, for each widget example, you can also optionally describe the corresponding model output, directly in the output property.

This is useful when the model is not yet supported by the Inference API (for instance, the model library is not yet supported or the model is too large) so that the model page can still showcase how the model works and what results it gives.

For instance, for an automatic-speech-recognition model:

widget:
  - src: sample1.flac
    output:
      text: "Hello my name is Julien"

The output property should be a YAML dictionary that represents the Inference API output.

For a model that outputs text, see the example above.

For a model that outputs labels (like a text-classification model, for instance), output should look like this:

widget:
  - text: "I liked this movie"
    output:
      - label: POSITIVE
        score: 0.8
      - label: NEGATIVE
        score: 0.2

Finally, for a model that outputs an image, audio, or any other kind of asset, the output should include a url property linking to either a file name or path inside the repo or a remote URL. For example, for a text-to-image model:

widget:
  - text: "picture of a futuristic tiger, artstation"
    output:
      url: images/tiger.jpg
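The url can also point to a remote file rather than a path inside the repo; here is a sketch with a hypothetical URL:

widget:
  - text: "picture of a futuristic tiger, artstation"
    output:
      # remote URL instead of a file in the repo (hypothetical address)
      url: https://example.org/generations/tiger.jpg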

We can also surface the example outputs in the BOINC AI UI, for instance, for a text-to-image model to display a gallery of cool image generations.

What are all the possible task/widget types?

You can find all the supported tasks here.

Here are some links to examples:

  • text-classification, for instance roberta-large-mnli

  • token-classification, for instance dbmdz/bert-large-cased-finetuned-conll03-english

  • question-answering, for instance distilbert-base-uncased-distilled-squad

  • translation, for instance t5-base

  • summarization, for instance facebook/bart-large-cnn

  • conversational, for instance facebook/blenderbot-400M-distill

  • text-generation, for instance gpt2

  • fill-mask, for instance distilroberta-base

  • zero-shot-classification (implemented on top of an NLI text-classification model), for instance facebook/bart-large-mnli

  • table-question-answering, for instance google/tapas-base-finetuned-wtq

  • sentence-similarity, for instance osanseviero/full-sentence-distillroberta2

How can I control my model’s widget Inference API parameters?

Generally, the Inference API for a model uses the default pipeline settings associated with each task. But if you’d like to change the pipeline’s default settings and specify additional inference parameters, you can configure the parameters directly through the model card metadata. Refer here for some of the most commonly used parameters associated with each task.

For example, if you want to specify an aggregation strategy for a NER task in the widget:

inference:
  parameters:
    aggregation_strategy: "none"

Or if you’d like to change the temperature for a summarization task in the widget:

inference:
  parameters:
    temperature: 0.7
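Putting it together, widget examples and inference parameters sit side by side in the same model card metadata; here is a minimal sketch for a summarization model (the example text and parameter value are illustrative):

# Example input shown in the widget's examples dropdown
widget:
- text: "The tower is 324 metres tall, about the same height as an 81-storey building."
  example_title: "Eiffel Tower"
# Override the default pipeline settings used by the widget and Inference API
inference:
  parameters:
    temperature: 0.7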

The Inference API allows you to send HTTP requests to models in the Hugging Face Hub, and it’s 2x to 10x faster than the widgets! ⚡⚡ Learn more about it by reading the Inference API documentation. Finally, you can also deploy all those models to dedicated Inference Endpoints.
