Community

This page gathers resources around 🌍 Transformers developed by the community.

Community resources:

A set of flashcards based on the Transformers Docs Glossary, put into a form that can be easily learned and revised using Anki, an open-source, cross-platform app designed for long-term knowledge retention. See this introductory video on how to use the flashcards.

Community notebooks:

How to generate lyrics in the style of your favorite artist by fine-tuning a GPT-2 model

How to train T5 for any task using TensorFlow 2. This notebook demonstrates a question-answering task implemented in TensorFlow 2 using SQuAD

How to train T5 on SQuAD with Transformers and nlp

How to fine-tune T5 for classification and multiple choice tasks using a text-to-text format with PyTorch Lightning
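The text-to-text idea mentioned above is simple to illustrate: both the input and the label are cast to plain strings before training. A minimal sketch (the `mnli` task prefix follows the convention from the T5 paper; the helper name is hypothetical):

```python
def to_text_to_text(premise: str, hypothesis: str, label: str) -> tuple[str, str]:
    """Cast one classification example into the text-to-text format:
    the source carries a task prefix plus the fields, and the target
    is the label verbalized as a string."""
    source = f"mnli premise: {premise} hypothesis: {hypothesis}"
    target = label
    return source, target

# One NLI example rendered as a (source, target) string pair.
pair = to_text_to_text("A man eats.", "Someone eats.", "entailment")
```

Because every task is reduced to string-in/string-out, the same seq2seq training loop serves classification, multiple choice, and generation alike.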

How to fine-tune the DialoGPT model on a new dataset for open-dialog conversational chatbots

How to train on sequences as long as 500,000 tokens with Reformer

How to fine-tune BART for summarization with fastai using blurr

How to generate tweets in the style of your favorite Twitter account by fine-tuning a GPT-2 model

A complete tutorial showcasing W&B integration with BOINC AI

How to build a "long" version of existing pretrained models
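The usual obstacle to a "long" model variant is the learned position-embedding matrix, which is fixed at the pretraining length. A common workaround, sketched here with NumPy (the function name is hypothetical), is to tile the pretrained embeddings up to the new maximum length and then fine-tune:

```python
import numpy as np

def extend_position_embeddings(pos_emb: np.ndarray, new_len: int) -> np.ndarray:
    """Tile a pretrained position-embedding matrix of shape
    (old_len, dim) so it covers new_len positions."""
    old_len, _ = pos_emb.shape
    reps = -(-new_len // old_len)  # ceiling division
    return np.tile(pos_emb, (reps, 1))[:new_len]

# Extend a 512-position matrix to 4096 positions.
short = np.random.rand(512, 768)
long_emb = extend_position_embeddings(short, 4096)
```

Copying (rather than randomly initializing) the extra positions preserves the pretrained local-position structure, so only a short fine-tuning run is needed to adapt.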

How to fine-tune a Longformer model for a QA task

How to evaluate Longformer on TriviaQA with nlp

How to fine-tune T5 for sentiment span extraction using a text-to-text format with PyTorch Lightning

How to fine-tune DistilBERT for multiclass classification with PyTorch

How to fine-tune BERT for multi-label classification using PyTorch

How to fine-tune T5 for summarization in PyTorch and track experiments with WandB

How to speed up fine-tuning by a factor of 2 using dynamic padding / bucketing
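The speed-up from dynamic padding and bucketing comes from padding each batch only to its own longest sequence instead of a global maximum, after sorting examples by length so neighbours have similar sizes. A minimal pure-Python sketch (function name and `pad_id` default are illustrative):

```python
def bucket_batches(token_lists, batch_size, pad_id=0):
    """Sort sequences by length, batch neighbours together, and pad
    each batch only to the longest sequence *in that batch*."""
    order = sorted(range(len(token_lists)), key=lambda i: len(token_lists[i]))
    batches = []
    for start in range(0, len(order), batch_size):
        chunk = [token_lists[i] for i in order[start:start + batch_size]]
        width = max(len(seq) for seq in chunk)
        batches.append([seq + [pad_id] * (width - len(seq)) for seq in chunk])
    return batches

# Short sequences end up in narrow batches, long ones in wide batches.
batches = bucket_batches([[1], [2, 2], [3, 3, 3], [4, 4, 4, 4]], batch_size=2)
```

Since attention cost grows with padded width, cutting the average batch width roughly halves the wasted compute on pad tokens, which is where the reported ~2x speed-up comes from.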

How to train a Reformer model with bi-directional self-attention layers

How to increase vocabulary of a pretrained SciBERT model from AllenAI on the CORD dataset and pipeline it.
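Growing a pretrained model's vocabulary with domain terms means the embedding matrix needs matching new rows. One common heuristic, sketched here with NumPy (the function name is hypothetical, and real models would use the library's resize utilities), is to initialize new rows at the mean of the existing embeddings:

```python
import numpy as np

def grow_embeddings(emb: np.ndarray, n_new: int) -> np.ndarray:
    """Append n_new rows to an embedding matrix, initialized at the
    mean of the pretrained rows - a mild starting point that avoids
    out-of-distribution random vectors for the new tokens."""
    mean = emb.mean(axis=0, keepdims=True)
    return np.vstack([emb, np.repeat(mean, n_new, axis=0)])

# Two new domain tokens appended to a tiny 2-row embedding table.
grown = grow_embeddings(np.array([[0.0, 2.0], [2.0, 4.0]]), 2)
```

The new rows then specialize during fine-tuning on the domain corpus.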

How to fine-tune BlenderBotSmall for summarization on a custom dataset, using the Trainer API.

How to fine-tune Electra for sentiment analysis and interpret predictions with Captum Integrated Gradients

How to fine-tune a non-English GPT-2 Model with Trainer class

How to fine-tune a DistilBERT model for a multi-label classification task

How to fine-tune an ALBERT model or another BERT-based model for the sentence-pair classification task

How to fine-tune a RoBERTa model for sentiment analysis

How accurate are the answers to questions generated by your seq2seq transformer model?
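Free-form answers from a seq2seq model rarely match a reference string exactly, so they are usually scored with token-overlap F1 in the style of the SQuAD evaluation. A minimal sketch of that metric (the helper name is hypothetical, and real evaluations also normalize articles and punctuation):

```python
def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted and a reference answer:
    precision and recall over bag-of-words token matches."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    ref_counts: dict[str, int] = {}
    for t in ref:
        ref_counts[t] = ref_counts.get(t, 0) + 1
    common = 0
    for t in pred:
        if ref_counts.get(t, 0) > 0:
            common += 1
            ref_counts[t] -= 1
    if common == 0:
        return 0.0
    precision = common / len(pred)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)
```

Partial credit for overlapping tokens is what makes this metric more informative than exact match for generated answers.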

How to fine-tune DistilBERT for text classification in TensorFlow

How to warm-start an EncoderDecoderModel with a bert-base-uncased checkpoint for summarization on CNN/DailyMail

How to warm-start a shared EncoderDecoderModel with a roberta-base checkpoint for summarization on BBC/XSum

How to fine-tune TapasForQuestionAnswering with a tapas-base checkpoint on the Sequential Question Answering (SQA) dataset

How to evaluate a fine-tuned TapasForSequenceClassification with a tapas-base-finetuned-tabfact checkpoint using a combination of the 🌍 datasets and 🌍 transformers libraries

How to fine-tune mBART using Seq2SeqTrainer for Hindi to English translation

How to fine-tune LayoutLMForTokenClassification on the FUNSD dataset for information extraction from scanned documents

How to fine-tune DistilGPT2 and generate text

How to fine-tune LED on PubMed for long-range summarization

How to effectively evaluate LED on long-range summarization

How to fine-tune LayoutLMForSequenceClassification on the RVL-CDIP dataset for scanned document classification

How to decode CTC sequence with language model adjustment
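The starting point for CTC decoding, before any language-model rescoring, is the greedy rule: take the argmax label per frame, collapse consecutive repeats, and drop the blank symbol. A minimal sketch of that collapse step (function name and `blank_id` convention are illustrative):

```python
def ctc_greedy_decode(frame_ids, blank_id=0):
    """Collapse a per-frame label sequence CTC-style: merge runs of
    the same label, then remove blanks. A language model would then
    rescore alternative hypotheses from a beam search instead."""
    out = []
    prev = None
    for i in frame_ids:
        if i != prev and i != blank_id:
            out.append(i)
        prev = i
    return out

# A blank between two identical labels keeps them as separate outputs.
decoded = ctc_greedy_decode([0, 1, 1, 0, 1, 2, 2, 0])
```

Language-model adjustment replaces this greedy pass with a beam search whose hypothesis scores combine acoustic probabilities and LM probabilities.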

How to fine-tune BART for summarization in two languages with Trainer class

How to evaluate BigBird on long document question answering on Trivia QA

How to create YouTube captions from any video by transcribing the audio with Wav2Vec2

How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using BOINC AI Transformers, Datasets and PyTorch Lightning

How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using BOINC AI Transformers, Datasets and the 🌍 Trainer

How to evaluate LukeForEntityClassification on the Open Entity dataset

How to evaluate LukeForEntityPairClassification on the TACRED dataset

How to evaluate LukeForEntitySpanClassification on the CoNLL-2003 dataset

How to evaluate BigBirdPegasusForConditionalGeneration on PubMed dataset

How to leverage a pretrained Wav2Vec2 model for Emotion Classification on the MEGA dataset

How to use a trained DetrForObjectDetection model to detect objects in an image and visualize attention

How to fine-tune DetrForObjectDetection on a custom object detection dataset

How to fine-tune T5 on a named entity recognition task
