
How to contribute to transformers?


Contribute to 🌍 Transformers

Everyone is welcome to contribute, and we value everybody’s contribution. Code contributions are not the only way to help the community. Answering questions, helping others, and improving the documentation are also immensely valuable.

It also helps us if you spread the word! Reference the library in blog posts about the awesome projects it made possible, shout out on Twitter every time it has helped you, or simply ⭐️ the repository to say thank you.

However you choose to contribute, please be mindful and respect our code of conduct.

This guide was heavily inspired by the awesome scikit-learn guide to contributing.

Ways to contribute

There are several ways you can contribute to 🌍 Transformers:

  • Fix outstanding issues with the existing code.

  • Submit issues related to bugs or desired new features.

  • Implement new models.

  • Contribute to the examples or to the documentation.

If you don’t know where to start, there is a special Good First Issue listing. It will give you a list of open issues that are beginner-friendly and help you start contributing to open-source. Just comment in the issue that you’d like to work on it.

For something slightly more challenging, you can also take a look at the Good Second Issue list. In general though, if you feel like you know what you’re doing, go for it and we’ll help you get there! 🚀

All contributions are equally valuable to the community. 🥰

Fixing outstanding issues

If you notice an issue with the existing code and have a fix in mind, feel free to start contributing and open a Pull Request!

Submitting a bug-related issue or feature request

Do your best to follow these guidelines when submitting a bug-related issue or a feature request. It will make it easier for us to come back to you quickly and with good feedback.

Did you find a bug?

The 🌍 Transformers library is robust and reliable thanks to users who report the problems they encounter.

Before you report an issue, we would really appreciate it if you could make sure the bug was not already reported (use the search bar on GitHub under Issues). Your issue should also be related to bugs in the library itself, and not your code. If you’re unsure whether the bug is in your code or the library, please ask on the forum first. This helps us respond quicker to fixing issues related to the library versus general questions.

Once you’ve confirmed the bug hasn’t already been reported, please include the following information in your issue so we can quickly resolve it:

  • Your OS type and version, and the Python, PyTorch, and TensorFlow versions when applicable.

  • A short, self-contained code snippet that allows us to reproduce the bug in less than 30s.

  • The full traceback if an exception is raised.

  • Attach any other additional information, like screenshots, you think may help.
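
A hedged sketch of such a snippet (the failing library call is replaced with a plain stand-in here; substitute the code that actually misbehaves, and keep it runnable in seconds):

```python
# Hypothetical bug-report snippet: short, self-contained, fast to run.
import platform
import sys

# State the versions you are running (add PyTorch/TensorFlow lines as needed).
print(f"Python {sys.version.split()[0]} on {platform.system()}")

# The smallest input that triggers the behaviour being reported.
text = "Hello world"
tokens = text.split()  # stand-in for the library call that misbehaves

# Spell out expected vs. actual so maintainers can confirm the bug quickly.
expected = ["Hello", "world"]
print("expected:", expected)
print("actual:  ", tokens)
```

A snippet in this shape lets a maintainer reproduce the problem without guessing at your environment or inputs.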

To get the OS and software versions automatically, run the following command:

transformers-cli env

You can also run the same command from the root of the repository:

python src/transformers/commands/transformers_cli.py env

Do you want a new feature?

If there is a new feature you’d like to see in 🌍 Transformers, please open an issue and describe:

  1. What is the motivation behind this feature? Is it related to a problem or frustration with the library? Is it a feature related to something you need for a project? Is it something you worked on and think it could benefit the community?

    Whatever it is, we’d love to hear about it!

  2. Describe your requested feature in as much detail as possible. The more you can tell us about it, the better we’ll be able to help you.

  3. Provide a code snippet that demonstrates the feature’s usage.

  4. If the feature is related to a paper, please include a link.
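
For point 3, a short usage sketch is enough to communicate the interface you have in mind. The API below is entirely hypothetical (none of these names exist in the library); it only illustrates how to present a proposal:

```python
# Entirely hypothetical API, used only to illustrate a feature request.
from dataclasses import dataclass

@dataclass
class ProposedStreamingConfig:
    """What the proposed feature's configuration could look like."""
    chunk_size: int = 512
    overlap: int = 32

def proposed_stream_generate(prompt: str, config: ProposedStreamingConfig):
    """Desired behaviour: yield output in chunks instead of one string."""
    # Placeholder logic standing in for real generation.
    for start in range(0, len(prompt), config.chunk_size):
        yield prompt[start:start + config.chunk_size]

# How a user would call the proposed API:
chunks = list(proposed_stream_generate("a" * 1024, ProposedStreamingConfig()))
print(len(chunks))  # two 512-character chunks for this input
```

Concrete (even imaginary) code like this makes the request much easier to evaluate than a prose description alone.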

If your issue is well written we’re already 80% of the way there by the time you create it.

We have added templates to help you get started with your issue.

Do you want to implement a new model?

New models are constantly released. If you want to implement a new model, please provide the following information:

  • A short description of the model and link to the paper.

  • Link to the implementation if it is open-sourced.

  • Link to the model weights if they are available.

If you are willing to contribute the model yourself, let us know so we can help you add it to 🌍 Transformers!

We have added a detailed guide and templates to help you get started with adding a new model, and we also have a more technical guide for how to add a model to 🌍 Transformers.

Do you want to add documentation?

We’re always looking for improvements that make the documentation clearer and more accurate. Please let us know how it can be improved, such as typos and any content that is missing, unclear, or inaccurate. We’ll be happy to make the changes or help you make a contribution if you’re interested!

For more details about how to generate, build, and write the documentation, take a look at the documentation README.

Create a Pull Request

Before writing any code, we strongly advise you to search through the existing PRs or issues to make sure nobody is already working on the same thing. If you are unsure, it is always a good idea to open an issue to get some feedback.

You will need basic git proficiency to contribute to 🌍 Transformers. While git is not the easiest tool to use, it has the greatest manual. Type git --help in a shell and enjoy! If you prefer books, Pro Git is a very good reference.

You’ll need Python 3.8 or above to contribute to 🌍 Transformers. Start by forking the repository: click the Fork button on the repository’s page to create a copy of the code under your GitHub user account. Then follow the steps below:

  1. Clone your fork to your local disk, and add the base repository as a remote:

    git clone git@github.com:<your Github handle>/transformers.git
    cd transformers
    git remote add upstream https://github.com/boincai/transformers.git
  2. Create a new branch to hold your development changes:

    git checkout -b a-descriptive-name-for-my-changes

    🚨 Do not work on the main branch!

  3. Set up a development environment by running the following command in a virtual environment:

    pip install -e ".[dev]"

    If 🌍 Transformers was already installed in the virtual environment, remove it with pip uninstall transformers before reinstalling it in editable mode with the -e flag.

    Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a failure with this command. If that’s the case, make sure to install the Deep Learning framework you are working with (PyTorch, TensorFlow, and/or Flax) and then run:

    pip install -e ".[quality]"

    which should be enough for most use cases.

  4. Develop the features on your branch.

    As you work on your code, you should make sure the test suite passes. Run the tests impacted by your changes like this:

    pytest tests/<TEST_TO_RUN>.py

    For more information about tests, check out the Testing guide.

    🌍 Transformers relies on black and ruff to format its source code consistently. After you make changes, apply automatic style corrections and code verifications that can’t be automated in one go with:

    make fixup

    This target is also optimized to only work with files modified by the PR you’re working on.

    If you prefer to run the checks one after the other, the following command applies the style corrections:

    make style

    🌍 Transformers also uses ruff and a few custom scripts to check for coding mistakes. Quality controls are run by the CI, but you can run the same checks with:

    make quality

    Finally, we have a lot of scripts to make sure we didn’t forget to update some files when adding a new model. You can run these scripts with:

    make repo-consistency

    To learn more about those checks and how to fix any issues with them, check out the Checks on a Pull Request guide.

    If you’re modifying documents under docs/source directory, make sure the documentation can still be built. This check will also run in the CI when you open a pull request. To run a local check make sure you install the documentation builder:

    pip install ".[docs]"

    Run the following command from the root of the repository:

    doc-builder build transformers docs/source/en --build_dir ~/tmp/test-build

    This will build the documentation in the ~/tmp/test-build folder where you can inspect the generated Markdown files with your favorite editor. You can also preview the docs on GitHub when you open a pull request.

    Once you’re happy with your changes, add changed files with git add and record your changes locally with git commit:

    git add modified_file.py
    git commit

    Please remember to write good commit messages to clearly communicate the changes you made!

    To keep your copy of the code up to date with the original repository, rebase your branch on upstream/branch before you open a pull request or if requested by a maintainer:

    git fetch upstream
    git rebase upstream/main

    Push your changes to your branch:

    git push -u origin a-descriptive-name-for-my-changes

    If you’ve already opened a pull request, you’ll need to force push with the --force flag. Otherwise, if the pull request hasn’t been opened yet, you can just push your changes normally.

    Now you can go to your fork of the repository on GitHub and click on Pull request to open a pull request. Make sure you tick off all the boxes in our checklist below. When you’re ready, you can send your changes to the project maintainers for review.

  5. It’s ok if maintainers request changes, it happens to our core contributors too! So everyone can see the changes in the pull request, work in your local branch and push the changes to your fork. They will automatically appear in the pull request.

Pull request checklist

☐ The pull request title should summarize your contribution.
☐ If your pull request addresses an issue, please mention the issue number in the pull request description to make sure they are linked (and people viewing the issue know you are working on it).
☐ To indicate a work in progress, please prefix the title with [WIP]. This is useful to avoid duplicated work and to differentiate it from PRs ready to be merged.
☐ Make sure existing tests pass.
☐ If adding a new feature, also add tests for it.
☐ All public methods must have informative docstrings (see modeling_bert.py for an example).
☐ Due to the rapidly growing repository, don’t add any images, videos, or other non-text files that would significantly weigh down the repository. Instead, use a Hub repository such as hf-internal-testing to host these files and reference them by URL. We recommend placing documentation-related images in the boincai/documentation-images repository. You can open a PR on this dataset repository and ask a BOINC AI member to merge it.
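
Public methods need informative docstrings. A hedged sketch in the Google docstring style used across the codebase (the function, its arguments, and its behavior are illustrative, not a real API):

```python
def truncate_sequence(sequence: list, max_length: int) -> list:
    """Truncates a sequence to a maximum length.

    Args:
        sequence (`list`):
            The tokenized sequence to truncate.
        max_length (`int`):
            The maximum number of elements to keep.

    Returns:
        `list`: The truncated sequence, unchanged if already short enough.
    """
    return sequence[:max_length]

print(truncate_sequence([1, 2, 3, 4], 2))  # [1, 2]
```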

  • If you are adding a new model, make sure you use ModelTester.all_model_classes = (MyModel, MyModelWithLMHead,...) to trigger the common tests.

  • If you are adding new @slow tests, make sure they pass using RUN_SLOW=1 python -m pytest tests/models/my_new_model/test_my_new_model.py.

  • If you are adding a new tokenizer, write tests and make sure RUN_SLOW=1 python -m pytest tests/models/{your_model_name}/test_tokenization_{your_model_name}.py passes.

  • CircleCI does not run the slow tests, but GitHub Actions does every night!

For more information about the checks run on a pull request, take a look at our Checks on a Pull Request guide.

Tests

An extensive test suite is included to test the library behavior and several examples. Library tests can be found in the tests folder and examples tests in the examples folder.

We like pytest and pytest-xdist because running tests in parallel is faster. From the root of the repository, specify a path to a subfolder or a test file to run the test.

python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model

Similarly, for the examples directory, specify a path to a subfolder or test file to run the test. For example, the following command tests the text classification subfolder in the PyTorch examples directory:

pip install -r examples/xxx/requirements.txt  # only needed the first time
python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification

This is actually how our make test and make test-examples commands are implemented (not including the pip install)!

You can also specify a smaller set of tests in order to test only the feature you’re working on.

By default, slow tests are skipped but you can set the RUN_SLOW environment variable to yes to run them. This will download many gigabytes of models so make sure you have enough disk space, a good internet connection or a lot of patience!

Remember to specify a path to a subfolder or a test file to run the test. Otherwise, you’ll run all the tests in the tests or examples folder, which will take a very long time!

RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model
RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification

Like the slow tests, there are other environment variables available which are not enabled by default during testing:

  • RUN_CUSTOM_TOKENIZERS: Enables tests for custom tokenizers.

  • RUN_PT_FLAX_CROSS_TESTS: Enables tests for PyTorch + Flax integration.

  • RUN_PT_TF_CROSS_TESTS: Enables tests for TensorFlow + PyTorch integration.

More environment variables and additional information can be found in testing_utils.py.

🌍 Transformers uses pytest as a test runner only. It doesn’t use any pytest-specific features in the test suite itself.

This means unittest is fully supported. Here’s how to run tests with unittest:

python -m unittest discover -s tests -t . -v
python -m unittest discover -s examples -t examples -v
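
A plain unittest.TestCase (an illustrative example, not a real test from the suite) is collected unchanged by both runners:

```python
# Minimal illustrative test case, runnable with either
# "python -m unittest" or "python -m pytest".
import unittest

class ExampleTest(unittest.TestCase):
    def test_truncation(self):
        self.assertEqual([1, 2, 3][:2], [1, 2])

# Run programmatically here so the example is self-contained; from the
# command line you would invoke one of the runners above instead.
suite = unittest.TestLoader().loadTestsFromTestCase(ExampleTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("success:", result.wasSuccessful())
```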

Style guide

For documentation strings, 🌍 Transformers follows the Google Python Style Guide. Check our documentation writing guide for more information.

Develop on Windows

On Windows (unless you’re working in Windows Subsystem for Linux or WSL), you need to configure git to transform Windows CRLF line endings to Linux LF line endings:

git config core.autocrlf input

One way to run the make command on Windows is with MSYS2:

  1. Download MSYS2, and we assume it’s installed in C:\msys64.

  2. Open the command line C:\msys64\msys2.exe (it should be available from the Start menu).

  3. Run in the shell: pacman -Syu and install make with pacman -S make.

  4. Add C:\msys64\usr\bin to your PATH environment variable.

You can now use make from any terminal (Powershell, cmd.exe, etc.)! 🎉

Sync a forked repository with upstream main (the BOINC AI repository)

When updating the main branch of a forked repository, please follow these steps to avoid pinging the upstream repository, which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in those PRs.

  1. When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main.

  2. If a PR is absolutely necessary, use the following steps after checking out your branch:

git checkout -b your-branch-for-syncing
git pull --squash --no-commit upstream main
git commit -m '<your message without GitHub references>'
git push --set-upstream origin your-branch-for-syncing
