How to convert a BOINC AI Transformers model to TensorFlow?

Having multiple frameworks available to use with 🌎 Transformers gives you the flexibility to play to their strengths when designing your application, but it implies that compatibility must be added on a per-model basis. The good news is that adding TensorFlow compatibility to an existing model is simpler than adding a new model from scratch! Whether you wish to have a deeper understanding of large TensorFlow models, make a major open-source contribution, or enable TensorFlow for your model of choice, this guide is for you.

This guide empowers you, a member of our community, to contribute TensorFlow model weights and/or architectures to be used in ๐ŸŒŽ Transformers, with minimal supervision from the BOINC AI team. Writing a new model is no small feat, but hopefully this guide will make it less of a rollercoaster ๐ŸŽข and more of a walk in the park ๐Ÿšถ. Harnessing our collective experiences is absolutely critical to make this process increasingly easier, and thus we highly encourage that you suggest improvements to this guide!

Before you dive deeper, if you're new to 🌎 Transformers, it is recommended that you first check the general overview of the library.

In the remainder of this guide, you will learn whatโ€™s needed to add a new TensorFlow model architecture, the procedure to convert PyTorch into TensorFlow model weights, and how to efficiently debug mismatches across ML frameworks. Letโ€™s get started!

Are you unsure whether the model you wish to use already has a corresponding TensorFlow architecture?

Check the model_type field of the config.json of your model of choice (example). If the corresponding model folder in ๐ŸŒŽ Transformers has a file whose name starts with โ€œmodeling_tfโ€, it means that it has a corresponding TensorFlow architecture (example).
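
You can also check programmatically; here is a minimal sketch using the AutoConfig API (bert-base-uncased is just an illustrative checkpoint):

from transformers import AutoConfig

# The model_type field identifies the model folder inside 🌎 Transformers.
config = AutoConfig.from_pretrained("bert-base-uncased")
print(config.model_type)  # "bert"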

Step-by-step guide to add TensorFlow model architecture code

There are many ways to design a large model architecture, and multiple ways of implementing said design. However, you might recall from our general overview of ๐ŸŒŽ Transformers that we are an opinionated bunch - the ease of use of ๐ŸŒŽ Transformers relies on consistent design choices. From experience, we can tell you a few important things about adding TensorFlow models:

  • Donโ€™t reinvent the wheel! More often than not, there are at least two reference implementations you should check: the PyTorch equivalent of the model you are implementing and other TensorFlow models for the same class of problems.

  • Great model implementations survive the test of time. This doesnโ€™t happen because the code is pretty, but rather because the code is clear, easy to debug and build upon. If you make the life of the maintainers easy with your TensorFlow implementation, by replicating the same patterns as in other TensorFlow models and minimizing the mismatch to the PyTorch implementation, you ensure your contribution will be long lived.

  • Ask for help when youโ€™re stuck! The ๐ŸŒŽ Transformers team is here to help, and weโ€™ve probably found solutions to the same problems youโ€™re facing.

Hereโ€™s an overview of the steps needed to add a TensorFlow model architecture:

  1. Select the model you wish to convert

  2. Prepare transformers dev environment

  3. (Optional) Understand theoretical aspects and the existing implementation

  4. Implement the model architecture

  5. Implement model tests

  6. Submit the pull request

  7. (Optional) Build demos and share with the world

1.-3. Prepare your model contribution

1. Select the model you wish to convert

Letโ€™s start off with the basics: the first thing you need to know is the architecture you want to convert. If you donโ€™t have your eyes set on a specific architecture, asking the ๐ŸŒŽ Transformers team for suggestions is a great way to maximize your impact - we will guide you towards the most prominent architectures that are missing on the TensorFlow side. If the specific model you want to use with TensorFlow already has a TensorFlow architecture implementation in ๐ŸŒŽ Transformers but is lacking weights, feel free to jump straight into the weight conversion section of this page.

For simplicity, the remainder of this guide assumes you've decided to contribute the TensorFlow version of BrandNewBert (the same example as in the guide to add a new model from scratch).

Before starting the work on a TensorFlow model architecture, double-check that there is no ongoing effort to do so. You can search for BrandNewBert on the GitHub pull request page to confirm that there is no TensorFlow-related pull request.

2. Prepare transformers dev environment

Having selected the model architecture, open a draft PR to signal your intention to work on it. Follow the instructions below to set up your environment and open a draft PR.

  1. Fork the repository by clicking on the โ€˜Forkโ€™ button on the repositoryโ€™s page. This creates a copy of the code under your GitHub user account.

  2. Clone your transformers fork to your local disk, and add the base repository as a remote:

git clone https://github.com/[your Github handle]/transformers.git
cd transformers
git remote add upstream https://github.com/boincai/transformers.git

  3. Set up a development environment, for instance by running the following command:

python -m venv .env
source .env/bin/activate
pip install -e ".[dev]"

Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a failure with this command. If that's the case, make sure to install TensorFlow, then do:

pip install -e ".[quality]"

Note: You donโ€™t need to have CUDA installed. Making the new model work on CPU is sufficient.

  4. Create a branch with a descriptive name from your main branch:

git checkout -b add_tf_brand_new_bert

  5. Fetch and rebase to current main:

git fetch upstream
git rebase upstream/main

  6. Add an empty .py file in src/transformers/models/brand_new_bert/ named modeling_tf_brand_new_bert.py. This will be your TensorFlow model file.

  7. Push the changes to your account using:

git add .
git commit -m "initial commit"
git push -u origin add_tf_brand_new_bert

  8. Once you are satisfied, go to the webpage of your fork on GitHub. Click on "Pull request". Make sure to add the GitHub handle of some members of the BOINC AI team as reviewers, so that the BOINC AI team gets notified of future changes.

  9. Change the PR into a draft by clicking on "Convert to draft" on the right of the GitHub pull request web page.

Now you have set up a development environment to port BrandNewBert to TensorFlow in ๐ŸŒŽ Transformers.

3. (Optional) Understand theoretical aspects and the existing implementation

You should take some time to read BrandNewBert's paper, if such descriptive work exists. There might be large sections of the paper that are difficult to understand. If this is the case, that's fine - don't worry! The goal is not to get a deep theoretical understanding of the paper, but to extract the information required to effectively re-implement the model in 🌎 Transformers using TensorFlow. That being said, don't spend too much time on the theoretical aspects; focus instead on the practical ones, namely the existing model documentation page (e.g. model docs for BERT).

After youโ€™ve grasped the basics of the models you are about to implement, itโ€™s important to understand the existing implementation. This is a great chance to confirm that a working implementation matches your expectations for the model, as well as to foresee technical challenges on the TensorFlow side.

Itโ€™s perfectly natural that you feel overwhelmed with the amount of information that youโ€™ve just absorbed. It is definitely not a requirement that you understand all facets of the model at this stage. Nevertheless, we highly encourage you to clear any pressing questions in our forum.

4. Model implementation

Now it's time to finally start coding. Our suggested starting point is the PyTorch file itself: copy the contents of modeling_brand_new_bert.py inside src/transformers/models/brand_new_bert/ into modeling_tf_brand_new_bert.py. The goal of this section is to modify the file and update the import structure of 🌎 Transformers such that you can import TFBrandNewBert, and such that TFBrandNewBert.from_pretrained(model_repo, from_pt=True) successfully loads a working TensorFlow BrandNewBert model.

Sadly, there is no prescription to convert a PyTorch model into TensorFlow. You can, however, follow our selection of tips to make the process as smooth as possible:

  • Prepend TF to the name of all classes (e.g. BrandNewBert becomes TFBrandNewBert).

  • Most PyTorch operations have a direct TensorFlow replacement. For example, torch.nn.Linear corresponds to tf.keras.layers.Dense, torch.nn.Dropout corresponds to tf.keras.layers.Dropout, etc. If youโ€™re not sure about a specific operation, you can use the TensorFlow documentation or the PyTorch documentation.

  • Look for patterns in the ๐ŸŒŽ Transformers codebase. If you come across a certain operation that doesnโ€™t have a direct replacement, the odds are that someone else already had the same problem.

  • By default, keep the same variable names and structure as in PyTorch. This will make it easier to debug, track issues, and add fixes down the line.

  • Some layers have different default values in each framework. A notable example is the batch normalization layerโ€™s epsilon (1e-5 in PyTorch and 1e-3 in TensorFlow). Double-check the documentation!

  • PyTorchโ€™s nn.Parameter variables typically need to be initialized within TF Layerโ€™s build(). See the following example: PyTorch / TensorFlow

  • If the PyTorch model has a # Copied from ... comment on top of a function, the odds are that your TensorFlow model can also borrow that function from the architecture it was copied from, assuming it has a TensorFlow architecture.

  • Assigning the name attribute correctly in TensorFlow functions is critical for the from_pt=True weight cross-loading to work. name is almost always the name of the corresponding variable in the PyTorch code. If name is not properly set, you will see it in the error message when loading the model weights.

  • The logic of the base model class, BrandNewBertModel, will actually reside in TFBrandNewBertMainLayer, a Keras layer subclass (example). TFBrandNewBertModel will simply be a wrapper around this layer (see the sketch after this list).

  • Keras models need to be built in order to load pretrained weights. For that reason, TFBrandNewBertPreTrainedModel will need to hold an example of inputs to the model, the dummy_inputs (example).

  • If you get stuck, ask for help - weโ€™re here to help you! ๐ŸŒŽ
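
To make several of these tips concrete, below is a heavily simplified sketch of how the main pieces typically fit together. It follows this guide's BrandNewBert placeholder names, and the layers inside are illustrative assumptions rather than a real architecture - treat it as a pattern, not as copy-paste material:

import tensorflow as tf

from transformers import PretrainedConfig
from transformers.modeling_tf_utils import TFPreTrainedModel, keras_serializable, unpack_inputs


class BrandNewBertConfig(PretrainedConfig):
    # Placeholder config, standing in for the existing PyTorch model's config.
    model_type = "brand_new_bert"

    def __init__(self, vocab_size=30522, hidden_size=768, **kwargs):
        super().__init__(**kwargs)
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size


@keras_serializable
class TFBrandNewBertMainLayer(tf.keras.layers.Layer):
    # @keras_serializable requires the config class to be declared here.
    config_class = BrandNewBertConfig

    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        self.config = config
        # `name` mirrors the attribute name in the PyTorch module, so that
        # from_pt=True weight cross-loading can match the two sets of weights.
        # Also mind differing defaults across frameworks (e.g. batch
        # normalization epsilon: 1e-5 in PyTorch, 1e-3 in TensorFlow).
        self.embeddings = tf.keras.layers.Embedding(config.vocab_size, config.hidden_size, name="embeddings")
        self.dense = tf.keras.layers.Dense(config.hidden_size, name="dense")

    def build(self, input_shape):
        # nn.Parameter variables from the PyTorch model are created here, in build().
        self.bias = self.add_weight(name="bias", shape=(self.config.hidden_size,), initializer="zeros")
        super().build(input_shape)

    @unpack_inputs
    def call(self, input_ids=None, training=False):
        # `training` is propagated from the top-level classes so that layers
        # such as Dropout behave correctly at train vs. inference time.
        hidden_states = self.embeddings(input_ids)
        hidden_states = self.dense(hidden_states)
        return hidden_states + self.bias


class TFBrandNewBertPreTrainedModel(TFPreTrainedModel):
    config_class = BrandNewBertConfig
    base_model_prefix = "brand_new_bert"


class TFBrandNewBertModel(TFBrandNewBertPreTrainedModel):
    def __init__(self, config, *inputs, **kwargs):
        super().__init__(config, *inputs, **kwargs)
        # The model class is a thin wrapper around the main layer.
        self.brand_new_bert = TFBrandNewBertMainLayer(config, name="brand_new_bert")

    @unpack_inputs
    def call(self, input_ids=None, training=False):
        return self.brand_new_bert(input_ids=input_ids, training=training)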

In addition to the model file itself, you will also need to add the pointers to the model classes and related documentation pages. You can complete this part entirely following the patterns in other PRs (example). Hereโ€™s a list of the needed manual changes:

  • Include all public classes of BrandNewBert in src/transformers/__init__.py

  • Add BrandNewBert classes to the corresponding Auto classes in src/transformers/models/auto/modeling_tf_auto.py (see the sketch after this list)

  • Add the lazy loading classes related to BrandNewBert in src/transformers/utils/dummy_tf_objects.py

  • Update the import structures for the public classes in src/transformers/models/brand_new_bert/__init__.py

  • Add the documentation pointers to the public methods of BrandNewBert in docs/source/en/model_doc/brand_new_bert.md

  • Add yourself to the list of contributors to BrandNewBert in docs/source/en/model_doc/brand_new_bert.md

  • Finally, add a green tick โœ… to the TensorFlow column of BrandNewBert in docs/source/en/index.md
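
As an illustration of the Auto class item above, the registration is usually a one-line entry in the relevant name mapping. A hedged sketch of what it might look like in src/transformers/models/auto/modeling_tf_auto.py (the surrounding entries and the exact mapping are placeholders):

from collections import OrderedDict

TF_MODEL_MAPPING_NAMES = OrderedDict(
    [
        # ... existing architectures ...
        ("brand_new_bert", "TFBrandNewBertModel"),
        # ... existing architectures ...
    ]
)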

When youโ€™re happy with your implementation, run the following checklist to confirm that your model architecture is ready:

  1. All layers that behave differently at train time (e.g. Dropout) are called with a training argument, which is propagated all the way from the top-level classes

  2. You have used # Copied from ... whenever possible

  3. TFBrandNewBertMainLayer and all classes that use it have their call function decorated with @unpack_inputs

  4. TFBrandNewBertMainLayer is decorated with @keras_serializable

  5. A TensorFlow model can be loaded from PyTorch weights using TFBrandNewBert.from_pretrained(model_repo, from_pt=True) (see the sketch after this checklist)

  6. You can call the TensorFlow model using the expected input format
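
Items 5 and 6 translate into a quick smoke test. Here is a minimal sketch, where TFBrandNewBert is this guide's placeholder class and model_repo stands for the repository holding the PyTorch weights:

import tensorflow as tf

from transformers import TFBrandNewBert  # placeholder class from this guide

# Cross-load the PyTorch weights into the TensorFlow architecture.
model = TFBrandNewBert.from_pretrained("model_repo", from_pt=True)

# Call the model using the expected input format.
outputs = model(tf.constant([[0, 1, 2, 3]]))
print(outputs)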

5. Add model tests

Hurray, youโ€™ve implemented a TensorFlow model! Now itโ€™s time to add tests to make sure that your model behaves as expected. As in the previous section, we suggest you start by copying the test_modeling_brand_new_bert.py file in tests/models/brand_new_bert/ into test_modeling_tf_brand_new_bert.py, and continue by making the necessary TensorFlow replacements. For now, in all .from_pretrained() calls, you should use the from_pt=True flag to load the existing PyTorch weights.

After youโ€™re done, itโ€™s time for the moment of truth: run the tests! ๐Ÿ˜ฌ

NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \
py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py

The most likely outcome is that youโ€™ll see a bunch of errors. Donโ€™t worry, this is expected! Debugging ML models is notoriously hard, and the key ingredient to success is patience (and breakpoint()). In our experience, the hardest problems arise from subtle mismatches between ML frameworks, for which we have a few pointers at the end of this guide. In other cases, a general test might not be directly applicable to your model, in which case we suggest an override at the model test class level. Regardless of the issue, donโ€™t hesitate to ask for help in your draft pull request if youโ€™re stuck.
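
When a general test is not applicable, the override typically lives in the model's test class. A hypothetical sketch - the class and test names below are illustrative, and in practice your test class will inherit from the same mixins as the PyTorch test file you copied:

# tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py
import unittest


class TFBrandNewBertModelTest(unittest.TestCase):
    @unittest.skip(reason="BrandNewBert does not support this feature")
    def test_some_generic_behavior(self):
        # The common implementation doesn't apply to this architecture,
        # so we skip it (or reimplement it with model-specific logic).
        pass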

When all tests pass, congratulations, your model is nearly ready to be added to the ๐ŸŒŽ Transformers library! ๐ŸŽ‰

6.-7. Ensure everyone can use your model

6. Submit the pull request

Once you're done with the implementation and the tests, it's time to submit a pull request. Before pushing your code, run our code formatting utility, make fixup 🪄. This will automatically fix any formatting issues that would otherwise cause our automatic checks to fail.

Itโ€™s now time to convert your draft pull request into a real pull request. To do so, click on the โ€œReady for reviewโ€ button and add Joao (@gante) and Matt (@Rocketknight1) as reviewers. A model pull request will need at least 3 reviewers, but they will take care of finding appropriate additional reviewers for your model.

After all reviewers are happy with the state of your PR, the final action point is to remove the from_pt=True flag in .from_pretrained() calls. Since there are no TensorFlow weights, you will have to add them! Check the section below for instructions on how to do it.
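
In code, that final change is a one-line edit; a sketch using this guide's placeholder names:

from transformers import TFBrandNewBert  # placeholder class from this guide

# Before the TensorFlow weights exist on the Hub:
model = TFBrandNewBert.from_pretrained("model_repo", from_pt=True)

# After the TensorFlow weights are merged:
model = TFBrandNewBert.from_pretrained("model_repo")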

Finally, when the TensorFlow weights get merged, you have at least 3 reviewer approvals, and all CI checks are green, double-check the tests locally one last time:

NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \
py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py

and we will merge your PR! Congratulations on the milestone ๐ŸŽ‰

7. (Optional) Build demos and share with the world

One of the hardest parts about open-source is discovery. How can other users learn about the existence of your fabulous TensorFlow contribution? With proper communication, of course! 📣

There are two main ways to share your model with the community:

  • Build demos. These include Gradio demos, notebooks, and other fun ways to show off your model. We highly encourage you to add a notebook to our community-driven demos.

  • Share stories on social media like Twitter and LinkedIn. You should be proud of your work and share your achievement with the community - your model can now be used by thousands of engineers and researchers around the world ๐ŸŒ! We will be happy to retweet your posts and help you share your work with the community.

Adding TensorFlow weights to ๐ŸŒŽ Hub

Assuming that the TensorFlow model architecture is available in ๐ŸŒŽ Transformers, converting PyTorch weights into TensorFlow weights is a breeze!

Hereโ€™s how to do it:

  1. Make sure you are logged into your BOINC AI account in your terminal. You can log in using the command boincai-cli login (you can find your access tokens here)

  2. Run transformers-cli pt-to-tf --model-name foo/bar, where foo/bar is the name of the model repository containing the PyTorch weights you want to convert

  3. Tag @joaogante and @Rocketknight1 in the ๐ŸŒŽ Hub PR the command above has just created

Thatโ€™s it! ๐ŸŽ‰

Debugging mismatches across ML frameworks ๐Ÿ›

At some point, when adding a new architecture or when creating TensorFlow weights for an existing architecture, you might come across errors complaining about mismatches between PyTorch and TensorFlow. You might even decide to open the model architecture code for the two frameworks, and find that they look identical. Whatโ€™s going on? ๐Ÿค”

First of all, let's talk about why understanding these mismatches matters. Many community members will use 🌎 Transformers models out of the box, and trust that our models behave as expected. When there is a large mismatch between the two frameworks, it implies that the model is not following the reference implementation for at least one of the frameworks. This might lead to silent failures, in which the model runs but has poor performance. This is arguably worse than a model that fails to run at all! To that end, we aim for a framework mismatch smaller than 1e-5 at all stages of the model.

As in other numerical problems, the devil is in the details. And as in any detail-oriented craft, the secret ingredient here is patience. Here is our suggested workflow for when you come across this type of issue:

  1. Locate the source of mismatches. The model you're converting probably has near identical inner variables up to a certain point. Place breakpoint() statements in the two frameworks' architectures, and compare the values of the numerical variables in a top-down fashion until you find the source of the problem (see the sketch after this list).

  2. Now that youโ€™ve pinpointed the source of the issue, get in touch with the ๐ŸŒŽ Transformers team. It is possible that weโ€™ve seen a similar problem before and can promptly provide a solution. As a fallback, scan popular pages like StackOverflow and GitHub issues.

  3. If there is no solution in sight, it means you'll have to go deeper. The good news is that you've located the issue, so you can focus on the problematic instruction, abstracting away the rest of the model! The bad news is that you'll have to venture into the source implementation of said instruction. In some cases, you might find an issue with a reference implementation - don't hold back from opening an issue in the upstream repository.
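
Here is a minimal sketch of the comparison in step 1, assuming you've already loaded a PyTorch model (pt_model) and its TensorFlow counterpart (tf_model) and prepared equivalent inputs (pt_inputs / tf_inputs) for each framework:

import numpy as np

# Request all intermediate hidden states from both frameworks.
pt_outputs = pt_model(**pt_inputs, output_hidden_states=True)
tf_outputs = tf_model(**tf_inputs, output_hidden_states=True)

# Walk through the layers in computation order to find where the mismatch starts.
for idx, (pt_h, tf_h) in enumerate(zip(pt_outputs.hidden_states, tf_outputs.hidden_states)):
    max_diff = np.max(np.abs(pt_h.detach().numpy() - tf_h.numpy()))
    print(f"layer {idx}: max abs difference = {max_diff:.2e}")  # we aim for < 1e-5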

In some cases, in discussion with the 🌎 Transformers team, we might find that fixing the mismatch is infeasible. When the mismatch is very small in the output layers of the model (but potentially large in the hidden states), we might decide to ignore it in favor of distributing the model. The pt-to-tf CLI mentioned above has a --max-error flag to override the maximum admissible error at weight conversion time.
