Big Transfer (BiT)

Big Transfer (BiT) is a pretraining recipe: a model is pre-trained on a large supervised source dataset, and its weights are then fine-tuned on the target task. The models are pre-trained on the JFT-300M dataset; the fine-tuned models in this collection are fine-tuned on ImageNet.

How do I use this model on an image?

To load a pretrained model:


>>> import timm
>>> model = timm.create_model('resnetv2_101x1_bitm', pretrained=True)
>>> model.eval()

To load and preprocess the image:


>>> import urllib.request
>>> from PIL import Image
>>> from timm.data import resolve_data_config
>>> from timm.data.transforms_factory import create_transform

>>> config = resolve_data_config({}, model=model)
>>> transform = create_transform(**config)

>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
>>> urllib.request.urlretrieve(url, filename)
>>> img = Image.open(filename).convert('RGB')
>>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension

To get the model predictions:

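A minimal sketch, assuming the model and preprocessed tensor from above; this is the standard PyTorch inference pattern rather than anything BiT-specific:

>>> import torch
>>> with torch.no_grad():
...     out = model(tensor)
>>> # Convert the logits to class probabilities over the ImageNet classes
>>> probabilities = torch.nn.functional.softmax(out[0], dim=0)
>>> print(probabilities.shape)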

To get the top-5 predictions class names:

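One way to do this, as a sketch: it assumes the probabilities tensor from the previous step and the ImageNet class-name file hosted in the pytorch/hub repository.

>>> # Download the ImageNet-1k class names (assumed label file from the pytorch/hub repo)
>>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
>>> urllib.request.urlretrieve(url, filename)
>>> with open("imagenet_classes.txt", "r") as f:
...     categories = [s.strip() for s in f.readlines()]

>>> # Print the top-5 class names with their probabilities
>>> top5_prob, top5_catid = torch.topk(probabilities, 5)
>>> for i in range(top5_prob.size(0)):
...     print(categories[top5_catid[i]], top5_prob[i].item())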

Replace the model name with the variant you want to use, e.g. resnetv2_101x1_bitm. You can find the IDs in the model summaries at the top of this page.
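You can also query timm for the available variants directly (a sketch; the wildcard pattern is an assumption and may match additional model names):

>>> import timm
>>> timm.list_models('resnetv2_*bit*', pretrained=True)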

To extract image features with this model, follow the timm feature extraction examples, just change the name of the model you want to use.
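For example, a sketch using timm's features_only mode (the number and shapes of the returned feature maps depend on the model):

>>> feature_extractor = timm.create_model('resnetv2_101x1_bitm', pretrained=True, features_only=True)
>>> feature_extractor.eval()
>>> features = feature_extractor(tensor)  # list of feature maps from several network stages
>>> for f in features:
...     print(f.shape)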

How do I finetune this model?

You can finetune any of the pre-trained models just by changing the classifier (the last layer).

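For example (a sketch; NUM_FINETUNE_CLASSES is a placeholder for the number of classes in your dataset):

>>> model = timm.create_model('resnetv2_101x1_bitm', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)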

To finetune on your own dataset, you have to write a training loop or adapt timm's training script to use your dataset.
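A minimal training-loop sketch, assuming the re-headed model from above and a train_loader DataLoader over your dataset; the optimizer and hyperparameters are illustrative, not BiT's published fine-tuning recipe:

>>> import torch
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.003, momentum=0.9)
>>> criterion = torch.nn.CrossEntropyLoss()
>>> model.train()
>>> for images, labels in train_loader:
...     optimizer.zero_grad()
...     loss = criterion(model(images), labels)
...     loss.backward()
...     optimizer.step()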

How do I train this model?

You can follow the timm recipe scripts for training a new model afresh.

Citation

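@misc{kolesnikov2020big,
      title={Big Transfer (BiT): General Visual Representation Learning},
      author={Alexander Kolesnikov and Lucas Beyer and Xiaohua Zhai and Joan Puigcerver and Jessica Yung and Sylvain Gelly and Neil Houlsby},
      year={2020},
      eprint={1912.11370},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}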
