This quickstart is intended for developers who are ready to dive into the code and see an example of how to integrate timm into their model training workflow.
First, you’ll need to install timm. For more information on installation, see the installation guide.
pip install timm
Load a Pretrained Model
Pretrained models can be loaded using timm.create_model().
Here, we load the pretrained mobilenetv3_large_100 model.
>>> import timm
>>> m = timm.create_model('mobilenetv3_large_100', pretrained=True)
>>> m.eval()
Note: The returned PyTorch model is set to train mode by default, so you must call .eval() on it if you plan to use it for inference.
List Models with Pretrained Weights
To list models packaged with timm, you can use timm.list_models(). If you specify pretrained=True, this function will only return model names that have associated pretrained weights available.
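As a quick sketch, you can also pass a wildcard filter to narrow the listing (the pattern below is only illustrative):
>>> import timm
>>> timm.list_models(pretrained=True)                   # all model names with pretrained weights
>>> timm.list_models('*mobilenetv3*', pretrained=True)  # filter by a wildcard name pattern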
You can fine-tune any of the pretrained models just by changing the classifier (the last layer), for example by passing num_classes to create_model().
>>> model = timm.create_model('mobilenetv3_large_100', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
Use a Pretrained Model for Feature Extraction
Without modifying the network, you can call model.forward_features(input) on any model instead of the usual model(input). This bypasses the network’s head, skipping global pooling and the classifier.
>>> import timm
>>> import torch
>>> x = torch.randn(1, 3, 224, 224)
>>> model = timm.create_model('mobilenetv3_large_100', pretrained=True)
>>> features = model.forward_features(x)
>>> print(features.shape)
torch.Size([1, 960, 7, 7])
Image Augmentation
To transform images into valid inputs for a model, you can use timm.data.create_transform(), providing the input_size the model expects. This will return a generic transform that uses reasonable defaults.
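For example, a minimal sketch (the 224×224 input size here is an assumption matching mobilenetv3_large_100):
>>> import timm
>>> transform = timm.data.create_transform(input_size=(3, 224, 224))
>>> transform  # a torchvision Compose (resize, center-crop, ToTensor, normalize) built with default settings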
Pretrained models have specific transforms that were applied to images fed into them while training. If you use the wrong transform on your image, the model won’t understand what it’s seeing!
To figure out which transformations were used for a given pretrained model, we can start by taking a look at its pretrained_cfg.
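For example (the exact fields vary by model and timm version, so treat this as a sketch):
>>> import timm
>>> model = timm.create_model('mobilenetv3_large_100', pretrained=True)
>>> model.pretrained_cfg  # dict with entries such as input_size, interpolation, mean, std, crop_pct
>>> # Resolve the model-specific data config and build the matching transform from it
>>> data_cfg = timm.data.resolve_data_config(model.pretrained_cfg)
>>> transform = timm.data.create_transform(**data_cfg)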
Note: Here, the pretrained model's config happens to match the generic config we created earlier. This is not always the case, so it is safer to create the transform from the resolved data config, as we did here, rather than rely on the generic transform.
Using Pretrained Models for Inference
Here, we will put together the above sections and use a pretrained model for inference.
First we’ll need an image to do inference on. Here we load a picture of a leaf from the web:
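A minimal end-to-end sketch is shown below; the image URL is a placeholder (substitute any image you like), and the choice of top-5 probabilities is arbitrary:
>>> from io import BytesIO
>>> import requests
>>> import timm
>>> import torch
>>> from PIL import Image
>>> # Placeholder URL: replace with a real image location
>>> url = 'https://example.com/leaf.png'
>>> image = Image.open(BytesIO(requests.get(url).content)).convert('RGB')
>>> # Recreate the model and its matching transform, then run inference
>>> model = timm.create_model('mobilenetv3_large_100', pretrained=True).eval()
>>> transform = timm.data.create_transform(**timm.data.resolve_data_config(model.pretrained_cfg))
>>> with torch.no_grad():
...     output = model(transform(image).unsqueeze(0))  # add a batch dimension
>>> probabilities = output.softmax(dim=-1)
>>> top5_probs, top5_indices = probabilities.topk(5)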