ResNeSt

A ResNeSt is a variant on a ResNet that stacks Split-Attention blocks instead of standard residual blocks. Within a Split-Attention block, features are computed in $K$ cardinal groups, and the cardinal group representations are then concatenated along the channel dimension: $V = \text{Concat}\{V^{1}, V^{2}, \cdots, V^{K}\}$. As in standard residual blocks, the final output $Y$ of the Split-Attention block is produced using a shortcut connection: $Y = V + X$, if the input and output feature maps share the same shape. For blocks with a stride, an appropriate transformation $\mathcal{T}$ is applied to the shortcut connection to align the output shapes: $Y = V + \mathcal{T}(X)$. For example, $\mathcal{T}$ can be a strided convolution or a convolution combined with pooling.
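
To make the shortcut logic concrete, here is a minimal toy sketch in PyTorch of the residual connection described above. This is not timm's actual ResNeSt implementation: ToyResNeStBlock and its single convolution standing in for the Split-Attention computation are illustrative placeholders.

>>> import torch
>>> import torch.nn as nn
>>> class ToyResNeStBlock(nn.Module):
...     def __init__(self, in_chs, out_chs, stride=1):
...         super().__init__()
...         # stand-in for the Split-Attention computation that produces V
...         self.split_attn = nn.Conv2d(in_chs, out_chs, 3, stride=stride, padding=1)
...         # T: aligns the shortcut when the output shape changes
...         self.shortcut = None
...         if stride != 1 or in_chs != out_chs:
...             self.shortcut = nn.Conv2d(in_chs, out_chs, 1, stride=stride)
...     def forward(self, x):
...         v = self.split_attn(x)
...         identity = x if self.shortcut is None else self.shortcut(x)
...         return v + identity  # Y = V + X, or Y = V + T(X) for strided blocks

>>> block = ToyResNeStBlock(64, 128, stride=2)
>>> y = block(torch.randn(1, 64, 56, 56))
>>> print(y.shape)
>>> # prints: torch.Size([1, 128, 28, 28])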

How do I use this model on an image?

To load a pretrained model:


>>> import timm
>>> model = timm.create_model('resnest101e', pretrained=True)
>>> model.eval()
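
model.eval() puts layers such as batch normalization and dropout into inference mode. Optionally, you can also move the model to a GPU when one is available; this is a generic PyTorch step, not timm-specific, and the input tensor created below would then need to be moved to the same device:

>>> import torch
>>> device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
>>> model = model.to(device)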

To load and preprocess the image:


>>> import urllib.request
>>> from PIL import Image
>>> from timm.data import resolve_data_config
>>> from timm.data.transforms_factory import create_transform

>>> config = resolve_data_config({}, model=model)
>>> transform = create_transform(**config)

>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
>>> urllib.request.urlretrieve(url, filename)
>>> img = Image.open(filename).convert('RGB')
>>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension
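
resolve_data_config reads the preprocessing settings (input size, interpolation, normalization statistics, crop fraction) from the model's pretrained configuration, and create_transform builds the matching transform. You can print the config to inspect it; the values below are illustrative and depend on the checkpoint:

>>> print(config)
>>> # e.g.: {'input_size': (3, 256, 256), 'interpolation': 'bilinear',
>>> #        'mean': (0.485, 0.456, 0.406), 'std': (0.229, 0.224, 0.225),
>>> #        'crop_pct': 0.875}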

To get the model predictions:


>>> import torch
>>> with torch.no_grad():
...     out = model(tensor)
>>> probabilities = torch.nn.functional.softmax(out[0], dim=0)
>>> print(probabilities.shape)
>>> # prints: torch.Size([1000])

To get the top-5 predictions class names:


>>> # Get imagenet class mappings
>>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
>>> urllib.request.urlretrieve(url, filename) 
>>> with open("imagenet_classes.txt", "r") as f:
...     categories = [s.strip() for s in f.readlines()]

>>> # Print top categories per image
>>> top5_prob, top5_catid = torch.topk(probabilities, 5)
>>> for i in range(top5_prob.size(0)):
...     print(categories[top5_catid[i]], top5_prob[i].item())
>>> # prints class names and probabilities, one per line, e.g.:
>>> # Samoyed 0.6425196528434753
>>> # Pomeranian 0.04062102362513542
>>> # keeshond 0.03186424449086189
>>> # white wolf 0.01739676296710968
>>> # Eskimo dog 0.011717947199940681

Replace the model name with the variant you want to use, e.g. resnest101e. You can find the IDs in the model summaries at the top of this page.

To extract image features with this model, follow the timm feature extraction examples; just change the name of the model you want to use.
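
To see which ResNeSt variants your installed version of timm provides, you can list them with a wildcard pattern (the exact output depends on your timm version):

>>> import timm
>>> timm.list_models('resnest*', pretrained=True)
>>> # e.g.: ['resnest14d', 'resnest26d', 'resnest50d', 'resnest101e', ...]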

How do I finetune this model?

You can finetune any of the pre-trained models just by changing the classifier (the last layer).


>>> model = timm.create_model('resnest101e', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
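
If the model is already instantiated, timm models also provide reset_classifier, which swaps the classification head in place (NUM_FINETUNE_CLASSES is a placeholder, as above):

>>> model.reset_classifier(NUM_FINETUNE_CLASSES)  # replaces the final classifier layer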

To finetune on your own dataset, you have to write a training loop or adapt timm's training script to use your dataset.

How do I train this model?

You can follow the timm recipe scripts for training a new model afresh.

Citation


@misc{zhang2020resnest,
      title={ResNeSt: Split-Attention Networks}, 
      author={Hang Zhang and Chongruo Wu and Zhongyue Zhang and Yi Zhu and Haibin Lin and Zhi Zhang and Yue Sun and Tong He and Jonas Mueller and R. Manmatha and Mu Li and Alexander Smola},
      year={2020},
      eprint={2004.08955},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
