AutoImageProcessor
class transformers.AutoImageProcessor
( )
This is a generic image processor class that will be instantiated as one of the image processor classes of the library when created with the AutoImageProcessor.from_pretrained() class method.
This class cannot be instantiated directly using __init__() (throws an error).
from_pretrained
( pretrained_model_name_or_path, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — This can be either:
a string, the model id of a pretrained image processor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
a path to a directory containing an image processor file saved using the save_pretrained() method, e.g., ./my_model_directory/.
a path or url to a saved image processor JSON file, e.g., ./my_model_directory/preprocessor_config.json.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model image processor should be cached if the standard cache should not be used.
force_download (bool, optional, defaults to False) — Whether or not to force (re-)downloading the image processor files and override the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete an incompletely received file. Attempts to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
return_unused_kwargs (bool, optional, defaults to False) — If False, this function returns just the final image processor object. If True, this function returns a Tuple(image_processor, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not image processor attributes: i.e., the part of kwargs which has not been used to update image_processor and is otherwise ignored.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (Dict[str, Any], optional) — The values in kwargs of any keys which are image processor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not image processor attributes is controlled by the return_unused_kwargs keyword parameter (see the sketch after this parameter list).
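To make the kwargs-override and return_unused_kwargs behavior concrete, here is a minimal sketch. The checkpoint name, the do_normalize attribute, and the foo key are illustrative choices, not requirements of the API: do_normalize is a common image processor attribute and is therefore consumed, while foo is not and is returned in unused_kwargs.

```python
from transformers import AutoImageProcessor

# Keys matching image processor attributes (here: do_normalize) override the
# loaded configuration; unknown keys are returned separately when
# return_unused_kwargs=True.
image_processor, unused_kwargs = AutoImageProcessor.from_pretrained(
    "google/vit-base-patch16-224",  # any image model checkpoint works here
    do_normalize=False,
    foo=False,  # not an image processor attribute
    return_unused_kwargs=True,
)

print(image_processor.do_normalize)  # False
print(unused_kwargs)                 # {'foo': False}
```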
Instantiate one of the image processor classes of the library from a pretrained model image processor configuration.
The image processor class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
align — EfficientNetImageProcessor (ALIGN model)
beit — BeitImageProcessor (BEiT model)
bit — BitImageProcessor (BiT model)
blip — BlipImageProcessor (BLIP model)
blip-2 — BlipImageProcessor (BLIP-2 model)
bridgetower — BridgeTowerImageProcessor (BridgeTower model)
chinese_clip — ChineseCLIPImageProcessor (Chinese-CLIP model)
clip — CLIPImageProcessor (CLIP model)
clipseg — ViTImageProcessor (CLIPSeg model)
conditional_detr — ConditionalDetrImageProcessor (Conditional DETR model)
convnext — ConvNextImageProcessor (ConvNeXT model)
convnextv2 — ConvNextImageProcessor (ConvNeXTV2 model)
cvt — ConvNextImageProcessor (CvT model)
data2vec-vision — BeitImageProcessor (Data2VecVision model)
deformable_detr — DeformableDetrImageProcessor (Deformable DETR model)
deit — DeiTImageProcessor (DeiT model)
deta — DetaImageProcessor (DETA model)
detr — DetrImageProcessor (DETR model)
dinat — ViTImageProcessor (DiNAT model)
dinov2 — BitImageProcessor (DINOv2 model)
donut-swin — DonutImageProcessor (DonutSwin model)
dpt — DPTImageProcessor (DPT model)
efficientformer — EfficientFormerImageProcessor (EfficientFormer model)
efficientnet — EfficientNetImageProcessor (EfficientNet model)
flava — FlavaImageProcessor (FLAVA model)
focalnet — BitImageProcessor (FocalNet model)
git — CLIPImageProcessor (GIT model)
glpn — GLPNImageProcessor (GLPN model)
groupvit — CLIPImageProcessor (GroupViT model)
idefics — IdeficsImageProcessor (IDEFICS model)
imagegpt — ImageGPTImageProcessor (ImageGPT model)
instructblip — BlipImageProcessor (InstructBLIP model)
layoutlmv2 — LayoutLMv2ImageProcessor (LayoutLMv2 model)
layoutlmv3 — LayoutLMv3ImageProcessor (LayoutLMv3 model)
levit — LevitImageProcessor (LeViT model)
mask2former — Mask2FormerImageProcessor (Mask2Former model)
maskformer — MaskFormerImageProcessor (MaskFormer model)
mgp-str — ViTImageProcessor (MGP-STR model)
mobilenet_v1 — MobileNetV1ImageProcessor (MobileNetV1 model)
mobilenet_v2 — MobileNetV2ImageProcessor (MobileNetV2 model)
mobilevit — MobileViTImageProcessor (MobileViT model)
mobilevitv2 — MobileViTImageProcessor (MobileViTV2 model)
nat — ViTImageProcessor (NAT model)
nougat — NougatImageProcessor (Nougat model)
oneformer — OneFormerImageProcessor (OneFormer model)
owlvit — OwlViTImageProcessor (OWL-ViT model)
perceiver — PerceiverImageProcessor (Perceiver model)
pix2struct — Pix2StructImageProcessor (Pix2Struct model)
poolformer — PoolFormerImageProcessor (PoolFormer model)
pvt — PvtImageProcessor (PVT model)
regnet — ConvNextImageProcessor (RegNet model)
resnet — ConvNextImageProcessor (ResNet model)
sam — SamImageProcessor (SAM model)
segformer — SegformerImageProcessor (SegFormer model)
swiftformer — ViTImageProcessor (SwiftFormer model)
swin — ViTImageProcessor (Swin Transformer model)
swin2sr — Swin2SRImageProcessor (Swin2SR model)
swinv2 — ViTImageProcessor (Swin Transformer V2 model)
table-transformer — DetrImageProcessor (Table Transformer model)
timesformer — VideoMAEImageProcessor (TimeSformer model)
tvlt — TvltImageProcessor (TVLT model)
upernet — SegformerImageProcessor (UPerNet model)
van — ConvNextImageProcessor (VAN model)
videomae — VideoMAEImageProcessor (VideoMAE model)
vilt — ViltImageProcessor (ViLT model)
vit — ViTImageProcessor (ViT model)
vit_hybrid — ViTHybridImageProcessor (ViT Hybrid model)
vit_mae — ViTImageProcessor (ViTMAE model)
vit_msn — ViTImageProcessor (ViTMSN model)
vitmatte — VitMatteImageProcessor (ViTMatte model)
xclip — CLIPImageProcessor (X-CLIP model)
yolos — YolosImageProcessor (YOLOS model)
Passing token=True is required when you want to use a private model.
Examples:
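A minimal sketch of typical usage; the google/vit-base-patch16-224-in21k checkpoint, the local directory path, and the your-org/private-model repo name are illustrative, and any image model repo or directory written with save_pretrained() works in their place.

```python
from transformers import AutoImageProcessor

# Download the image processor configuration from huggingface.co and cache it.
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

# Load from a local directory that was written with save_pretrained().
# image_processor = AutoImageProcessor.from_pretrained("./my_model_directory/")

# For a private repo, pass token=True (uses the token stored by `huggingface-cli login`).
# image_processor = AutoImageProcessor.from_pretrained("your-org/private-model", token=True)
```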
register
( config_class, image_processor_class, exist_ok = False )
Parameters
config_class (PretrainedConfig) — The configuration corresponding to the model to register.
image_processor_class (ImageProcessingMixin) — The image processor to register.
Register a new image processor for this class.
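As a sketch of how registration fits together: CustomConfig and CustomImageProcessor below are hypothetical user-defined classes, assumed to subclass PretrainedConfig and BaseImageProcessor; real code would implement the actual preprocessing logic.

```python
from transformers import (
    AutoConfig,
    AutoImageProcessor,
    BaseImageProcessor,
    PretrainedConfig,
)

# Hypothetical custom config / image processor pair.
class CustomConfig(PretrainedConfig):
    model_type = "custom-model"

class CustomImageProcessor(BaseImageProcessor):
    pass

# Make the auto classes aware of the new model type.
AutoConfig.register("custom-model", CustomConfig)
AutoImageProcessor.register(CustomConfig, CustomImageProcessor)

# AutoImageProcessor.from_pretrained() can now resolve checkpoints whose
# config has model_type == "custom-model".
```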