# AutoProcessor

### AutoProcessor

#### class transformers.AutoProcessor

[\<source>](https://github.com/huggingface/transformers/blob/v4.34.1/src/transformers/models/auto/processing_auto.py#L118)

( )

This is a generic processor class that will be instantiated as one of the processor classes of the library when created with the [AutoProcessor.from\_pretrained()](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/auto#transformers.AutoProcessor.from_pretrained) class method.

This class cannot be instantiated directly using `__init__()` (throws an error).
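As a minimal sketch of that behavior (the exact error message may vary between versions):

```python
from transformers import AutoProcessor

# AutoProcessor is only a dispatch entry point; calling the constructor
# directly raises an EnvironmentError pointing you to from_pretrained().
try:
    processor = AutoProcessor()
except EnvironmentError as err:
    print(f"Direct instantiation failed: {err}")
```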

**from\_pretrained**

[\<source>](https://github.com/huggingface/transformers/blob/v4.34.1/src/transformers/models/auto/processing_auto.py#L132)

( pretrained\_model\_name\_or\_path, \*\*kwargs )

Parameters

* **pretrained\_model\_name\_or\_path** (`str` or `os.PathLike`) — This can be either:
  * a string, the *model id* of a pretrained feature\_extractor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
  * a path to a *directory* containing processor files saved using the `save_pretrained()` method, e.g., `./my_model_directory/`.
* **cache\_dir** (`str` or `os.PathLike`, *optional*) — Path to a directory in which a downloaded pretrained model feature extractor should be cached if the standard cache should not be used.
* **force\_download** (`bool`, *optional*, defaults to `False`) — Whether or not to force to (re-)download the feature extractor files and override the cached versions if they exist.
* **resume\_download** (`bool`, *optional*, defaults to `False`) — Whether or not to delete an incompletely received file. Attempts to resume the download if such a file exists.
* **proxies** (`Dict[str, str]`, *optional*) — A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
* **token** (`str` or `bool`, *optional*) — The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
* **revision** (`str`, *optional*, defaults to `"main"`) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.
* **return\_unused\_kwargs** (`bool`, *optional*, defaults to `False`) — If `False`, then this function returns just the final feature extractor object. If `True`, then this functions returns a `Tuple(feature_extractor, unused_kwargs)` where *unused\_kwargs* is a dictionary consisting of the key/value pairs whose keys are not feature extractor attributes: i.e., the part of `kwargs` which has not been used to update `feature_extractor` and is otherwise ignored.
* **trust\_remote\_code** (`bool`, *optional*, defaults to `False`) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
* **kwargs** (`Dict[str, Any]`, *optional*) — The values in kwargs of any keys which are feature extractor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are *not* feature extractor attributes is controlled by the `return_unused_kwargs` keyword parameter.

Instantiate one of the processor classes of the library from a pretrained model.

The processor class to instantiate is selected based on the `model_type` property of the config object (either passed as an argument or loaded from `pretrained_model_name_or_path` if possible):

* **align** — [AlignProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/align#transformers.AlignProcessor) (ALIGN model)
* **altclip** — [AltCLIPProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/altclip#transformers.AltCLIPProcessor) (AltCLIP model)
* **bark** — [BarkProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/bark#transformers.BarkProcessor) (Bark model)
* **blip** — [BlipProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/blip#transformers.BlipProcessor) (BLIP model)
* **blip-2** — [Blip2Processor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/blip-2#transformers.Blip2Processor) (BLIP-2 model)
* **bridgetower** — [BridgeTowerProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/bridgetower#transformers.BridgeTowerProcessor) (BridgeTower model)
* **chinese\_clip** — [ChineseCLIPProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/chinese_clip#transformers.ChineseCLIPProcessor) (Chinese-CLIP model)
* **clap** — [ClapProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/clap#transformers.ClapProcessor) (CLAP model)
* **clip** — [CLIPProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/clip#transformers.CLIPProcessor) (CLIP model)
* **clipseg** — [CLIPSegProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/clipseg#transformers.CLIPSegProcessor) (CLIPSeg model)
* **flava** — [FlavaProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/flava#transformers.FlavaProcessor) (FLAVA model)
* **git** — [GitProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/git#transformers.GitProcessor) (GIT model)
* **groupvit** — [CLIPProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/clip#transformers.CLIPProcessor) (GroupViT model)
* **hubert** — [Wav2Vec2Processor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor) (Hubert model)
* **idefics** — [IdeficsProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/idefics#transformers.IdeficsProcessor) (IDEFICS model)
* **instructblip** — [InstructBlipProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/instructblip#transformers.InstructBlipProcessor) (InstructBLIP model)
* **layoutlmv2** — [LayoutLMv2Processor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/layoutlmv2#transformers.LayoutLMv2Processor) (LayoutLMv2 model)
* **layoutlmv3** — [LayoutLMv3Processor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/layoutlmv3#transformers.LayoutLMv3Processor) (LayoutLMv3 model)
* **markuplm** — [MarkupLMProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/markuplm#transformers.MarkupLMProcessor) (MarkupLM model)
* **mctct** — [MCTCTProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mctct#transformers.MCTCTProcessor) (M-CTC-T model)
* **mgp-str** — [MgpstrProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mgp-str#transformers.MgpstrProcessor) (MGP-STR model)
* **oneformer** — [OneFormerProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/oneformer#transformers.OneFormerProcessor) (OneFormer model)
* **owlvit** — [OwlViTProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/owlvit#transformers.OwlViTProcessor) (OWL-ViT model)
* **pix2struct** — [Pix2StructProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/pix2struct#transformers.Pix2StructProcessor) (Pix2Struct model)
* **pop2piano** — [Pop2PianoProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/pop2piano#transformers.Pop2PianoProcessor) (Pop2Piano model)
* **sam** — [SamProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/sam#transformers.SamProcessor) (SAM model)
* **sew** — [Wav2Vec2Processor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor) (SEW model)
* **sew-d** — [Wav2Vec2Processor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor) (SEW-D model)
* **speech\_to\_text** — [Speech2TextProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/speech_to_text#transformers.Speech2TextProcessor) (Speech2Text model)
* **speech\_to\_text\_2** — [Speech2Text2Processor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/speech_to_text_2#transformers.Speech2Text2Processor) (Speech2Text2 model)
* **speecht5** — [SpeechT5Processor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/speecht5#transformers.SpeechT5Processor) (SpeechT5 model)
* **trocr** — [TrOCRProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/trocr#transformers.TrOCRProcessor) (TrOCR model)
* **tvlt** — [TvltProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/tvlt#transformers.TvltProcessor) (TVLT model)
* **unispeech** — [Wav2Vec2Processor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor) (UniSpeech model)
* **unispeech-sat** — [Wav2Vec2Processor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor) (UniSpeechSat model)
* **vilt** — [ViltProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/vilt#transformers.ViltProcessor) (ViLT model)
* **vision-text-dual-encoder** — [VisionTextDualEncoderProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderProcessor) (VisionTextDualEncoder model)
* **wav2vec2** — [Wav2Vec2Processor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor) (Wav2Vec2 model)
* **wav2vec2-conformer** — [Wav2Vec2Processor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor) (Wav2Vec2-Conformer model)
* **wavlm** — [Wav2Vec2Processor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor) (WavLM model)
* **whisper** — [WhisperProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/whisper#transformers.WhisperProcessor) (Whisper model)
* **xclip** — [XCLIPProcessor](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/xclip#transformers.XCLIPProcessor) (X-CLIP model)

Passing `token=True` is required when you want to use a private model.

Examples:


```
>>> from transformers import AutoProcessor

>>> # Download processor from huggingface.co and cache.
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")

>>> # If processor files are in a directory (e.g. processor was saved using *save_pretrained('./test/saved_model/')*)
>>> # processor = AutoProcessor.from_pretrained("./test/saved_model/")
```

**register**

[\<source>](https://github.com/huggingface/transformers/blob/v4.34.1/src/transformers/models/auto/processing_auto.py#L321)

( config\_class, processor\_class, exist\_ok = False )

Parameters

* **config\_class** ([PretrainedConfig](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/configuration#transformers.PretrainedConfig)) — The configuration corresponding to the model to register.
* **processor\_class** (`ProcessorMixin`) — The processor to register.
* **exist\_ok** (`bool`, *optional*, defaults to `False`) — If `True`, no error is raised when a processor is already registered for this configuration.

Register a new processor for this class.
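For illustration, registration might look like the following sketch; `CustomConfig` and `CustomProcessor` are hypothetical names, and a real processor would wrap a tokenizer and/or feature extractor rather than an empty stub:

```python
from transformers import AutoProcessor, PretrainedConfig
from transformers.processing_utils import ProcessorMixin

# Hypothetical config/processor pair, used only to show the mechanics.
class CustomConfig(PretrainedConfig):
    model_type = "custom-model"

class CustomProcessor(ProcessorMixin):
    # A real processor would declare its wrapped components here, e.g.
    # attributes = ["feature_extractor", "tokenizer"]; an empty list
    # keeps this stub self-contained.
    attributes = []

# Tell AutoProcessor to resolve CustomConfig to CustomProcessor.
AutoProcessor.register(CustomConfig, CustomProcessor)
```

After registration, `AutoProcessor.from_pretrained()` can dispatch to `CustomProcessor` whenever it loads a checkpoint whose config is a `CustomConfig`.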

