Run inference with pipelines
The pipeline() makes it simple to use any model from the Hub for inference on any language, computer vision, speech, and multimodal tasks. Even if you don't have experience with a specific modality or aren't familiar with the underlying code behind the models, you can still use them for inference with the pipeline()! This tutorial will teach you to:
Use a pipeline() for inference.
Use a specific tokenizer or model.
Use a pipeline() for audio, vision, and multimodal tasks.
Take a look at the pipeline() documentation for a complete list of supported tasks and available parameters.
While each task has an associated pipeline, it is simpler to use the general pipeline() abstraction, which contains all the task-specific pipelines. The pipeline() automatically loads a default model and a preprocessing class capable of inference for your task. Let's take the example of using the pipeline() for automatic speech recognition (ASR), or speech-to-text.
Start by creating a pipeline() and specify the inference task:
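A minimal sketch of this step; with no model specified, the pipeline downloads a default checkpoint for the task:

```python
from transformers import pipeline

# Create an automatic speech recognition pipeline; a default model is
# downloaded if none is specified.
transcriber = pipeline(task="automatic-speech-recognition")
```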
Pass your input to the pipeline(). In the case of speech recognition, this is an audio input file.
Let's give it a try here to see how it performs:
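A sketch of the call; the URL below is a publicly hosted sample clip used in the upstream examples, and any local path or URL to an audio file works. The exact transcription you get depends on the model:

```python
result = transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
print(result)
# e.g. {'text': 'I HAVE A DREAM ...'}
```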
If you have several inputs, you can pass them as a list:
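For example (again using publicly hosted sample clips; any mix of paths or URLs works, and the results come back in the same order):

```python
transcriber(
    [
        "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac",
        "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac",
    ]
)
```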
Pipelines are great for experimentation as switching from one model to another is trivial; however, there are some ways to optimize them for larger workloads than experimentation. See the guides on iterating over whole datasets and on using pipelines in a webserver.
The pipeline() also accepts a number of configuration parameters. Let's check out 3 important ones:
If you use device=n, the pipeline automatically puts the model on the specified device. This will work regardless of whether you are using PyTorch or TensorFlow.
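For example, to put the model on the first GPU (a sketch; device=0 assumes a GPU is available, and device=-1 or omitting the argument keeps the model on CPU):

```python
transcriber = pipeline(task="automatic-speech-recognition", device=0)
```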
The following code automatically loads and stores model weights across devices:
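A sketch (this requires the Accelerate package, as noted further below):

```python
# device_map="auto" lets Accelerate decide where each part of the model lives.
transcriber = pipeline(task="automatic-speech-recognition", device_map="auto")
```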
Note that if device_map="auto" is passed, you should not add the argument device=device when instantiating your pipeline, as you may encounter some unexpected behavior!
Pipelines do not batch inference by default, but if batching works in your use case, you can use:
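A sketch; the four sample URLs are illustrative, and batch_size=2 groups them into two batches of two:

```python
transcriber = pipeline(task="automatic-speech-recognition", device=0, batch_size=2)
audio_filenames = [
    f"https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/{i}.flac" for i in range(1, 5)
]
texts = transcriber(audio_filenames)
```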
This runs the pipeline on the 4 provided audio files, but it will pass them in batches of 2 to the model (which is on a GPU, where batching is more likely to help) without requiring any further code from you. The output should always match what you would have received without batching. It is only meant as a way to help you get more speed out of a pipeline.
When the return_timestamps parameter is enabled (see the task-specific parameters below), the model infers the text and also outputs when the various sentences were pronounced.
The pipeline can also run inference on a large dataset. The easiest way we recommend doing this is by using an iterator:
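A sketch using a text-generation pipeline; the gpt2 checkpoint and the data() generator are just illustrative, and device=0 assumes a GPU is available:

```python
from transformers import pipeline


def data():
    for i in range(1000):
        yield f"My example {i}"


pipe = pipeline(model="openai-community/gpt2", device=0)

generated_characters = 0
for out in pipe(data()):
    # Each `out` is the list of generated sequences for one input prompt.
    generated_characters += len(out[0]["generated_text"])
```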
Since batching could speed things up, it may be useful to try tuning the batch_size parameter here.
Creating an inference engine is a complex topic which deserves its own page.
Specify your task and pass your image to the classifier. The image can be a link, a local path or a base64-encoded image. For example, what species of cat is shown below?
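A sketch of an image-classification pipeline; the ViT checkpoint and the cat image URL are the ones used in the upstream example and are only illustrative here:

```python
from transformers import pipeline

vision_classifier = pipeline(model="google/vit-base-patch16-224")
preds = vision_classifier(
    images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
)
# Round the scores to make the output easier to read.
preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
print(preds)
```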
You can easily run pipeline on large models using Accelerate! First make sure you have installed it with pip install accelerate.
Then load your model using device_map="auto"! We will use facebook/opt-1.3b for our example.
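A sketch close to the upstream example (the bfloat16 dtype and sampling settings are illustrative):

```python
import torch
from transformers import pipeline

# device_map="auto" lets Accelerate spread the weights across the available devices.
pipe = pipeline(model="facebook/opt-1.3b", torch_dtype=torch.bfloat16, device_map="auto")
output = pipe("This is a cool example!", do_sample=True, top_p=0.95)
```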
You can also pass 8-bit loaded models if you install bitsandbytes and add the argument load_in_8bit=True:
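For example (a sketch; load_in_8bit is forwarded to the model via model_kwargs and requires bitsandbytes to be installed):

```python
pipe = pipeline(model="facebook/opt-1.3b", device_map="auto", model_kwargs={"load_in_8bit": True})
output = pipe("This is a cool example!", do_sample=True, top_p=0.95)
```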
Note that you can replace the checkpoint with any BOINC AI model that supports large model loading, such as BLOOM!
Not the result you had in mind? Check out some of the most downloaded automatic speech recognition models on the Hub to see if you can get a better transcription.
Let's try the Whisper large-v2 model from OpenAI. Whisper was released 2 years later than Wav2Vec2, and was trained on close to 10x more data. As such, it beats Wav2Vec2 on most downstream benchmarks. It also has the added benefit of predicting punctuation and casing, neither of which are possible with Wav2Vec2.
Now this result looks more accurate! For a deep-dive comparison on Wav2Vec2 vs Whisper, refer to the Audio Transformers Course. We really encourage you to check out the Hub for models in different languages, models specialized in your field, and more. You can check out and compare model results directly from your browser on the Hub to see if a model fits or handles corner cases better than other ones. And if you don't find a model for your use case, you can always start training your own!
pipeline() supports many parameters; some are task specific, and some are general to all pipelines. In general, you can specify parameters anywhere you want:
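A sketch using a text-generation pipeline; the checkpoint and the max_new_tokens parameter are illustrative. The point is that a parameter set at creation time becomes the default, and a call-time argument overrides it for that call only:

```python
from transformers import pipeline

generator = pipeline(task="text-generation", model="openai-community/gpt2", max_new_tokens=10)

out = generator("Hello, I'm a language model")                     # uses max_new_tokens=10
out = generator("Hello, I'm a language model", max_new_tokens=20)  # overrides it for this call
out = generator("Hello, I'm a language model")                     # back to max_new_tokens=10
```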
If the model is too large for a single GPU and you are using PyTorch, you can set device_map="auto" to automatically determine how to load and store the model weights. Using the device_map argument requires the Accelerate package:
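For example, to install or upgrade it:

```bash
pip install --upgrade accelerate
```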
By default, pipelines will not batch inference, for reasons explained in detail in the pipeline batching documentation. The reason is that batching is not necessarily faster, and can actually be quite a bit slower in some cases.
Pipelines can also alleviate some of the complexities of batching because, for some pipelines, a single item (like a long audio file) needs to be chunked into multiple parts to be processed by a model. The pipeline performs this for you.
All tasks provide task specific parameters which allow for additional flexibility and options to help you get your job done. For instance, the automatic speech recognition pipeline's call method has a return_timestamps parameter which sounds promising for subtitling videos:
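A sketch; openai/whisper-large-v2 is one checkpoint that supports sentence-level timestamps, and the sample URL and the output shown are only illustrative:

```python
transcriber = pipeline(model="openai/whisper-large-v2", return_timestamps=True)
transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
# Returns something like:
# {'text': ' I have a dream ...',
#  'chunks': [{'timestamp': (0.0, 11.88), 'text': ' I have a dream that one day ...'}, ...]}
```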
There are many parameters available for each task, so check out each task's API reference to see what you can tinker with! For instance, the automatic speech recognition pipeline has a chunk_length_s parameter which is helpful for working on really long audio files (for example, subtitling entire movies or hour-long videos) that a model typically cannot handle on its own:
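A sketch; the checkpoint is illustrative and the audio path is a placeholder for your own recording:

```python
# chunk_length_s splits long audio into 30-second chunks that the model can handle.
transcriber = pipeline(model="openai/whisper-large-v2", chunk_length_s=30)
transcriber("path/to/very_long_audio.mp3")  # placeholder path; use your own long file
```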
If you can't find a parameter that would really help you out, feel free to request it!
The iterator data() yields each result, and the pipeline automatically recognizes the input is iterable and will start fetching the data while it continues to process it on the GPU (this uses a DataLoader under the hood). This is important because you don't have to allocate memory for the whole dataset and you can feed the GPU as fast as possible.
The simplest way to iterate over a dataset is to just load one from the Datasets library:
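A sketch mirroring the upstream example; the tiny test checkpoint and dummy dataset are only there to keep the example small, and device=0 assumes a GPU:

```python
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset  # yields just the column we care about
from datasets import load_dataset

pipe = pipeline(model="hf-internal-testing/tiny-random-wav2vec2", device=0)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation[:10]")

for out in pipe(KeyDataset(dataset, "audio")):
    print(out)
```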
Using a pipeline() for vision tasks is practically identical.
Using a pipeline() for NLP tasks is practically identical.
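For example, a sketch of zero-shot classification; the checkpoint, input text, and candidate labels are illustrative:

```python
from transformers import pipeline

# facebook/bart-large-mnli is a zero-shot-classification model.
classifier = pipeline(model="facebook/bart-large-mnli")
classifier(
    "I have a problem with my iphone that needs to be resolved asap!!",
    candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
)
```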
The pipeline() supports more than one modality. For example, a visual question answering (VQA) task combines text and image. Feel free to use any image link you like and a question you want to ask about the image. The image can be a URL or a local path to the image.
For example, you could ask a question about an invoice image:
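A sketch of a document question answering pipeline; the checkpoint is one commonly used for this task, and the image path and question are placeholders:

```python
from transformers import pipeline

vqa = pipeline(model="impira/layoutlm-document-qa")
vqa(
    image="path/to/invoice.png",  # placeholder; any URL or local path to a document image works
    question="What is the invoice number?",
)
```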
To run the example above, you need to have pytesseract installed in addition to Transformers:
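For example, on a Debian/Ubuntu system:

```bash
sudo apt install -y tesseract-ocr
pip install pytesseract
```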