Pipelines
The pipelines are a great and easy way to use models for inference. These pipelines are objects that abstract most of the complex code from the library, offering a simple API dedicated to several tasks, including Named Entity Recognition, Masked Language Modeling, Sentiment Analysis, Feature Extraction and Question Answering. See the task summary for examples of use.
There are two categories of pipeline abstractions to be aware of:
The pipeline(), which is the most powerful object encapsulating all other pipelines.
Task-specific pipelines are available for audio, computer vision, natural language processing, and multimodal tasks.
The pipeline abstraction
The pipeline abstraction is a wrapper around all the other available pipelines. It is instantiated like any other pipeline but can provide additional quality-of-life features.
Simple call on one item:
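A minimal sketch of a single-item call; the default checkpoint for the task is downloaded on first use, and the score shown is illustrative:

```python
from transformers import pipeline

pipe = pipeline("text-classification")
print(pipe("This restaurant is awesome"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```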
If you want to use a specific model from the hub, you can omit the task if the model on the hub already defines it:
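For example, assuming a checkpoint such as "roberta-large-mnli" is available on the hub and declares its own task:

```python
from transformers import pipeline

# No task argument needed: the checkpoint's metadata selects the pipeline.
pipe = pipeline(model="roberta-large-mnli")
print(pipe("This restaurant is awesome"))
```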
To call a pipeline on many items, you can call it with a list.
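A sketch of a list call; results come back as a list in the same order:

```python
from transformers import pipeline

pipe = pipeline("text-classification")
print(pipe(["This restaurant is awesome", "This restaurant is awful"]))
```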
To iterate over full datasets it is recommended to use a dataset directly. This means you don't need to allocate the whole dataset at once, nor do you need to do batching yourself. This should work just as fast as custom loops on GPU. If it doesn't, don't hesitate to create an issue.
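A sketch using the datasets library together with KeyDataset (a PyTorch-only utility from transformers.pipelines.pt_utils); the model and dataset names are assumptions for illustration:

```python
import datasets
from tqdm.auto import tqdm
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset

# device=0 assumes a GPU is available; drop it to run on CPU.
pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test")

# KeyDataset yields only the "file" column, so the pipeline streams one item at a time.
for out in tqdm(pipe(KeyDataset(dataset, "file"))):
    print(out)
```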
For ease of use, a generator is also possible:
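A sketch with a plain Python generator, which the pipeline consumes lazily:

```python
from transformers import pipeline

pipe = pipeline("text-classification")

def data():
    while True:
        # This could come from a dataset, a database, a queue, or an HTTP request.
        yield "This is a test"

for out in pipe(data()):
    print(out)
```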
transformers.pipeline
( task: str = None, model: typing.Union[str, ForwardRef('PreTrainedModel'), ForwardRef('TFPreTrainedModel'), NoneType] = None, config: typing.Union[str, transformers.configuration_utils.PretrainedConfig, NoneType] = None, tokenizer: typing.Union[str, transformers.tokenization_utils.PreTrainedTokenizer, ForwardRef('PreTrainedTokenizerFast'), NoneType] = None, feature_extractor: typing.Union[str, ForwardRef('SequenceFeatureExtractor'), NoneType] = None, image_processor: typing.Union[str, transformers.image_processing_utils.BaseImageProcessor, NoneType] = None, framework: typing.Optional[str] = None, revision: typing.Optional[str] = None, use_fast: bool = True, token: typing.Union[bool, str, NoneType] = None, device: typing.Union[int, str, ForwardRef('torch.device'), NoneType] = None, device_map = None, torch_dtype = None, trust_remote_code: typing.Optional[bool] = None, model_kwargs: typing.Dict[str, typing.Any] = None, pipeline_class: typing.Optional[typing.Any] = None, **kwargs ) → Pipeline
Parameters
task (str) — The task defining which pipeline will be returned. Currently accepted tasks are:
"audio-classification": will return an AudioClassificationPipeline.
"automatic-speech-recognition": will return an AutomaticSpeechRecognitionPipeline.
"conversational": will return a ConversationalPipeline.
"depth-estimation": will return a DepthEstimationPipeline.
"document-question-answering": will return a DocumentQuestionAnsweringPipeline.
"feature-extraction": will return a FeatureExtractionPipeline.
"fill-mask": will return a FillMaskPipeline.
"image-classification": will return an ImageClassificationPipeline.
"image-segmentation": will return an ImageSegmentationPipeline.
"image-to-image": will return an ImageToImagePipeline.
"image-to-text": will return an ImageToTextPipeline.
"mask-generation": will return a MaskGenerationPipeline.
"object-detection": will return an ObjectDetectionPipeline.
"question-answering": will return a QuestionAnsweringPipeline.
"summarization": will return a SummarizationPipeline.
"table-question-answering": will return a TableQuestionAnsweringPipeline.
"text2text-generation": will return a Text2TextGenerationPipeline.
"text-classification" (alias "sentiment-analysis" available): will return a TextClassificationPipeline.
"text-generation": will return a TextGenerationPipeline.
"text-to-audio" (alias "text-to-speech" available): will return a TextToAudioPipeline.
"token-classification" (alias "ner" available): will return a TokenClassificationPipeline.
"translation": will return a TranslationPipeline.
"translation_xx_to_yy": will return a TranslationPipeline.
"video-classification": will return a VideoClassificationPipeline.
"visual-question-answering": will return a VisualQuestionAnsweringPipeline.
"zero-shot-classification": will return a ZeroShotClassificationPipeline.
"zero-shot-image-classification": will return a ZeroShotImageClassificationPipeline.
"zero-shot-audio-classification": will return a ZeroShotAudioClassificationPipeline.
"zero-shot-object-detection": will return a ZeroShotObjectDetectionPipeline.
model (str or PreTrainedModel or TFPreTrainedModel, optional) — The model that will be used by the pipeline to make predictions. This can be a model identifier or an actual instance of a pretrained model inheriting from PreTrainedModel (for PyTorch) or TFPreTrainedModel (for TensorFlow). If not provided, the default for the task will be loaded.
config (str or PretrainedConfig, optional) — The configuration that will be used by the pipeline to instantiate the model. This can be a model identifier or an actual pretrained model configuration inheriting from PretrainedConfig. If not provided, the default configuration file for the requested model will be used. That means that if model is given, its default configuration will be used. However, if model is not supplied, this task's default model's config is used instead.
tokenizer (str or PreTrainedTokenizer, optional) — The tokenizer that will be used by the pipeline to encode data for the model. This can be a model identifier or an actual pretrained tokenizer inheriting from PreTrainedTokenizer. If not provided, the default tokenizer for the given model will be loaded (if it is a string). If model is not specified or not a string, then the default tokenizer for config is loaded (if it is a string). However, if config is also not given or not a string, then the default tokenizer for the given task will be loaded.
feature_extractor (str or PreTrainedFeatureExtractor, optional) — The feature extractor that will be used by the pipeline to encode data for the model. This can be a model identifier or an actual pretrained feature extractor inheriting from PreTrainedFeatureExtractor. Feature extractors are used for non-NLP models, such as speech or vision models, as well as multi-modal models. Multi-modal models will also require a tokenizer to be passed. If not provided, the default feature extractor for the given model will be loaded (if it is a string). If model is not specified or not a string, then the default feature extractor for config is loaded (if it is a string). However, if config is also not given or not a string, then the default feature extractor for the given task will be loaded.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
revision (str, optional, defaults to "main") — When passing a task name or a string model identifier: the specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on boincai.com, so revision can be any identifier allowed by git.
use_fast (bool, optional, defaults to True) — Whether or not to use a Fast tokenizer if possible (a PreTrainedTokenizerFast).
use_auth_token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running boincai-cli login (stored in ~/.boincai).
device (int or str or torch.device) — Defines the device (e.g., "cpu", "cuda:1", "mps", or a GPU ordinal rank like 1) on which this pipeline will be allocated.
device_map (str or Dict[str, Union[int, str, torch.device]], optional) — Sent directly as model_kwargs (just a simpler shortcut). When the accelerate library is present, set device_map="auto" to compute the most optimized device_map automatically (see here for more information). Do not use device_map AND device at the same time as they will conflict.
torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto").
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom code defined on the Hub in their own modeling, configuration, tokenization or even pipeline files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
model_kwargs (Dict[str, Any], optional) — Additional dictionary of keyword arguments passed along to the model's from_pretrained(..., **model_kwargs) function.
kwargs (Dict[str, Any], optional) — Additional keyword arguments passed along to the specific pipeline init (see the documentation for the corresponding pipeline class for possible values).
Returns
A suitable pipeline for the task.
Utility factory method to build a Pipeline.
Pipelines are made of:
A tokenizer in charge of mapping raw textual input to tokens.
A model to make predictions from the inputs.
Some (optional) post-processing for enhancing the model's output.
Examples:
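A sketch of the three instantiation styles accepted by the factory; the checkpoint names are assumptions for illustration:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Sentiment analysis pipeline, using the default model for the task.
analyzer = pipeline("sentiment-analysis")

# Question answering pipeline, specifying a checkpoint identifier.
oracle = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

# Named entity recognition pipeline, passing in model and tokenizer instances.
model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
recognizer = pipeline("ner", model=model, tokenizer=tokenizer)
```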
Pipeline batching
All pipelines can use batching. This will work whenever the pipeline uses its streaming ability (so when passing lists, a Dataset, or a generator).
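A sketch of batched streaming over a dataset; batch_size controls how many items are grouped per forward pass, and the dataset name is an assumption for illustration:

```python
import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset

# device=0 assumes a GPU is available; drop it to run on CPU.
pipe = pipeline("text-classification", device=0)
dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")

for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"):
    print(out)
```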
However, this is not automatically a win for performance. It can be either a 10x speedup or a 5x slowdown depending on hardware, data and the actual model being used.
Example where it’s mostly a speedup:
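A benchmark sketch on perfectly regular data (every item tokenizes to the same length), where batching tends to help; actual timings depend entirely on your hardware:

```python
from torch.utils.data import Dataset
from tqdm.auto import tqdm
from transformers import pipeline

pipe = pipeline("text-classification", device=0)  # assumes a GPU

class MyDataset(Dataset):
    def __len__(self):
        return 5000

    def __getitem__(self, i):
        return "This is a test"

dataset = MyDataset()

for batch_size in [1, 8, 64, 256]:
    print("-" * 30)
    print(f"Streaming batch_size={batch_size}")
    for out in tqdm(pipe(dataset, batch_size=batch_size), total=len(dataset)):
        pass
```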
Example where it's mostly a slowdown:
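A sketch of the pathological case: one item in 64 is drastically longer than the rest, so every batch containing it gets padded to the longest item:

```python
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __len__(self):
        return 5000

    def __getitem__(self, i):
        if i % 64 == 0:
            n = 100
        else:
            n = 1
        return "This is a test" * n
```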
This dataset has an occasional very long sentence compared to the others. In that case, the whole batch needs to be 400 tokens long, so the whole batch will be [64, 400] instead of [64, 4], leading to the high slowdown. Even worse, on bigger batches, the program simply crashes.
There are no good (general) solutions for this problem, and your mileage may vary depending on your use case. A rule of thumb for users:
Measure performance on your load, with your hardware. Measure, measure, and keep measuring. Real numbers are the only way to go.
If you are latency constrained (live product doing inference), don't batch.
If you are using CPU, don’t batch.
If you are optimizing for throughput (you want to run your model on a bunch of static data) on GPU, then:
If you have no clue about the size of the sequence_length ("natural" data), by default don't batch; measure and tentatively try to add it, and add OOM checks to recover when it fails (and it will fail at some point if you don't control the sequence_length).
If your sequence_length is super regular, then batching is more likely to be VERY interesting; measure and push it until you get OOMs.
The larger the GPU, the more likely batching is going to be interesting.
As soon as you enable batching, make sure you can handle OOMs nicely.
Pipeline chunk batching
zero-shot-classification and question-answering are slightly specific in the sense that a single input might yield multiple forward passes of a model. Under normal circumstances, this would yield issues with the batch_size argument.
In order to circumvent this issue, both of these pipelines are a bit specific: they are ChunkPipeline instead of regular Pipeline. In short:
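Schematically, a regular Pipeline does one forward pass per input (a sketch in terms of the preprocess/forward/postprocess hooks):

```python
preprocessed = pipe.preprocess(inputs)
model_outputs = pipe.forward(preprocessed)
outputs = pipe.postprocess(model_outputs)
```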
Now becomes:
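A sketch of the chunked flow, where preprocessing yields multiple chunks and each one triggers its own forward pass:

```python
all_model_outputs = []
for preprocessed in pipe.preprocess(inputs):
    model_outputs = pipe.forward(preprocessed)
    all_model_outputs.append(model_outputs)
outputs = pipe.postprocess(all_model_outputs)
```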
This should be very transparent to your code because the pipelines are used in the same way.
This is a simplified view, since the pipeline can handle the batching automatically. This means you don't have to care about how many forward passes your inputs will actually trigger; you can optimize batch_size independently of the inputs. The caveats from the previous section still apply.
Pipeline custom code
If you want to override a specific pipeline, don't hesitate to create an issue for your task at hand: the goal of the pipelines is to be easy to use and support most cases, so transformers could maybe support your use case. If you simply want to try, you can subclass your pipeline of choice:
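A sketch of such a subclass; the score rescaling is a purely hypothetical customization, and pipeline_class is the documented hook for plugging it in:

```python
from transformers import TextClassificationPipeline, pipeline

class MyPipeline(TextClassificationPipeline):
    def postprocess(self, model_outputs, **kwargs):
        # Hypothetical tweak: rescale scores to percentages.
        results = super().postprocess(model_outputs, **kwargs)
        for result in results if isinstance(results, list) else [results]:
            result["score"] *= 100
        return results

pipe = pipeline("text-classification", pipeline_class=MyPipeline)
print(pipe("This restaurant is awesome"))
```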
That should enable you to do all the custom code you want.
Implementing a pipeline
Audio
Pipelines available for audio tasks include the following.
AudioClassificationPipeline
class transformers.AudioClassificationPipeline
( *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use; for inference this is not always beneficial, please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Audio classification pipeline using any AutoModelForAudioClassification. This pipeline predicts the class of a raw waveform or an audio file. In case of an audio file, ffmpeg should be installed to support multiple audio formats.
Example:
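A minimal sketch, assuming a checkpoint such as "superb/wav2vec2-base-superb-ks" is available on the hub and "audio.flac" is a local file (ffmpeg must be installed to decode it):

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="superb/wav2vec2-base-superb-ks")
preds = classifier("audio.flac")
print(preds)  # e.g. [{'score': 0.99, 'label': '...'}, ...]
```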
Learn more about the basics of using a pipeline in the pipeline tutorial.
This pipeline can currently be loaded from pipeline() using the following task identifier: "audio-classification".
See the list of available models on boincai.com/models.
__call__
( inputs: typing.Union[numpy.ndarray, bytes, str], **kwargs ) → A list of dict with the following keys
Parameters
inputs (np.ndarray or bytes or str or dict) — The inputs is either:
str that is the filename of the audio file; the file will be read at the correct sampling rate to get the waveform using ffmpeg. This requires ffmpeg to be installed on the system.
bytes is supposed to be the content of an audio file and is interpreted by ffmpeg in the same way.
(np.ndarray of shape (n,) of type np.float32 or np.float64) Raw audio at the correct sampling rate (no further check will be done).
dict form can be used to pass raw audio sampled at arbitrary sampling_rate and let this pipeline do the resampling. The dict must be either in the format {"sampling_rate": int, "raw": np.array} or {"sampling_rate": int, "array": np.array}, where the key "raw" or "array" is used to denote the raw audio waveform.
top_k (int, optional, defaults to None) — The number of top labels that will be returned by the pipeline. If the provided number is None or higher than the number of labels available in the model configuration, it will default to the number of labels.
Returns
A list of dict with the following keys:
label (str) — The label predicted.
score (float) — The corresponding probability.
Classify the sequence(s) given as inputs. See the AudioClassificationPipeline documentation for more information.
AutomaticSpeechRecognitionPipeline
class transformers.AutomaticSpeechRecognitionPipeline
( model: PreTrainedModel, feature_extractor: typing.Union[ForwardRef('SequenceFeatureExtractor'), str] = None, tokenizer: typing.Optional[transformers.tokenization_utils.PreTrainedTokenizer] = None, decoder: typing.Union[ForwardRef('BeamSearchDecoderCTC'), str, NoneType] = None, modelcard: typing.Optional[transformers.modelcard.ModelCard] = None, framework: typing.Optional[str] = None, task: str = '', args_parser: ArgumentHandler = None, device: typing.Union[int, ForwardRef('torch.device')] = None, torch_dtype: typing.Union[str, ForwardRef('torch.dtype'), NoneType] = None, binary_output: bool = False, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
feature_extractor (SequenceFeatureExtractor) — The feature extractor that will be used by the pipeline to encode waveform for the model.
chunk_length_s (float, optional, defaults to 0) — The input length for each chunk. If chunk_length_s = 0 then chunking is disabled (default). For more information on how to effectively use chunk_length_s, please have a look at the ASR chunking blog post.
stride_length_s (float, optional, defaults to chunk_length_s / 6) — The length of stride on the left and right of each chunk. Used only with chunk_length_s > 0. This enables the model to see more context and infer letters better than without this context, but the pipeline discards the stride bits at the end to make the final reconstitution as perfect as possible. For more information on how to effectively use stride_length_s, please have a look at the ASR chunking blog post.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
device (Union[int, torch.device], optional) — Device ordinal for CPU/GPU support. Setting this to None will leverage CPU; a positive value will run the model on the associated CUDA device id.
decoder (pyctcdecode.BeamSearchDecoderCTC, optional) — PyCTCDecode's BeamSearchDecoderCTC can be passed for language model boosted decoding. See Wav2Vec2ProcessorWithLM for more information.
Pipeline that aims at extracting spoken text from audio. The input can be either a raw waveform or an audio file. In the case of an audio file, ffmpeg should be installed to support multiple audio formats.
Example:
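A minimal sketch, assuming a checkpoint such as "openai/whisper-base" is available on the hub and "audio.flac" is a local file:

```python
from transformers import pipeline

transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-base")
print(transcriber("audio.flac"))
# e.g. {'text': '...'}
```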
Learn more about the basics of using a pipeline in the pipeline tutorial.
__call__
( inputs: typing.Union[numpy.ndarray, bytes, str], **kwargs ) → Dict
Parameters
inputs (np.ndarray or bytes or str or dict) — The inputs is either:
str that is either the filename of a local audio file, or a public URL address to download the audio file from. The file will be read at the correct sampling rate to get the waveform using ffmpeg. This requires ffmpeg to be installed on the system.
bytes is supposed to be the content of an audio file and is interpreted by ffmpeg in the same way.
(np.ndarray of shape (n,) of type np.float32 or np.float64) Raw audio at the correct sampling rate (no further check will be done).
dict form can be used to pass raw audio sampled at arbitrary sampling_rate and let this pipeline do the resampling. The dict must be in the format {"sampling_rate": int, "raw": np.array}, optionally with a "stride": (left: int, right: int) entry that can ask the pipeline to treat the first left samples and last right samples as ignored in decoding (but used at inference to provide more context to the model). Only use stride with CTC models.
return_timestamps (optional, str or bool) — Only available for pure CTC models (Wav2Vec2, HuBERT, etc.) and the Whisper model. Not available for other sequence-to-sequence models.
For CTC models, timestamps can take one of two formats:
"char": the pipeline will return timestamps along the text for every character in the text. For instance, if you get [{"text": "h", "timestamp": (0.5, 0.6)}, {"text": "i", "timestamp": (0.7, 0.9)}], then it means the model predicts that the letter "h" was spoken after 0.5 and before 0.6 seconds.
"word": the pipeline will return timestamps along the text for every word in the text. For instance, if you get [{"text": "hi ", "timestamp": (0.5, 0.9)}, {"text": "there", "timestamp": (1.0, 1.5)}], then it means the model predicts that the word "hi" was spoken after 0.5 and before 0.9 seconds.
For the Whisper model, timestamps can take one of two formats:
"word": same as above for word-level CTC timestamps. Word-level timestamps are predicted through the dynamic-time warping (DTW) algorithm, an approximation to word-level timestamps by inspecting the cross-attention weights.
True: the pipeline will return timestamps along the text for segments of words in the text. For instance, if you get [{"text": " Hi there!", "timestamp": (0.5, 1.5)}], then it means the model predicts that the segment "Hi there!" was spoken after 0.5 and before 1.5 seconds. Note that a segment of text refers to a sequence of one or more words, rather than individual words as with word-level timestamps.
generate_kwargs (dict, optional) — The dictionary of ad-hoc parametrization of generate_config to be used for the generation call. For a complete overview of generate, check the following guide.
max_new_tokens (int, optional) — The maximum number of tokens to generate, ignoring the number of tokens in the prompt.
Returns
Dict
A dictionary with the following keys:
text (str): The recognized text.
chunks (optional, List[Dict]): When using return_timestamps, chunks will become a list containing all the various text chunks identified by the model, e.g. [{"text": "hi ", "timestamp": (0.5, 0.9)}, {"text": "there", "timestamp": (1.0, 1.5)}]. The original full text can roughly be recovered by doing "".join(chunk["text"] for chunk in output["chunks"]).
Transcribe the audio sequence(s) given as inputs to text. See the AutomaticSpeechRecognitionPipeline documentation for more information.
TextToAudioPipeline
class transformers.TextToAudioPipeline
( *args, vocoder = None, sampling_rate = None, **kwargs )
Text-to-audio generation pipeline using any AutoModelForTextToWaveform or AutoModelForTextToSpectrogram. This pipeline generates an audio file from an input text and optional other conditional inputs.
Example:
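A minimal sketch, assuming a checkpoint such as "suno/bark-small" is available on the hub:

```python
from transformers import pipeline

pipe = pipeline("text-to-speech", model="suno/bark-small")
output = pipe("Hello, this is a test.")
audio, sampling_rate = output["audio"], output["sampling_rate"]
```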
Learn more about the basics of using a pipeline in the pipeline tutorial.
This pipeline can currently be loaded from pipeline() using the following task identifiers: "text-to-speech" or "text-to-audio".
See the list of available models on boincai.com/models.
__call__
( text_inputs: typing.Union[str, typing.List[str]], **forward_params ) → A dict or a list of dict
Parameters
text_inputs (str or List[str]) — The text(s) to generate.
forward_params (optional) — Parameters passed to the model generation/forward method.
Returns
A dict or a list of dict
The dictionaries have two keys:
audio (np.ndarray of shape (nb_channels, audio_length)) — The generated audio waveform.
sampling_rate (int) — The sampling rate of the generated audio waveform.
Generates speech/audio from the inputs. See the TextToAudioPipeline documentation for more information.
ZeroShotAudioClassificationPipeline
class transformers.ZeroShotAudioClassificationPipeline
( **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use; for inference this is not always beneficial, please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Zero shot audio classification pipeline using ClapModel. This pipeline predicts the class of an audio when you provide an audio and a set of candidate_labels.
Example:
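A minimal sketch, assuming a checkpoint such as "laion/clap-htsat-unfused" is available on the hub and "audio.wav" is a local file:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-audio-classification", model="laion/clap-htsat-unfused")
preds = classifier("audio.wav", candidate_labels=["Sound of a dog", "Sound of vacuum cleaner"])
print(preds)
```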
Learn more about the basics of using a pipeline in the pipeline tutorial.
This audio classification pipeline can currently be loaded from pipeline() using the following task identifier: "zero-shot-audio-classification".
See the list of available models on boincai.com/models.
__call__
( audios: typing.Union[numpy.ndarray, bytes, str], **kwargs )
Parameters
audios (str, List[str], np.array or List[np.array]) — The pipeline handles three types of inputs:
A string containing a http link pointing to an audio
A string containing a local path to an audio
An audio loaded in numpy
candidate_labels (List[str]) — The candidate labels for this audio.
hypothesis_template (str, optional, defaults to "This is a sound of {}") — The sentence used in conjunction with candidate_labels to attempt the audio classification by replacing the placeholder with the candidate_labels. The likelihood is then estimated by using logits_per_audio.
Assign labels to the audio(s) passed as inputs.
Computer vision
Pipelines available for computer vision tasks include the following.
DepthEstimationPipeline
class transformers.DepthEstimationPipeline
( *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use; for inference this is not always beneficial, please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Depth estimation pipeline using any AutoModelForDepthEstimation. This pipeline predicts the depth of an image.
Example:
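A minimal sketch, assuming a checkpoint such as "Intel/dpt-large" is available on the hub and "image.jpg" is a local image:

```python
from transformers import pipeline

estimator = pipeline("depth-estimation", model="Intel/dpt-large")
result = estimator("image.jpg")
depth_map = result["depth"]  # typically a PIL.Image; result["predicted_depth"] holds the raw tensor
```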
Learn more about the basics of using a pipeline in the pipeline tutorial.
This depth estimation pipeline can currently be loaded from pipeline() using the following task identifier: "depth-estimation".
See the list of available models on boincai.com/models.
__call__
( images: typing.Union[str, typing.List[str], ForwardRef('Image.Image'), typing.List[ForwardRef('Image.Image')]], **kwargs )
Parameters
images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:
A string containing a http link pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images, which must then be passed as a string. Images in a batch must all be in the same format: all as http links, all as local paths, or all as PIL images.
top_k (int, optional, defaults to 5) — The number of top labels that will be returned by the pipeline. If the provided number is higher than the number of labels available in the model configuration, it will default to the number of labels.
timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.
Predict the depth of the image(s) passed as inputs.
ImageClassificationPipeline
class transformers.ImageClassificationPipeline
( *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use; for inference this is not always beneficial, please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Image classification pipeline using any AutoModelForImageClassification. This pipeline predicts the class of an image.
Example:
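A minimal sketch, assuming a checkpoint such as "google/vit-base-patch16-224" is available on the hub:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="google/vit-base-patch16-224")
preds = classifier("image.jpg")  # local path, http link, or PIL.Image
print(preds)  # e.g. [{'score': 0.9, 'label': '...'}, ...]
```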
Learn more about the basics of using a pipeline in the pipeline tutorial.
This image classification pipeline can currently be loaded from pipeline() using the following task identifier: "image-classification".
See the list of available models on boincai.com/models.
__call__
( images: typing.Union[str, typing.List[str], ForwardRef('Image.Image'), typing.List[ForwardRef('Image.Image')]], **kwargs )
Parameters
images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:
A string containing a http link pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images, which must then be passed as a string. Images in a batch must all be in the same format: all as http links, all as local paths, or all as PIL images.
top_k (int, optional, defaults to 5) — The number of top labels that will be returned by the pipeline. If the provided number is higher than the number of labels available in the model configuration, it will default to the number of labels.
timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.
Assign labels to the image(s) passed as inputs.
ImageSegmentationPipeline
class transformers.ImageSegmentationPipeline
( *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use; for inference this is not always beneficial, please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Image segmentation pipeline using any AutoModelForXXXSegmentation. This pipeline predicts masks of objects and their classes.
Example:
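A minimal sketch, assuming a checkpoint such as "facebook/detr-resnet-50-panoptic" is available on the hub:

```python
from transformers import pipeline

segmenter = pipeline("image-segmentation", model="facebook/detr-resnet-50-panoptic")
segments = segmenter("image.jpg")
for segment in segments:
    print(segment["label"], segment["score"])  # segment["mask"] is a PIL.Image mask
```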
This image segmentation pipeline can currently be loaded from pipeline() using the following task identifier: "image-segmentation".
See the list of available models on boincai.com/models.
__call__
( images, **kwargs )
Parameters
images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:
A string containing an HTTP(S) link pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. Images in a batch must all be in the same format: all as HTTP(S) links, all as local paths, or all as PIL images.
subtask (str, optional) — Segmentation task to be performed; choose from [semantic, instance and panoptic] depending on model capabilities. If not set, the pipeline will attempt to resolve in the following order: panoptic, instance, semantic.
threshold (float, optional, defaults to 0.9) — Probability threshold to filter out predicted masks.
mask_threshold (float, optional, defaults to 0.5) — Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.5) — Mask overlap threshold to eliminate small, disconnected segments.
timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.
Perform segmentation (detect masks & classes) in the image(s) passed as inputs.
ImageToImagePipeline
class transformers.ImageToImagePipeline
( *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use; for inference this is not always beneficial, please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Image to Image pipeline using any AutoModelForImageToImage. This pipeline generates an image based on a previous image input.
Example:
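A minimal sketch, assuming the super-resolution checkpoint "caidas/swin2SR-classical-sr-x2-64" is available on the hub:

```python
from transformers import pipeline

upscaler = pipeline("image-to-image", model="caidas/swin2SR-classical-sr-x2-64")
upscaled = upscaler("image.jpg")  # returns a PIL.Image for a single input
upscaled.save("image_upscaled.png")
```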
This image to image pipeline can currently be loaded from pipeline() using the following task identifier: "image-to-image".
See the list of available models on boincai.com/models.
__call__
( images: typing.Union[str, typing.List[str], ForwardRef('Image.Image'), typing.List[ForwardRef('Image.Image')]], **kwargs )
Parameters
images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:
A string containing a http link pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images, which must then be passed as a string. Images in a batch must all be in the same format: all as http links, all as local paths, or all as PIL images.
timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is used and the call may block forever.
Transform the image(s) passed as inputs.
ObjectDetectionPipeline
class transformers.ObjectDetectionPipeline
( *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use; for inference this is not always beneficial, please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Object detection pipeline using any AutoModelForObjectDetection. This pipeline predicts bounding boxes of objects and their classes.
Example:
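A minimal sketch, assuming a checkpoint such as "facebook/detr-resnet-50" is available on the hub:

```python
from transformers import pipeline

detector = pipeline("object-detection", model="facebook/detr-resnet-50")
detections = detector("image.jpg")
print(detections)
# e.g. [{'score': 0.99, 'label': 'cat', 'box': {'xmin': ..., 'ymin': ..., 'xmax': ..., 'ymax': ...}}]
```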
Learn more about the basics of using a pipeline in the pipeline tutorial.
This object detection pipeline can currently be loaded from pipeline() using the following task identifier: "object-detection".
See the list of available models on boincai.com/models.
__call__
( *args, **kwargs )
Parameters
images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:
A string containing an HTTP(S) link pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. Images in a batch must all be in the same format: all as HTTP(S) links, all as local paths, or all as PIL images.
threshold (float, optional, defaults to 0.9) — The probability necessary to make a prediction.
timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.
Detect objects (bounding boxes & classes) in the image(s) passed as inputs.
VideoClassificationPipeline
class transformers.VideoClassificationPipeline
( *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use; for inference this is not always beneficial, please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Video classification pipeline using any AutoModelForVideoClassification. This pipeline predicts the class of a video.
This video classification pipeline can currently be loaded from pipeline() using the following task identifier: "video-classification".
See the list of available models on boincai.com/models.
__call__
( videos: typing.Union[str, typing.List[str]], **kwargs )
Parameters
videos (str, List[str]) — The pipeline handles two types of videos:
A string containing a http link pointing to a video
A string containing a local path to a video
The pipeline accepts either a single video or a batch of videos, which must then be passed as a string. Videos in a batch must all be in the same format: all as http links or all as local paths.
top_k (int, optional, defaults to 5) — The number of top labels that will be returned by the pipeline. If the provided number is higher than the number of labels available in the model configuration, it will default to the number of labels.
num_frames (int, optional, defaults to self.model.config.num_frames) — The number of frames sampled from the video to run the classification on. If not provided, will default to the number of frames specified in the model configuration.
frame_sampling_rate (int, optional, defaults to 1) — The sampling rate used to select frames from the video. If not provided, will default to 1, i.e. every frame will be used.
Assign labels to the video(s) passed as inputs.
ZeroShotImageClassificationPipeline
class transformers.ZeroShotImageClassificationPipeline
( **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use; for inference this is not always beneficial, please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Zero shot image classification pipeline using CLIPModel. This pipeline predicts the class of an image when you provide an image and a set of candidate_labels.
Example:
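A minimal sketch, assuming a checkpoint such as "openai/clip-vit-base-patch32" is available on the hub:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-image-classification", model="openai/clip-vit-base-patch32")
preds = classifier("image.jpg", candidate_labels=["a photo of a cat", "a photo of a dog"])
print(preds)
```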
Learn more about the basics of using a pipeline in the pipeline tutorial.
This image classification pipeline can currently be loaded from pipeline() using the following task identifier: "zero-shot-image-classification".
See the list of available models on boincai.com/models.
__call__
( images: typing.Union[str, typing.List[str], ForwardRef('Image'), typing.List[ForwardRef('Image')]], **kwargs )
Parameters
images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:
A string containing a http link pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
candidate_labels (List[str]) — The candidate labels for this image.
hypothesis_template (str, optional, defaults to "This is a photo of {}") — The sentence used in conjunction with candidate_labels to attempt the image classification by replacing the placeholder with the candidate_labels. The likelihood is then estimated by using logits_per_image.
timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.
Assign labels to the image(s) passed as inputs.
ZeroShotObjectDetectionPipeline
class transformers.ZeroShotObjectDetectionPipeline
( **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use; for inference this is not always beneficial, please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Zero shot object detection pipeline using OwlViTForObjectDetection. This pipeline predicts bounding boxes of objects when you provide an image and a set of candidate_labels.
Example:
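A minimal sketch, assuming a checkpoint such as "google/owlvit-base-patch32" is available on the hub:

```python
from transformers import pipeline

detector = pipeline("zero-shot-object-detection", model="google/owlvit-base-patch32")
preds = detector("image.jpg", candidate_labels=["cat", "remote control"])
print(preds)
# e.g. [{'score': 0.9, 'label': 'cat', 'box': {'xmin': ..., 'ymin': ..., 'xmax': ..., 'ymax': ...}}]
```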
Learn more about the basics of using a pipeline in the pipeline tutorial.
This object detection pipeline can currently be loaded from pipeline() using the following task identifier: "zero-shot-object-detection".
See the list of available models on boincai.com/models.
__call__
( image: typing.Union[str, ForwardRef('Image.Image'), typing.List[typing.Dict[str, typing.Any]]], candidate_labels: typing.Union[str, typing.List[str]] = None, **kwargs )
Parameters
image (str, PIL.Image or List[Dict[str, Any]]) — The pipeline handles three types of images:
A string containing an http url pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
You can use this parameter to send directly a list of images, or a dataset or a generator, like so:
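A sketch of the batched dict form (matching the List[Dict[str, Any]] type above); the file names are placeholders:

```python
from transformers import pipeline

detector = pipeline("zero-shot-object-detection", model="google/owlvit-base-patch32")
inputs = [
    {"image": "image1.jpg", "candidate_labels": ["cat", "dog"]},
    {"image": "image2.jpg", "candidate_labels": ["car", "bicycle"]},
]
for preds in detector(inputs):
    print(preds)
```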
Detect objects (bounding boxes & classes) in the image(s) passed as inputs.
Natural Language Processing
Pipelines available for natural language processing tasks include the following.
ConversationalPipeline
class transformers.Conversation
( messages: typing.Union[str, typing.List[typing.Dict[str, str]]] = None, conversation_id: UUID = None, **deprecated_kwargs )
Parameters
messages (Union[str, List[Dict[str, str]]], optional) — The initial messages to start the conversation, either a string, or a list of dicts containing “role” and “content” keys. If a string is passed, it is interpreted as a single message with the “user” role.
conversation_id (uuid.UUID, optional) — Unique identifier for the conversation. If not provided, a random UUID4 id will be assigned to the conversation.
Utility class containing a conversation and its history. This class is meant to be used as an input to the ConversationalPipeline. The conversation contains several utility functions to manage the addition of new user inputs and generated model responses.
Usage:
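A sketch of typical usage, constructing a conversation and appending messages with add_message:

```python
from transformers import Conversation

conversation = Conversation("Going to the movies tonight - any suggestions?")
conversation.add_message({"role": "assistant", "content": "The Big Lebowski"})
conversation.add_message({"role": "user", "content": "Is it an action movie?"})
```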
add_user_input
( text: str, overwrite: bool = False )
Add a user input to the conversation for the next round. This is a legacy method that assumes that inputs must alternate user/assistant/user/assistant, and so will not add multiple user messages in succession. We recommend just using add_message with role "user" instead.
append_response
( response: str )
This is a legacy method. We recommend just using add_message with an appropriate role instead.
mark_processed
( )
This is a legacy method that no longer has any effect, as the Conversation no longer distinguishes between processed and unprocessed user input.
class transformers.ConversationalPipeline
( *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use; for inference this is not always beneficial, please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
min_length_for_response (int, optional, defaults to 32) — The minimum length (in number of tokens) for a response.
minimum_tokens (int, optional, defaults to 10) — The minimum length of tokens to leave for a response.
Multi-turn conversational pipeline.
Example:
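A minimal sketch using one of the DialoGPT checkpoints listed below:

```python
from transformers import Conversation, pipeline

chatbot = pipeline("conversational", model="microsoft/DialoGPT-medium")
conversation = Conversation("Going to the movies tonight - any suggestions?")
conversation = chatbot(conversation)  # the returned Conversation now includes the model's reply
```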
Learn more about the basics of using a pipeline in the pipeline tutorial
This conversational pipeline can currently be loaded from pipeline() using the following task identifier: "conversational".
The models that this pipeline can use are models that have been fine-tuned on a multi-turn conversational task, currently: ‘microsoft/DialoGPT-small’, ‘microsoft/DialoGPT-medium’, ‘microsoft/DialoGPT-large’. See the up-to-date list of available models on boincai.com/models.
__call__
( conversations: typing.Union[transformers.pipelines.conversational.Conversation, typing.List[transformers.pipelines.conversational.Conversation]], num_workers = 0, **kwargs ) → Conversation or a list of Conversation
Parameters
conversations (a Conversation or a list of Conversation) — Conversations to generate responses for.
clean_up_tokenization_spaces (bool, optional, defaults to False) — Whether or not to clean up the potential extra spaces in the text output.
generate_kwargs — Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework here).
Returns
Conversation or a list of Conversation
Conversation(s) with updated generated responses for those containing a new user input.
Generate responses for the conversation(s) given as inputs.
FillMaskPipeline
class transformers.FillMaskPipeline
( model: typing.Union[ForwardRef('PreTrainedModel'), ForwardRef('TFPreTrainedModel')], tokenizer: typing.Optional[transformers.tokenization_utils.PreTrainedTokenizer] = None, feature_extractor: typing.Optional[ForwardRef('SequenceFeatureExtractor')] = None, image_processor: typing.Optional[transformers.image_processing_utils.BaseImageProcessor] = None, modelcard: typing.Optional[transformers.modelcard.ModelCard] = None, framework: typing.Optional[str] = None, task: str = '', args_parser: ArgumentHandler = None, device: typing.Union[int, ForwardRef('torch.device')] = None, torch_dtype: typing.Union[str, ForwardRef('torch.dtype'), NoneType] = None, binary_output: bool = False, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, it will default to the one currently installed. If no framework is specified and both frameworks are installed, it will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use. For inference this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in binary format (i.e., pickle) or as raw text.
top_k (int, defaults to 5) — The number of predictions to return.
targets (str or List[str], optional) — When passed, the model will limit the scores to the passed targets instead of looking up in the whole vocab. If the provided targets are not in the model vocab, they will be tokenized and the first resulting token will be used (with a warning, and that might be slower).
Masked language modeling prediction pipeline using any ModelWithLMHead. See the masked language modeling examples for more information.
Example:
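A minimal sketch (the checkpoint name is illustrative; any fill-mask model works):

```python
from transformers import pipeline

fill_masker = pipeline(model="bert-base-uncased")
fill_masker("This is a simple [MASK].")  # returns the top predictions for the masked token
```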
Learn more about the basics of using a pipeline in the pipeline tutorial
This mask filling pipeline can currently be loaded from pipeline() using the following task identifier: "fill-mask".
The models that this pipeline can use are models that have been trained with a masked language modeling objective, which includes the bi-directional models in the library. See the up-to-date list of available models on boincai.com/models.
This pipeline only works for inputs with exactly one token masked. Experimental: We added support for multiple masks. The returned values are raw model output, and correspond to disjoint probabilities where one might expect joint probabilities (See discussion).
This pipeline now supports tokenizer_kwargs. For example try:
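For example, something along these lines (assuming a BERT-style checkpoint):

```python
from transformers import pipeline

fill_masker = pipeline(model="bert-base-uncased")
tokenizer_kwargs = {"truncation": True}  # forwarded to the tokenizer call
fill_masker("This is a simple [MASK]." + " ...test" * 500, tokenizer_kwargs=tokenizer_kwargs)
```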
__call__
( inputs, *args, **kwargs ) → A list or a list of list of dict
Parameters
args (str or List[str]) — One or several texts (or one list of prompts) with masked tokens.
targets (str or List[str], optional) — When passed, the model will limit the scores to the passed targets instead of looking up in the whole vocab. If the provided targets are not in the model vocab, they will be tokenized and the first resulting token will be used (with a warning, and that might be slower).
top_k (int, optional) — When passed, overrides the number of predictions to return.
Returns
A list or a list of list of dict
Each result comes as a list of dictionaries with the following keys:
sequence (str) — The corresponding input with the mask token prediction.
score (float) — The corresponding probability.
token (int) — The predicted token id (to replace the masked one).
token_str (str) — The predicted token (to replace the masked one).
Fill the masked token in the text(s) given as inputs.
NerPipeline
class transformers.TokenClassificationPipeline
( args_parser = <transformers.pipelines.token_classification.TokenClassificationArgumentHandler object>, *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, it will default to the one currently installed. If no framework is specified and both frameworks are installed, it will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use. For inference this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in binary format (i.e., pickle) or as raw text.
ignore_labels (List[str], defaults to ["O"]) — A list of labels to ignore.
grouped_entities (bool, optional, defaults to False) — DEPRECATED, use aggregation_strategy instead. Whether or not to group the tokens corresponding to the same entity together in the predictions.
stride (int, optional) — If stride is provided, the pipeline is applied on all the text. The text is split into chunks of size model_max_length. Works only with fast tokenizers and aggregation_strategy different from NONE. The value of this argument defines the number of overlapping tokens between chunks. In other words, the model will shift forward by tokenizer.model_max_length - stride tokens each step.
aggregation_strategy (str, optional, defaults to "none") — The strategy to fuse (or not) tokens based on the model prediction.
"none": Will not do any aggregation and simply return raw results from the model.
"simple": Will attempt to group entities following the default schema. (A, B-TAG), (B, I-TAG), (C, I-TAG), (D, B-TAG2), (E, B-TAG2) will end up being [{"word": "ABC", "entity": "TAG"}, {"word": "D", "entity": "TAG2"}, {"word": "E", "entity": "TAG2"}]. Notice that two consecutive B tags will end up as different entities. On word-based languages, we might end up splitting words undesirably: imagine Microsoft being tagged as [{"word": "Micro", "entity": "ENTERPRISE"}, {"word": "soft", "entity": "NAME"}]. Look at FIRST, MAX and AVERAGE for ways to mitigate that and disambiguate words (on languages that support that meaning, which is basically tokens separated by a space). These mitigations will only work on real words; "New york" might still be tagged with two different entities.
"first": (works only on word-based models) Will use the SIMPLE strategy, except that words cannot end up with different tags. Words will simply use the tag of the first token of the word when there is ambiguity.
"average": (works only on word-based models) Will use the SIMPLE strategy, except that words cannot end up with different tags. Scores will be averaged first across tokens, and then the maximum label is applied.
"max": (works only on word-based models) Will use the SIMPLE strategy, except that words cannot end up with different tags. The word entity will simply be the token with the maximum score.
Named Entity Recognition pipeline using any ModelForTokenClassification. See the named entity recognition examples for more information.
Example:
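A minimal sketch (the default checkpoint for the task is used; any token classification model works):

```python
from transformers import pipeline

token_classifier = pipeline(task="ner", aggregation_strategy="simple")
token_classifier("My name is Wolfgang and I live in Berlin")
# e.g. [{'entity_group': 'PER', 'word': 'Wolfgang', ...}, {'entity_group': 'LOC', 'word': 'Berlin', ...}]
```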
Learn more about the basics of using a pipeline in the pipeline tutorial
This token recognition pipeline can currently be loaded from pipeline() using the following task identifier: "ner" (for predicting the classes of tokens in a sequence: person, organisation, location or miscellaneous).
The models that this pipeline can use are models that have been fine-tuned on a token classification task. See the up-to-date list of available models on boincai.com/models.
aggregate_words
( entities: typing.List[dict], aggregation_strategy: AggregationStrategy )
Override tokens from a given word that disagree to force agreement on word boundaries.
Example: micro|soft| com|pany| B-ENT I-NAME I-ENT I-ENT will be rewritten with first strategy as microsoft| company| B-ENT I-ENT
gather_pre_entities
( sentence: str, input_ids: ndarray, scores: ndarray, offset_mapping: typing.Union[typing.List[typing.Tuple[int, int]], NoneType], special_tokens_mask: ndarray, aggregation_strategy: AggregationStrategy )
Fuse various numpy arrays into dicts with all the information needed for aggregation.
group_entities
( entities: typing.List[dict] )
Parameters
entities (dict) — The entities predicted by the pipeline.
Find and group together the adjacent tokens with the same entity predicted.
group_sub_entities
( entities: typing.List[dict] )
Parameters
entities (dict) — The entities predicted by the pipeline.
Group together the adjacent tokens with the same entity predicted.
See TokenClassificationPipeline for all details.
QuestionAnsweringPipeline
class transformers.QuestionAnsweringPipeline
( model: typing.Union[ForwardRef('PreTrainedModel'), ForwardRef('TFPreTrainedModel')], tokenizer: PreTrainedTokenizer, modelcard: typing.Optional[transformers.modelcard.ModelCard] = None, framework: typing.Optional[str] = None, task: str = '', **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, it will default to the one currently installed. If no framework is specified and both frameworks are installed, it will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use. For inference this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in binary format (i.e., pickle) or as raw text.
Question Answering pipeline using any ModelForQuestionAnswering. See the question answering examples for more information.
Example:
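A minimal sketch (the checkpoint name is illustrative; any extractive QA model works):

```python
from transformers import pipeline

oracle = pipeline(model="deepset/roberta-base-squad2")
oracle(question="Where do I live?", context="My name is Wolfgang and I live in Berlin")
# e.g. {'score': ..., 'start': 34, 'end': 40, 'answer': 'Berlin'}
```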
Learn more about the basics of using a pipeline in the pipeline tutorial
This question answering pipeline can currently be loaded from pipeline() using the following task identifier: "question-answering".
The models that this pipeline can use are models that have been fine-tuned on a question answering task. See the up-to-date list of available models on boincai.com/models.
__call__
( *args, **kwargs ) → A dict or a list of dict
Parameters
args (SquadExample or a list of SquadExample) — One or several SquadExample containing the question and context.
X (SquadExample or a list of SquadExample, optional) — One or several SquadExample containing the question and context (will be treated the same way as if passed as the first positional argument).
data (SquadExample or a list of SquadExample, optional) — One or several SquadExample containing the question and context (will be treated the same way as if passed as the first positional argument).
question (str or List[str]) — One or several question(s) (must be used in conjunction with the context argument).
context (str or List[str]) — One or several context(s) associated with the question(s) (must be used in conjunction with the question argument).
topk (int, optional, defaults to 1) — The number of answers to return (will be chosen by order of likelihood). Note that we return fewer than topk answers if there are not enough options available within the context.
doc_stride (int, optional, defaults to 128) — If the context is too long to fit with the question for the model, it will be split into several chunks with some overlap. This argument controls the size of that overlap.
max_answer_len (int, optional, defaults to 15) — The maximum length of predicted answers (e.g., only answers with a shorter length are considered).
max_seq_len (int, optional, defaults to 384) — The maximum length of the total sentence (context + question) in tokens of each chunk passed to the model. The context will be split into several chunks (using doc_stride as overlap) if needed.
max_question_len (int, optional, defaults to 64) — The maximum length of the question after tokenization. It will be truncated if needed.
handle_impossible_answer (bool, optional, defaults to False) — Whether or not we accept impossible as an answer.
align_to_words (bool, optional, defaults to True) — Attempts to align the answer to real words. Improves quality on space-separated languages. Might hurt on non-space-separated languages (like Japanese or Chinese).
Returns
A dict or a list of dict
Each result comes as a dictionary with the following keys:
score (float) — The probability associated to the answer.
start (int) — The character start index of the answer (in the tokenized version of the input).
end (int) — The character end index of the answer (in the tokenized version of the input).
answer (str) — The answer to the question.
Answer the question(s) given as inputs by using the context(s).
create_sample
( question: typing.Union[str, typing.List[str]], context: typing.Union[str, typing.List[str]] ) → One or a list of SquadExample
Parameters
question (str or List[str]) — The question(s) asked.
context (str or List[str]) — The context(s) in which we will look for the answer.
Returns
One or a list of SquadExample
The corresponding SquadExample grouping question and context.
QuestionAnsweringPipeline leverages the SquadExample internally. This helper method encapsulates all the logic for converting question(s) and context(s) to SquadExample.
We currently support extractive question answering.
span_to_answer
( text: str, start: int, end: int ) → Dictionary like {'answer': str, 'start': int, 'end': int}
Parameters
text (str) — The actual context to extract the answer from.
start (int) — The answer starting token index.
end (int) — The answer end token index.
Returns
Dictionary like {'answer': str, 'start': int, 'end': int}
When decoding from token probabilities, this method maps token indexes to actual words in the initial context.
SummarizationPipeline
class transformers.SummarizationPipeline
( *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, it will default to the one currently installed. If no framework is specified and both frameworks are installed, it will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use. For inference this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in binary format (i.e., pickle) or as raw text.
Summarize news articles and other documents.
This summarizing pipeline can currently be loaded from pipeline() using the following task identifier: "summarization".
The models that this pipeline can use are models that have been fine-tuned on a summarization task, which currently includes 'bart-large-cnn', 't5-small', 't5-base', 't5-large', 't5-3b' and 't5-11b'. See the up-to-date list of available models on boincai.com/models. For a list of available parameters, see the following documentation.
Usage:
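A minimal sketch of both usage styles (checkpoint names are illustrative):

```python
from transformers import pipeline

# use bart in pytorch
summarizer = pipeline("summarization")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)

# use t5 in tf
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base", framework="tf")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)
```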
__call__
( *args, **kwargs ) → A list or a list of list of dict
Parameters
documents (str or List[str]) — One or several articles (or one list of articles) to summarize.
return_text (bool, optional, defaults to True) — Whether or not to include the decoded texts in the outputs.
return_tensors (bool, optional, defaults to False) — Whether or not to include the tensors of predictions (as token indices) in the outputs.
clean_up_tokenization_spaces (bool, optional, defaults to False) — Whether or not to clean up the potential extra spaces in the text output.
generate_kwargs — Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework here).
Returns
A list or a list of list of dict
Each result comes as a dictionary with the following keys:
summary_text (str, present when return_text=True) — The summary of the corresponding input.
summary_token_ids (torch.Tensor or tf.Tensor, present when return_tensors=True) — The token ids of the summary.
Summarize the text(s) given as inputs.
TableQuestionAnsweringPipeline
class transformers.TableQuestionAnsweringPipeline
( args_parser = <transformers.pipelines.table_question_answering.TableQuestionAnsweringArgumentHandler object>, *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, it will default to the one currently installed. If no framework is specified and both frameworks are installed, it will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use. For inference this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in binary format (i.e., pickle) or as raw text.
Table Question Answering pipeline using a ModelForTableQuestionAnswering. This pipeline is only available in PyTorch.
Example:
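A minimal sketch (the checkpoint name is illustrative; a TAPAS-style model is assumed):

```python
import pandas as pd
from transformers import pipeline

table_qa = pipeline(task="table-question-answering", model="google/tapas-base-finetuned-wtq")
table = pd.DataFrame({"Repository": ["Transformers", "Datasets", "Tokenizers"], "Stars": ["36542", "4512", "3934"]})
table_qa(table=table, query="How many stars does the transformers repository have?")
```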
Learn more about the basics of using a pipeline in the pipeline tutorial
This tabular question answering pipeline can currently be loaded from pipeline() using the following task identifier: "table-question-answering".
The models that this pipeline can use are models that have been fine-tuned on a tabular question answering task. See the up-to-date list of available models on boincai.com/models.
__call__
( *args, **kwargs ) → A dictionary or a list of dictionaries containing results
Parameters
table (pd.DataFrame or Dict) — Pandas DataFrame or dictionary that will be converted to a DataFrame containing all the table values. See above for an example of dictionary.
query (str or List[str]) — Query or list of queries that will be sent to the model alongside the table.
sequential (bool, optional, defaults to False) — Whether to do inference sequentially or as a batch. Batching is faster, but models like SQA require the inference to be done sequentially to extract relations within sequences, given their conversational nature.
padding (bool, str or PaddingStrategy, optional, defaults to False) — Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or TapasTruncationStrategy, optional, defaults to False) — Activates and controls truncation. Accepts the following values:
True or 'drop_rows_to_fit': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate row by row, removing rows from the table.
False or 'do_not_truncate' (default): No truncation (i.e., can output a batch with sequence lengths greater than the model maximum admissible input size).
Returns
A dictionary or a list of dictionaries containing results
Each result is a dictionary with the following keys:
answer (str) — The answer of the query given the table. If there is an aggregator, the answer will be preceded by AGGREGATOR >.
coordinates (List[Tuple[int, int]]) — Coordinates of the cells of the answers.
cells (List[str]) — List of strings made up of the answer cell values.
aggregator (str) — If the model has an aggregator, this returns the aggregator.
Answers queries according to a table. The pipeline accepts several types of inputs which are detailed below:
pipeline(table, query)
pipeline(table, [query])
pipeline(table=table, query=query)
pipeline(table=table, query=[query])
pipeline({"table": table, "query": query})
pipeline({"table": table, "query": [query]})
pipeline([{"table": table, "query": query}, {"table": table, "query": query}])
The table argument should be a dict or a DataFrame built from that dict, containing the whole table:
Example:
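A sketch of such a dictionary (the values are illustrative); every key is a column name and every value is a column of cells:

```python
data = {
    "actors": ["brad pitt", "leonardo di caprio", "george clooney"],
    "age": ["56", "45", "59"],
    "number of movies": ["87", "53", "69"],
}
```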
This dictionary can be passed in as such, or can be converted to a pandas DataFrame:
Example:
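For instance:

```python
import pandas as pd

table = pd.DataFrame.from_dict(data)  # equivalent to passing the dict directly
```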
TextClassificationPipeline
class transformers.TextClassificationPipeline
( **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, it will default to the one currently installed. If no framework is specified and both frameworks are installed, it will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use. For inference this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in binary format (i.e., pickle) or as raw text.
return_all_scores (bool, optional, defaults to False) — Whether to return all prediction scores or just the one of the predicted class.
function_to_apply (str, optional, defaults to "default") — The function to apply to the model outputs in order to retrieve the scores. Accepts four different values:
"default": if the model has a single label, will apply the sigmoid function on the output. If the model has several labels, will apply the softmax function on the output.
"sigmoid": Applies the sigmoid function on the output.
"softmax": Applies the softmax function on the output.
"none": Does not apply any function on the output.
Text classification pipeline using any ModelForSequenceClassification. See the sequence classification examples for more information.
Example:
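A minimal sketch (the checkpoint name is illustrative; any sequence classification model works):

```python
from transformers import pipeline

classifier = pipeline(model="distilbert-base-uncased-finetuned-sst-2-english")
classifier("This movie is disgustingly good!")
# e.g. [{'label': 'POSITIVE', 'score': 1.0}]
```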
Learn more about the basics of using a pipeline in the pipeline tutorial
This text classification pipeline can currently be loaded from pipeline() using the following task identifier: "sentiment-analysis" (for classifying sequences according to positive or negative sentiments).
If multiple classification labels are available (model.config.num_labels >= 2), the pipeline will run a softmax over the results. If there is a single label, the pipeline will run a sigmoid over the result.
The models that this pipeline can use are models that have been fine-tuned on a sequence classification task. See the up-to-date list of available models on boincai.com/models.
__call__
( *args, **kwargs ) → A list or a list of list of dict
Parameters
args (str or List[str] or Dict[str], or List[Dict[str]]) — One or several texts to classify. In order to use text pairs for your classification, you can send a dictionary containing {"text", "text_pair"} keys, or a list of those.
top_k (int, optional, defaults to 1) — How many results to return.
function_to_apply (str, optional, defaults to "default") — The function to apply to the model outputs in order to retrieve the scores. If this argument is not specified, then it will apply the following functions according to the number of labels:
If the model has a single label, will apply the sigmoid function on the output.
If the model has several labels, will apply the softmax function on the output.
Possible values are:
"sigmoid": Applies the sigmoid function on the output.
"softmax": Applies the softmax function on the output.
"none": Does not apply any function on the output.
Returns
A list or a list of list of dict
Each result comes as a list of dictionaries with the following keys:
label (str) — The label predicted.
score (float) — The corresponding probability.
If top_k is used, one such dictionary is returned per label.
Classify the text(s) given as inputs.
TextGenerationPipeline
class transformers.TextGenerationPipeline
( *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, it will default to the one currently installed. If no framework is specified and both frameworks are installed, it will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use. For inference this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in binary format (i.e., pickle) or as raw text.
Language generation pipeline using any ModelWithLMHead. This pipeline predicts the words that will follow a specified text prompt.
Example:
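A minimal sketch (the checkpoint name is illustrative; any causal LM works):

```python
from transformers import pipeline

generator = pipeline(model="gpt2")
generator("I can't believe you did such a ", do_sample=False)
# greedy decoding; pass max_new_tokens, num_return_sequences, etc. as generate kwargs
```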
Learn more about the basics of using a pipeline in the pipeline tutorial. You can pass text generation parameters to this pipeline to control stopping criteria, decoding strategy, and more. Learn more about text generation parameters in Text generation strategies and Text generation.
This language generation pipeline can currently be loaded from pipeline() using the following task identifier: "text-generation".
The models that this pipeline can use are models that have been trained with an autoregressive language modeling objective, which includes the uni-directional models in the library (e.g. gpt2). See the list of available models on boincai.com/models.
__call__
( text_inputs, **kwargs ) → A list or a list of list of dict
Parameters
args (str or List[str]) — One or several prompts (or one list of prompts) to complete.
return_tensors (bool, optional, defaults to False) — Whether or not to return the tensors of predictions (as token indices) in the outputs. If set to True, the decoded text is not returned.
return_text (bool, optional, defaults to True) — Whether or not to return the decoded texts in the outputs.
return_full_text (bool, optional, defaults to True) — If set to False, only the added text is returned; otherwise the full text is returned. Only meaningful if return_text is set to True.
clean_up_tokenization_spaces (bool, optional, defaults to False) — Whether or not to clean up the potential extra spaces in the text output.
prefix (str, optional) — Prefix added to the prompt.
handle_long_generation (str, optional) — By default, this pipeline does not handle long generation (generation that exceeds, in one form or another, the model's maximum length). There is no perfect way to address this (for more info see https://github.com/boincai/transformers/issues/14033#issuecomment-948385227). This provides common strategies to work around the problem depending on your use case:
None: default strategy where nothing in particular happens.
"hole": Truncates the left of the input, and leaves a gap wide enough to let generation happen (this might truncate a lot of the prompt, and is not suitable when generation exceeds the model capacity).
generate_kwargs — Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework here).
Returns
A list or a list of list of dict
Returns one of the following dictionaries (cannot return a combination of both generated_text and generated_token_ids):
generated_text (str, present when return_text=True) — The generated text.
generated_token_ids (torch.Tensor or tf.Tensor, present when return_tensors=True) — The token ids of the generated text.
Complete the prompt(s) given as inputs.
Text2TextGenerationPipeline
class transformers.Text2TextGenerationPipeline
( *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, it will default to the one currently installed. If no framework is specified and both frameworks are installed, it will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use. For inference this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in binary format (i.e., pickle) or as raw text.
Pipeline for text to text generation using seq2seq models.
Example:
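A minimal sketch (the checkpoint name is illustrative; any seq2seq LM works):

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="t5-small")
generator("translate English to French: Hello, how are you?")
```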
Learn more about the basics of using a pipeline in the pipeline tutorial. You can pass text generation parameters to this pipeline to control stopping criteria, decoding strategy, and more. Learn more about text generation parameters in Text generation strategies and Text generation.
This Text2TextGenerationPipeline pipeline can currently be loaded from pipeline() using the following task identifier: "text2text-generation".
The models that this pipeline can use are models that have been fine-tuned on a translation task. See the up-to-date list of available models on boincai.com/models. For a list of available parameters, see the following documentation.
Usage:
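A sketch of the generic usage, letting the library resolve a default checkpoint for the task:

```python
from transformers import pipeline

text2text_generator = pipeline("text2text-generation")
text2text_generator("question: What is 42 ? context: 42 is the answer to life, the universe and everything")
```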
__call__
( *args, **kwargs ) → A list or a list of list of dict
Parameters
args (str or List[str]) — Input text for the encoder.
return_tensors (bool, optional, defaults to False) — Whether or not to include the tensors of predictions (as token indices) in the outputs.
return_text (bool, optional, defaults to True) — Whether or not to include the decoded texts in the outputs.
clean_up_tokenization_spaces (bool, optional, defaults to False) — Whether or not to clean up the potential extra spaces in the text output.
truncation (TruncationStrategy, optional, defaults to TruncationStrategy.DO_NOT_TRUNCATE) — The truncation strategy for the tokenization within the pipeline. TruncationStrategy.DO_NOT_TRUNCATE (default) will never truncate, but it is sometimes desirable to truncate the input to fit the model's max_length instead of throwing an error down the line.
generate_kwargs — Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework here).
Returns
A list or a list of list of dict
Each result comes as a dictionary with the following keys:
generated_text (str, present when return_text=True) — The generated text.
generated_token_ids (torch.Tensor or tf.Tensor, present when return_tensors=True) — The token ids of the generated text.
Generate the output text(s) using text(s) given as inputs.
check_inputs
( input_length: int, min_length: int, max_length: int )
Checks whether there might be something wrong with the given input with regard to the model.
TokenClassificationPipeline
class transformers.TokenClassificationPipeline
( args_parser = <transformers.pipelines.token_classification.TokenClassificationArgumentHandler object>, *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, it will default to the one currently installed. If no framework is specified and both frameworks are installed, it will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use. For inference this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in binary format (i.e., pickle) or as raw text.
ignore_labels (List[str], defaults to ["O"]) — A list of labels to ignore.
grouped_entities (bool, optional, defaults to False) — DEPRECATED, use aggregation_strategy instead. Whether or not to group the tokens corresponding to the same entity together in the predictions.
stride (int, optional) — If stride is provided, the pipeline is applied on all the text. The text is split into chunks of size model_max_length. Works only with fast tokenizers and aggregation_strategy different from NONE. The value of this argument defines the number of overlapping tokens between chunks. In other words, the model will shift forward by tokenizer.model_max_length - stride tokens each step.
aggregation_strategy (str, optional, defaults to "none") — The strategy to fuse (or not) tokens based on the model prediction.
"none": Will not do any aggregation and simply return raw results from the model.
"simple": Will attempt to group entities following the default schema. (A, B-TAG), (B, I-TAG), (C, I-TAG), (D, B-TAG2), (E, B-TAG2) will end up being [{"word": "ABC", "entity": "TAG"}, {"word": "D", "entity": "TAG2"}, {"word": "E", "entity": "TAG2"}]. Notice that two consecutive B tags will end up as different entities. On word-based languages, we might end up splitting words undesirably: imagine Microsoft being tagged as [{"word": "Micro", "entity": "ENTERPRISE"}, {"word": "soft", "entity": "NAME"}]. Look at FIRST, MAX and AVERAGE for ways to mitigate that and disambiguate words (on languages that support that meaning, which is basically tokens separated by a space). These mitigations will only work on real words; "New york" might still be tagged with two different entities.
"first": (works only on word-based models) Will use the SIMPLE strategy, except that words cannot end up with different tags. Words will simply use the tag of the first token of the word when there is ambiguity.
"average": (works only on word-based models) Will use the SIMPLE strategy, except that words cannot end up with different tags. Scores will be averaged first across tokens, and then the maximum label is applied.
"max": (works only on word-based models) Will use the SIMPLE strategy, except that words cannot end up with different tags. The word entity will simply be the token with the maximum score.
Named Entity Recognition pipeline using any ModelForTokenClassification. See the named entity recognition examples for more information.
Example:
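A minimal sketch (the checkpoint name is illustrative; any token classification model works):

```python
from transformers import pipeline

token_classifier = pipeline(model="dbmdz/bert-large-cased-finetuned-conll03-english", aggregation_strategy="simple")
token_classifier("Hello I'm Omar and I live in Zürich.")
```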
Learn more about the basics of using a pipeline in the pipeline tutorial
This token recognition pipeline can currently be loaded from pipeline() using the following task identifier: "ner" (for predicting the classes of tokens in a sequence: person, organisation, location or miscellaneous).
The models that this pipeline can use are models that have been fine-tuned on a token classification task. See the up-to-date list of available models on boincai.com/models.
__call__
( inputs: typing.Union[str, typing.List[str]], **kwargs ) → A list or a list of list of dict
Parameters
inputs (str or List[str]) — One or several texts (or one list of texts) for token classification.
Returns
A list or a list of list of dict
Each result comes as a list of dictionaries (one for each token in the corresponding input, or each entity if this pipeline was instantiated with an aggregation_strategy) with the following keys:
word (str) — The token/word classified. This is obtained by decoding the selected tokens. If you want to have the exact string in the original sentence, use start and end.
score (float) — The corresponding probability for entity.
entity (str) — The entity predicted for that token/word (it is named entity_group when aggregation_strategy is not "none").
index (int, only present when aggregation_strategy="none") — The index of the corresponding token in the sentence.
start (int, optional) — The index of the start of the corresponding entity in the sentence. Only exists if the offsets are available within the tokenizer.
end (int, optional) — The index of the end of the corresponding entity in the sentence. Only exists if the offsets are available within the tokenizer.
Classify each token of the text(s) given as inputs.
aggregate_words
( entities: typing.List[dict], aggregation_strategy: AggregationStrategy )
Override tokens from a given word that disagree to force agreement on word boundaries.
Example: micro|soft| com|pany| B-ENT I-NAME I-ENT I-ENT will be rewritten with first strategy as microsoft| company| B-ENT I-ENT
gather_pre_entities
( sentence: str, input_ids: ndarray, scores: ndarray, offset_mapping: typing.Union[typing.List[typing.Tuple[int, int]], NoneType], special_tokens_mask: ndarray, aggregation_strategy: AggregationStrategy )
Fuse various numpy arrays into dicts with all the information needed for aggregation.
group_entities
( entities: typing.List[dict] )
Parameters
entities (dict) — The entities predicted by the pipeline.
Find and group together the adjacent tokens with the same entity predicted.
group_sub_entities
( entities: typing.List[dict] )
Parameters
entities (dict) — The entities predicted by the pipeline.
Group together the adjacent tokens with the same entity predicted.
TranslationPipeline
class transformers.TranslationPipeline
( *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, it will default to the one currently installed. If no framework is specified and both frameworks are installed, it will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use. For inference this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in binary format (i.e., pickle) or as raw text.
Translates from one language to another.
This translation pipeline can currently be loaded from pipeline() using the following task identifier: "translation_xx_to_yy".
The models that this pipeline can use are models that have been fine-tuned on a translation task. See the up-to-date list of available models on boincai.com/models. For a list of available parameters, see the following documentation.
Usage:
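A minimal sketch of both invocation styles (the multilingual checkpoint and language codes are illustrative):

```python
from transformers import pipeline

# English to French, letting the task pick a default model
en_fr_translator = pipeline("translation_en_to_fr")
en_fr_translator("How old are you?")

# or pin a multilingual model and pass source/target languages
translator = pipeline("translation", model="facebook/nllb-200-distilled-600M", src_lang="eng_Latn", tgt_lang="fra_Latn")
translator("How old are you?")
```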
__call__
( *args, **kwargs ) → A list or a list of list of dict
Parameters
args (str or List[str]) — Texts to be translated.
return_tensors (bool, optional, defaults to False) — Whether or not to include the tensors of predictions (as token indices) in the outputs.
return_text (bool, optional, defaults to True) — Whether or not to include the decoded texts in the outputs.
clean_up_tokenization_spaces (bool, optional, defaults to False) — Whether or not to clean up the potential extra spaces in the text output.
src_lang (str, optional) — The language of the input. Might be required for multilingual models. Will not have any effect for single pair translation models.
tgt_lang (str, optional) — The language of the desired output. Might be required for multilingual models. Will not have any effect for single pair translation models.
generate_kwargs — Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework here).
Returns
A list or a list of list of dict
Each result comes as a dictionary with the following keys:
translation_text (str, present when return_text=True) — The translation.
translation_token_ids (torch.Tensor or tf.Tensor, present when return_tensors=True) — The token ids of the translation.
Translate the text(s) given as inputs.
ZeroShotClassificationPipeline
class transformers.ZeroShotClassificationPipeline
( args_parser = <transformers.pipelines.zero_shot_classification.ZeroShotClassificationArgumentHandler object>, *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, it will default to the one currently installed. If no framework is specified and both frameworks are installed, it will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use. For inference this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in binary format (i.e., pickle) or as raw text.
NLI-based zero-shot classification pipeline using a ModelForSequenceClassification trained on NLI (natural language inference) tasks. Equivalent of text-classification pipelines, but these models don't require a hardcoded number of potential classes; they can be chosen at runtime. It usually means it's slower, but it is much more flexible.
Any combination of sequences and labels can be passed and each combination will be posed as a premise/hypothesis pair and passed to the pretrained model. Then, the logit for entailment is taken as the logit for the candidate label being valid. Any NLI model can be used, but the id of the entailment label must be included in the model config's label2id.
Example:
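A minimal sketch (the checkpoint name is illustrative; any NLI model with an entailment label works):

```python
from transformers import pipeline

oracle = pipeline(model="facebook/bart-large-mnli")
oracle(
    "I have a problem with my iphone that needs to be resolved asap!!",
    candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
)
# e.g. {'sequence': ..., 'labels': ['urgent', 'phone', ...], 'scores': [...]}
```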
Learn more about the basics of using a pipeline in the pipeline tutorial
This NLI pipeline can currently be loaded from pipeline() using the following task identifier: "zero-shot-classification".
The models that this pipeline can use are models that have been fine-tuned on an NLI task. See the up-to-date list of available models on boincai.com/models.
__call__
( sequences: typing.Union[str, typing.List[str]], *args, **kwargs ) → A dict or a list of dict
Parameters
sequences (str or List[str]) — The sequence(s) to classify, will be truncated if the model input is too large.
candidate_labels (str or List[str]) — The set of possible class labels to classify each sequence into. Can be a single label, a string of comma-separated labels, or a list of labels.
hypothesis_template (str, optional, defaults to "This example is {}.") — The template used to turn each label into an NLI-style hypothesis. This template must include a {} or similar syntax for the candidate label to be inserted into the template. For example, the default template is "This example is {}." With the candidate label "sports", this would be fed into the model like "<cls> sequence to classify <sep> This example is sports . <sep>". The default template works well in many cases, but it may be worthwhile to experiment with different templates depending on the task setting.
multi_label (bool, optional, defaults to False) — Whether or not multiple candidate labels can be true. If False, the scores are normalized such that the sum of the label likelihoods for each sequence is 1. If True, the labels are considered independent and probabilities are normalized for each candidate by doing a softmax of the entailment score vs. the contradiction score.
Returns
A dict or a list of dict
Each result comes as a dictionary with the following keys:
sequence (str) — The sequence for which this is the output.
labels (List[str]) — The labels sorted by order of likelihood.
scores (List[float]) — The probabilities for each of the labels.
Classify the sequence(s) given as inputs. See the ZeroShotClassificationPipeline documentation for more information.
Multimodal
Pipelines available for multimodal tasks include the following.
DocumentQuestionAnsweringPipeline
class transformers.DocumentQuestionAnsweringPipeline
( *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use. For inference this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Document Question Answering pipeline using any AutoModelForDocumentQuestionAnswering. The inputs/outputs are similar to the (extractive) question answering pipeline; however, the pipeline takes an image (and optional OCR'd words/boxes) as input instead of text context.
Example:
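A minimal sketch, assuming pytesseract is installed for OCR; the checkpoint name and invoice image URL are purely illustrative:

```python
from transformers import pipeline

# LayoutLM-style checkpoints need OCR (pytesseract) unless word_boxes are given
document_qa = pipeline(
    "document-question-answering", model="impira/layoutlm-document-qa"
)
result = document_qa(
    image="https://templates.invoicehome.com/invoice-template-us-neat-750px.png",
    question="What is the invoice number?",
)
# e.g. [{"score": ..., "answer": "us-001", "start": 16, "end": 16}]
print(result)
```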
Learn more about the basics of using a pipeline in the pipeline tutorial.
This document question answering pipeline can currently be loaded from pipeline() using the following task identifier: "document-question-answering".
The models that this pipeline can use are models that have been fine-tuned on a document question answering task. See the up-to-date list of available models on boincai.com/models.
__call__
( image: typing.Union[ForwardRef('Image.Image'), str], question: typing.Optional[str] = None, word_boxes: typing.Tuple[str, typing.List[float]] = None, **kwargs ) → A dict or a list of dict
Parameters
image (str or PIL.Image) — The pipeline handles three types of images:
A string containing an http link pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. If given a single image, it can be broadcasted to multiple questions.
question (str) — A question to ask of the document.
word_boxes (List[str, Tuple[float, float, float, float]], optional) — A list of words and bounding boxes (normalized 0->1000). If you provide this optional input, then the pipeline will use these words and boxes instead of running OCR on the image to derive them for models that need them (e.g. LayoutLM). This allows you to reuse OCR'd results across many invocations of the pipeline without having to re-run it each time.
top_k (int, optional, defaults to 1) — The number of answers to return (will be chosen by order of likelihood). Note that we return less than top_k answers if there are not enough options available within the context.
doc_stride (int, optional, defaults to 128) — If the words in the document are too long to fit with the question for the model, it will be split in several chunks with some overlap. This argument controls the size of that overlap.
max_answer_len (int, optional, defaults to 15) — The maximum length of predicted answers (e.g., only answers with a shorter length are considered).
max_seq_len (int, optional, defaults to 384) — The maximum length of the total sentence (context + question) in tokens of each chunk passed to the model. The context will be split in several chunks (using doc_stride as overlap) if needed.
max_question_len (int, optional, defaults to 64) — The maximum length of the question after tokenization. It will be truncated if needed.
handle_impossible_answer (bool, optional, defaults to False) — Whether or not we accept impossible as an answer.
lang (str, optional) — Language to use while running OCR. Defaults to English.
tesseract_config (str, optional) — Additional flags to pass to tesseract while running OCR.
timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.
Returns
A dict or a list of dict
Each result comes as a dictionary with the following keys:
score (float) — The probability associated to the answer.
start (int) — The start word index of the answer (in the OCR'd version of the input or provided word_boxes).
end (int) — The end word index of the answer (in the OCR'd version of the input or provided word_boxes).
answer (str) — The answer to the question.
words (list[int]) — The index of each word/box pair that is in the answer.
Answer the question(s) given as inputs by using the document(s). A document is defined as an image and an optional list of (word, box) tuples which represent the text in the document. If the word_boxes are not provided, it will use the Tesseract OCR engine (if available) to extract the words and boxes automatically for LayoutLM-like models which require them as input. For Donut, no OCR is run.
You can invoke the pipeline in several ways (a word_boxes sketch follows this list):
pipeline(image=image, question=question)
pipeline(image=image, question=question, word_boxes=word_boxes)
pipeline([{"image": image, "question": question}])
pipeline([{"image": image, "question": question, "word_boxes": word_boxes}])
FeatureExtractionPipeline
class transformers.FeatureExtractionPipeline
( model: typing.Union[ForwardRef('PreTrainedModel'), ForwardRef('TFPreTrainedModel')], tokenizer: typing.Optional[transformers.tokenization_utils.PreTrainedTokenizer] = None, feature_extractor: typing.Optional[ForwardRef('SequenceFeatureExtractor')] = None, image_processor: typing.Optional[transformers.image_processing_utils.BaseImageProcessor] = None, modelcard: typing.Optional[transformers.modelcard.ModelCard] = None, framework: typing.Optional[str] = None, task: str = '', args_parser: ArgumentHandler = None, device: typing.Union[int, ForwardRef('torch.device')] = None, torch_dtype: typing.Union[str, ForwardRef('torch.dtype'), NoneType] = None, binary_output: bool = False, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
return_tensors (bool, optional) — If True, returns a tensor according to the specified framework, otherwise returns a list.
task (str, defaults to "") — A task-identifier for the pipeline.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on the associated CUDA device id.
tokenize_kwargs (dict, optional) — Additional dictionary of keyword arguments passed along to the tokenizer.
Feature extraction pipeline using no model head. This pipeline extracts the hidden states from the base transformer, which can be used as features in downstream tasks.
Example:
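A minimal sketch (the checkpoint is illustrative; since no task head is used, most checkpoints work):

```python
from transformers import pipeline

# Any base checkpoint can serve here; bert-base-cased is just an example
extractor = pipeline("feature-extraction", model="bert-base-cased")
features = extractor("This is a simple test.", return_tensors=True)
# One hidden-state vector per token: (batch, sequence_length, hidden_size)
print(features.shape)  # e.g. torch.Size([1, 8, 768])
```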
Learn more about the basics of using a pipeline in the pipeline tutorial.
This feature extraction pipeline can currently be loaded from pipeline() using the task identifier: "feature-extraction".
All models may be used for this pipeline. See a list of all models, including community-contributed models, on boincai.com/models.
__call__
( *args, **kwargs ) → A nested list of float
Parameters
args (str or List[str]) — One or several texts (or one list of texts) to get the features of.
Returns
A nested list of float
The features computed by the model.
Extract the features of the input(s).
ImageToTextPipeline
class transformers.ImageToTextPipeline
( *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use. For inference this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Image To Text pipeline using an AutoModelForVision2Seq. This pipeline predicts a caption for a given image.
Example:
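A minimal sketch (the checkpoint name and image path are illustrative):

```python
from transformers import pipeline

# "ydshieh/vit-gpt2-coco-en" is one example captioning checkpoint
captioner = pipeline("image-to-text", model="ydshieh/vit-gpt2-coco-en")
captioner("path/to/parrots.png")  # a local path, URL, or PIL.Image all work
# e.g. [{"generated_text": "two birds are standing next to each other"}]
```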
Learn more about the basics of using a pipeline in the pipeline tutorial.
This image to text pipeline can currently be loaded from pipeline() using the following task identifier: "image-to-text".
See the list of available models on boincai.com/models.
__call__
( images: typing.Union[str, typing.List[str], ForwardRef('Image.Image'), typing.List[ForwardRef('Image.Image')]], **kwargs ) → A list or a list of list of dict
Parameters
images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:
A string containing a HTTP(s) link pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images.
max_new_tokens (int, optional) — The maximum number of tokens to generate. By default, the generate default is used.
generate_kwargs (Dict, optional) — Pass these keyword arguments directly to generate, allowing full control of this function.
timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.
Returns
A list or a list of list of dict
Each result comes as a dictionary with the following key:
generated_text (str) — The generated text.
Generate text for the image(s) passed as inputs.
VisualQuestionAnsweringPipeline
class transformers.VisualQuestionAnsweringPipeline
( *args, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use. For inference this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Visual Question Answering pipeline using an AutoModelForVisualQuestionAnswering. This pipeline is currently only available in PyTorch.
Example:
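A minimal sketch (the checkpoint name and image path are illustrative):

```python
from transformers import pipeline

# "dandelin/vilt-b32-finetuned-vqa" is one example VQA checkpoint (PyTorch only)
vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")
vqa(image="path/to/image.jpg", question="What is the animal doing?")
# returns the top_k candidate answers with their scores, sorted by likelihood
```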
Learn more about the basics of using a pipeline in the pipeline tutorial.
This visual question answering pipeline can currently be loaded from pipeline() using the following task identifiers: "visual-question-answering", "vqa".
The models that this pipeline can use are models that have been fine-tuned on a visual question answering task. See the up-to-date list of available models on boincai.com/models.
__call__
( image: typing.Union[ForwardRef('Image.Image'), str], question: str = None, **kwargs ) → A dictionary or a list of dictionaries containing the result. The dictionaries contain the following keys
Parameters
image (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:
A string containing an http link pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. If given a single image, it can be broadcasted to multiple questions.
question (str, List[str]) — The question(s) asked. If given a single question, it can be broadcasted to multiple images.
top_k (int, optional, defaults to 5) — The number of top labels that will be returned by the pipeline. If the provided number is higher than the number of labels available in the model configuration, it will default to the number of labels.
timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.
Returns
A dictionary or a list of dictionaries containing the result. The dictionaries contain the following keys:
label (str) — The label identified by the model.
score (float) — The score attributed by the model to that label.
Answers open-ended questions about images. The pipeline accepts several types of inputs which are detailed below:
pipeline(image=image, question=question)
pipeline({"image": image, "question": question})
pipeline([{"image": image, "question": question}])
pipeline([{"image": image, "question": question}, {"image": image, "question": question}])
Parent class: Pipeline
class transformers.Pipeline
( model: typing.Union[ForwardRef('PreTrainedModel'), ForwardRef('TFPreTrainedModel')], tokenizer: typing.Optional[transformers.tokenization_utils.PreTrainedTokenizer] = None, feature_extractor: typing.Optional[ForwardRef('SequenceFeatureExtractor')] = None, image_processor: typing.Optional[transformers.image_processing_utils.BaseImageProcessor] = None, modelcard: typing.Optional[transformers.modelcard.ModelCard] = None, framework: typing.Optional[str] = None, task: str = '', args_parser: ArgumentHandler = None, device: typing.Union[int, ForwardRef('torch.device')] = None, torch_dtype: typing.Union[str, ForwardRef('torch.dtype'), NoneType] = None, binary_output: bool = False, **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a PyTorch model), the size of the batch to use. For inference this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU; a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
The Pipeline class is the class from which all pipelines inherit. Refer to this class for methods shared across different pipelines.
Base class implementing pipelined operations. Pipeline workflow is defined as a sequence of the following operations:
Input -> Tokenization -> Model Inference -> Post-Processing (task dependent) -> Output
Pipeline supports running on CPU or GPU through the device argument (see below).
Some pipelines, such as FeatureExtractionPipeline ('feature-extraction'), output large tensor objects as nested lists. To avoid dumping such large structures as textual data, we provide the binary_output constructor argument. If set to True, the output will be stored in the pickle format.
check_model_type
( supported_models: typing.Union[typing.List[str], dict] )
Parameters
supported_models (List[str] or dict) — The list of models supported by the pipeline, or a dictionary with model class values.
Check if the model class is supported by the pipeline.
device_placement
( )
Context manager allowing tensor allocation on the user-specified device in a framework-agnostic way.
Examples:
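A sketch of the intended usage, assuming a CUDA device 0 is available:

```python
from transformers import pipeline

# Explicitly request CUDA device 0 (assumes a GPU is present)
pipe = pipeline("text-classification", device=0)
with pipe.device_placement():
    # Framework-specific tensor allocations in this block happen on pipe.device
    output = pipe("This movie was great!")
```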
ensure_tensor_on_device
( **inputs ) → Dict[str, torch.Tensor]
Parameters
inputs (keyword arguments that should be torch.Tensor, the rest is ignored) — The tensors to place on self.device. Recursive on lists only.
Returns
Dict[str, torch.Tensor]
The same as inputs but on the proper device.
Ensure PyTorch tensors are on the specified device.
postprocess
( model_outputs: ModelOutput, **postprocess_parameters: typing.Dict )
Postprocess will receive the raw outputs of the _forward method, generally tensors, and reformat them into something more friendly. Generally it will output a list or a dict of results (containing just strings and numbers).
predict
( X )
Scikit / Keras interface to transformers' pipelines. This method will forward to __call__().
preprocess
( input_: typing.Any, **preprocess_parameters: typing.Dict )
Preprocess will take the input_ of a specific pipeline and return a dictionary of everything necessary for _forward to run properly. It should contain at least one tensor, but might have arbitrary other items.
save_pretrained
( save_directory: str, safe_serialization: bool = False )
Parameters
save_directory (str) — A path to the directory where the pipeline will be saved. It will be created if it doesn't exist.
safe_serialization (bool) — Whether to save the model using safetensors or the traditional way for PyTorch or TensorFlow.
Save the pipeline’s model and tokenizer.
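For instance (the directory name is arbitrary):

```python
from transformers import pipeline

pipe = pipeline("text-classification")
pipe.save_pretrained("./my_text_classifier")  # directory is created if missing

# The saved directory can later be passed back to pipeline() as a local model
reloaded = pipeline("text-classification", model="./my_text_classifier")
```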
transform
( X )
Scikit / Keras interface to transformers' pipelines. This method will forward to __call__().