Utilities for pipelines
This page lists all the utility functions the library provides for pipelines.
Most of these are only useful if you are studying the code of the pipelines in the library.
Argument handling
class transformers.pipelines.ArgumentHandler
( )
Base interface for handling arguments for each Pipeline.
class transformers.pipelines.ZeroShotClassificationArgumentHandler
( )
Handles arguments for zero-shot text classification by turning each possible label into an NLI premise/hypothesis pair.
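As a hedged illustration of the resulting behavior, the zero-shot classification pipeline accepts candidate labels and a hypothesis template; each label is expanded into a hypothesis such as "This example is technology." and scored against the input as an NLI premise (the checkpoint below is only an example):

```python
from transformers import pipeline

# Sketch only: each candidate label is expanded into an NLI hypothesis via the template.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "The new GPU doubles training throughput.",
    candidate_labels=["technology", "cooking", "politics"],
    hypothesis_template="This example is {}.",
)
print(result["labels"][0], result["scores"][0])  # highest-scoring label first
```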
class transformers.pipelines.QuestionAnsweringArgumentHandler
( )
QuestionAnsweringPipeline requires the user to provide multiple arguments (i.e. question & context) that are mapped to an internal SquadExample.
QuestionAnsweringArgumentHandler manages all the possible ways to create a SquadExample from the command-line supplied arguments.
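For context, here is a hedged sketch of the call patterns this handler normalizes; question/context keyword arguments, or dicts with the same keys, all end up as SquadExample objects:

```python
from transformers import pipeline

# Sketch only: both call styles below are accepted and converted to SquadExample internally.
qa = pipeline("question-answering")
answer = qa(
    question="Where is the company based?",
    context="The company is based in New York City.",
)
print(answer["answer"], answer["score"])

# Equivalent dict form:
answer = qa({"question": "Where is the company based?",
             "context": "The company is based in New York City."})
```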
Data format
class transformers.PipelineDataFormat
( output_path: typing.Optional[str], input_path: typing.Optional[str], column: typing.Optional[str], overwrite: bool = False )
Parameters
output_path (str, optional) — Where to save the outgoing data.
input_path (str, optional) — Where to look for the input data.
column (str, optional) — The column to read.
overwrite (bool, optional, defaults to False) — Whether or not to overwrite the output_path.
Base class for all the pipeline-supported data formats, both for reading and writing. Supported data formats currently include:
JSON
CSV
stdin/stdout (pipe)
PipelineDataFormat also includes some utilities to work with multi-column data, like mapping from dataset columns to pipeline keyword arguments through the dataset_kwarg_1=dataset_column_1 format.
from_str
( format: str, output_path: typing.Optional[str], input_path: typing.Optional[str], column: typing.Optional[str], overwrite = False ) → PipelineDataFormat
Parameters
format (str) — The format of the desired pipeline. Acceptable values are "json", "csv" or "pipe".
output_path (str, optional) — Where to save the outgoing data.
input_path (str, optional) — Where to look for the input data.
column (str, optional) — The column to read.
overwrite (bool, optional, defaults to False) — Whether or not to overwrite the output_path.
Returns
The proper data format.
Creates an instance of the right subclass of PipelineDataFormat depending on format.
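A minimal sketch of from_str, assuming hypothetical input.csv/predictions.csv paths; the column strings shown here are illustrative:

```python
from transformers import PipelineDataFormat

# "csv" selects CsvPipelineDataFormat; "json" and "pipe" select the other subclasses.
fmt = PipelineDataFormat.from_str(
    format="csv",
    output_path="predictions.csv",   # hypothetical output file
    input_path="input.csv",          # hypothetical input file with a "text" column
    column="text",
    overwrite=True,
)

# Multi-column mapping (illustrative names): pipeline kwarg on the left, dataset column on the right.
# column="question=question_column,context=context_column"
```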
save
( data: typing.Union[dict, typing.List[dict]] )
Parameters
data (dict or list of dict) — The data to store.
Save the provided data object with the representation for the current PipelineDataFormat.
save_binary
( data: typing.Union[dict, typing.List[dict]] ) → str
Parameters
data (dict or list of dict) — The data to store.
Returns
str
Path where the data has been saved.
Save the provided data object as pickle-formatted binary data on the disk.
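Continuing the hedged sketch above, save writes the records in the chosen format while save_binary pickles them to disk and returns the resulting path; the record layout is illustrative:

```python
records = [{"label": "POSITIVE", "score": 0.98}, {"label": "NEGATIVE", "score": 0.87}]

fmt.save(records)                        # written as CSV rows to predictions.csv
binary_path = fmt.save_binary(records)   # pickled to disk; the path is returned
print(binary_path)
```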
class transformers.CsvPipelineDataFormat
( output_path: typing.Optional[str], input_path: typing.Optional[str], column: typing.Optional[str], overwrite = False )
Parameters
output_path (str, optional) — Where to save the outgoing data.
input_path (str, optional) — Where to look for the input data.
column (str, optional) — The column to read.
overwrite (bool, optional, defaults to False) — Whether or not to overwrite the output_path.
Support for pipelines using CSV data format.
save
( data: typing.List[dict] )
Parameters
data (List[dict]) — The data to store.
Save the provided data object with the representation for the current PipelineDataFormat.
class transformers.JsonPipelineDataFormat
( output_path: typing.Optional[str], input_path: typing.Optional[str], column: typing.Optional[str], overwrite = False )
Parameters
output_path (str, optional) — Where to save the outgoing data.
input_path (str, optional) — Where to look for the input data.
column (str, optional) — The column to read.
overwrite (bool, optional, defaults to False) — Whether or not to overwrite the output_path.
Support for pipelines using JSON file format.
save
( data: dict )
Parameters
data (dict) — The data to store.
Save the provided data object in a JSON file.
class transformers.PipedPipelineDataFormat
( output_path: typing.Optional[str], input_path: typing.Optional[str], column: typing.Optional[str], overwrite: bool = False )
Parameters
output_path (str, optional) — Where to save the outgoing data.
input_path (str, optional) — Where to look for the input data.
column (str, optional) — The column to read.
overwrite (bool, optional, defaults to False) — Whether or not to overwrite the output_path.
Read data from piped input to the Python process. For multi-column data, columns should be separated by \t.
If columns are provided, then the output will be a dictionary with {column_x: value_x}.
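A hedged sketch of reading tab-separated lines from stdin; the column names are illustrative and map positionally onto the tab-separated fields:

```python
from transformers import PipedPipelineDataFormat

# Sketch only: with two column names, each tab-separated stdin line is yielded as
# {"question": <first field>, "context": <second field>}.
fmt = PipedPipelineDataFormat(
    output_path=None,   # results are printed to stdout by save()
    input_path=None,    # data is read from stdin
    column="question,context",
)
for example in fmt:
    print(example)
```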
save
( data: dict )
Parameters
data (dict) — The data to store.
Print the data.
Utilities
class transformers.pipelines.PipelineException
( task: str, model: str, reason: str )
Parameters
task (str) — The task of the pipeline.
model (str) — The model used by the pipeline.
reason (str) — The error message to display.
Raised by a Pipeline when handling __call__.
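A small hedged sketch of raising and inspecting the exception; the task, model, and reason values are placeholders:

```python
from transformers.pipelines import PipelineException

try:
    # Placeholder values; pipelines raise this internally when a call cannot be handled.
    raise PipelineException(
        task="text-classification",
        model="my-org/my-model",
        reason="Illustrative failure while handling __call__.",
    )
except PipelineException as err:
    print(err.task, err.model, err)
```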