TAPEX
This model is in maintenance mode only, so we won't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0. You can do so by running the following command: pip install -U transformers==4.30.0
The TAPEX model was proposed in TAPEX: Table Pre-training via Learning a Neural SQL Executor by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. TAPEX pre-trains a BART model to solve synthetic SQL queries, after which it can be fine-tuned to answer natural language questions related to tabular data, as well as to perform table fact checking.
TAPEX has been fine-tuned on several datasets:
SQA (Sequential Question Answering by Microsoft)
WTQ (Wiki Table Questions by Stanford University)
WikiSQL (by Salesforce)
TabFact (by UCSB NLP Lab).
The abstract from the paper is the following:
Recent progress in language model pre-training has achieved a great success via leveraging large-scale unstructured textual data. However, it is still a challenge to apply pre-training on structured tabular data due to the absence of large-scale high-quality tabular data. In this paper, we propose TAPEX to show that table pre-training can be achieved by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries and their execution outputs. TAPEX addresses the data scarcity challenge via guiding the language model to mimic a SQL executor on the diverse, large-scale and high-quality synthetic corpus. We evaluate TAPEX on four benchmark datasets. Experimental results demonstrate that TAPEX outperforms previous table pre-training approaches by a large margin and achieves new state-of-the-art results on all of them. This includes improvements on the weakly-supervised WikiSQL denotation accuracy to 89.5% (+2.3%), the WikiTableQuestions denotation accuracy to 57.5% (+4.8%), the SQA denotation accuracy to 74.5% (+3.5%), and the TabFact accuracy to 84.2% (+3.2%). To our knowledge, this is the first work to exploit table pre-training via synthetic executable programs and to achieve new state-of-the-art results on various downstream tasks.
Tips:
TAPEX is a generative (seq2seq) model. One can directly plug in the weights of TAPEX into a BART model.
TAPEX has checkpoints on the hub that are either pre-trained only, or fine-tuned on WTQ, SQA, WikiSQL and TabFact.
Sentences + tables are presented to the model as sentence + " " + linearized table. The linearized table has the following format: col: col1 | col2 | col 3 row 1 : val1 | val2 | val3 row 2 : ...
TAPEX has its own tokenizer, which allows preparing all data for the model easily. One can pass Pandas DataFrames and strings to the tokenizer, and it will automatically create the input_ids and attention_mask (as shown in the usage examples below).
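For instance, here is a minimal sketch of preparing a single table and sentence with the tokenizer. The microsoft/tapex-base checkpoint and the toy table are illustrative choices, not part of the original example:

```python
from transformers import TapexTokenizer
import pandas as pd

# any TAPEX checkpoint works here; microsoft/tapex-base is used for illustration
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base")

table = pd.DataFrame.from_dict({"year": [1896, 2008], "city": ["athens", "beijing"]})
sentence = "in which year did beijing host the Olympic Games?"

# the tokenizer linearizes the table and concatenates it with the sentence,
# producing input_ids and attention_mask
encoding = tokenizer(table, sentence)
print(encoding.keys())
```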
TapexTokenizer
class transformers.TapexTokenizer
( vocab_file, merges_file, do_lower_case = True, errors = 'replace', bos_token = '<s>', eos_token = '</s>', sep_token = '</s>', cls_token = '<s>', unk_token = '<unk>', pad_token = '<pad>', mask_token = '<mask>', add_prefix_space = False, max_cell_length = 15, **kwargs )
Parameters
vocab_file (str) - Path to the vocabulary file.
merges_file (str) - Path to the merges file.
do_lower_case (bool, optional, defaults to True) - Whether or not to lowercase the input when tokenizing.
errors (str, optional, defaults to "replace") - Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") - The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") - The end of sequence token. When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") - The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") - The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") - The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
pad_token (str, optional, defaults to "<pad>") - The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") - The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) - Whether or not to add an initial space to the input. This allows treating the leading word just like any other word. (The BART tokenizer detects the beginning of words by the preceding space.)
max_cell_length (int, optional, defaults to 15) - Maximum number of characters per cell when linearizing a table. If this number is exceeded, truncation takes place.
Construct a TAPEX tokenizer. Based on byte-level Byte-Pair-Encoding (BPE).
This tokenizer can be used to flatten one or more table(s) and concatenate them with one or more related sentences to be used by TAPEX models. The format that the TAPEX tokenizer creates is the following:
sentence col: col1 | col2 | col 3 row 1 : val1 | val2 | val3 row 2 : ...
The tokenizer supports a single table + single query, a single table and multiple queries (in which case the table will be duplicated for every query), a single query and multiple tables (in which case the query will be duplicated for every table), and multiple tables and queries. In other words, you can provide a batch of tables + questions to the tokenizer, for instance to prepare them for the model.
Tokenization itself is based on the BPE algorithm. It is identical to the one used by BART, RoBERTa and GPT-2.
This tokenizer inherits from PreTrainedTokenizer, which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
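As a sketch of the batching behavior described above (the checkpoint and data are again illustrative), a single table can be paired with several queries and the tokenizer duplicates the table for every query:

```python
from transformers import TapexTokenizer
import pandas as pd

tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base")

table = pd.DataFrame.from_dict({"year": [1896, 2008], "city": ["athens", "beijing"]})
queries = [
    "in which year did beijing host the Olympic Games?",
    "which city hosted the Olympic Games in 1896?",
]

# single table + multiple queries: the table is duplicated for every query
batch = tokenizer(table, queries, padding=True, return_tensors="pt")
print(batch["input_ids"].shape)  # (2, sequence_length)
```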
__call__
( table: typing.Union[pd.DataFrame, typing.List[pd.DataFrame]] = None, query: typing.Union[str, typing.List[str], NoneType] = None, answer: typing.Union[str, typing.List[str]] = None, add_special_tokens: bool = True, padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False, truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None, max_length: typing.Optional[int] = None, stride: int = 0, pad_to_multiple_of: typing.Optional[int] = None, return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None, return_token_type_ids: typing.Optional[bool] = None, return_attention_mask: typing.Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs )
Parameters
table (pd.DataFrame or List[pd.DataFrame]) - Table(s) containing tabular data.
query (str or List[str], optional) - Sentence or batch of sentences related to one or more table(s) to be encoded. Note that the number of sentences must match the number of tables.
answer (str or List[str], optional) - Optionally, the corresponding answer to the questions as supervision.
add_special_tokens (bool, optional, defaults to True) - Whether or not to add special tokens when encoding the sequences. This will use the underlying PreTrainedTokenizerBase.build_inputs_with_special_tokens function, which defines which tokens are automatically added to the input ids. This is useful if you want to add bos or eos tokens automatically.
padding (bool, str or PaddingStrategy, optional, defaults to False) - Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str, TapexTruncationStrategy or TruncationStrategy, optional, defaults to False) - Activates and controls truncation. Accepts the following values:
'drop_rows_to_fit': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate row by row, removing rows from the table.
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) - Controls the maximum length to use by one of the truncation/padding parameters. If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) - If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) - Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) - If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) - If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
Main method to tokenize and prepare for the model one or several table-sequence pair(s).
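As an illustration of the padding, truncation and return_tensors arguments documented above, here is a short sketch (the checkpoint, table and max_length value are illustrative assumptions):

```python
from transformers import TapexTokenizer
import pandas as pd

tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base")

table = pd.DataFrame.from_dict({"year": [1896, 2008], "city": ["athens", "beijing"]})
query = "in which year did beijing host the Olympic Games?"

# pad to a fixed length, truncate anything longer, and return PyTorch tensors
encoding = tokenizer(
    table,
    query,
    padding="max_length",
    truncation=True,
    max_length=64,
    return_tensors="pt",
)
print(encoding["input_ids"].shape)  # torch.Size([1, 64])
```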
save_vocabulary
( save_directory: str, filename_prefix: typing.Optional[str] = None )
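A quick sketch of saving the vocabulary files to disk (the directory name is arbitrary):

```python
import os

from transformers import TapexTokenizer

tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base")

# the directory must exist; the byte-level BPE tokenizer writes its vocab and merges files into it
os.makedirs("tapex_tokenizer", exist_ok=True)
files = tokenizer.save_vocabulary("tapex_tokenizer")
print(files)
```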
Usage: inference
Below, we illustrate how to use TAPEX for table question answering. As one can see, one can directly plug in the weights of TAPEX into a BART model. We use the Auto API, which will automatically instantiate the appropriate tokenizer (TapexTokenizer) and model (BartForConditionalGeneration) for us, based on the configuration file of the checkpoint on the hub.
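A sketch of what such an example could look like, using the microsoft/tapex-large-finetuned-wtq checkpoint (the table and question are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd

tokenizer = AutoTokenizer.from_pretrained("microsoft/tapex-large-finetuned-wtq")
model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/tapex-large-finetuned-wtq")

# prepare a table and a question
data = {
    "year": [1896, 1900, 1904, 2004, 2008, 2012],
    "city": ["athens", "paris", "st. louis", "athens", "beijing", "london"],
}
table = pd.DataFrame.from_dict(data)
question = "in which year did beijing host the Olympic Games?"

encoding = tokenizer(table, question, return_tensors="pt")

# let the model generate the answer autoregressively
outputs = model.generate(**encoding)

# decode back to text
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```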
Note that TapexTokenizer also supports batched inference. Hence, one can provide a batch of different tables/questions, or a batch of a single table and multiple questions, or a batch of a single query and multiple tables. Let's illustrate this:
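A sketch of batched inference over a single table with multiple questions, reusing the same illustrative checkpoint and table as above:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd

tokenizer = AutoTokenizer.from_pretrained("microsoft/tapex-large-finetuned-wtq")
model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/tapex-large-finetuned-wtq")

data = {
    "year": [1896, 1900, 1904, 2004, 2008, 2012],
    "city": ["athens", "paris", "st. louis", "athens", "beijing", "london"],
}
table = pd.DataFrame.from_dict(data)
questions = [
    "in which year did beijing host the Olympic Games?",
    "which year did london host the Olympic Games?",
]

# the single table is duplicated for every question
encoding = tokenizer(table, questions, padding=True, return_tensors="pt")
outputs = model.generate(**encoding)

# one decoded answer per question
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```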
In case one wants to do table verification (i.e. the task of determining whether a given sentence is supported or refuted by the contents of a table), one can instantiate a BartForSequenceClassification model. TAPEX has checkpoints on the hub fine-tuned on TabFact, an important benchmark for table fact checking (it achieves 84% accuracy). The code example below again leverages the Auto API.
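A sketch for table fact verification with the microsoft/tapex-large-finetuned-tabfact checkpoint (the table and the sentence to verify are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import pandas as pd

tokenizer = AutoTokenizer.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/tapex-large-finetuned-tabfact")

data = {
    "year": [1896, 1900, 1904, 2004, 2008, 2012],
    "city": ["athens", "paris", "st. louis", "athens", "beijing", "london"],
}
table = pd.DataFrame.from_dict(data)
sentence = "beijing hosted the Olympic Games in 2008"

encoding = tokenizer(table, sentence, return_tensors="pt")
outputs = model(**encoding)

# map the highest-scoring logit to its label (whether the sentence is entailed or refuted)
predicted_class_idx = outputs.logits[0].argmax(dim=0).item()
print(model.config.id2label[predicted_class_idx])
```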