TFAutoModelForTableQuestionAnswering
( *args, **kwargs )
This is a generic model class that will be instantiated as one of the model classes of the library (with a table question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
from_config
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
TapasConfig configuration class: TFTapasForQuestionAnswering (TAPAS model)
Instantiates one of the model classes of the library (with a table question answering head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Examples:
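A minimal sketch of the intended usage, assuming the TAPAS checkpoint google/tapas-base-finetuned-wtq as an illustrative model id (any TAPAS configuration would do):

```python
from transformers import AutoConfig, TFAutoModelForTableQuestionAnswering

# Download the configuration from huggingface.co and cache it.
config = AutoConfig.from_pretrained("google/tapas-base-finetuned-wtq")

# Instantiate the model architecture from the config alone.
# Note: the weights are randomly initialized, not loaded from the checkpoint.
model = TFAutoModelForTableQuestionAnswering.from_config(config)
```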
from_pretrained
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a table question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
tapas — TFTapasForQuestionAnswering (TAPAS model)
Examples:
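A sketch of typical from_pretrained() usage, assuming google/tapas-base-finetuned-wtq as an illustrative TAPAS checkpoint:

```python
from transformers import TFAutoModelForTableQuestionAnswering

# Download the model and configuration from huggingface.co and cache them.
model = TFAutoModelForTableQuestionAnswering.from_pretrained(
    "google/tapas-base-finetuned-wtq"
)

# Update the configuration during loading: keys matching configuration
# attributes override them; remaining keys go to the model's __init__.
model = TFAutoModelForTableQuestionAnswering.from_pretrained(
    "google/tapas-base-finetuned-wtq", output_attentions=True
)

# Loading from a PyTorch checkpoint instead of a TensorFlow one is also
# possible (slower): pass a local state_dict file such as
# ./pt_model/pytorch_model.bin with from_pt=True and an explicit config.
```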