Chinese-CLIP
Overview
The Chinese-CLIP model was proposed in Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou. Chinese-CLIP is an implementation of CLIP (Radford et al., 2021) trained on a large-scale dataset of Chinese image-text pairs. It can perform cross-modal retrieval and also serve as a vision backbone for vision tasks such as zero-shot image classification and open-domain object detection. The original Chinese-CLIP code is released at this link.
The abstract from the paper is the following:
The tremendous success of CLIP (Radford et al., 2021) has promoted the research and application of contrastive learning for vision-language pretraining. In this work, we construct a large-scale dataset of image-text pairs in Chinese, where most data are retrieved from publicly available datasets, and we pretrain Chinese CLIP models on the new dataset. We develop 5 Chinese CLIP models of multiple sizes, spanning from 77 to 958 million parameters. Furthermore, we propose a two-stage pretraining method, where the model is first trained with the image encoder frozen and then trained with all parameters being optimized, to achieve enhanced model performance. Our comprehensive experiments demonstrate that Chinese CLIP can achieve the state-of-the-art performance on MUGE, Flickr30K-CN, and COCO-CN in the setups of zero-shot learning and finetuning, and it is able to achieve competitive performance in zero-shot image classification based on the evaluation on the ELEVATER benchmark (Li et al., 2022). Our codes, pretrained models, and demos have been released.
Usage
The code snippet below shows how to compute image & text features and similarities:
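A minimal sketch of such a snippet, assuming the OFA-Sys/chinese-clip-vit-base-patch16 checkpoint referenced elsewhere on this page; the image URL and the candidate Chinese captions are only illustrative and can be replaced with any inputs:

```python
from PIL import Image
import requests
from transformers import ChineseCLIPProcessor, ChineseCLIPModel

model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")

# any publicly reachable image works here; this COCO photo is just an example
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["两只猫", "一只狗", "一架飞机"]  # illustrative candidate captions

# compute image features
inputs = processor(images=image, return_tensors="pt")
image_features = model.get_image_features(**inputs)
image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True)  # L2-normalize

# compute text features
inputs = processor(text=texts, padding=True, return_tensors="pt")
text_features = model.get_text_features(**inputs)
text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)  # L2-normalize

# compute image-text similarity scores with a single forward pass
inputs = processor(text=texts, images=image, padding=True, return_tensors="pt")
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image  # image-text similarity scores
probs = logits_per_image.softmax(dim=1)      # probabilities over the candidate captions
```

Note that the forward pass already computes the similarity logits from internally normalized embeddings, so the explicit normalization above is only needed when you work with the feature vectors directly (e.g. for retrieval indexes).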
Pretrained Chinese-CLIP models of several scales are released on the HF Model Hub.
The Chinese-CLIP model was contributed by OFA-Sys.
ChineseCLIPConfig
class transformers.ChineseCLIPConfig
( text_config = None, vision_config = None, projection_dim = 512, logit_scale_init_value = 2.6592, **kwargs )
Parameters
- `text_config` (`dict`, optional): Dictionary of configuration options used to initialize ChineseCLIPTextConfig.
- `vision_config` (`dict`, optional): Dictionary of configuration options used to initialize ChineseCLIPVisionConfig.
- `projection_dim` (`int`, optional, defaults to 512): Dimensionality of text and vision projection layers.
- `logit_scale_init_value` (`float`, optional, defaults to 2.6592): The initial value of the `logit_scale` parameter. Default is used as per the original Chinese-CLIP implementation.
- `kwargs` (optional): Dictionary of keyword arguments.
ChineseCLIPConfig is the configuration class to store the configuration of a ChineseCLIPModel. It is used to instantiate a Chinese-CLIP model according to the specified arguments, defining the text model and vision model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the Chinese-CLIP OFA-Sys/chinese-clip-vit-base-patch16 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
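A minimal sketch of the usual configuration workflow; instantiating a model from the default config gives random weights:

```python
from transformers import (
    ChineseCLIPConfig,
    ChineseCLIPTextConfig,
    ChineseCLIPVisionConfig,
    ChineseCLIPModel,
)

# Initializing a ChineseCLIPConfig with OFA-Sys/chinese-clip-vit-base-patch16 style configuration
configuration = ChineseCLIPConfig()

# Initializing a ChineseCLIPModel (with random weights) from that configuration
model = ChineseCLIPModel(configuration)

# Accessing the model configuration
configuration = model.config

# A ChineseCLIPConfig can also be built from separate text and vision configs
config_text = ChineseCLIPTextConfig()
config_vision = ChineseCLIPVisionConfig()
config = ChineseCLIPConfig.from_text_vision_configs(config_text, config_vision)
```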
from_text_vision_configs
( text_config: ChineseCLIPTextConfig, vision_config: ChineseCLIPVisionConfig, **kwargs )
Instantiate a ChineseCLIPConfig (or a derived class) from Chinese-CLIP text model configuration and Chinese-CLIP vision model configuration. Returns: ChineseCLIPConfig: An instance of a configuration object
ChineseCLIPTextConfig
class transformers.ChineseCLIPTextConfig
( vocab_size = 30522, hidden_size = 768, num_hidden_layers = 12, num_attention_heads = 12, intermediate_size = 3072, hidden_act = 'gelu', hidden_dropout_prob = 0.1, attention_probs_dropout_prob = 0.1, max_position_embeddings = 512, type_vocab_size = 2, initializer_range = 0.02, initializer_factor = 1.0, layer_norm_eps = 1e-12, pad_token_id = 0, position_embedding_type = 'absolute', use_cache = True, **kwargs )
Parameters
- `vocab_size` (`int`, optional, defaults to 30522): Vocabulary size of the Chinese-CLIP text model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling ChineseCLIPModel.
- `hidden_size` (`int`, optional, defaults to 768): Dimensionality of the encoder layers and the pooler layer.
- `num_hidden_layers` (`int`, optional, defaults to 12): Number of hidden layers in the Transformer encoder.
- `num_attention_heads` (`int`, optional, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder.
- `intermediate_size` (`int`, optional, defaults to 3072): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
- `hidden_act` (`str` or `Callable`, optional, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported.
- `hidden_dropout_prob` (`float`, optional, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- `attention_probs_dropout_prob` (`float`, optional, defaults to 0.1): The dropout ratio for the attention probabilities.
- `max_position_embeddings` (`int`, optional, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512, 1024 or 2048).
- `type_vocab_size` (`int`, optional, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling ChineseCLIPModel.
- `initializer_range` (`float`, optional, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- `initializer_factor` (`float`, optional, defaults to 1.0): A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing).
- `layer_norm_eps` (`float`, optional, defaults to 1e-12): The epsilon used by the layer normalization layers.
- `pad_token_id` (`int`, optional, defaults to 0): The id of the padding token.
- `position_embedding_type` (`str`, optional, defaults to `"absolute"`): Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on `"relative_key_query"`, please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.).
- `use_cache` (`bool`, optional, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`.
This is the configuration class to store the configuration of a ChineseCLIPModel. It is used to instantiate a Chinese CLIP model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Chinese CLIP [OFA-Sys/chinese-clip-vit-base-patch16](https://boincai.com/OFA-Sys/chinese-clip-vit-base-patch16) architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
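A minimal sketch of using the text configuration; the resulting model has random weights:

```python
from transformers import ChineseCLIPTextConfig, ChineseCLIPTextModel

# Initializing a ChineseCLIPTextConfig with OFA-Sys/chinese-clip-vit-base-patch16 style configuration
configuration = ChineseCLIPTextConfig()

# Initializing a ChineseCLIPTextModel (with random weights) from that configuration
model = ChineseCLIPTextModel(configuration)

# Accessing the model configuration
configuration = model.config
```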
ChineseCLIPVisionConfig
class transformers.ChineseCLIPVisionConfig
( hidden_size = 768, intermediate_size = 3072, projection_dim = 512, num_hidden_layers = 12, num_attention_heads = 12, num_channels = 3, image_size = 224, patch_size = 32, hidden_act = 'quick_gelu', layer_norm_eps = 1e-05, attention_dropout = 0.0, initializer_range = 0.02, initializer_factor = 1.0, **kwargs )
Parameters
- `hidden_size` (`int`, optional, defaults to 768): Dimensionality of the encoder layers and the pooler layer.
- `intermediate_size` (`int`, optional, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- `projection_dim` (`int`, optional, defaults to 512): Dimensionality of text and vision projection layers.
- `num_hidden_layers` (`int`, optional, defaults to 12): Number of hidden layers in the Transformer encoder.
- `num_attention_heads` (`int`, optional, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder.
- `num_channels` (`int`, optional, defaults to 3): The number of input image channels.
- `image_size` (`int`, optional, defaults to 224): The size (resolution) of each image.
- `patch_size` (`int`, optional, defaults to 32): The size (resolution) of each patch.
- `hidden_act` (`str` or `function`, optional, defaults to `"quick_gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported.
- `layer_norm_eps` (`float`, optional, defaults to 1e-5): The epsilon used by the layer normalization layers.
- `attention_dropout` (`float`, optional, defaults to 0.0): The dropout ratio for the attention probabilities.
- `initializer_range` (`float`, optional, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- `initializer_factor` (`float`, optional, defaults to 1.0): A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing).
This is the configuration class to store the configuration of a ChineseCLIPModel. It is used to instantiate a ChineseCLIP model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ChineseCLIP [OFA-Sys/chinese-clip-vit-base-patch16](https://boincai.com/OFA-Sys/chinese-clip-vit-base-patch16) architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
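A minimal sketch of using the vision configuration; the resulting model has random weights:

```python
from transformers import ChineseCLIPVisionConfig, ChineseCLIPVisionModel

# Initializing a ChineseCLIPVisionConfig with OFA-Sys/chinese-clip-vit-base-patch16 style configuration
configuration = ChineseCLIPVisionConfig()

# Initializing a ChineseCLIPVisionModel (with random weights) from that configuration
model = ChineseCLIPVisionModel(configuration)

# Accessing the model configuration
configuration = model.config
```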
ChineseCLIPImageProcessor
class transformers.ChineseCLIPImageProcessor
( do_resize: bool = True, size: typing.Dict[str, int] = None, resample: Resampling = <Resampling.BICUBIC: 3>, do_center_crop: bool = True, crop_size: typing.Dict[str, int] = None, do_rescale: bool = True, rescale_factor: typing.Union[int, float] = 0.00392156862745098, do_normalize: bool = True, image_mean: typing.Union[float, typing.List[float], NoneType] = None, image_std: typing.Union[float, typing.List[float], NoneType] = None, do_convert_rgb: bool = True, **kwargs )
Parameters
- `do_resize` (`bool`, optional, defaults to `True`): Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by `do_resize` in the `preprocess` method.
- `size` (`Dict[str, int]`, optional, defaults to `{"shortest_edge": 224}`): Size of the image after resizing. The shortest edge of the image is resized to `size["shortest_edge"]`, with the longest edge resized to keep the input aspect ratio. Can be overridden by `size` in the `preprocess` method.
- `resample` (`PILImageResampling`, optional, defaults to `PILImageResampling.BICUBIC`): Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method.
- `do_center_crop` (`bool`, optional, defaults to `True`): Whether to center crop the image to the specified `crop_size`. Can be overridden by `do_center_crop` in the `preprocess` method.
- `crop_size` (`Dict[str, int]`, optional, defaults to 224): Size of the output image after applying `center_crop`. Can be overridden by `crop_size` in the `preprocess` method.
- `do_rescale` (`bool`, optional, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in the `preprocess` method.
- `rescale_factor` (`int` or `float`, optional, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess` method.
- `do_normalize` (`bool`, optional, defaults to `True`): Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method.
- `image_mean` (`float` or `List[float]`, optional, defaults to `IMAGENET_STANDARD_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
- `image_std` (`float` or `List[float]`, optional, defaults to `IMAGENET_STANDARD_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
- `do_convert_rgb` (`bool`, optional, defaults to `True`): Whether to convert the image to RGB.
Constructs a Chinese-CLIP image processor.
preprocess
( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]], do_resize: bool = None, size: typing.Dict[str, int] = None, resample: Resampling = None, do_center_crop: bool = None, crop_size: int = None, do_rescale: bool = None, rescale_factor: float = None, do_normalize: bool = None, image_mean: typing.Union[float, typing.List[float], NoneType] = None, image_std: typing.Union[float, typing.List[float], NoneType] = None, do_convert_rgb: bool = None, return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None, data_format: typing.Optional[transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>, input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None, **kwargs )
Parameters
- `images` (`ImageInput`): Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set `do_rescale=False`.
- `do_resize` (`bool`, optional, defaults to `self.do_resize`): Whether to resize the image.
- `size` (`Dict[str, int]`, optional, defaults to `self.size`): Size of the image after resizing. The shortest edge of the image is resized to `size["shortest_edge"]`, with the longest edge resized to keep the input aspect ratio.
- `resample` (`int`, optional, defaults to `self.resample`): Resampling filter to use if resizing the image. This can be one of the enum `PILImageResampling`. Only has an effect if `do_resize` is set to `True`.
- `do_center_crop` (`bool`, optional, defaults to `self.do_center_crop`): Whether to center crop the image.
- `crop_size` (`Dict[str, int]`, optional, defaults to `self.crop_size`): Size of the center crop. Only has an effect if `do_center_crop` is set to `True`.
- `do_rescale` (`bool`, optional, defaults to `self.do_rescale`): Whether to rescale the image.
- `rescale_factor` (`float`, optional, defaults to `self.rescale_factor`): Rescale factor to rescale the image by if `do_rescale` is set to `True`.
- `do_normalize` (`bool`, optional, defaults to `self.do_normalize`): Whether to normalize the image.
- `image_mean` (`float` or `List[float]`, optional, defaults to `self.image_mean`): Image mean to use for normalization. Only has an effect if `do_normalize` is set to `True`.
- `image_std` (`float` or `List[float]`, optional, defaults to `self.image_std`): Image standard deviation to use for normalization. Only has an effect if `do_normalize` is set to `True`.
- `do_convert_rgb` (`bool`, optional, defaults to `self.do_convert_rgb`): Whether to convert the image to RGB.
- `return_tensors` (`str` or `TensorType`, optional): The type of tensors to return. Can be one of:
  - Unset: Return a list of `np.ndarray`.
  - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
  - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
  - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
  - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
- `data_format` (`ChannelDimension` or `str`, optional, defaults to `ChannelDimension.FIRST`): The channel dimension format for the output image. Can be one of:
  - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
  - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
  - Unset: Use the channel dimension format of the input image.
- `input_data_format` (`ChannelDimension` or `str`, optional): The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:
  - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
  - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
  - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
Preprocess an image or batch of images.
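A small sketch of the preprocessing pipeline described above; the checkpoint name and image URL are only examples:

```python
from PIL import Image
import requests
from transformers import ChineseCLIPImageProcessor

image_processor = ChineseCLIPImageProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")

# any publicly reachable image works here; this COCO photo is just an example
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# resize, center crop, rescale and normalize, returning a PyTorch batch
batch = image_processor(images=image, return_tensors="pt")
print(batch["pixel_values"].shape)  # e.g. torch.Size([1, 3, 224, 224])
```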
ChineseCLIPFeatureExtractor
class transformers.ChineseCLIPFeatureExtractor
( *args, **kwargs )
ChineseCLIPProcessor
class transformers.ChineseCLIPProcessor
( image_processor = None, tokenizer = None, **kwargs )
Parameters
image_processor (ChineseCLIPImageProcessor): The image processor is a required input.
tokenizer (BertTokenizerFast): The tokenizer is a required input.
Constructs a Chinese-CLIP processor which wraps a Chinese-CLIP image processor and a Chinese-CLIP tokenizer into a single processor.
ChineseCLIPProcessor offers all the functionalities of ChineseCLIPImageProcessor and BertTokenizerFast. See the __call__() and decode() methods for more information.
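A short sketch of how the processor combines the tokenizer and the image processor in a single call; the checkpoint, image URL and captions are illustrative:

```python
from PIL import Image
import requests
from transformers import ChineseCLIPProcessor

processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # arbitrary example image
image = Image.open(requests.get(url, stream=True).raw)

# tokenizes the Chinese text and preprocesses the image in one call
inputs = processor(text=["两只猫", "一只狗"], images=image, padding=True, return_tensors="pt")
print(sorted(inputs.keys()))  # ['attention_mask', 'input_ids', 'pixel_values', 'token_type_ids']
```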
batch_decode
( *args, **kwargs )
This method forwards all its arguments to BertTokenizerFast's batch_decode(). Please refer to the docstring of this method for more information.
decode
( *args, **kwargs )
This method forwards all its arguments to BertTokenizerFast's decode(). Please refer to the docstring of this method for more information.
ChineseCLIPModel
class transformers.ChineseCLIPModel
( config: ChineseCLIPConfig )
Parameters
config (ChineseCLIPConfig): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
( input_ids: typing.Optional[torch.LongTensor] = None, pixel_values: typing.Optional[torch.FloatTensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, return_loss: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.models.chinese_clip.modeling_chinese_clip.ChineseCLIPOutput or tuple(torch.FloatTensor)
Parameters
- `input_ids` (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- `attention_mask` (`torch.Tensor` of shape `(batch_size, sequence_length)`, optional): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- `token_type_ids` (`torch.LongTensor` of shape `(batch_size, sequence_length)`, optional): Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a sentence A token,
  - 1 corresponds to a sentence B token.
- `position_ids` (`torch.LongTensor` of shape `(batch_size, sequence_length)`, optional): Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`.
- `pixel_values` (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See ChineseCLIPImageProcessor.__call__() for details.
- `return_loss` (`bool`, optional): Whether or not to return the contrastive loss.
- `output_attentions` (`bool`, optional): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- `output_hidden_states` (`bool`, optional): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- `return_dict` (`bool`, optional): Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.chinese_clip.modeling_chinese_clip.ChineseCLIPOutput or tuple(torch.FloatTensor)

A transformers.models.chinese_clip.modeling_chinese_clip.ChineseCLIPOutput or a tuple of torch.FloatTensor (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (ChineseCLIPConfig) and inputs.

- `loss` (`torch.FloatTensor` of shape `(1,)`, optional, returned when `return_loss` is `True`): Contrastive loss for image-text similarity.
- `logits_per_image` (`torch.FloatTensor` of shape `(image_batch_size, text_batch_size)`): The scaled dot product scores between `image_embeds` and `text_embeds`. This represents the image-text similarity scores.
- `logits_per_text` (`torch.FloatTensor` of shape `(text_batch_size, image_batch_size)`): The scaled dot product scores between `text_embeds` and `image_embeds`. This represents the text-image similarity scores.
- `text_embeds` (`torch.FloatTensor` of shape `(batch_size, output_dim)`): The text embeddings obtained by applying the projection layer to the pooled output of ChineseCLIPTextModel.
- `image_embeds` (`torch.FloatTensor` of shape `(batch_size, output_dim)`): The image embeddings obtained by applying the projection layer to the pooled output of ChineseCLIPVisionModel.
- `text_model_output` (`BaseModelOutputWithPoolingAndCrossAttentions`): The output of the ChineseCLIPTextModel.
- `vision_model_output` (`BaseModelOutputWithPoolingAndCrossAttentions`): The output of the ChineseCLIPVisionModel.
The ChineseCLIPModel forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
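A hedged example of a full forward pass for image-text matching; the checkpoint, image URL and candidate captions are illustrative:

```python
from PIL import Image
import requests
from transformers import ChineseCLIPProcessor, ChineseCLIPModel

model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # arbitrary example image
image = Image.open(requests.get(url, stream=True).raw)
texts = ["两只猫", "一只狗", "一架飞机"]  # illustrative candidate captions

inputs = processor(text=texts, images=image, padding=True, return_tensors="pt")
outputs = model(**inputs)

logits_per_image = outputs.logits_per_image  # image-text similarity scores
probs = logits_per_image.softmax(dim=1)      # probabilities over the candidate captions
```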
get_text_features
( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → text_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`)
Parameters
- `input_ids` (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- `attention_mask` (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, optional): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- `token_type_ids` (`torch.LongTensor` of shape `(batch_size, sequence_length)`, optional): Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a sentence A token,
  - 1 corresponds to a sentence B token.
- `position_ids` (`torch.LongTensor` of shape `(batch_size, sequence_length)`, optional): Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`.
- `head_mask` (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, optional): Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is not masked,
  - 0 indicates the head is masked.
- `inputs_embeds` (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, optional): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- `output_attentions` (`bool`, optional): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- `output_hidden_states` (`bool`, optional): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- `return_dict` (`bool`, optional): Whether or not to return a ModelOutput instead of a plain tuple.
Returns
text_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`)

The text embeddings obtained by applying the projection layer to the final [CLS] hidden state of the Text-Transformer.
The ChineseCLIPModel forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
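A minimal sketch of extracting text embeddings; the checkpoint and input texts are illustrative:

```python
from transformers import AutoTokenizer, ChineseCLIPModel

model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
tokenizer = AutoTokenizer.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")

inputs = tokenizer(["两只猫", "一只狗"], padding=True, return_tensors="pt")
text_features = model.get_text_features(**inputs)
text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)  # L2-normalize for retrieval
```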
get_image_features
( pixel_values: typing.Optional[torch.FloatTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → image_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`)
Parameters
- `pixel_values` (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See ChineseCLIPImageProcessor.__call__() for details.
- `output_attentions` (`bool`, optional): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- `output_hidden_states` (`bool`, optional): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- `return_dict` (`bool`, optional): Whether or not to return a ModelOutput instead of a plain tuple.
Returns
image_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`)

The image embeddings obtained by applying the projection layer to the final [CLS] hidden state of the Vision-Transformer.
The ChineseCLIPModel forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
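A minimal sketch of extracting image embeddings; the checkpoint and image URL are illustrative:

```python
from PIL import Image
import requests
from transformers import ChineseCLIPProcessor, ChineseCLIPModel

model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # arbitrary example image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
image_features = model.get_image_features(**inputs)
image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True)  # L2-normalize for retrieval
```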
ChineseCLIPTextModel
class transformers.ChineseCLIPTextModel
( config, add_pooling_layer = True )
Parameters
config (ChineseCLIPConfig): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The text model from Chinese-CLIP without any head or projection on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in Attention is all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass.
forward
( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, encoder_hidden_states: typing.Optional[torch.Tensor] = None, encoder_attention_mask: typing.Optional[torch.Tensor] = None, past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
- `input_ids` (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- `attention_mask` (`torch.Tensor` of shape `(batch_size, sequence_length)`, optional): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- `token_type_ids` (`torch.LongTensor` of shape `(batch_size, sequence_length)`, optional): Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a sentence A token,
  - 1 corresponds to a sentence B token.
- `position_ids` (`torch.LongTensor` of shape `(batch_size, sequence_length)`, optional): Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`.
- `head_mask` (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, optional): Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is not masked,
  - 0 indicates the head is masked.
- `inputs_embeds` (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, optional): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- `output_attentions` (`bool`, optional): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- `output_hidden_states` (`bool`, optional): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- `return_dict` (`bool`, optional): Whether or not to return a ModelOutput instead of a plain tuple.
- `encoder_hidden_states` (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, optional): Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.
- `encoder_attention_mask` (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, optional): Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- `past_key_values` (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- `use_cache` (`bool`, optional): If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)

A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of torch.FloatTensor (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (ChineseCLIPConfig) and inputs.

- `last_hidden_state` (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): Sequence of hidden-states at the output of the last layer of the model.
- `pooler_output` (`torch.FloatTensor` of shape `(batch_size, hidden_size)`): Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
- `hidden_states` (`tuple(torch.FloatTensor)`, optional, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- `attentions` (`tuple(torch.FloatTensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- `cross_attentions` (`tuple(torch.FloatTensor)`, optional, returned when `output_attentions=True` and `config.add_cross_attention=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
- `past_key_values` (`tuple(tuple(torch.FloatTensor))`, optional, returned when `use_cache=True` is passed or when `config.use_cache=True`): Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and optionally, if `config.is_encoder_decoder=True`, 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally, if `config.is_encoder_decoder=True`, in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
The ChineseCLIPTextModel forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
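A minimal sketch of running the text encoder on its own; loading only the text tower from the full checkpoint is expected to emit a warning about unused vision weights, and the input sentence is illustrative:

```python
from transformers import AutoTokenizer, ChineseCLIPTextModel

model = ChineseCLIPTextModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
tokenizer = AutoTokenizer.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")

inputs = tokenizer("一张猫的照片", return_tensors="pt")
outputs = model(**inputs)

last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
pooled_output = outputs.pooler_output          # (batch_size, hidden_size), pooled [CLS] state
```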
ChineseCLIPVisionModel
class transformers.ChineseCLIPVisionModel
( config: ChineseCLIPVisionConfig )
Parameters
config (ChineseCLIPVisionConfig): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The vision model from Chinese-CLIP without any head or projection on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
( pixel_values: typing.Optional[torch.FloatTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
- `pixel_values` (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See ChineseCLIPImageProcessor.__call__() for details.
- `output_attentions` (`bool`, optional): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- `output_hidden_states` (`bool`, optional): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- `return_dict` (`bool`, optional): Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)

A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (ChineseCLIPVisionConfig) and inputs.

- `last_hidden_state` (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): Sequence of hidden-states at the output of the last layer of the model.
- `pooler_output` (`torch.FloatTensor` of shape `(batch_size, hidden_size)`): Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
- `hidden_states` (`tuple(torch.FloatTensor)`, optional, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- `attentions` (`tuple(torch.FloatTensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The ChineseCLIPVisionModel forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
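A minimal sketch of running the vision encoder on its own; loading only the vision tower from the full checkpoint is expected to emit a warning about unused text weights, and the image URL is illustrative:

```python
from PIL import Image
import requests
from transformers import ChineseCLIPProcessor, ChineseCLIPVisionModel

model = ChineseCLIPVisionModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # arbitrary example image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

last_hidden_state = outputs.last_hidden_state  # patch-level hidden states
pooled_output = outputs.pooler_output          # pooled [CLS] state
```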