Glossary
This glossary defines general machine learning and 🤗 Transformers terms to help you better understand the documentation.
A
attention mask
The attention mask is an optional argument used when batching sequences together.
This argument indicates to the model which tokens should be attended to, and which should not.
For example, consider these two sequences:
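A minimal sketch, assuming a BERT tokenizer (the checkpoint name and sentences are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

sequence_a = "This is a short sequence."
sequence_b = "This is a rather long sequence. It is at least longer than sequence A."

# Encode each sequence separately into a list of token IDs.
encoded_sequence_a = tokenizer(sequence_a)["input_ids"]
encoded_sequence_b = tokenizer(sequence_b)["input_ids"]
```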
The encoded versions have different lengths:
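Continuing the sketch above:

```python
# The second list is longer than the first.
print(len(encoded_sequence_a), len(encoded_sequence_b))
```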
Therefore, we can't put them together in the same tensor as-is. The first sequence needs to be padded up to the length of the second one, or the second one needs to be truncated down to the length of the first one.
In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and ask it to pad like this:
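```python
# padding=True pads every sequence in the batch to the length of the longest one.
padded_sequences = tokenizer([sequence_a, sequence_b], padding=True)
```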
We can see that 0s have been added on the right of the first sentence to make it the same length as the second one:
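```python
print(padded_sequences["input_ids"])
# The first list now ends with padding token IDs (0 for BERT),
# so both lists have the same length.
```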
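The attention mask itself is returned by the tokenizer under the key "attention_mask", with a 1 for every token the model should attend to and a 0 for every padded position:

```python
print(padded_sequences["attention_mask"])
# The 0s line up with the padding added to the first sequence.
```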
autoencoding models
autoregressive models
B
backbone
C
causal language modeling
A pretraining task where the model reads the texts in order and has to predict the next word. It's usually done by reading the whole sentence but using a mask inside the model to hide the future tokens at a certain timestep.
channel
Color images are made up of some combination of values in three channels - red, green, and blue (RGB) - and grayscale images only have one channel. In 🤗 Transformers, the channel can be the first or last dimension of an image's tensor: [n_channels, height, width] or [height, width, n_channels].
connectionist temporal classification (CTC)
An algorithm which allows a model to learn without knowing exactly how the input and output are aligned; CTC calculates the distribution of all possible outputs for a given input and chooses the most likely output from it. CTC is commonly used in speech recognition tasks because speech doesn't always cleanly align with the transcript for a variety of reasons such as a speaker's different speech rates.
convolution
A type of layer in a neural network where the input matrix is multiplied element-wise by a smaller matrix (kernel or filter) and the values are summed up in a new matrix. This is known as a convolutional operation which is repeated over the entire input matrix. Each operation is applied to a different segment of the input matrix. Convolutional neural networks (CNNs) are commonly used in computer vision.
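As a minimal sketch of the operation (using PyTorch; the shapes and values are illustrative):

```python
import torch
import torch.nn as nn

# A single 3x3 kernel slides over a one-channel 8x8 input and sums the
# element-wise products at each position.
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3)
image = torch.randn(1, 1, 8, 8)   # [batch_size, n_channels, height, width]
features = conv(image)
print(features.shape)             # torch.Size([1, 1, 6, 6]) without padding
```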
D
decoder input IDs
This input is specific to encoder-decoder models, and contains the input IDs that will be fed to the decoder. These inputs should be used for sequence to sequence tasks, such as translation or summarization, and are usually built in a way specific to each model.
Most encoder-decoder models (BART, T5) create their decoder_input_ids on their own from the labels. In such models, passing the labels is the preferred way to handle training.
Please check each model's docs to see how they handle these input IDs for sequence to sequence training.
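As a minimal sketch with T5 (the checkpoint and sentences are illustrative), the model builds the decoder_input_ids from the labels internally:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids

# Passing labels is enough: the model shifts them to build decoder_input_ids
# and computes the loss itself.
outputs = model(**inputs, labels=labels)
print(outputs.loss)
```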
decoder models
Also referred to as autoregressive models, decoder models involve a pretraining task (called causal language modeling) where the model reads the texts in order and has to predict the next word. It's usually done by reading the whole sentence with a mask to hide future tokens at a certain timestep.
deep learning (DL)
Machine learning algorithms that use neural networks with several layers.
E
encoder models
F
feature extraction
The process of selecting and transforming raw data into a set of features that are more informative and useful for machine learning algorithms. Some examples of feature extraction include transforming raw text into word embeddings and extracting important features such as edges or shapes from image/video data.
feed forward chunking
In each residual attention block in transformers, the self-attention layer is usually followed by 2 feed forward layers. The intermediate embedding size of the feed forward layers is often bigger than the hidden size of the model (e.g., 3072 vs. a hidden size of 768 for bert-base-uncased). Feed forward chunking computes these feed forward outputs in smaller chunks over the sequence dimension, trading extra compute time for a smaller memory footprint.
finetuned models
H
head
The model head refers to the last layer of a neural network that accepts the raw hidden states and projects them onto a different dimension. There is a different model head for each task. For example, GPT2ForSequenceClassification adds a sequence classification head (a linear layer) on top of the base GPT2Model, and Wav2Vec2ForCTC adds a CTC head on top of the base Wav2Vec2Model.
I
image patch
Vision-based Transformers models split an image into smaller patches which are linearly embedded, and then passed as a sequence to the model. You can find the patch_size - or resolution - of the model in its configuration.
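For example, a minimal sketch with the default ViT configuration (the values printed are the library defaults):

```python
from transformers import ViTConfig

config = ViTConfig()
print(config.image_size, config.patch_size)   # 224 16 for the default configuration
```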
inference
input IDs
The input ids are often the only required parameters to be passed to the model as input. They are token indices, numerical representations of tokens building the sequences that will be used as input by the model.
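A minimal sketch, assuming a cased BERT tokenizer (the checkpoint name and sentence are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
sequence = "A Titan RTX has 24GB of VRAM"
```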
The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary.
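Continuing the sketch:

```python
tokenized_sequence = tokenizer.tokenize(sequence)
```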
The tokens are either words or subwords. Here for instance, "VRAM" wasn't in the model vocabulary, so it's been split in "V", "RA" and "M". To indicate those tokens are not separate words but parts of the same word, a double-hash prefix is added for "RA" and "M":
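```python
print(tokenized_sequence)
# e.g. ['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']
```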
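These tokens can then be converted into IDs understandable by the model, for example by feeding the sentence directly to the tokenizer:

```python
inputs = tokenizer(sequence)
```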
The tokenizer returns a dictionary with all the arguments necessary for its corresponding model to work properly. The token indices are under the key input_ids:
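```python
print(inputs["input_ids"])
# a list of integer token indices, including the special tokens added by the tokenizer
```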
Note that the tokenizer automatically adds "special tokens" (if the associated model relies on them) which are special IDs the model sometimes uses.
If we decode the previous sequence of IDs, we will see where these special tokens were added:
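```python
decoded_sequence = tokenizer.decode(inputs["input_ids"])
print(decoded_sequence)
# e.g. "[CLS] A Titan RTX has 24GB of VRAM [SEP]" with the illustrative sentence above
```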
L
labels
The labels are an optional argument which can be passed in order for the model to compute the loss itself. These labels should be the expected prediction of the model: it will use the standard loss in order to compute the loss between its predictions and the expected value (the label).
These labels are different according to the model head. For example, sequence classification models expect a tensor of dimension (batch_size) with one class label per sequence, while token classification and masked language modeling models expect a tensor of dimension (batch_size, sequence_length) with one target per token.
Each model's labels may be different, so be sure to always check the documentation of each model for more information about their specific labels!
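A minimal sketch for a sequence classification head (the checkpoint name, sentence, and label are illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

inputs = tokenizer("I love this movie!", return_tensors="pt")
labels = torch.tensor([1])                  # one class index per sequence in the batch
outputs = model(**inputs, labels=labels)    # the model computes the loss itself
print(outputs.loss)
```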
large language models (LLM)
A generic term that refers to transformer language models (GPT-3, BLOOM, OPT) that were trained on a large quantity of data. These models also tend to have a large number of learnable parameters (e.g. 175 billion for GPT-3).
M
masked language modeling (MLM)
A pretraining task where the model sees a corrupted version of the texts, usually done by masking some tokens randomly, and has to predict the original text.
multimodal
A task that combines texts with another kind of input (for instance, images).
N
Natural language generation (NLG)
Natural language processing (NLP)
A generic way to say "deal with texts".
Natural language understanding (NLU)
All tasks related to understanding what is in a text (for instance, classifying the whole text or individual words).
P
pipeline
A pipeline in 🤗 Transformers is an abstraction referring to a series of steps that are executed in a specific order to preprocess and transform data and return a prediction from a model. Some example stages found in a pipeline might be data preprocessing, feature extraction, and normalization.
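A minimal sketch (the task and input are illustrative; a default checkpoint is downloaded when none is specified):

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("We are very happy to show you the 🤗 Transformers library."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```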
pixel values
A tensor of the numerical representations of an image that is passed to a model. The pixel values have a shape of [batch_size, num_channels, height, width], and are generated from an image processor.
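A minimal sketch, assuming a ViT image processor (the checkpoint name and image file are illustrative):

```python
from PIL import Image
from transformers import AutoImageProcessor

image = Image.open("cat.png")   # any RGB image
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
inputs = processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)   # torch.Size([1, 3, 224, 224])
```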
pooling
An operation that reduces a matrix into a smaller matrix, either by taking the maximum or average of the pooled dimension(s). Pooling layers are commonly found between convolutional layers to downsample the feature representation.
position IDs
Contrary to RNNs that have the position of each token embedded within them, transformers are unaware of the position of each token. Therefore, the position IDs (position_ids) are used by the model to identify each token's position in the list of tokens.
They are an optional parameter. If no position_ids are passed to the model, the IDs are automatically created as absolute positional embeddings.
Absolute positional embeddings are selected in the range [0, config.max_position_embeddings - 1]. Some models use other types of positional embeddings, such as sinusoidal position embeddings or relative position embeddings.
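A minimal sketch of what the default absolute position IDs look like for a single sequence (the length is illustrative):

```python
import torch

seq_length = 6
# One row of absolute positions per sequence in the batch: [[0, 1, 2, 3, 4, 5]]
position_ids = torch.arange(seq_length).unsqueeze(0)
```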
preprocessing
pretrained model
A model that has been pretrained on some data (for instance, all of Wikipedia). Speech and vision models have their own pretraining objectives. For example, Wav2Vec2 is a speech model pretrained on a contrastive task which requires the model to identify the "true" speech representation from a set of "false" speech representations. On the other hand, BEiT is a vision model pretrained on a masked image modeling task which masks some of the image patches and requires the model to predict the masked patches (similar to the masked language modeling objective).
R
recurrent neural network (RNN)
A type of model that uses a loop over a layer to process texts.
representation learning
A subfield of machine learning which focuses on learning meaningful representations of raw data. Some examples of representation learning techniques include word embeddings, autoencoders, and Generative Adversarial Networks (GANs).
S
sampling rate
A measurement in hertz of the number of samples (the audio signal) taken per second. The sampling rate is a result of discretizing a continuous signal such as speech.
self-attention
Each element of the input finds out which other elements of the input it should attend to.
self-supervised learning
semi-supervised learning
A form of model training that combines a small amount of labeled data with a larger quantity of unlabeled data. An example of a semi-supervised learning approach is "self-training", in which a model is trained on labeled data, and then used to make predictions on the unlabeled data. The portion of the unlabeled data that the model predicts with the most confidence gets added to the labeled dataset and used to retrain the model.
sequence-to-sequence (seq2seq)
stride
supervised learning
A form of model training that directly uses labeled data to correct and instruct model performance. Data is fed into the model being trained, and its predictions are compared to the known labels. The model updates its weights based on how incorrect its predictions were, and the process is repeated to optimize model performance.
T
token
A part of a sentence, usually a word, but can also be a subword (non-common words are often split in subwords) or a punctuation symbol.
token type IDs
Some modelsβ purpose is to do classification on pairs of sentences or question answering.
These require two different sequences to be joined in a single "input_ids" entry, which usually is performed with the help of special tokens, such as the classifier ([CLS]) and separator ([SEP]) tokens. For example, the BERT model builds its two sequence input as such:
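```
[CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP]
```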
We can use our tokenizer to automatically generate such a sentence by passing the two sequences to tokenizer as two arguments (and not a list, like before) like this:
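A minimal sketch (the checkpoint name and sentences are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
sequence_a = "HuggingFace is based in NYC"
sequence_b = "Where is HuggingFace based?"

encoded_dict = tokenizer(sequence_a, sequence_b)
decoded = tokenizer.decode(encoded_dict["input_ids"])
```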
which will return:
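```python
print(decoded)
# e.g. "[CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP]"
```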
This is enough for some models to understand where one sequence ends and where another begins. However, other models, such as BERT, also deploy token type IDs (also called segment IDs). They are represented as a binary mask identifying the two types of sequence in the model.
The tokenizer returns this mask as the "token_type_ids" entry:
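```python
print(encoded_dict["token_type_ids"])
# e.g. [0, 0, ..., 0, 1, 1, ..., 1] - 0s for the first sequence, 1s for the second
```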
The first sequence, the "context" used for the question, has all its tokens represented by a 0, whereas the second sequence, corresponding to the "question", has all its tokens represented by a 1.
transfer learning
A technique that involves taking a pretrained model and adapting it to a dataset specific to your task. Instead of training a model from scratch, you can leverage knowledge obtained from an existing model as a starting point. This speeds up the learning process and reduces the amount of training data needed.
transformer
Self-attention based deep learning model architecture.
U
unsupervised learning
A form of model training in which data provided to the model is not labeled. Unsupervised learning techniques leverage statistical information of the data distribution to find patterns useful for the task at hand.