T5v1.1
Overview
T5v1.1 was released in the google-research/text-to-text-transfer-transformer repository by Colin Raffel et al. It's an improved version of the original T5 model.
One can directly plug in the weights of T5v1.1 into a T5 model, like so:
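A minimal sketch, assuming the google/t5-v1_1-base checkpoint:

```python
from transformers import T5ForConditionalGeneration

# T5v1.1 weights load directly into the standard T5 model class.
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")
```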
T5 Version 1.1 includes the following improvements compared to the original T5 model:
GEGLU activation in the feed-forward hidden layer, rather than ReLU. See this paper.
Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
Pre-trained on C4 only without mixing in the downstream tasks.
No parameter sharing between the embedding and classifier layer.
"xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger d_model and smaller num_heads and d_ff.
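These differences are reflected directly in the model configuration. Below is a minimal sketch, assuming the google/t5-v1_1-base checkpoint; the values noted in the comments are what the v1.1 configs are expected to contain:

```python
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config.from_pretrained("google/t5-v1_1-base")
print(config.feed_forward_proj)    # expected "gated-gelu": GEGLU feed-forward instead of ReLU
print(config.tie_word_embeddings)  # expected False: embedding and classifier weights are not shared
print(config.d_model, config.num_heads, config.d_ff)

# Dropout should be re-enabled for fine-tuning; dropout_rate can be overridden at load time.
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base", dropout_rate=0.1)
```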
Note: T5 Version 1.1 was only pre-trained on C4, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model. Since T5v1.1 was pre-trained without supervision, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.
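A minimal sketch of the difference, assuming the google/t5-v1_1-base tokenizer; the summarize: prefix and the example sentence are only illustrative:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-base")

# Single-task fine-tuning: no task prefix is needed.
single_task = tokenizer("A severe storm hit the coast on Monday.", return_tensors="pt")

# Multi-task fine-tuning: prepend a task prefix so the model can tell tasks apart.
multi_task = tokenizer("summarize: A severe storm hit the coast on Monday.", return_tensors="pt")
```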
Google has released the following variants:
google/t5-v1_1-small
google/t5-v1_1-base
google/t5-v1_1-large
google/t5-v1_1-xl
google/t5-v1_1-xxl
One can refer to T5's documentation page for all tips, code examples and notebooks.
This model was contributed by patrickvonplaten. The original code can be found here.