FLAN-UL2
Overview
Flan-UL2 is an encoder-decoder model based on the T5 architecture. It uses the same configuration as the UL2 model released earlier last year. It was fine-tuned using the "Flan" prompt tuning and dataset collection. Similar to Flan-T5, one can directly use FLAN-UL2 weights without finetuning the model:
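A minimal sketch of that direct use, assuming the accelerate package is installed for device_map="auto" and the machine can hold the checkpoint in bfloat16 (the prompt and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the instruction-tuned checkpoint; bfloat16 roughly halves the memory footprint.
tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-ul2", torch_dtype=torch.bfloat16, device_map="auto"
)

# Prompt the model directly, with no finetuning and no mode tokens.
inputs = tokenizer("A step by step recipe to make bolognese pasta:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```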
According to the original blog post, here are the notable improvements:
The original UL2 model was only trained with a receptive field of 512, which made it non-ideal for N-shot prompting where N is large.
The Flan-UL2 checkpoint uses a receptive field of 2048, which makes it more usable for few-shot in-context learning.
The original UL2 model also had mode switch tokens that were essentially mandatory to get good performance. However, they were a little cumbersome, as they often required changes during inference or finetuning. In this update/change, UL2 20B was trained for an additional 100k steps (with a small batch) to forget "mode tokens" before applying Flan instruction tuning. This Flan-UL2 checkpoint no longer requires mode tokens.
Google has released the following variants: google/flan-ul2.
One can refer to T5's documentation page for all tips, code examples and notebooks, as well as the FLAN-T5 model card for more details regarding training and evaluation of the model.
The original checkpoints can be found here.
Running on low resource devices
The model is quite heavy (~40GB in half precision), so if you just want to run it, make sure you load the model in 8-bit and use device_map="auto" to avoid any out-of-memory (OOM) issues.
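A minimal sketch of such an 8-bit load, assuming the bitsandbytes and accelerate packages are installed (the quantization config shown is illustrative):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig

# Quantize the weights to 8-bit at load time and let accelerate place the layers
# across the available devices so the ~40GB model fits on smaller hardware.
tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-ul2",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
```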
Inference
The inference protocol is exactly the same as for any T5 model; please have a look at T5's documentation page for more details.
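Since the interface matches T5, the high-level text2text-generation pipeline also works; a short sketch (the prompt is illustrative, and device_map="auto" again assumes accelerate is installed):

```python
from transformers import pipeline

# T5-style seq2seq inference through the text2text-generation pipeline.
generator = pipeline("text2text-generation", model="google/flan-ul2", device_map="auto")
print(generator("Translate to German: How old are you?", max_new_tokens=30))
```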