Quantization
Quantize 🌍 Transformers models
AutoGPTQ Integration
🌍 Transformers has integrated the optimum API to perform GPTQ quantization on language models. You can load and quantize your model in 8, 4, 3, or even 2 bits without a big drop in performance and with faster inference speed! This is supported by most GPU hardware.
To learn more about the quantization method, check out the GPTQ paper and the AutoGPTQ library used as the backend.
Requirements
You need to have the following requirements installed to run the code below:
Install latest AutoGPTQ library: pip install auto-gptq
Install latest optimum from source: pip install git+https://github.com/boincai/optimum.git
Install latest transformers from source: pip install git+https://github.com/boincai/transformers.git
Install latest accelerate library: pip install --upgrade accelerate
Note that the GPTQ integration currently supports only text models, and you may encounter unexpected behaviour with vision, speech, or multi-modal models.
Load and quantize a model
GPTQ is a quantization method that requires weights calibration before using the quantized models. If you want to quantize a transformers model from scratch, it might take some time before producing the quantized model (~5 min on a Google Colab for the facebook/opt-350m model).
Hence, there are two different scenarios where you want to use GPTQ-quantized models. The first use case is to load models that have already been quantized by other users and are available on the Hub; the second use case is to quantize your model from scratch and save it or push it to the Hub so that other users can also use it.
GPTQ Configuration
To load and quantize a model, you need to create a GPTQConfig. You need to pass the number of bits, a dataset used to calibrate the quantization, and the tokenizer of the model used to prepare the dataset.
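A minimal sketch of this configuration, using facebook/opt-125m as an illustrative checkpoint and the "c4" calibration dataset:

```python
from transformers import AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# calibrate the quantization on the "c4" dataset from the GPTQ paper
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
```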
Note that you can pass your own dataset as a list of strings. However, it is highly recommended to use one of the datasets from the GPTQ paper.
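For example, a custom calibration dataset passed as a list of strings might look like this (the sample text is arbitrary):

```python
dataset = [
    "auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on the GPTQ algorithm."
]
gptq_config = GPTQConfig(bits=4, dataset=dataset, tokenizer=tokenizer)
```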
Quantization
You can quantize a model by using from_pretrained and setting the quantization_config.
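A sketch, reusing the model_id and gptq_config defined above:

```python
from transformers import AutoModelForCausalLM

quantized_model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=gptq_config)
```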
Note that you will need a GPU to quantize a model. We will put the model on the CPU and move the modules back and forth to the GPU in order to quantize them.
If you want to maximize your GPU usage while using CPU offload, you can set device_map = "auto".
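A sketch, again reusing model_id and gptq_config from above:

```python
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", quantization_config=gptq_config
)
```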
Note that disk offload is not supported. Furthermore, if you are out of memory because of the dataset, you may have to pass max_memory in from_pretrained. Check out this guide to learn more about device_map and max_memory.
GPTQ quantization only works for text models for now. Furthermore, the quantization process can take a lot of time depending on your hardware (a 175B model takes about 4 GPU-hours on an NVIDIA A100). Please check on the Hub whether a GPTQ-quantized version of the model already exists. If not, you can submit a request on GitHub.
Push quantized model to 🌍 Hub
You can push the quantized model to the Hub like any 🌍 model with push_to_hub. The quantization config will be saved and pushed along with the model.
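A sketch, where "opt-125m-gptq" is a placeholder repository name:

```python
quantized_model.push_to_hub("opt-125m-gptq")
tokenizer.push_to_hub("opt-125m-gptq")
```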
If you want to save your quantized model on your local machine, you can also do it with save_pretrained:
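For example, assuming "opt-125m-gptq" as the local directory name:

```python
quantized_model.save_pretrained("opt-125m-gptq")
tokenizer.save_pretrained("opt-125m-gptq")
```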
Note that if you have quantized your model with a device_map, make sure to move the entire model to one of your GPUs or to the CPU before saving it.
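A sketch:

```python
# move the whole model to a single device before saving
quantized_model.to("cpu")
quantized_model.save_pretrained("opt-125m-gptq")
```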
Load a quantized model from the 🌍 Hub
You can load a quantized model from the Hub by using from_pretrained. Make sure that the pushed weights are quantized by checking that the attribute quantization_config is present in the model configuration object.
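A sketch, where "{your_username}/opt-125m-gptq" stands for your own repository:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq")
```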
If you want to load a model faster and without allocating more memory than needed, the device_map argument also works with quantized models. Make sure that you have the accelerate library installed.
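A sketch with the same placeholder repository name:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto")
```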
Exllama kernels for faster inference
For 4-bit models, you can use the exllama kernels for faster inference speed. They are activated by default. You can change that behavior by passing disable_exllama in GPTQConfig. This will overwrite the quantization config stored in the config. Note that you will only be able to overwrite the attributes related to the kernels. Furthermore, you need to have the entire model on GPUs if you want to use the exllama kernels.
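A sketch of how the kernels could be configured explicitly when loading (the repository name is a placeholder):

```python
from transformers import AutoModelForCausalLM, GPTQConfig

gptq_config = GPTQConfig(bits=4, disable_exllama=False)
model = AutoModelForCausalLM.from_pretrained(
    "{your_username}/opt-125m-gptq", device_map="auto", quantization_config=gptq_config
)
```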
Note that only 4-bit models are supported for now. Furthermore, it is recommended to deactivate the exllama kernels if you are fine-tuning a quantized model with peft.
Fine-tune a quantized model
With the official support of adapters in the BOINC AI ecosystem, you can fine-tune models that have been quantized with GPTQ. Please have a look at the peft library for more details.
Example demo
Check out the Google Colab notebook to learn how to quantize your model with GPTQ and how to fine-tune the quantized model with peft.
GPTQConfig
class transformers.GPTQConfig
( bits: int, tokenizer: typing.Any = None, dataset: typing.Union[str, typing.List[str], NoneType] = None, group_size: int = 128, damp_percent: float = 0.1, desc_act: bool = False, sym: bool = True, true_sequential: bool = True, use_cuda_fp16: bool = False, model_seqlen: typing.Optional[int] = None, block_name_to_quantize: typing.Optional[str] = None, module_name_preceding_first_block: typing.Optional[typing.List[str]] = None, batch_size: int = 1, pad_token_id: typing.Optional[int] = None, disable_exllama: bool = False, max_input_length: typing.Optional[int] = None, **kwargs )
Parameters
bits (int) — The number of bits to quantize to; supported values are 2, 3, 4 and 8.
tokenizer (str or PreTrainedTokenizerBase, optional) — The tokenizer used to process the dataset. You can pass either: a custom tokenizer object; a string, the model id of a predefined tokenizer hosted inside a model repo on boincai.com (valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased); or a path to a directory containing vocabulary files required by the tokenizer, for instance saved using the save_pretrained() method, e.g., ./my_model_directory/.
dataset (Union[List[str]], optional) — The dataset used for quantization. You can provide your own dataset as a list of strings or just use one of the original datasets used in the GPTQ paper: ['wikitext2', 'c4', 'c4-new', 'ptb', 'ptb-new'].
group_size (int, optional, defaults to 128) — The group size to use for quantization. The recommended value is 128, and -1 uses per-column quantization.
damp_percent (float, optional, defaults to 0.1) — The percent of the average Hessian diagonal to use for dampening. The recommended value is 0.1.
desc_act (bool, optional, defaults to False) — Whether to quantize columns in order of decreasing activation size. Setting it to False can significantly speed up inference, but the perplexity may become slightly worse. Also known as act-order.
sym (bool, optional, defaults to True) — Whether to use symmetric quantization.
true_sequential (bool, optional, defaults to True) — Whether to perform sequential quantization even within a single Transformer block. Instead of quantizing the entire block at once, we perform layer-wise quantization. As a result, each layer undergoes quantization using inputs that have passed through the previously quantized layers.
use_cuda_fp16 (bool, optional, defaults to False) — Whether or not to use the optimized CUDA kernel for fp16 models. Requires the model to be in fp16.
model_seqlen (int, optional) — The maximum sequence length that the model can take.
block_name_to_quantize (str, optional) — The transformers block name to quantize.
module_name_preceding_first_block (List[str], optional) — The layers preceding the first Transformer block.
batch_size (int, optional, defaults to 1) — The batch size used when processing the dataset.
pad_token_id (int, optional) — The pad token id. Needed to prepare the dataset when batch_size > 1.
disable_exllama (bool, optional, defaults to False) — Whether to use the exllama backend. Only works with bits = 4.
max_input_length (int, optional) — The maximum input length. This is needed to initialize a buffer that depends on the maximum expected input length. It is specific to the exllama backend with act-order.
This is a wrapper class for all possible attributes and features that you can play with for a model that has been loaded using the optimum API for GPTQ quantization, relying on the auto_gptq backend.
post_init
( )
Safety checker that arguments are correct
bitsandbytes Integration
🌍 Transformers is closely integrated with the most used modules of bitsandbytes. You can load your model in 8-bit precision with a few lines of code. This has been supported by most GPU hardware since the 0.37.0 release of bitsandbytes.
Learn more about the quantization method in the LLM.int8() paper, or the blogpost about the collaboration.
Since its 0.39.0 release, you can load any model that supports device_map using 4-bit quantization, leveraging the FP4 data type.
If you want to quantize your own pytorch model, check out this documentation from 🌍 Accelerate library.
Here are the things you can do using the bitsandbytes integration:
General usage
You can quantize a model by using the load_in_8bit or load_in_4bit argument when calling the from_pretrained() method as long as your model supports loading with 🌍 Accelerate and contains torch.nn.Linear layers. This should work for any modality as well.
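A sketch, using facebook/opt-350m as an illustrative checkpoint:

```python
from transformers import AutoModelForCausalLM

model_8bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_8bit=True)
model_4bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_4bit=True)
```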
By default all other modules (e.g. torch.nn.LayerNorm) will be converted to torch.float16, but if you want to change their dtype you can overwrite the torch_dtype argument:
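For example (the final_layer_norm attribute path below is specific to OPT models and is shown only as an illustrative check):

```python
import torch
from transformers import AutoModelForCausalLM

model_8bit = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m", load_in_8bit=True, torch_dtype=torch.float32
)
# the non-Linear modules such as LayerNorm are now kept in float32
print(model_8bit.model.decoder.layers[-1].final_layer_norm.weight.dtype)
```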
FP4 quantization
Requirements
Make sure that you have installed the following requirements before running any of the code snippets below.
Install latest bitsandbytes library: pip install bitsandbytes>=0.39.0
Install latest accelerate: pip install --upgrade accelerate
Install latest transformers: pip install --upgrade transformers
Tips and best practices
Advanced usage: Refer to this Google Colab notebook for advanced usage of 4-bit quantization with all the possible options.
Faster inference with batch_size=1: Since the 0.40.0 release of bitsandbytes, for batch_size=1 you can benefit from fast inference. Check out these release notes and make sure to have a version greater than 0.40.0 to benefit from this feature out of the box.
Training: According to the QLoRA paper, for training 4-bit base models (e.g. using LoRA adapters) one should use bnb_4bit_quant_type='nf4'.
Inference: For inference, bnb_4bit_quant_type does not have a huge impact on the performance. However, for consistency with the model's weights, make sure you use the same bnb_4bit_compute_dtype and torch_dtype arguments.
Load a large model in 4bit
By using load_in_4bit=True when calling the .from_pretrained method, you can divide your memory use by 4 (roughly).
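A sketch, using bigscience/bloom-1b7 as an illustrative checkpoint:

```python
from transformers import AutoModelForCausalLM

model_name = "bigscience/bloom-1b7"
model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True)
```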
Note that once a model has been loaded in 4-bit it is currently not possible to push the quantized weights to the Hub. Note also that you cannot train 4-bit weights as this is not supported yet. However, you can use 4-bit models to train extra parameters; this will be covered in the next section.
Load a large model in 8bit
You can load a model and roughly halve the memory requirements by using the load_in_8bit=True argument when calling the .from_pretrained method.
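A sketch with the same illustrative checkpoint:

```python
from transformers import AutoModelForCausalLM

model_name = "bigscience/bloom-1b7"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```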
Then, use your model as you would usually use a PreTrainedModel.
You can check the memory footprint of your model with the get_memory_footprint method.
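For example, continuing from the snippet above:

```python
print(model_8bit.get_memory_footprint())
```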
With this integration we were able to load large models on smaller devices and run them without any issue.
Note that once a model has been loaded in 8-bit it is currently not possible to push the quantized weights to the Hub unless you use the latest transformers and bitsandbytes. Note also that you cannot train 8-bit weights as this is not supported yet. However, you can use 8-bit models to train extra parameters; this will be covered in the next section. Note also that device_map is optional, but setting device_map = 'auto' is preferred for inference as it will efficiently dispatch the model on the available resources.
Advanced use cases
Here we will cover some advanced use cases you can perform with FP4 quantization.
Change the compute dtype
The compute dtype is used to change the dtype that will be used during computation. For example, hidden states could be in float32 but computation can be set to bf16 for speedups. By default, the compute dtype is set to float32.
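A sketch of setting bf16 as the compute dtype:

```python
import torch
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
```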
Using NF4 (Normal Float 4) data type
You can also use the NF4 data type, which is a new 4-bit data type adapted for weights that have been initialized using a normal distribution. For that, run:
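A sketch, where the checkpoint name is illustrative:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
)

model_nf4 = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=nf4_config)
```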
Use nested quantization for more memory-efficient inference
We also advise users to use the nested quantization technique. This saves more memory at no additional performance cost - from our empirical observations, this enables fine-tuning a llama-13b model on an NVIDIA T4 16GB with a sequence length of 1024, a batch size of 1 and gradient accumulation steps of 4.
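A sketch, again with an illustrative checkpoint:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

double_quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
)

model_double_quant = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m", quantization_config=double_quant_config
)
```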
Push quantized models on the 🌍 Hub
You can push a quantized model to the Hub by naively using the push_to_hub method. This will first push the quantization configuration file, then push the quantized model weights. Make sure to use bitsandbytes>0.37.2 (at the time of writing, we tested it on bitsandbytes==0.38.0.post1) to be able to use this feature.
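A sketch, using bigscience/bloom-560m as an illustrative checkpoint and "bloom-560m-8bit" as a placeholder repository name:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", device_map="auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")

model.push_to_hub("bloom-560m-8bit")
tokenizer.push_to_hub("bloom-560m-8bit")
```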
Pushing 8-bit models to the Hub is strongly encouraged for large models. This will allow the community to benefit from the memory footprint reduction and, for example, from loading large models on a Google Colab.
Load a quantized model from the 🌍 Hub
You can load a quantized model from the Hub by using the from_pretrained method. Make sure that the pushed weights are quantized by checking that the attribute quantization_config is present in the model configuration object.
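A sketch, where "{your_username}/bloom-560m-8bit" stands for your own repository:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("{your_username}/bloom-560m-8bit", device_map="auto")
```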
Note that in this case, you don't need to specify the argument load_in_8bit=True, but you need to make sure that bitsandbytes and accelerate are installed. Note also that device_map is optional, but setting device_map = 'auto' is preferred for inference as it will efficiently dispatch the model on the available resources.
Advanced use cases
This section is intended for advanced users who want to explore what is possible beyond loading and running 8-bit models.
Offload between CPU and GPU
One of the advanced use cases is being able to load a model and dispatch the weights between the CPU and GPU. Note that the weights that will be dispatched on the CPU will not be converted to 8-bit, but kept in float32. This feature is intended for users who want to fit a very large model and dispatch it between GPU and CPU.
First, load a BitsAndBytesConfig from transformers and set the attribute llm_int8_enable_fp32_cpu_offload to True:
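A sketch (load_in_8bit=True is included here so that the parts dispatched to the GPU are actually quantized):

```python
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_enable_fp32_cpu_offload=True)
```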
Let's say you want to load the bigscience/bloom-1b7 model, and you have just enough GPU RAM to fit the entire model except the lm_head. Therefore, write a custom device_map as follows:
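A sketch of such a device_map, keeping everything on GPU 0 except the lm_head (the module names are assumed to match the BLOOM architecture):

```python
device_map = {
    "transformer.word_embeddings": 0,
    "transformer.word_embeddings_layernorm": 0,
    "lm_head": "cpu",
    "transformer.h": 0,
    "transformer.ln_f": 0,
}
```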
And load your model as follows:
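For example, combining the quantization_config and device_map defined above:

```python
from transformers import AutoModelForCausalLM

model_8bit = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-1b7",
    device_map=device_map,
    quantization_config=quantization_config,
)
```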
And that’s it! Enjoy your model!
Play with llm_int8_threshold
You can play with the llm_int8_threshold argument to change the threshold of the outliers. An "outlier" is a hidden state value that is greater than a certain threshold. This corresponds to the outlier threshold for outlier detection as described in the LLM.int8() paper. Any hidden state value that is above this threshold will be considered an outlier and the operation on those values will be done in fp16. Values are usually normally distributed, that is, most values are in the range [-3.5, 3.5], but there are some exceptional systematic outliers that are very differently distributed for large models. These outliers are often in the interval [-60, -6] or [6, 60]. Int8 quantization works well for values of magnitude ~5, but beyond that, there is a significant performance penalty. A good default threshold is 6, but a lower threshold might be needed for more unstable models (small models, fine-tuning). This argument can impact the inference speed of the model. We suggest playing with this parameter to find the value that works best for your use case.
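A sketch, using bigscience/bloom-1b7 and an illustrative threshold of 10 (load_in_8bit=True is set explicitly so the model is actually quantized):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "bigscience/bloom-1b7"
quantization_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=10,
)
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=quantization_config,
)
```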
Skip the conversion of some modules
Some models have several modules that need to be kept in their original precision and not converted to 8-bit to ensure stability. For example, the Jukebox model has several lm_head modules that should be skipped. Play with llm_int8_skip_modules.
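A sketch, again with an illustrative checkpoint (load_in_8bit=True is set explicitly so the model is actually quantized):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "bigscience/bloom-1b7"
quantization_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_skip_modules=["lm_head"],
)
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=quantization_config,
)
```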
Fine-tune a model that has been loaded in 8-bit
With the official support of adapters in the BOINC AI ecosystem, you can fine-tune models that have been loaded in 8-bit. This enables fine-tuning large models such as flan-t5-large or facebook/opt-6.7b in a single Google Colab. Please have a look at the peft library for more details.
Note that you don't need to pass device_map when loading the model for training. It will automatically load your model on your GPU. You can also set the device map to a specific device if needed (e.g. cuda:0, 0, torch.device('cuda:0')). Please note that device_map=auto should be used for inference only.
BitsAndBytesConfig
class transformers.BitsAndBytesConfig
( load_in_8bit = False, load_in_4bit = False, llm_int8_threshold = 6.0, llm_int8_skip_modules = None, llm_int8_enable_fp32_cpu_offload = False, llm_int8_has_fp16_weight = False, bnb_4bit_compute_dtype = None, bnb_4bit_quant_type = 'fp4', bnb_4bit_use_double_quant = False, **kwargs )
Parameters
load_in_8bit (bool, optional, defaults to False) — This flag is used to enable 8-bit quantization with LLM.int8().
load_in_4bit (bool, optional, defaults to False) — This flag is used to enable 4-bit quantization by replacing the Linear layers with FP4/NF4 layers from bitsandbytes.
llm_int8_threshold (float, optional, defaults to 6.0) — This corresponds to the outlier threshold for outlier detection as described in the LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale paper: https://arxiv.org/abs/2208.07339. Any hidden state value that is above this threshold will be considered an outlier and the operation on those values will be done in fp16. Values are usually normally distributed, that is, most values are in the range [-3.5, 3.5], but there are some exceptional systematic outliers that are very differently distributed for large models. These outliers are often in the interval [-60, -6] or [6, 60]. Int8 quantization works well for values of magnitude ~5, but beyond that there is a significant performance penalty. A good default threshold is 6, but a lower threshold might be needed for more unstable models (small models, fine-tuning).
llm_int8_skip_modules (List[str], optional) — An explicit list of the modules that we do not want to convert to 8-bit. This is useful for models such as Jukebox that have several heads in different places and not necessarily at the last position. For example, for CausalLM models, the last lm_head is kept in its original dtype.
llm_int8_enable_fp32_cpu_offload (bool, optional, defaults to False) — This flag is used for advanced use cases and users that are aware of this feature. If you want to split your model into different parts and run some parts in int8 on GPU and some parts in fp32 on CPU, you can use this flag. This is useful for offloading large models such as google/flan-t5-xxl. Note that the int8 operations will not be run on CPU.
llm_int8_has_fp16_weight (bool, optional, defaults to False) — This flag runs LLM.int8() with 16-bit main weights. This is useful for fine-tuning as the weights do not have to be converted back and forth for the backward pass.
bnb_4bit_compute_dtype (torch.dtype or str, optional, defaults to torch.float32) — This sets the computational type, which might be different from the input type. For example, inputs might be fp32, but computation can be set to bf16 for speedups.
bnb_4bit_quant_type (str, {fp4, nf4}, defaults to fp4) — This sets the quantization data type in the bnb.nn.Linear4Bit layers. Options are the FP4 and NF4 data types, specified by fp4 or nf4.
bnb_4bit_use_double_quant (bool, optional, defaults to False) — This flag is used for nested quantization where the quantization constants from the first quantization are quantized again.
kwargs (Dict[str, Any], optional) — Additional parameters from which to initialize the configuration object.
This is a wrapper class for all possible attributes and features that you can play with for a model that has been loaded using bitsandbytes.
This replaces load_in_8bit or load_in_4bit, therefore both options are mutually exclusive.
Currently only supports LLM.int8(), FP4, and NF4 quantization. If more methods are added to bitsandbytes, then more arguments will be added to this class.
is_quantizable
( )
Returns True if the model is quantizable, False otherwise.
post_init
( )
Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.
quantization_method
( )
This method returns the quantization method used for the model. If the model is not quantizable, it returns None.
to_diff_dict
( ) → Dict[str, Any]
Returns: Dict[str, Any] — Dictionary of all the attributes that make up this configuration instance.
Removes all attributes from the config which correspond to the default config attributes for better readability and serializes to a Python dictionary.
Quantization with 🌍 optimum
Please have a look at the Optimum documentation to learn more about the quantization methods supported by optimum and see whether they are applicable for your use case.