Utility functions and classes
Below are a variety of utility functions that 🤗 Accelerate provides, broken down by use-case.
Constants
Constants used throughout 🤗 Accelerate for reference.
The following are constants used when utilizing Accelerator.save_state()
utils.MODEL_NAME
: "pytorch_model"
utils.OPTIMIZER_NAME
: "optimizer"
utils.RNG_STATE_NAME
: "random_states"
utils.SCALER_NAME
: "scaler.pt
utils.SCHEDULER_NAME
: "scheduler
The following are constants used when utilizing Accelerator.save_model()
utils.WEIGHTS_NAME
: "pytorch_model.bin"
utils.SAFE_WEIGHTS_NAME
: "model.safetensors"
utils.WEIGHTS_INDEX_NAME
: "pytorch_model.bin.index.json"
utils.SAFE_WEIGHTS_INDEX_NAME
: "model.safetensors.index.json"
Data Classes
These are basic dataclasses used throughout 🤗 Accelerate that can be passed in as parameters.
class accelerate.DistributedType
( value, names = None, module = None, qualname = None, type = None, start = 1 )
Represents a type of distributed environment.
Values:
NO – Not a distributed environment, just a single process.
MULTI_CPU – Distributed on multiple CPU nodes.
MULTI_GPU – Distributed on multiple GPUs.
MULTI_NPU – Distributed on multiple NPUs.
MULTI_XPU – Distributed on multiple XPUs.
DEEPSPEED – Using DeepSpeed.
TPU – Distributed on TPUs.
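A minimal sketch of branching on the active environment at runtime, using the distributed_type attribute of a created Accelerator:

from accelerate import Accelerator, DistributedType

accelerator = Accelerator()
if accelerator.distributed_type == DistributedType.MULTI_GPU:
    # Only taken when the script was launched across multiple GPUs,
    # e.g. with `accelerate launch --multi_gpu script.py`.
    print("Running on multiple GPUs")
elif accelerator.distributed_type == DistributedType.NO:
    print("Running in a single process")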
class accelerate.utils.DynamoBackend
( value, names = None, module = None, qualname = None, type = None, start = 1 )
Represents a dynamo backend (see https://github.com/pytorch/torchdynamo).
Values:
NO – Do not use torch dynamo.
EAGER – Uses PyTorch to run the extracted GraphModule. This is quite useful in debugging TorchDynamo issues.
AOT_EAGER – Uses AotAutograd with no compiler, i.e., just using PyTorch eager for the AotAutograd's extracted forward and backward graphs. This is useful for debugging, and unlikely to give speedups.
INDUCTOR – Uses the TorchInductor backend with AotAutograd and cudagraphs by leveraging codegened Triton kernels.
AOT_TS_NVFUSER – nvFuser with AotAutograd/TorchScript.
NVPRIMS_NVFUSER – nvFuser with PrimTorch.
CUDAGRAPHS – cudagraphs with AotAutograd.
OFI – Uses TorchScript optimize_for_inference. Inference only.
FX2TRT – Uses Nvidia TensorRT for inference optimizations. Inference only.
ONNXRT – Uses ONNXRT for inference on CPU/GPU. Inference only.
TENSORRT – Uses ONNXRT to run TensorRT for inference optimizations.
IPEX – Uses IPEX for inference on CPU. Inference only.
TVM – Uses Apache TVM for inference optimizations.
class accelerate.utils.LoggerType
( value, names = None, module = None, qualname = None, type = None, start = 1 )
Represents a type of supported experiment tracker
Values:
ALL – all available trackers in the environment that are supported
TENSORBOARD – TensorBoard as an experiment tracker
WANDB – wandb as an experiment tracker
COMETML – comet_ml as an experiment tracker
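In practice the tracker is usually selected by name (or "all" for LoggerType.ALL) when building the Accelerator; a short sketch, assuming wandb is installed:

from accelerate import Accelerator

accelerator = Accelerator(log_with="wandb")  # "wandb" corresponds to LoggerType.WANDB
accelerator.init_trackers("my_project", config={"learning_rate": 3e-4})
accelerator.log({"train_loss": 0.42}, step=1)
accelerator.end_training()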
class accelerate.utils.PrecisionType
( value, names = None, module = None, qualname = None, type = None, start = 1 )
Represents a type of precision used on floating point values
Values:
NO – using full precision (FP32)
FP16 – using half precision
BF16 – using brain floating point precision
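These values are normally passed as plain strings to the Accelerator; a minimal sketch:

from accelerate import Accelerator

# "fp16" corresponds to PrecisionType.FP16; "no" and "bf16" are the other options listed above.
accelerator = Accelerator(mixed_precision="fp16")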
class accelerate.utils.ProjectConfiguration
( project_dir: str = None, logging_dir: str = None, automatic_checkpoint_naming: bool = False, total_limit: int = None, iteration: int = 0, save_on_each_node: bool = False )
Configuration for the Accelerator object based on inner-project needs.
set_directories
( project_dir: str = None )
Sets self.project_dir and self.logging_dir to the appropriate values.
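A minimal sketch of wiring a ProjectConfiguration into the Accelerator (directory names are placeholders):

from accelerate import Accelerator
from accelerate.utils import ProjectConfiguration

config = ProjectConfiguration(
    project_dir="my_project",          # where Accelerator.save_state() checkpoints land
    logging_dir="my_project/logs",     # where experiment trackers write their logs
    automatic_checkpoint_naming=True,  # checkpoints/checkpoint_<iteration> style naming
    total_limit=5,                     # keep at most 5 checkpoints around
)
accelerator = Accelerator(project_config=config)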
Plugins
These are plugins that can be passed to the Accelerator object. While they are defined elsewhere in the documentation, for convenience all of them are available to see here:
class accelerate.DeepSpeedPlugin
( ba_ds_config: typing.Any = None, gradient_accumulation_steps: int = None, gradient_clipping: float = None, zero_stage: int = None, is_train_batch_min: str = True, offload_optimizer_device: bool = None, offload_param_device: bool = None, offload_optimizer_nvme_path: str = None, offload_param_nvme_path: str = None, zero3_init_flag: bool = None, zero3_save_16bit_model: bool = None )
This plugin is used to integrate DeepSpeed.
deepspeed_config_process
( prefix = '', mismatches = None, config = None, must_match = True, **kwargs )
Process the DeepSpeed config with the values from the kwargs.
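A sketch of configuring DeepSpeed directly from Python, assuming deepspeed is installed and the script is launched with accelerate launch:

from accelerate import Accelerator, DeepSpeedPlugin

deepspeed_plugin = DeepSpeedPlugin(
    zero_stage=2,                   # ZeRO stage 2: partition optimizer states and gradients
    gradient_accumulation_steps=4,
    gradient_clipping=1.0,
)
accelerator = Accelerator(deepspeed_plugin=deepspeed_plugin, mixed_precision="fp16")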
class accelerate.FullyShardedDataParallelPlugin
( sharding_strategy: typing.Any = None, backward_prefetch: typing.Any = None, mixed_precision_policy: typing.Any = None, auto_wrap_policy: typing.Optional[typing.Callable] = None, cpu_offload: typing.Any = None, ignored_modules: typing.Optional[typing.Iterable[torch.nn.modules.module.Module]] = None, state_dict_type: typing.Any = None, state_dict_config: typing.Any = None, optim_state_dict_config: typing.Any = None, limit_all_gathers: bool = False, use_orig_params: bool = False, param_init_fn: typing.Optional[typing.Callable[[torch.nn.modules.module.Module], NoneType]] = None, sync_module_states: bool = True, forward_prefetch: bool = False, activation_checkpointing: bool = False )
This plugin is used to enable fully sharded data parallelism.
get_module_class_from_name
( module, name )
Parameters
module (torch.nn.Module) – The module to get the class from.
name (str) – The name of the class.
Gets a class from a module by its name.
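A sketch of enabling FSDP through the plugin (a recent PyTorch with the torch.distributed.fsdp API is assumed):

from torch.distributed.fsdp import CPUOffload
from accelerate import Accelerator, FullyShardedDataParallelPlugin

fsdp_plugin = FullyShardedDataParallelPlugin(
    cpu_offload=CPUOffload(offload_params=False),  # keep sharded parameters on the GPU
    use_orig_params=True,
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)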
class accelerate.utils.GradientAccumulationPlugin
( num_steps: int = None, adjust_scheduler: bool = True, sync_with_dataloader: bool = True )
A plugin to configure gradient accumulation behavior.
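A sketch of using the plugin in place of the plain gradient_accumulation_steps argument:

from accelerate import Accelerator
from accelerate.utils import GradientAccumulationPlugin

ga_plugin = GradientAccumulationPlugin(
    num_steps=4,                # accumulate gradients over 4 batches
    adjust_scheduler=True,      # step the scheduler as if the batches were 4x larger
    sync_with_dataloader=True,  # always synchronize on the last batch of the dataloader
)
accelerator = Accelerator(gradient_accumulation_plugin=ga_plugin)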
class accelerate.utils.MegatronLMPlugin
( tp_degree: int = None, pp_degree: int = None, num_micro_batches: int = None, gradient_clipping: float = None, sequence_parallelism: bool = None, recompute_activation: bool = None, use_distributed_optimizer: bool = None, pipeline_model_parallel_split_rank: int = None, num_layers_per_virtual_pipeline_stage: int = None, is_train_batch_min: str = True, train_iters: int = None, train_samples: int = None, weight_decay_incr_style: str = 'constant', start_weight_decay: float = None, end_weight_decay: float = None, lr_decay_style: str = 'linear', lr_decay_iters: int = None, lr_decay_samples: int = None, lr_warmup_iters: int = None, lr_warmup_samples: int = None, lr_warmup_fraction: float = None, min_lr: float = 0, consumed_samples: typing.List[int] = None, no_wd_decay_cond: typing.Optional[typing.Callable] = None, scale_lr_cond: typing.Optional[typing.Callable] = None, lr_mult: float = 1.0, megatron_dataset_flag: bool = False, seq_length: int = None, encoder_seq_length: int = None, decoder_seq_length: int = None, tensorboard_dir: str = None, set_all_logging_options: bool = False, eval_iters: int = 100, eval_interval: int = 1000, return_logits: bool = False, custom_train_step_class: typing.Optional[typing.Any] = None, custom_train_step_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None, custom_model_provider_function: typing.Optional[typing.Callable] = None, custom_prepare_model_function: typing.Optional[typing.Callable] = None, other_megatron_args: typing.Union[typing.Dict[str, typing.Any], NoneType] = None )
Plugin for Megatron-LM to enable tensor, pipeline, sequence, and data parallelism, as well as selective activation recomputation and optimized fused kernels.
class accelerate.utils.TorchDynamoPlugin
( backend: DynamoBackend = None, mode: str = None, fullgraph: bool = None, dynamic: bool = None, options: typing.Any = None, disable: bool = False )
This plugin is used to compile a model with PyTorch 2.0.
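A minimal sketch, assuming PyTorch 2.0+ is installed; the backend name maps onto the DynamoBackend values listed earlier:

import torch
from accelerate import Accelerator

accelerator = Accelerator(dynamo_backend="inductor")  # a TorchDynamoPlugin is built from this
model = accelerator.prepare(torch.nn.Linear(8, 2))    # prepare() applies torch.compile with this backend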
Data Manipulation and Operations
These include data operations that mimic the same torch ops but can be used on distributed processes.
accelerate.utils.broadcast
( tensor, from_process: int = 0 )
Parameters
tensor (nested list/tuple/dictionary of torch.Tensor) – The data to broadcast.
from_process (int, optional, defaults to 0) – The process from which to send the data.
Recursively broadcast tensor in a nested list/tuple/dictionary of tensors to all devices.
accelerate.utils.concatenate
( data, dim = 0 )
Parameters
data (nested list/tuple/dictionary of lists of torch.Tensor) – The data to concatenate.
dim (int, optional, defaults to 0) – The dimension on which to concatenate.
Recursively concatenate the tensors in a nested list/tuple/dictionary of lists of tensors with the same shape.
accelerate.utils.gather
( tensor )
Parameters
tensor (nested list/tuple/dictionary of torch.Tensor) – The data to gather.
Recursively gather tensor in a nested list/tuple/dictionary of tensors from all devices.
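A common pattern is gathering per-process predictions before computing a metric; a minimal sketch with dummy data:

import torch
from accelerate import Accelerator
from accelerate.utils import gather

accelerator = Accelerator()
# Each process holds its own slice of predictions.
local_preds = torch.arange(4, device=accelerator.device) + accelerator.process_index * 4
all_preds = gather(local_preds)  # length 4 * num_processes on every process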
accelerate.utils.pad_across_processes
( tensor, dim = 0, pad_index = 0, pad_first = False )
Parameters
tensor (nested list/tuple/dictionary of torch.Tensor) – The data to gather.
dim (int, optional, defaults to 0) – The dimension on which to pad.
pad_index (int, optional, defaults to 0) – The value with which to pad.
pad_first (bool, optional, defaults to False) – Whether to pad at the beginning or the end.
Recursively pad the tensors in a nested list/tuple/dictionary of tensors from all devices to the same size so they can safely be gathered.
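A sketch of padding variable-length outputs so they can be gathered afterwards (dummy data; real code would pad model logits or labels):

import torch
from accelerate import Accelerator
from accelerate.utils import gather, pad_across_processes

accelerator = Accelerator()
logits = torch.ones(2, 5 + accelerator.process_index, device=accelerator.device)
logits = pad_across_processes(logits, dim=1, pad_index=0)  # pad dim 1 to the longest length seen
all_logits = gather(logits)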
accelerate.utils.reduce
( tensor, reduction = 'mean', scale = 1.0 )
Parameters
tensor (nested list/tuple/dictionary of torch.Tensor) – The data to reduce.
reduction (str, optional, defaults to "mean") – A reduction method. Can be one of "mean", "sum", or "none".
scale (float, optional) – A default scaling value to be applied after the reduce, only valid on XLA.
Recursively reduce the tensors in a nested list/tuple/dictionary of lists of tensors across all processes using the given operation.
accelerate.utils.send_to_device
( tensor, device, non_blocking = False, skip_keys = None )
Parameters
tensor (nested list/tuple/dictionary of torch.Tensor) – The data to send to a given device.
device (torch.device) – The device to send the data to.
Recursively sends the elements in a nested list/tuple/dictionary of tensors to a given device.
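A small sketch of moving a nested batch, as typically returned by a dataloader, onto the accelerator's device:

import torch
from accelerate import Accelerator
from accelerate.utils import send_to_device

accelerator = Accelerator()
batch = {"input_ids": torch.ones(2, 8, dtype=torch.long), "labels": torch.zeros(2, dtype=torch.long)}
batch = send_to_device(batch, accelerator.device)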
Environment Checks
These functionalities check the state of the current working environment including information about the operating system itself, what it can support, and if particular dependencies are installed.
accelerate.utils.is_bf16_available
( ignore_tpu = False )
Checks if bf16 is supported, optionally ignoring the TPU
accelerate.utils.is_ipex_available
( )
accelerate.utils.is_mps_available
( )
accelerate.utils.is_npu_available
( check_device = False )
Checks if torch_npu is installed and potentially if an NPU is in the environment.
accelerate.utils.is_torch_version
( operation: str, version: str )
Parameters
operation (str) – A string representation of an operator, such as ">" or "<=".
version (str) – A string version of PyTorch.
Compares the current PyTorch version to a given reference with an operation.
accelerate.utils.is_tpu_available
( check_device = True )
Checks if torch_xla is installed and potentially if a TPU is in the environment.
accelerate.utils.is_xpu_available
( check_device = False )
Checks if an XPU is available, first checking whether the user has disabled it explicitly.
Environment Manipulation
accelerate.utils.patch_environment
( **kwargs )
A context manager that will add each keyword argument passed to os.environ and remove them when exiting.
Will convert the values in kwargs to strings and upper-case all the keys.
Example:
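A minimal sketch of the intended usage:

import os
from accelerate.utils import patch_environment

with patch_environment(foo="bar"):
    print(os.environ["FOO"])  # keys are upper-cased, so this prints "bar"
# on exit the added keys are removed again, so os.environ["FOO"] would now raise KeyError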
accelerate.utils.clear_environment
( )
A context manager that will cache the original os.environ and replace it with an empty dictionary in this context.
When this context exits, the cached os.environ is restored.
Example:
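A minimal sketch:

import os
from accelerate.utils import clear_environment

os.environ["FOO"] = "bar"
with clear_environment():
    print("FOO" in os.environ)  # False: os.environ is empty inside the context
print(os.environ["FOO"])        # "bar": the cached environment is restored on exit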
accelerate.commands.config.default.write_basic_config
( mixed_precision = 'no', save_location: str = '/github/home/.cache/boincai/accelerate/default_config.yaml', use_xpu: bool = False )
Parameters
mixed_precision (str, optional, defaults to "no") – Mixed precision to use. Should be one of "no", "fp16", or "bf16".
save_location (str, optional, defaults to default_json_config_file) – Optional custom save location. Should be passed to --config_file when using accelerate launch. Default location is inside the boincai cache folder (~/.cache/boincai) but can be overridden by setting the BA_HOME environmental variable, followed by accelerate/default_config.yaml.
use_xpu (bool, optional, defaults to False) – Whether to use XPU if available.
Creates and saves a basic cluster config to be used on a local machine with potentially multiple GPUs. Will also set CPU if it is a CPU-only machine.
When setting up 🤗 Accelerate for the first time, rather than running accelerate config, write_basic_config() can be used as an alternative for quick configuration.
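For example, a sketch of generating a default single-machine config from Python (assuming the helper is also re-exported under accelerate.utils, as in recent releases):

from accelerate.utils import write_basic_config

# Writes a default cluster config with fp16 mixed precision to the default location.
write_basic_config(mixed_precision="fp16")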
Memory
accelerate.utils.get_max_memory
( max_memory: typing.Union[typing.Dict[typing.Union[int, str], typing.Union[int, str]], NoneType] = None )
Get the maximum memory available if nothing is passed, converts string to int otherwise.
accelerate.find_executable_batch_size
( function: callable = None, starting_batch_size: int = 128 )
Parameters
function (callable, optional) – A function to wrap.
starting_batch_size (int, optional) – The batch size to try and fit into memory.
A basic decorator that will try to execute function. If it fails from exceptions related to out-of-memory or CUDNN, the batch size is cut in half and passed to function again.
function must take in a batch_size parameter as its first argument.
Example:
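A minimal sketch; the decorated function receives the current batch_size as its first argument and is retried with half the value after an out-of-memory failure:

from accelerate import Accelerator, find_executable_batch_size

accelerator = Accelerator()

@find_executable_batch_size(starting_batch_size=128)
def train(batch_size):
    print(f"Trying batch size {batch_size}")
    ...  # build dataloaders with `batch_size` and run the training loop here

train()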
Modeling
These utilities relate to interacting with PyTorch models
accelerate.utils.extract_model_from_parallel
( model, keep_fp32_wrapper: bool = True ) → torch.nn.Module
Parameters
model (torch.nn.Module) – The model to extract.
keep_fp32_wrapper (bool, optional) – Whether to remove mixed precision hooks from the model.
Returns
torch.nn.Module
The extracted model.
Extract a model from its distributed containers.
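A typical use is unwrapping a prepared model before saving its state dict (Accelerator.unwrap_model() offers the same behavior); a short sketch:

import torch
from accelerate import Accelerator
from accelerate.utils import extract_model_from_parallel

accelerator = Accelerator()
model = accelerator.prepare(torch.nn.Linear(8, 2))
unwrapped = extract_model_from_parallel(model)  # strips DDP and similar containers
torch.save(unwrapped.state_dict(), "weights.pt")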
accelerate.utils.get_max_layer_size
( modules: typing.List[typing.Tuple[str, torch.nn.modules.module.Module]], module_sizes: typing.Dict[str, int], no_split_module_classes: typing.List[str] ) → Tuple[int, List[str]]
Parameters
modules (List[Tuple[str, torch.nn.Module]]) – The list of named modules where we want to determine the maximum layer size.
module_sizes (Dict[str, int]) – A dictionary mapping each layer name to its size (as generated by compute_module_sizes).
no_split_module_classes (List[str]) – A list of class names for layers we don't want to be split.
Returns
Tuple[int, List[str]]
The maximum size of a layer with the list of layer names realizing that maximum size.
Utility function that will scan a list of named modules and return the maximum size used by one full layer. The definition of a layer being:
a module with no direct children (just parameters and buffers)
a module whose class name is in the list no_split_module_classes
accelerate.utils.offload_state_dict
( save_dir: typing.Union[str, os.PathLike], state_dict: typing.Dict[str, torch.Tensor] )
Parameters
save_dir (str or os.PathLike) – The directory in which to offload the state dict.
state_dict (Dict[str, torch.Tensor]) – The dictionary of tensors to offload.
Offload a state dict in a given folder.
Parallel
These include general utilities that should be used when working in parallel.
accelerate.utils.extract_model_from_parallel
( model, keep_fp32_wrapper: bool = True ) → torch.nn.Module
Parameters
model (torch.nn.Module) – The model to extract.
keep_fp32_wrapper (bool, optional) – Whether to remove mixed precision hooks from the model.
Returns
torch.nn.Module
The extracted model.
Extract a model from its distributed containers.
accelerate.utils.save
( obj, f, save_on_each_node: bool = False, safe_serialization: bool = False )
Parameters
save_on_each_node (bool, optional, defaults to False) – Whether to only save on the global main process.
safe_serialization (bool, optional, defaults to False) – Whether to save obj using safetensors.
Save the data to disk. Use in place of torch.save().
accelerate.utils.wait_for_everyone
( )
Introduces a blocking point in the script, making sure all processes have reached this point before continuing.
Make sure all processes will reach this instruction otherwise one of your processes will hang forever.
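A typical sketch, making every process wait until the main process has finished writing files:

from accelerate import Accelerator
from accelerate.utils import wait_for_everyone

accelerator = Accelerator()
if accelerator.is_main_process:
    ...  # e.g. save a checkpoint or download a file once
wait_for_everyone()  # all processes block here until every one of them has arrived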
Random
These utilities relate to setting and synchronizing of all the random states.
accelerate.utils.set_seed
( seed: int, device_specific: bool = False )
Parameters
seed (int) – The seed to set.
device_specific (bool, optional, defaults to False) – Whether to differ the seed on each device slightly with self.process_index.
Helper function for reproducible behavior to set the seed in random, numpy, and torch.
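A minimal sketch:

from accelerate.utils import set_seed

set_seed(42)                        # identical seed on every process
set_seed(42, device_specific=True)  # offsets the seed by the process index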
accelerate.utils.synchronize_rng_state
( rng_type: typing.Optional[accelerate.utils.dataclasses.RNGType] = None, generator: typing.Optional[torch._C.Generator] = None )
accelerate.synchronize_rng_states
( rng_types: typing.List[typing.Union[str, accelerate.utils.dataclasses.RNGType]], generator: typing.Optional[torch._C.Generator] = None )
PyTorch XLA
These include utilities that are useful while using PyTorch with XLA.
accelerate.utils.install_xla
( upgrade: bool = False )
Parameters
upgrade (bool, optional, defaults to False) – Whether to upgrade torch and install the latest torch_xla wheels.
Helper function to install appropriate xla wheels based on the torch version in Google Colaboratory.
Example:
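A minimal sketch, meant to be run in a Colab notebook cell:

from accelerate.utils import install_xla

install_xla(upgrade=True)  # upgrades torch and installs matching torch_xla wheels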
Loading model weights
These include utilities that are useful to load checkpoints.
accelerate.load_checkpoint_in_model
( model: Module, checkpoint: typing.Union[str, os.PathLike], device_map: typing.Union[typing.Dict[str, typing.Union[int, str, torch.device]], NoneType] = None, offload_folder: typing.Union[str, os.PathLike, NoneType] = None, dtype: typing.Union[str, torch.dtype, NoneType] = None, offload_state_dict: bool = False, offload_buffers: bool = False, keep_in_fp32_modules: typing.List[str] = None, offload_8bit_bnb: bool = False )
Parameters
model (torch.nn.Module) – The model in which we want to load a checkpoint.
checkpoint (str or os.PathLike) – The folder checkpoint to load. It can be:
a path to a file containing a whole model state dict
a path to a .json file containing the index to a sharded checkpoint
a path to a folder containing a unique .index.json file and the shards of a checkpoint
a path to a folder containing a unique pytorch_model.bin or a model.safetensors file
device_map (Dict[str, Union[int, str, torch.device]], optional) – A map that specifies where each submodule should go. It doesn't need to be refined to each parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the same device.
offload_folder (str or os.PathLike, optional) – If the device_map contains any value "disk", the folder where we will offload weights.
dtype (str or torch.dtype, optional) – If provided, the weights will be converted to that type when loaded.
offload_state_dict (bool, optional, defaults to False) – If True, will temporarily offload the CPU state dict on the hard drive to avoid running out of CPU RAM if the weight of the CPU state dict + the biggest shard does not fit.
offload_buffers (bool, optional, defaults to False) – Whether or not to include the buffers in the weights offloaded to disk.
keep_in_fp32_modules (List[str], optional) – A list of the modules that we keep in torch.float32 dtype.
offload_8bit_bnb (bool, optional) – Whether or not to enable offload of 8-bit modules on CPU/disk.
Loads a (potentially sharded) checkpoint inside a model, potentially sending weights to a given device as they are loaded.
Once loaded across devices, you still need to call dispatch_model() on your model to make it able to run. To group the checkpoint loading and dispatch in one single call, use load_checkpoint_and_dispatch().
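A sketch with a toy model; the checkpoint folder and device map are placeholders for a real sharded checkpoint:

import torch
from accelerate import load_checkpoint_in_model

model = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.Linear(32, 2))
load_checkpoint_in_model(
    model,
    checkpoint="my_checkpoint_folder",       # folder with an *.index.json file and shards
    device_map={"0": "cuda:0", "1": "cpu"},  # first layer on GPU 0, second on CPU
    dtype=torch.float16,
)
# dispatch_model() (or load_checkpoint_and_dispatch()) is still needed before running inference.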
Quantization
These include utilities that are useful to quantize a model.
accelerate.utils.load_and_quantize_model
( model: Module, bnb_quantization_config: BnbQuantizationConfig, weights_location: typing.Union[str, os.PathLike] = None, device_map: typing.Union[typing.Dict[str, typing.Union[int, str, torch.device]], NoneType] = None, no_split_module_classes: typing.Optional[typing.List[str]] = None, max_memory: typing.Union[typing.Dict[typing.Union[int, str], typing.Union[int, str]], NoneType] = None, offload_folder: typing.Union[str, os.PathLike, NoneType] = None, offload_state_dict: bool = False ) → torch.nn.Module
Parameters
model (torch.nn.Module) – Input model. The model can be already loaded or on the meta device.
bnb_quantization_config (BnbQuantizationConfig) – The bitsandbytes quantization parameters.
weights_location (str or os.PathLike) – The folder weights_location to load. It can be:
a path to a file containing a whole model state dict
a path to a .json file containing the index to a sharded checkpoint
a path to a folder containing a unique .index.json file and the shards of a checkpoint
a path to a folder containing a unique pytorch_model.bin file
device_map (Dict[str, Union[int, str, torch.device]], optional) – A map that specifies where each submodule should go. It doesn't need to be refined to each parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the same device.
no_split_module_classes (List[str], optional) – A list of layer class names that should never be split across devices (for instance any layer that has a residual connection).
max_memory (Dict, optional) – A dictionary mapping device identifiers to maximum memory. Will default to the maximum memory available if unset.
offload_folder (str or os.PathLike, optional) – If the device_map contains any value "disk", the folder where we will offload weights.
offload_state_dict (bool, optional, defaults to False) – If True, will temporarily offload the CPU state dict on the hard drive to avoid running out of CPU RAM if the weight of the CPU state dict + the biggest shard does not fit.
Returns
torch.nn.Module
The quantized model
This function will quantize the input model with the associated config passed in bnb_quantization_config. If the model is in the meta device, we will load and dispatch the weights according to the device_map passed. If the model is already loaded, we will quantize the model and put the model on the GPU.
class accelerate.utils.BnbQuantizationConfig
( load_in_8bit: bool = False, llm_int8_threshold: float = 6.0, load_in_4bit: bool = False, bnb_4bit_quant_type: str = 'fp4', bnb_4bit_use_double_quant: bool = False, bnb_4bit_compute_dtype: bool = 'fp16', torch_dtype: dtype = None, skip_modules: typing.List[str] = None, keep_in_fp32_modules: typing.List[str] = None )
A plugin to enable BitsAndBytes 4bit and 8bit quantization
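A sketch of 8-bit quantization with these utilities, assuming bitsandbytes and a CUDA GPU are available; the weights path is a placeholder:

import torch
from accelerate import init_empty_weights
from accelerate.utils import BnbQuantizationConfig, load_and_quantize_model

with init_empty_weights():
    empty_model = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.Linear(32, 2))

bnb_config = BnbQuantizationConfig(load_in_8bit=True, llm_int8_threshold=6.0)
quantized_model = load_and_quantize_model(
    empty_model,
    bnb_quantization_config=bnb_config,
    weights_location="path/to/weights",  # folder or file holding the saved state dict
    device_map={"": 0},                  # put the whole model on GPU 0
)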