# Fully Sharded Data Parallelism Utilities

## Utilities for Fully Sharded Data Parallelism

#### class accelerate.FullyShardedDataParallelPlugin

[\<source>](https://github.com/huggingface/accelerate/blob/v0.24.0/src/accelerate/utils/dataclasses.py#L801)

( sharding\_strategy: typing.Any = None, backward\_prefetch: typing.Any = None, mixed\_precision\_policy: typing.Any = None, auto\_wrap\_policy: typing.Optional\[typing.Callable] = None, cpu\_offload: typing.Any = None, ignored\_modules: typing.Optional\[typing.Iterable\[torch.nn.modules.module.Module]] = None, state\_dict\_type: typing.Any = None, state\_dict\_config: typing.Any = None, optim\_state\_dict\_config: typing.Any = None, limit\_all\_gathers: bool = False, use\_orig\_params: bool = False, param\_init\_fn: typing.Optional\[typing.Callable\[\[torch.nn.modules.module.Module], NoneType]] = None, sync\_module\_states: bool = True, forward\_prefetch: bool = False, activation\_checkpointing: bool = False )

This plugin is used to enable fully sharded data parallelism (FSDP), which shards a model's parameters, gradients, and optimizer states across data-parallel workers.

**get\_module\_class\_from\_name**

[\<source>](https://github.com/huggingface/accelerate/blob/v0.24.0/src/accelerate/utils/dataclasses.py#L937)

( module, name )

Parameters

* **module** (`torch.nn.Module`) — The module to get the class from.
* **name** (`str`) — The name of the class.

Gets a class from a module by its name, searching the module's submodules recursively.
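The lookup can be sketched as a depth-first search over a module's children. The tiny `Module` class below is a hypothetical stand-in for `torch.nn.Module`, used only so the sketch runs without `torch`:

```python
class Module:
    """Hypothetical stand-in for torch.nn.Module, just enough for the sketch."""

    def __init__(self, *children):
        self._children = list(children)

    def children(self):
        return self._children


def get_module_class_from_name(module, name):
    """Return the class whose __name__ matches `name`, searching recursively."""
    if module.__class__.__name__ == name:
        return module.__class__
    for child in module.children():
        found = get_module_class_from_name(child, name)
        if found is not None:
            return found
    return None


class TransformerBlock(Module):
    pass


# A toy model: a root Module containing two TransformerBlock children.
model = Module(TransformerBlock(), TransformerBlock())
get_module_class_from_name(model, "TransformerBlock")  # -> the TransformerBlock class
```

In `accelerate`, this helper is used, for example, to resolve the transformer layer class named in the FSDP auto-wrap configuration into the actual class object needed by the wrap policy.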
