Loaders
Adapters (textual inversion, LoRA, hypernetworks) allow you to modify a diffusion model to generate images in a specific style without training or finetuning the entire model. The adapter weights are typically only a tiny fraction of the pretrained model’s, which makes them very portable. 🌍 Diffusers provides an easy-to-use LoaderMixin API to load adapter weights.
🧪 The LoaderMixins are highly experimental and prone to future changes. To use private or gated models, log in with boincai-cli login.
UNet2DConditionLoadersMixin
class diffusers.loaders.UNet2DConditionLoadersMixin
( )
load_attn_procs
( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]], **kwargs )
Parameters
pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — Can be either:
A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on the Hub.
A path to a directory (for example ./my_model_directory) containing the model weights saved with ModelMixin.save_pretrained().
cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
use_auth_token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.boincai) is used.
low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — Speed up model loading by only loading the pretrained weights and not initializing the weights. This also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this argument to True will raise an error.
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.
mirror (str, optional) — Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information.
Load pretrained attention processor layers into UNet2DConditionModel. Attention processor layers have to be defined in attention_processor.py and be a torch.nn.Module class.
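Example:
A minimal sketch of loading LoRA attention processors into a pipeline’s UNet (the repository id below is illustrative; any Hub repo or local directory containing weights saved with save_attn_procs() works):

from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load LoRA attention processor weights into the UNet.
pipe.unet.load_attn_procs("sayakpaul/sd-model-finetuned-lora-t4")

image = pipe("A pokemon with blue eyes", num_inference_steps=25).images[0]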
save_attn_procs
( save_directory: typing.Union[str, os.PathLike], is_main_process: bool = True, weight_name: str = None, save_function: typing.Callable = None, safe_serialization: bool = True, **kwargs )
Parameters
save_directory (str or os.PathLike) — Directory to save an attention processor to. Will be created if it doesn’t exist.
is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful during distributed training when you need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.
save_function (Callable) — The function to use to save the state dictionary. Useful during distributed training when you need to replace torch.save with another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.
safe_serialization (bool, optional, defaults to True) — Whether to save the model using safetensors or the traditional PyTorch way with pickle.
Save an attention processor to a directory so that it can be reloaded using the load_attn_procs() method.
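Example:
A minimal sketch that loads attention processors and saves them back to a local directory (the repository id and directory name are illustrative):

from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.unet.load_attn_procs("sayakpaul/sd-model-finetuned-lora-t4")

# Save the attention processors so they can be reloaded later with load_attn_procs().
pipe.unet.save_attn_procs("./my_attn_procs")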
TextualInversionLoaderMixin
class diffusers.loaders.TextualInversionLoaderMixin
( )
Load textual inversion tokens and embeddings to the tokenizer and text encoder.
load_textual_inversion
( pretrained_model_name_or_path: typing.Union[str, typing.List[str], typing.Dict[str, torch.Tensor], typing.List[typing.Dict[str, torch.Tensor]]], token: typing.Union[str, typing.List[str], NoneType] = None, tokenizer: typing.Optional[transformers.tokenization_utils.PreTrainedTokenizer] = None, text_encoder: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — Can be either one of the following or a list of them:
A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a pretrained model hosted on the Hub.
A path to a directory (for example ./my_text_inversion_directory/) containing the textual inversion weights.
A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights.
token (str or List[str], optional) — Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a list, then token must also be a list of equal length.
text_encoder (CLIPTextModel, optional) — Frozen text encoder (clip-vit-large-patch14). If not specified, the function will use self.text_encoder.
tokenizer (CLIPTokenizer, optional) — A CLIPTokenizer to tokenize text. If not specified, the function will use self.tokenizer.
weight_name (str, optional) — Name of a custom weight file. This should be used when:
The saved textual inversion file is in 🌍 Diffusers format, but was saved under a specific weight name such as text_inv.bin.
The saved textual inversion file is in the Automatic1111 format.
cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
use_auth_token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.boincai) is used.
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.
mirror (str, optional) — Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information.
Load textual inversion embeddings into the text encoder of StableDiffusionPipeline (both 🌍 Diffusers and Automatic1111 formats are supported).
Example:
To load a textual inversion embedding vector in 🌍 Diffusers format:
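A minimal sketch (the base model and concept repository shown are illustrative):

from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a textual inversion embedding from the Hub; the token (here <cat-toy>)
# is taken from the repository.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

image = pipe("A <cat-toy> backpack", num_inference_steps=50).images[0]
image.save("cat-backpack.png")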
To load a textual inversion embedding vector in Automatic1111 format, make sure to download the vector first (for example from CivitAI) and then load the vector locally:
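A minimal sketch, assuming the embedding has already been downloaded to a local file (the file name and token below are illustrative):

from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load an Automatic1111-format embedding from a local file and assign it a token.
pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")

image = pipe("charturnerv2, multiple views of the same character", num_inference_steps=50).images[0]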
maybe_convert_prompt
( prompt: typing.Union[str, typing.List[str]], tokenizer: PreTrainedTokenizer ) → str or list of str
Parameters
prompt (str or list of str) — The prompt or prompts to guide the image generation.
tokenizer (PreTrainedTokenizer) — The tokenizer responsible for encoding the prompt into input tokens.
Returns
str or list of str — The converted prompt.
Processes prompts that include a special token corresponding to a multi-vector textual inversion embedding to be replaced with multiple special tokens each corresponding to one of the vectors. If the prompt has no textual inversion token or if the textual inversion token is a single vector, the input prompt is returned.
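Example:
A minimal sketch (the concept repository is illustrative; this method is usually called internally by the pipeline):

from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

# A multi-vector token such as <cat-toy> is expanded into <cat-toy> <cat-toy>_1 ...;
# single-vector tokens and plain prompts are returned unchanged.
prompt = pipe.maybe_convert_prompt("A <cat-toy> backpack", pipe.tokenizer)
print(prompt)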
LoraLoaderMixin
class diffusers.loaders.LoraLoaderMixin
( )
Load LoRA layers into UNet2DConditionModel and CLIPTextModel.
fuse_lora
( fuse_unet: bool = True, fuse_text_encoder: bool = True, lora_scale: float = 1.0 )
Parameters
fuse_unet (bool, defaults to True) — Whether to fuse the UNet LoRA parameters.
fuse_text_encoder (bool, defaults to True) — Whether to fuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the LoRA parameters then it won’t have any effect.
lora_scale (float, defaults to 1.0) — Controls how much to influence the outputs with the LoRA parameters.
Fuses the LoRA parameters into the original parameters of the corresponding blocks.
This is an experimental API.
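Example:
A minimal sketch that loads a LoRA, fuses it for inference, and unfuses it afterwards (the LoRA repository id and weight file name are illustrative):

from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors")

# Fold the LoRA weights into the UNet/text encoder parameters so that inference
# runs without the extra LoRA matmuls.
pipe.fuse_lora(lora_scale=0.7)
image = pipe("pixel art, a cute corgi", num_inference_steps=30).images[0]

# Restore the original, unfused parameters (see unfuse_lora() below).
pipe.unfuse_lora()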
load_lora_into_text_encoder
( state_dict, network_alphas, text_encoder, prefix = None, lora_scale = 1.0, low_cpu_mem_usage = None )
Parameters
state_dict (dict) — A standard state dict containing the lora layer parameters. The keys should be prefixed with an additional text_encoder to distinguish them from the unet lora layers.
network_alphas (Dict[str, float]) — See LoRALinearLayer for more details.
text_encoder (CLIPTextModel) — The text encoder model to load the LoRA layers into.
prefix (str) — Expected prefix of the text_encoder in the state_dict.
lora_scale (float) — How much to scale the output of the lora linear layer before it is added with the output of the regular lora layer.
low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — Speed up model loading by only loading the pretrained weights and not initializing the weights. This also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this argument to True will raise an error.
This will load the LoRA layers specified in state_dict into text_encoder.
load_lora_into_unet
( state_dict, network_alphas, unet, low_cpu_mem_usage = None )
Parameters
state_dict (dict) — A standard state dict containing the lora layer parameters. The keys can either be indexed directly into the unet or prefixed with an additional unet, which can be used to distinguish them from the text encoder lora layers.
network_alphas (Dict[str, float]) — See LoRALinearLayer for more details.
unet (UNet2DConditionModel) — The UNet model to load the LoRA layers into.
low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — Speed up model loading by only loading the pretrained weights and not initializing the weights. This also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this argument to True will raise an error.
This will load the LoRA layers specified in state_dict into unet.
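Example:
These lower-level hooks are typically used together with lora_state_dict() (documented below); a minimal sketch, with an illustrative LoRA repository id:

from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Fetch the raw LoRA state dict and network alphas, then load them manually.
state_dict, network_alphas = StableDiffusionPipeline.lora_state_dict(
    "sayakpaul/sd-model-finetuned-lora-t4"
)
StableDiffusionPipeline.load_lora_into_unet(state_dict, network_alphas, unet=pipe.unet)
StableDiffusionPipeline.load_lora_into_text_encoder(
    state_dict, network_alphas, text_encoder=pipe.text_encoder
)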
load_lora_weights
( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]], **kwargs )
Parameters
pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — See lora_state_dict().
kwargs (dict, optional) — See lora_state_dict().
Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and self.text_encoder.
All kwargs are forwarded to self.lora_state_dict.
See lora_state_dict() for more details on how the state dict is loaded.
See load_lora_into_unet() for more details on how the state dict is loaded into self.unet.
See load_lora_into_text_encoder() for more details on how the state dict is loaded into self.text_encoder.
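Example:
A minimal sketch (the LoRA repository id is illustrative; a local directory or an already-loaded state dict can be passed instead):

from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load LoRA weights into both the UNet and the text encoder.
pipe.load_lora_weights("sayakpaul/sd-model-finetuned-lora-t4")

image = pipe(
    "A pokemon with blue eyes",
    num_inference_steps=25,
    cross_attention_kwargs={"scale": 0.8},  # LoRA strength
).images[0]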
lora_state_dict
( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]], **kwargs )
Parameters
pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — Can be either:
A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on the Hub.
A path to a directory (for example ./my_model_directory) containing the model weights saved with ModelMixin.save_pretrained().
cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
use_auth_token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.boincai) is used.
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.
low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — Speed up model loading by only loading the pretrained weights and not initializing the weights. This also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this argument to True will raise an error.
mirror (str, optional) — Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information.
Return state dict for lora weights and the network alphas.
We support loading A1111 formatted LoRA checkpoints in a limited capacity.
This function is experimental and might change in the future.
save_lora_weights
( save_directory: typing.Union[str, os.PathLike], unet_lora_layers: typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None, text_encoder_lora_layers: typing.Dict[str, torch.nn.modules.module.Module] = None, is_main_process: bool = True, weight_name: str = None, save_function: typing.Callable = None, safe_serialization: bool = True )
Parameters
save_directory (str or os.PathLike) — Directory to save LoRA parameters to. Will be created if it doesn’t exist.
unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the unet.
text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text encoder LoRA state dict because it comes from 🌍 Transformers.
is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful during distributed training when you need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.
save_function (Callable) — The function to use to save the state dictionary. Useful during distributed training when you need to replace torch.save with another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.
safe_serialization (bool, optional, defaults to True) — Whether to save the model using safetensors or the traditional PyTorch way with pickle.
Save the LoRA parameters corresponding to the UNet and text encoder.
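Example:
A minimal sketch; in practice the LoRA layer state dicts come from a training loop, so the entry below is a pure placeholder:

import torch
from diffusers import StableDiffusionPipeline

# Placeholder LoRA state dict; real keys and tensors come from LoRA training.
unet_lora_layers = {"mid_block.attentions.0.to_q.lora.up.weight": torch.zeros(4, 4)}

StableDiffusionPipeline.save_lora_weights(
    save_directory="./my_lora",
    unet_lora_layers=unet_lora_layers,
    safe_serialization=True,
)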
unfuse_lora
( unfuse_unet: bool = True, unfuse_text_encoder: bool = True )
Parameters
unfuse_unet (bool, defaults to True) — Whether to unfuse the UNet LoRA parameters.
unfuse_text_encoder (bool, defaults to True) — Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the LoRA parameters then it won’t have any effect.
Reverses the effect of pipe.fuse_lora().
This is an experimental API.
unload_lora_weights
( )
Unloads the LoRA parameters.
Examples:
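A minimal sketch (the LoRA repository id is illustrative):

from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.load_lora_weights("sayakpaul/sd-model-finetuned-lora-t4")

# Remove the LoRA parameters and restore the original attention processors.
pipe.unload_lora_weights()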
FromSingleFileMixin
class diffusers.loaders.FromSingleFileMixin
( )
Load model weights saved in the .ckpt format into a DiffusionPipeline.
from_single_file
( pretrained_model_link_or_path, **kwargs )
Parameters
pretrained_model_link_or_path (str or os.PathLike, optional) — Can be either:
A link to the .ckpt file (for example "https://boincai.com/<repo_id>/blob/main/<path_to_file>.ckpt") on the Hub.
A path to a file containing all pipeline weights.
torch_dtype (str or torch.dtype, optional) — Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the dtype is automatically derived from the model’s weights.
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
resume_download (bool, optional, defaults to False) — Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
use_auth_token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.boincai) is used.
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
use_safetensors (bool, optional, defaults to None) — If set to None, the safetensors weights are downloaded if they’re available and if the safetensors library is installed. If set to True, the model is forcibly loaded from safetensors weights. If set to False, safetensors weights are not loaded.
extract_ema (bool, optional, defaults to False) — Whether to extract the EMA weights or not. Pass True to extract the EMA weights, which usually yield higher quality images for inference. Non-EMA weights are usually better for continuing finetuning.
upcast_attention (bool, optional, defaults to None) — Whether the attention computation should always be upcasted.
image_size (int, optional, defaults to 512) — The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable Diffusion v2 base model. Use 768 for Stable Diffusion v2.
prediction_type (str, optional) — The prediction type the model was trained on. Use 'epsilon' for all Stable Diffusion v1 models and the Stable Diffusion v2 base model. Use 'v_prediction' for Stable Diffusion v2.
num_in_channels (int, optional, defaults to None) — The number of input channels. If None, it is automatically inferred.
scheduler_type (str, optional, defaults to "pndm") — Type of scheduler to use. Should be one of ["pndm", "lms", "heun", "euler", "euler-ancestral", "dpm", "ddim"].
load_safety_checker (bool, optional, defaults to True) — Whether to load the safety checker or not.
text_encoder (CLIPTextModel, optional, defaults to None) — An instance of CLIPTextModel to use, specifically the clip-vit-large-patch14 variant. If this parameter is None, the function loads a new instance of CLIPTextModel by itself if needed.
vae (AutoencoderKL, optional, defaults to None) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. If this parameter is None, the function will load a new instance of AutoencoderKL by itself, if needed.
tokenizer (CLIPTokenizer, optional, defaults to None) — An instance of CLIPTokenizer to use. If this parameter is None, the function loads a new instance of CLIPTokenizer by itself if needed.
original_config_file (str) — Path to the .yaml config file corresponding to the original architecture. If None, it will be automatically inferred by looking for a key that only exists in SD2.0 models.
kwargs (remaining dictionary of keyword arguments, optional) — Can be used to overwrite load and saveable variables (for example the pipeline components of the specific pipeline class). The overwritten components are directly passed to the pipeline’s __init__ method. See example below for more information.
Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors format. The pipeline is set in evaluation mode (model.eval()) by default.
Examples:
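A minimal sketch, assuming a Stable Diffusion checkpoint is available locally (the file name is a placeholder; a URL to a .ckpt or .safetensors file on the Hub works the same way):

from diffusers import StableDiffusionPipeline
import torch

# Load an entire pipeline from a single checkpoint file.
pipe = StableDiffusionPipeline.from_single_file(
    "./v1-5-pruned-emaonly.safetensors",  # placeholder path to a local checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of an astronaut riding a horse", num_inference_steps=25).images[0]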
FromOriginalControlnetMixin
class diffusers.loaders.FromOriginalControlnetMixin
( )
from_single_file
( pretrained_model_link_or_path, **kwargs )
Parameters
pretrained_model_link_or_path (str or os.PathLike, optional) — Can be either:
A link to the .ckpt file (for example "https://boincai.com/<repo_id>/blob/main/<path_to_file>.ckpt") on the Hub.
A path to a file containing all pipeline weights.
torch_dtype (str or torch.dtype, optional) — Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the dtype is automatically derived from the model’s weights.
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
resume_download (bool, optional, defaults to False) — Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
use_auth_token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.boincai) is used.
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
use_safetensors (bool, optional, defaults to None) — If set to None, the safetensors weights are downloaded if they’re available and if the safetensors library is installed. If set to True, the model is forcibly loaded from safetensors weights. If set to False, safetensors weights are not loaded.
image_size (int, optional, defaults to 512) — The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable Diffusion v2 base model. Use 768 for Stable Diffusion v2.
upcast_attention (bool, optional, defaults to None) — Whether the attention computation should always be upcasted.
kwargs (remaining dictionary of keyword arguments, optional) — Can be used to overwrite load and saveable variables (for example the pipeline components of the specific pipeline class). The overwritten components are directly passed to the pipeline’s __init__ method. See example below for more information.
Instantiate a ControlNetModel from pretrained ControlNet weights saved in the original .ckpt or .safetensors format. The model is set in evaluation mode (model.eval()) by default.
Examples:
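A minimal sketch, assuming original-format checkpoints are available locally (the file names are placeholders):

from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
import torch

# Load the ControlNet from a single original-format checkpoint file.
controlnet = ControlNetModel.from_single_file(
    "./control_v11p_sd15_canny.pth",  # placeholder path to a ControlNet checkpoint
    torch_dtype=torch.float16,
)

# Plug it into a pipeline that is itself loaded from a single file.
pipe = StableDiffusionControlNetPipeline.from_single_file(
    "./v1-5-pruned-emaonly.safetensors",  # placeholder path to a Stable Diffusion checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
)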
FromOriginalVAEMixin
class diffusers.loaders.FromOriginalVAEMixin
( )
from_single_file
( pretrained_model_link_or_path, **kwargs )
Parameters
pretrained_model_link_or_path (str or os.PathLike, optional) — Can be either:
A link to the .ckpt file (for example "https://boincai.com/<repo_id>/blob/main/<path_to_file>.ckpt") on the Hub.
A path to a file containing all pipeline weights.
torch_dtype (str or torch.dtype, optional) — Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the dtype is automatically derived from the model’s weights.
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
resume_download (bool, optional, defaults to False) — Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
use_auth_token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.boincai) is used.
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
image_size (int, optional, defaults to 512) — The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable Diffusion v2 base model. Use 768 for Stable Diffusion v2.
use_safetensors (bool, optional, defaults to None) — If set to None, the safetensors weights are downloaded if they’re available and if the safetensors library is installed. If set to True, the model is forcibly loaded from safetensors weights. If set to False, safetensors weights are not loaded.
upcast_attention (bool, optional, defaults to None) — Whether the attention computation should always be upcasted.
scaling_factor (float, optional, defaults to 0.18215) — The component-wise standard deviation of the trained latent space computed using the first batch of the training set. This is used to scale the latent space to have unit variance when training the diffusion model. The latents are scaled with the formula z = z * scaling_factor before being passed to the diffusion model. When decoding, the latents are scaled back to the original scale with the formula z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image Synthesis with Latent Diffusion Models paper.
kwargs (remaining dictionary of keyword arguments, optional) — Can be used to overwrite load and saveable variables (for example the pipeline components of the specific pipeline class). The overwritten components are directly passed to the pipeline’s __init__ method. See example below for more information.
Instantiate an AutoencoderKL from pretrained VAE weights saved in the original .ckpt or .safetensors format. The model is set in evaluation mode (model.eval()) by default.
Make sure to pass both image_size and scaling_factor to from_single_file() if you want to load a VAE that accompanies a Stable Diffusion model of v2 or higher or SDXL.
Examples:
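A minimal sketch, assuming an original VAE checkpoint is available locally (the file name is a placeholder):

from diffusers import AutoencoderKL

# Load the VAE from a single original-format checkpoint file.
vae = AutoencoderKL.from_single_file(
    "./vae-ft-mse-840000-ema-pruned.safetensors"  # placeholder path to a VAE checkpoint
)

# For a VAE that ships with Stable Diffusion v2 or SDXL, pass image_size and
# scaling_factor explicitly, as noted above.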