espnet2.layers.create_adapter_fn.create_lora_adapter
espnet2.layers.create_adapter_fn.create_lora_adapter(model: Module, rank: int = 8, alpha: int = 8, dropout_rate: float = 0.0, target_modules: List[str] = ['query'], bias_type: str | None = 'none')
Create a LoRA adapter for the base model.
See: https://arxiv.org/pdf/2106.09685.pdf
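For reference, LoRA (per the paper above) freezes the pretrained weight $W_0$ and learns a low-rank update $BA$ scaled by $\alpha / r$, where $r$ and $\alpha$ correspond to the rank and alpha parameters below:

$$
W = W_0 + \frac{\alpha}{r} B A, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k}
$$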
- Parameters:
- model (torch.nn.Module) – Base model to be adapted.
- rank (int) – Rank of LoRA matrices. Defaults to 8.
- alpha (int) – Scaling constant for LoRA; the low-rank update is scaled by alpha / rank. Defaults to 8.
- dropout_rate (float) – Dropout probability for LoRA layers. Defaults to 0.0.
- target_modules (List[str]) – List of module name(s) to apply LoRA adaptation to, e.g. ["query", "key", "value"] to adapt those modules in every layer, or ["encoder.encoders.blocks.0.attn.key"] to target one specific layer.
- bias_type (str) – Bias training type for LoRA adaptation; one of ["none", "all", "lora_only"]. "none" trains no bias vectors; "all" trains all bias vectors, including LayerNorm biases; "lora_only" trains only the bias vectors in LoRA-adapted modules.
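A minimal usage sketch follows. The TinyAttention module, its dimension, and the chosen hyperparameters are hypothetical stand-ins for a real encoder; the function adapts matching torch.nn.Linear submodules in place and requires the loralib package:

```python
import torch

from espnet2.layers.create_adapter_fn import create_lora_adapter


class TinyAttention(torch.nn.Module):
    """Toy stand-in for an attention block; the submodule names
    ("query", "key", "value") are what target_modules matches on."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.query = torch.nn.Linear(dim, dim)
        self.key = torch.nn.Linear(dim, dim)
        self.value = torch.nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Not real attention; just enough to exercise the layers.
        return self.query(x) + self.key(x) + self.value(x)


model = TinyAttention()

# Replace every submodule named "query" or "value" with a
# LoRA-augmented counterpart (modifies the model in place).
create_lora_adapter(
    model,
    rank=8,
    alpha=8,
    dropout_rate=0.05,
    target_modules=["query", "value"],
    bias_type="none",
)

# With bias_type="none", only the injected LoRA parameters
# (lora_A / lora_B) should remain trainable.
for name, param in model.named_parameters():
    print(name, param.requires_grad)
```

Targeting only the query and value projections mirrors one of the configurations studied in the LoRA paper; passing fully qualified names instead restricts adaptation to individual layers.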