cerebras.modelzoo.config_manager.config_classes.base.model_config.LoraConfig
- class cerebras.modelzoo.config_manager.config_classes.base.model_config.LoraConfig
LoraConfig(r: int = 0, alpha: int = 1, dropout: float = 0.0, fan_in_fan_out: bool = False, merge_weights: bool = True, target_modules: Optional[list] = None)
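For orientation, here is a minimal construction sketch using the fields from the signature above. The import path and field names come from this page; the specific values (r=8, alpha=16, and so on) are illustrative choices, not recommendations from this library.

```python
from cerebras.modelzoo.config_manager.config_classes.base.model_config import LoraConfig

# Illustrative hyperparameter values; r=8 / alpha=16 is a common LoRA
# starting point, not a recommendation from this library.
config = LoraConfig(
    r=8,                  # rank of the low-rank update matrices
    alpha=16,             # scaling factor for the LoRA update
    dropout=0.05,         # dropout applied to the LoRA update path
    merge_weights=True,   # fold LoRA weights into the base layers (default)
    target_modules=["TransformerDecoderLayer", "Linear"],
)
```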
- r: int = 0
Rank of the LoRA update matrices (the low-rank dimension of the projections).
- alpha: int = 1
Scaling factor applied to the LoRA update; in the LoRA paper the update is scaled by alpha / r (see the scaling sketch at the end of this page).
- dropout: float = 0.0
Dropout probability applied to the LoRA update path.
- fan_in_fan_out: bool = False
Set to True if the layer being adapted stores its weight as (fan_in, fan_out), for example GPT-2-style Conv1D layers, rather than the more common (fan_out, fan_in).
- merge_weights: bool = True
Determines whether LoRA weights are merged (folded) into the underlying layers.
- target_modules: Optional[list] = None
A list of module names that must all appear in a layer's module hierarchy for it to be converted to LoRA. For example, setting target_modules to ["TransformerDecoderLayer", "Linear"] converts every Linear layer that is a child of a TransformerDecoderLayer (see the matching sketch below).
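To make the target_modules matching rule concrete, here is a hedged sketch of the selection logic as described above. is_lora_target is an invented helper name used only for illustration; the actual Cerebras implementation may differ.

```python
# Hypothetical illustration of the target_modules matching rule described
# above; this is not the Cerebras implementation.
def is_lora_target(module_ancestry: list, target_modules: list) -> bool:
    """Return True when every name in target_modules appears somewhere in
    the layer's ancestry of module class names."""
    return all(name in module_ancestry for name in target_modules)

# A Linear layer nested inside a TransformerDecoderLayer is converted:
assert is_lora_target(
    ["GPT2Model", "TransformerDecoderLayer", "Linear"],
    ["TransformerDecoderLayer", "Linear"],
)
# A top-level Linear projection outside any decoder layer is not:
assert not is_lora_target(
    ["GPT2Model", "Linear"],
    ["TransformerDecoderLayer", "Linear"],
)
```

For reference, the scaling sketch below shows how r, alpha, and dropout interact in the standard LoRA formulation from the paper: the frozen weight W is augmented with a low-rank update B @ A scaled by alpha / r, with dropout on the update path. That this library applies exactly this scaling is an assumption based on the paper, not on the source shown on this page.

```python
import torch
import torch.nn.functional as F

# Standard LoRA forward pass (Hu et al., 2021); an assumption about this
# library's behavior, sketched here for illustration only.
def lora_linear(x, W, A, B, alpha=1, r=8, dropout=0.0):
    # x: (batch, in_features); W: (out_features, in_features)
    # A: (r, in_features);     B: (out_features, r)
    update = F.dropout(x, p=dropout, training=True) @ A.T @ B.T
    return x @ W.T + (alpha / r) * update

x = torch.randn(2, 64)
W = torch.randn(32, 64)
A = torch.randn(8, 64)
B = torch.zeros(32, 8)  # B starts at zero, so the update is initially 0
print(lora_linear(x, W, A, B, alpha=16, r=8).shape)  # torch.Size([2, 32])
```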