cerebras.modelzoo.common.utils.model.lora.LoraConfig#
- class cerebras.modelzoo.common.utils.model.lora.LoraConfig[source]#
- Bases: object
- r: Rank of the LoRA matrix projections
- alpha: Scaling factor (see the LoRA paper for additional details)
- dropout: Dropout to apply to the LoRA updates
- fan_in_fan_out:
- merge_weights: Determines whether LoRA weights should be merged/folded into the underlying layers
- target_modules: A list of module names that must all exist in layers that will be converted to LoRA. For example, setting target_modules to ["TransformerDecoderLayer", "Linear"] would mean that all Linear layers that are children of a TransformerDecoderLayer would be converted to LoRA.
- Methods
- Attributes
  - alpha
  - dropout
  - fan_in_fan_out
  - merge_weights
  - r
  - target_modules
- __init__(r: int = 0, alpha: int = 1, dropout: float = 0.0, fan_in_fan_out: bool = False, merge_weights: bool = False, target_modules: Optional[list] = None) → None#
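
A minimal usage sketch, based only on the `__init__` signature above: it constructs a `LoraConfig` targeting Linear sublayers of decoder layers. The specific r, alpha, dropout, and target_modules values are illustrative assumptions, not recommended defaults.

```python
from cerebras.modelzoo.common.utils.model.lora import LoraConfig

# Hypothetical configuration: convert all Linear children of
# TransformerDecoderLayer modules to LoRA layers.
lora_config = LoraConfig(
    r=8,                   # rank of the LoRA matrix projections
    alpha=16,              # scaling factor applied to the LoRA update
    dropout=0.05,          # dropout applied to the LoRA updates
    merge_weights=False,   # keep LoRA weights separate from the underlying layers
    target_modules=["TransformerDecoderLayer", "Linear"],
)
```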