People have been training great Flux LoRAs for a while now, haven’t they? Is a LoRA not a finetune, or have I misunderstood something?
Last I heard, LoRAs cause catastrophic forgetting in the model, and full fine-tuning of Flux doesn't really work.
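
To make the distinction concrete: a LoRA does finetune the model's behavior, but instead of updating the full weight matrices it trains a small low-rank delta on top of frozen base weights. Below is a minimal PyTorch-style sketch of that idea; the class name and the rank/alpha values are illustrative assumptions, not any particular library's API.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update.

    Computes base(x) + (alpha / rank) * x @ A.T @ B.T, where only A and B
    are trained. The base weights W are never modified, which is what
    separates a LoRA from a full fine-tune of W itself.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the original model weights

        d_out, d_in = base.weight.shape
        # Low-rank factors: delta_W = B @ A has shape (d_out, d_in) but rank <= rank.
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # zero init: update starts at 0
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)


# Usage: wrap an existing layer; only 2 * rank * d parameters are trainable.
layer = LoRALinear(nn.Linear(512, 512), rank=8)
y = layer(torch.randn(4, 512))
```

Since the base weights stay frozen, dropping the B @ A term recovers the original model exactly; any forgetting shows up only while the adapter is applied.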