1 point

People have been training great Flux LoRAs for a while now, haven’t they? Is a LoRA not a finetune, or have I misunderstood something?

0 points

Last I heard, LoRAs cause catastrophic forgetting in the model, and full fine-tuning doesn’t really work.

2 points

Oh well, in practice I’ll just continue to enjoy this (possibly forgetful and not-fully-finetunable) model, which still gives me amazing results 😊

1 point

Quite the opposite: LoRAs are very effective against catastrophic forgetting, while full fine-tuning is much riskier in that respect (but also much more powerful).

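To make the distinction concrete, here is a minimal numpy sketch of the low-rank idea behind LoRA (all names and dimensions are illustrative, not taken from any particular Flux trainer): the pretrained weight `W` stays frozen, and only a small pair of factors `A` and `B` is trained, so the base model's knowledge cannot be overwritten the way it can under full fine-tuning.

```python
import numpy as np

# Illustrative LoRA sketch. The pretrained weight W is frozen;
# only the low-rank factors A (r x d_in) and B (d_out x r) would be trained.
rng = np.random.default_rng(0)
d_in, d_out, r = 64, 32, 4        # rank r is much smaller than d_in, d_out
alpha = 8                          # common LoRA scaling hyperparameter

W = rng.standard_normal((d_out, d_in))    # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01 # small random init
B = np.zeros((d_out, r))                  # B starts at zero: adapter is a no-op

def forward(x, W, A, B, alpha, r):
    # y = W x + (alpha / r) * B A x  -- base output plus a low-rank delta
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0 the adapted model exactly reproduces the frozen base model,
# which is why training the adapter starts from (and stays anchored to) it.
assert np.allclose(forward(x, W, A, B, alpha, r), W @ x)

# The adapter trains far fewer parameters than the full weight:
lora_params = A.size + B.size   # r * (d_in + d_out) = 4 * 96  = 384
full_params = W.size            # d_out * d_in      = 32 * 64 = 2048
```

Because `W` is never modified, a LoRA can at worst add an unhelpful delta on top of the base model, whereas full fine-tuning rewrites `W` itself and can degrade capabilities the adapter setup leaves untouched.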

Stable Diffusion

!stable_diffusion@lemmy.dbzer0.com