Hey all, I am in the process of testing several models for fine-tuning and that question cropped up.

I would like to add new facts to a foundation model and then instruction-tune it. The problem is that I will regularly have new data to add. I was wondering if there is a chance I could do a single LoRA for the instruction tuning and reapply it each time I finish a new fact fine-tune?


I don’t think fine-tuning works the way you think it does; one does not generally fine-tune to “add facts”. This might be useful: https://nextword.substack.com/p/rag-vs-finetuning-llms-what-to-use

I’d advocate for using the RAG pattern to do the lookups for the new facts. If needed, you can fine-tune the model on top so it outputs in your specific domain or format.
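The RAG pattern boils down to: embed your documents, retrieve the ones most similar to the question, and paste them into the prompt. A minimal toy sketch (bag-of-words similarity standing in for a real embedding model; the email texts are made up):

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

emails = [
    "project A hit a deadlock bug, fixed by reordering the locks",
    "lunch menu for friday",
    "project A deploy failed, rolled back and patched the config",
]
# The retrieved snippets would be pasted into the LLM prompt ahead of the question.
context = retrieve("problems encountered in project A", emails)
```

A real setup swaps `embed` for a proper embedding model and a vector store, but the retrieval-then-prompt flow is the same.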


Ah, I should have written a more detailed message explaining the road I already went through, I guess :-)

I know that RAG gets recommended more for adding information, and it is the fastest way to retrieve it. However, it only allows a shallow understanding, and the LLM will have trouble combining information from several different files. You can’t, for example, give it 1000 emails and ask it to list the problems encountered in project A and how they were solved.

Fine-tuning can add facts. This person added the Unreal Engine 5 documentation to Llama 7B, and this company added financial knowledge to Llama 13B. These are my inspiration. When using LoRA it requires higher ranks and, crucially, doing the fact fine-tuning on a foundation model first, and only after your own fine-tuning doing the instruction fine-tune.

I am wondering if there is a way to make that last step easier by reapplying the same LoRA.
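In principle this should work, because a LoRA adapter is just a low-rank weight delta, W' = W + B·A, that gets added onto the base weights; the same (B, A) pair can be added onto any base of matching shape. A toy sketch of the arithmetic (made-up 2×2 numbers, no real model; real adapters would be handled by a library like peft):

```python
# A LoRA adapter is a low-rank delta (B @ A) that is simply added to the
# base weights, so the same adapter can be reapplied unchanged to a newly
# fact-tuned base of the same shape.

def matmul(B, A):
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

def apply_lora(W, B, A, scale=1.0):
    delta = matmul(B, A)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# rank-1 "instruction" adapter for a 2x2 weight matrix
B = [[1.0], [0.0]]
A = [[0.5, 0.5]]

W_v1 = [[1.0, 0.0], [0.0, 1.0]]   # weights after the first fact fine-tune
W_v2 = [[1.2, 0.1], [0.0, 0.9]]   # weights after a later fact fine-tune

tuned_v1 = apply_lora(W_v1, B, A)
tuned_v2 = apply_lora(W_v2, B, A)  # same adapter, reapplied unchanged
```

The caveat is that the adapter was trained against the old base's weights, so nothing guarantees it behaves identically on the new base; in practice people report it often works well enough when the bases are close.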

I guess I am also wondering why we can’t directly fine-tune facts into an instruction-tuned model. I tried: it does tend to remember how to interact with instruct prompts, but the format gets a bit corrupted by the new dataset. I find it a bit weird how quickly such models forget past behaviour as they are fed new tokens.


At least in Stable Diffusion, LoRAs are composable. You can combine different LoRAs and have both effects applied to the resulting image.
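Composability falls out of the same arithmetic: since each adapter is an additive delta, two of them combine as a weighted sum, W' = W + w₁·(B₁A₁) + w₂·(B₂A₂). A toy sketch (made-up 2×2 numbers, no real model):

```python
# Composing two LoRAs is a weighted sum of their low-rank deltas
# added onto the same base weights.

def matmul(B, A):
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

def compose(W, adapters):
    """adapters: list of (B, A, weight) triples added onto base W."""
    out = [row[:] for row in W]
    for B, A, w in adapters:
        delta = matmul(B, A)
        for i in range(len(out)):
            for j in range(len(out[0])):
                out[i][j] += w * delta[i][j]
    return out

W = [[1.0, 0.0], [0.0, 1.0]]
style   = ([[1.0], [0.0]], [[0.5, 0.0]], 0.8)  # toy "style" adapter
subject = ([[0.0], [1.0]], [[0.0, 0.5]], 0.8)  # toy "subject" adapter
combined = compose(W, [style, subject])
```

In the diffusers library this weighted combination is exposed through adapter APIs such as `pipe.set_adapters([...], adapter_weights=[...])` (check the current docs for the exact signature).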


Free Open-Source Artificial Intelligence

!fosai@lemmy.world
