The article is about Kyutai, a French AI lab that aims to compete with ChatGPT and others while being fully open source (research papers, models, and training data).
They also aim to include multimodal capabilities (sound, images, etc.), according to this article (in French): https://www.clubic.com/actualite-509350-intelligence-artificielle-xavier-niel-free-et-l-ancien-pdg-de-google-lancent-kyutai-un-concurrent-europeen-a-openai.html
The article also covers some of the French context.
The context is that LLMs need a large up-front capital expenditure to get started, because of the compute time needed to train these giant neural networks. This is a huge barrier to the development of a fully open source LLM. Once such a foundation model is available, building on top of it is relatively cheap; one can then envision an explosion of open source models targeting specific applications, which would be amazing.
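To give a rough sense of scale, here is a back-of-envelope sketch of what a single pretraining run might cost, using the standard ~6·N·D FLOPs approximation for dense transformers. Every concrete number (model size, token count, GPU throughput, price per GPU-hour) is an assumption for illustration, not anything from the article:

```python
# Back-of-envelope estimate of what a single pretraining run might cost.
# All numbers are illustrative assumptions, not figures from the article.

params = 70e9                  # assumed model size: 70B parameters
tokens = 1.4e12                # assumed training set: 1.4T tokens (~20 tokens/param)
flops = 6 * params * tokens    # standard ~6*N*D FLOPs approximation for dense transformers

gpu_throughput = 300e12        # assumed sustained throughput per GPU: 300 TFLOP/s
gpu_hours = flops / gpu_throughput / 3600

price_per_gpu_hour = 2.5       # assumed cloud price in EUR for a high-end GPU
cost_eur = gpu_hours * price_per_gpu_hour

print(f"{flops:.2e} FLOPs, {gpu_hours:,.0f} GPU-hours, ~€{cost_eur/1e6:.1f}M")
# Roughly 5.9e23 FLOPs and ~540k GPU-hours, i.e. on the order of €1-2M for one
# run -- before counting failed runs, experiments, data pipelines, salaries, and inference.
```

Under these assumptions a single run is only a small slice of €300M, but the real spend multiplies quickly once you account for everything around it.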
So if the bulk of this €300M could go into training, it would go a long way toward plugging the gap. But in reality, a lot of that sum will be dissipated into other expenses, so there will be a lot less than €300M available for actual training.
Is there any way we can decentralize the training of neural networks?
I recall something being released a while ago that let people donate their computers' idle time for scientific computations (projects like SETI@home or Folding@home). Couldn't something similar be done for training AI?
There is a project (AI Horde) that lets you donate compute for inference. I'm not sure why the same doesn't exist for training; I think the RAM/VRAM requirements just can't be lowered or split in the same way.
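A rough sketch of the gap, under assumed numbers for a dense 7B-parameter model (the byte-per-parameter figures are the usual mixed-precision-plus-Adam accounting, everything else is illustrative):

```python
# Rough sketch of why donating compute works for inference but not (easily) for
# training. Illustrative numbers only; assumes a dense 7B-parameter transformer.

params = 7e9

# Inference: only the weights are needed, and they can be quantized to ~4 bits.
inference_gb = params * 0.5 / 1e9         # ~3.5 GB -> fits on a consumer GPU

# Training with mixed precision + Adam: bf16 weights (2 B) + gradients (2 B)
# + fp32 master weights and two Adam moments (12 B) ~= 16 bytes/param,
# before even counting activations.
training_gb = params * 16 / 1e9           # ~112 GB -> far beyond any consumer GPU

# Naive data-parallel training also needs every worker to exchange a full
# gradient (~2 bytes/param in bf16) every optimizer step.
grad_sync_gb_per_step = params * 2 / 1e9  # ~14 GB per step, per worker

print(f"inference: ~{inference_gb:.1f} GB VRAM, "
      f"training: ~{training_gb:.0f} GB VRAM, "
      f"gradient sync: ~{grad_sync_gb_per_step:.0f} GB per step")
```

So even ignoring the memory problem, a volunteer's home connection would have to move gigabytes of gradients every optimizer step, which is why donating compute for inference is so much more practical.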
Another way to contribute is by helping with training data. LAION, which created the dataset behind Stable Diffusion, is a volunteer effort. Stable Diffusion itself was developed at a tax-funded public university in Germany; however, the compute cost for training it was covered by a single rich guy.