Instead of writing captions, the team asked annotators to record 60- to 90-second verbal descriptions answering a list of questions about each image. They then transcribed the descriptions—which often stretched across several pages—and used other large language models to clean up, crunch down, and standardize them.
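For the curious, here's a rough sketch of what that cleanup-and-condense pass could look like. The article doesn't say which LLMs or prompts Ai2 actually used, so the model name and prompt below are placeholders on an OpenAI-style chat API; treat it as illustrative only.

```python
# Illustrative sketch of the transcript-cleanup step described above.
# Assumptions (not from the article): an OpenAI-style chat API and a
# made-up prompt. Ai2 has not published which LLMs or prompts it used.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CLEANUP_PROMPT = (
    "You are cleaning up a spoken image description that was "
    "transcribed automatically. Fix transcription errors, remove "
    "filler words, and condense the text into a single dense, "
    "factual caption. Keep every visual detail; add nothing."
)

def clean_transcript(raw_transcript: str) -> str:
    """Condense one multi-page transcript into a standardized caption."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable LLM would do
        messages=[
            {"role": "system", "content": CLEANUP_PROMPT},
            {"role": "user", "content": raw_transcript},
        ],
    )
    return response.choices[0].message.content.strip()
```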
So those other LLMs are needed to train this one?
And a modern calculator has more computing power than the Apollo program… This is how tech works.
This reads like an ad. They claim to use 1,000× less data than proprietary models, except nobody knows how much data those proprietary models were trained on or how big they actually are. Also there's a giant asterisk here they fail to mention: Molmo outperforms the competition on visual benchmarks, not actual text chat.
Daaaang, Apache license AND open dataset + training tools.
This kind of skill might help developers build AI agents that identify buttons or fields on a webpage to handle tasks like making a reservation at a restaurant.
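Concretely (and purely as a sketch): assume a hypothetical point() helper that sends a screenshot plus a query like "the Reserve button" to a Molmo-style pointing model and gets back pixel coordinates. The Playwright calls below are real; everything on the model side is a stand-in.

```python
# Illustrative sketch of the button-clicking agent idea above.
# Assumptions (mine, not the article's): point() is a hypothetical
# helper that queries a Molmo-style model for pixel coordinates; the
# URL is a placeholder. The Playwright calls themselves are real.
from playwright.sync_api import sync_playwright

def point(screenshot_png: bytes, query: str) -> tuple[float, float]:
    """Hypothetical: ask a pointing model (e.g. Molmo) 'where is X?'
    on a screenshot and parse the (x, y) pixel coordinates it returns."""
    raise NotImplementedError("wire up your vision-language model here")

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example-restaurant.test/book")  # placeholder URL
    x, y = point(page.screenshot(), "the 'Reserve a table' button")
    page.mouse.click(x, y)  # click wherever the model pointed
    browser.close()
```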
… to improve the efficiency of click farms and to bypass CAPTCHAs.