4 points
there’s a research result that the precise tokeniser makes bugger all difference; it’s almost entirely the data you put in
because LLMs are lossy compression for text