My best guess is it generates several possible replies and then does some sort of token match to decide which one is most likely to be accurate. Not sure I’d call that “reasoning”, but I guess it could improve results in some cases. With OpenAI not being so open, it’s hard to tell. They’ve been overpromising a lot already, so it may as well be complete bullshit.
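If that guess were right, the loop would look roughly like the best-of-n / self-consistency sketch below. To be clear, this is only an illustration of the guess, not anything OpenAI has confirmed; `fake_model`, `token_overlap`, and `best_of_n` are made-up names for the sake of the example:

```python
import random
from collections import Counter

def fake_model(prompt: str) -> str:
    # Stand-in for a real model call; just returns one of a few canned replies at random.
    return random.choice([
        "the answer is 42",
        "the answer is 7",
        "the answer is 42, probably",
    ])

def token_overlap(a: str, b: str) -> int:
    # Crude "token match": how many tokens two replies have in common.
    return sum((Counter(a.split()) & Counter(b.split())).values())

def best_of_n(prompt: str, n: int = 5) -> str:
    # Sample n candidate replies, then keep the one that agrees most with the others.
    candidates = [fake_model(prompt) for _ in range(n)]

    def agreement(i: int) -> int:
        return sum(token_overlap(candidates[i], candidates[j])
                   for j in range(n) if j != i)

    return candidates[max(range(n), key=agreement)]

if __name__ == "__main__":
    print(best_of_n("What is the answer?"))
```

With a real model you'd replace `fake_model` with an actual sampling call and probably score candidates with something smarter than raw token overlap, but the selection idea would be the same.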
> My best guess is it generates several possible replies and then does some sort of token match to determine which one may potentially be the most accurate.
Didn’t the previous models already do this?
they do say that, yes. it’s as much bullshit as all the other claims they’ve been making