BetaDoggo_
If she actually did this, the right would start calling her a terrorist, and she would lose any chance of winning over right-leaning voters on the fence (who probably didn’t have strong opinions on the issue beforehand).
More sympathy for squirrels than human beings
Anthropic released an API for the same thing last week.
Every credible wiki has moved away from Fandom at this point. All that’s left are the abandoned shells of the former wikis they refuse to delete, and kids who don’t know better.
We’re Costco guys
This is actually pretty smart because it switches the context of the action. Most intermediate users avoid clicking random executables by instinct, but this is different enough that it doesn’t immediately trigger that association and response.
All signs point to this being a finetune of GPT-4o with additional chain-of-thought steps before the final answer. It has exactly the same pitfalls as the existing model (the 9.11 > 9.8 tokenization error, failing simple riddles, being unable to assert that the user is wrong, etc.). It’s still a transformer and it’s still next-token prediction. They hide the thought steps to mask this fact and to prevent others from benefiting from all of the finetuning data they paid for.
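If that’s right, the mechanism is trivial to sketch as a wrapper around a base model. A minimal mock-up of the idea (`complete` is a hypothetical stand-in for any chat-completion call, not a real API):

```python
def complete(system: str, user: str) -> str:
    """Hypothetical stand-in for a call to a base chat model."""
    raise NotImplementedError("wire up a provider here")

def answer_with_hidden_cot(question: str) -> str:
    # Step 1: have the base model reason step by step. This text is the
    # "chain of thought" and is never shown to the user.
    thoughts = complete(
        system="Reason step by step. Do not give a final answer yet.",
        user=question,
    )
    # Step 2: ask for a final answer conditioned on the hidden reasoning.
    final = complete(
        system=("Using the scratchpad below, give only the final answer.\n"
                f"Scratchpad:\n{thoughts}"),
        user=question,
    )
    # Only the final answer is returned; keeping `thoughts` hidden masks
    # the mechanism and keeps the reasoning traces away from competitors.
    return final
```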
It’s cool but it’s more or less just a party trick.
How many times is this same article going to be written? Model collapse from synthetic data is not a concern at any scale when human data is in the mix. We already have entire series of models, like Phi-3, trained on mostly synthetic data: https://huggingface.co/docs/transformers/main/model_doc/phi3. When training entirely on unassisted model outputs, error accumulates with each generation, but that isn’t a concern in any realistic scenario.
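You can see both halves of that claim in a toy simulation (my own illustration, not from the article): fit a Gaussian to data, sample the next "generation" from the fit, refit, and repeat. The sample sizes and the 10% mixing ratio below are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(0)
human = rng.normal(0.0, 1.0, 200)  # stand-in for a fixed human dataset

def run(mix_human: bool, generations: int = 200) -> float:
    data = human
    for _ in range(generations):
        # "Train" on the current data by fitting a Gaussian, then generate
        # the next dataset from the fitted model ("model outputs").
        synthetic = rng.normal(data.mean(), data.std(), 200)
        # Either train purely on model outputs, or mix human data back in.
        data = np.concatenate([synthetic[:180], human[:20]]) if mix_human else synthetic
    return data.std()  # the true std is 1.0

print(f"pure synthetic:  std = {run(False):.2f}")  # typically drifts far from 1.0
print(f"10% human mixed: std = {run(True):.2f}")   # stays close to 1.0
```

With purely synthetic data the estimation error compounds like a random walk, so the fitted distribution drifts and narrows over generations; anchoring each generation with even a small slice of the original human data keeps it pinned to the true distribution.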