BetaDoggo_

BetaDoggo_@lemmy.world
0 posts • 27 comments

If she actually did this, the right would start calling her a terrorist, and she’d lose any chance of winning over right-leaning voters on the fence (who probably didn’t have strong opinions on the issue beforehand).

Every credible wiki has moved away from Fandom at this point. All that’s left are the abandoned shells of the former wikis, which Fandom refuses to delete, and kids who don’t know better.

We’re Costco guys

I’d guess the 3 key staff members leaving all at once without notice had something to do with it.

This is actually pretty smart because it switches the context of the action. Most intermediate users avoid clicking random executables by instinct, but this is different enough that it doesn’t immediately trigger that association and response.

All signs point to this being a finetune of GPT-4o with additional chain-of-thought steps before the final answer. It has exactly the same pitfalls as the existing model (the 9.11 > 9.8 tokenization error, failing simple riddles, being unable to assert that the user is wrong, etc.). It’s still a transformer and it’s still next-token prediction. They hide the thought steps to mask this fact and to prevent others from benefiting from all of the finetuning data they paid for.
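For what it’s worth, here’s a toy sketch of why the 9.11 > 9.8 mistake is such an easy one to make: if the chunks on either side of the decimal point get compared as separate integers (the way version numbers work), 11 beats 8, which is the opposite of the decimal answer. This is just my illustration of the failure mode, not a claim about the model’s internals:

```python
# Toy illustration of the "9.11 > 9.8" failure mode: comparing the
# chunks around the decimal point as separate integers (version-number
# style) gives the opposite answer from real decimal comparison.

def version_style_greater(a: str, b: str) -> bool:
    """Compare 'x.y' strings segment by segment, like version numbers."""
    a_parts = [int(p) for p in a.split(".")]
    b_parts = [int(p) for p in b.split(".")]
    return a_parts > b_parts  # Python list comparison: element by element

a, b = "9.11", "9.8"
print(float(a) > float(b))          # False: 9.11 < 9.80 as decimals
print(version_style_greater(a, b))  # True: segment 11 > segment 8
```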

It’s cool, but it’s more or less just a party trick.

How many times is this same article going to be written? Model collapse from synthetic data is not a concern at any scale when human data is in the mix. We now have entire series of models trained mostly on synthetic data: https://huggingface.co/docs/transformers/main/model_doc/phi3. When training entirely on unassisted outputs, error does accumulate with each generation, but that isn’t a concern in any real scenario.
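Here’s a minimal numpy sketch of the dynamic (my own toy setup, not from the article or the Phi-3 work): fit a Gaussian to the data, draw the next generation’s training data from the fit, and repeat. Training on nothing but its own outputs lets estimation error compound until the distribution collapses; mixing the fixed human pool back in each generation keeps it anchored:

```python
# Toy model-collapse simulation: each "generation" fits a Gaussian to its
# training data, then the next generation trains on samples drawn from
# that fit. Pure self-training compounds estimation error; mixing a fixed
# pool of human data back in each generation anchors the distribution.
import numpy as np

rng = np.random.default_rng(0)
human = rng.normal(0.0, 1.0, 50)  # fixed pool of "human" data

def final_std(generations: int, mix_human: bool) -> float:
    data = human.copy()
    for _ in range(generations):
        mu, sigma = data.mean(), data.std()     # "train" the model
        synthetic = rng.normal(mu, sigma, 50)   # generate the next dataset
        data = np.concatenate([synthetic, human]) if mix_human else synthetic
    return data.std()

print(final_std(500, mix_human=False))  # typically collapses toward 0
print(final_std(500, mix_human=True))   # stays near the true std of 1.0
```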
