200fifty
I like how the assumption seems to be that the thing users object to about “websites track your browsing history around the web in order to show you targeted ads” is… the “websites” part
During the interview, Kat openly admitted to not being productive but shared that she still appears to be productive because she gets others to do the work for her. She relies on volunteers who are willing to work for free, which is her top productivity advice.
Productivity pro tip: you can get a lot more done if you can just convince other people to do your work for you for free
When I was a kid [Net Nanny](https://en.wikipedia.org/wiki/Net_Nanny) was totally and completely lame, but the whole millennial generation grew up to adore content moderation. A strange authoritarian impulse.
Me when the mods unfairly ban me from my favorite video game forum circa 2009
(source: first HN thread)
Like, seriously, get a hobby or something.
For real. I don’t even necessarily disagree with the broad-strokes idea of “if you’re comfortable, it’s good to take on challenges and get outside of your comfort zone because that’s how you grow as a person,” but why can’t he just apply this energy to writing a terrible novel or learning to paint watercolors or something, like a normal person? Why does the fact his life is comfortable mean he has to become a Nazi? :/
Look, you gotta forgive this guy for coming up with an insane theory that doesn’t make sense. After all, his brain was poisoned by testosterone, so his thinking skills have atrophied. An XXL hat size can only do so much, you know.
I think they were responding to the implication in self's original comment that LLMs were claiming to evaluate code in-model and that calling out to an external Python evaluator is 'cheating.' But actually, as far as I know, it is pretty common for them to evaluate code using an external interpreter. So I think the response was warranted here.
That said, that fact honestly makes this vulnerability even funnier, because it means they are basically just letting the user dump whatever code they want into eval() as long as it's laundered by the LLM first, which is like a high-school-level mistake.
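To make the shape of the bug concrete, here's a minimal sketch of the anti-pattern being described (the function names, prompt, and demo payload are hypothetical, not from any specific product): the model's output flows straight into eval(), so whoever can steer the model's output controls what gets evaluated.

```python
def run_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call (hypothetical). In a real
    # deployment, a prompt-injecting user can steer the model into
    # returning whatever code they like; simulated here with a
    # benign payload.
    return "__import__('os').getcwd()"


def answer_question(user_prompt: str) -> str:
    code = run_llm(f"Write one Python expression that answers: {user_prompt}")
    # The bug: model output goes straight into eval() with no sandbox
    # or validation, so the LLM step just launders arbitrary user code
    # into the host process.
    return str(eval(code))


print(answer_question("what directory is this running in?"))
```

A less embarrassing design would run the generated code in a sandboxed interpreter or an isolated subprocess with no ambient authority, instead of eval() in the host process.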