
really stretching the meaning of the word release past breaking if it’s only going to be available to companies friendly with OpenAI

Orion has been teased by an OpenAI executive as potentially up to 100 times more powerful than GPT-4; it’s separate from the o1 reasoning model OpenAI released in September. The company’s goal is to combine its LLMs over time to create an even more capable model that could eventually be called artificial general intelligence, or AGI.

so I’m calling it now, this absolute horseshit’s only purpose is desperate critihype. as with previous rounds of this exact same thing, it’ll only exist to give AI influencers a way to feel superior in conversation and grift more research funds. oh of course Strawberry fucks up that prompt but look, my advance access to Orion does so well I’m sure you’ll agree with me it’s AGI! no you can’t prompt it yourself or know how many times I ran the prompt why would I let you do that

That timing lines up with a cryptic post on X by OpenAI CEO Sam Altman, in which he said he was “excited for the winter constellations to rise soon.” If you ask ChatGPT o1-preview what Altman’s post is hiding, it will tell you that he’s hinting at the word Orion, which is the winter constellation that’s most visible in the night sky from November to February (but it also hallucinates that you can rearrange the letters to spell “ORION”).

there’s something incredibly embarrassing about the fact that Sammy announced the name like a lazy ARG based on a GPT response, which GPT proceeded to absolutely fuck up when asked about. a lot like Strawberry really — there’s so much Binance energy in naming the new version of your product after the stupid shit the last version fucked up, especially if the new version doesn’t fix the problem


In April 2014, Gerard created a RationalWiki article about Effective Altruism, framing the subculture as “well-off libertarians congratulating each other on what wonderful human beings they are for working rapacious [s—]weasel jobs but choosing their charities well, but never in any way questioning the system that the problems are in the context of,” “a mechanism to push the libertarian idea that charity is superior to government action or funding,” and people who “will frequently be seen excusing their choice to work completely [f—]ing evil jobs because they’re so charitable.”

it’s fucking amazing how accurate this is, and almost a decade before SBF started explaining himself and never stopped


god, so this is actually the best the AI researchers can do with the tools they’ve shit out into the world without giving any thought to failure cases or legal liability (beyond their manager on Slack/Teams claiming it’s been taken care of)

so fuck it, let’s make the defamation machine a non-optional component of windows. we’ll just make it a P0 when someone who could actually get us in legal trouble complains! everyone else is a P2 that never gets assigned.


But don’t worry! Google’s AI summaries will soon have ads!

dear fuck, pasting ads onto the part of Google search that’s already known to be unreliable and annoying at best seems like a terrible idea. for a laugh, let’s see if there’s any justification for this awful shit in the linked citation

Ads have always been an important part of consumers’ information journeys.

oh these people are on the expensive drugs huh


At the same time, most participants felt the LLMs did not succeed as a creativity support tool, by producing bland and biased comedy tropes, akin to “cruise ship comedy material from the 1950s, but a bit less racist”.

holy shit that’s a direct quote from the paper


it’s time for you to fuck off back to your self-hosted services that surely aren’t just a stack of constantly broken docker containers running on an old Dell in your closet

but wait, what’s this?

@BaroqueInMind@lemmy.one

oh you poor fucking baby, you couldn’t figure out how to self-host lemmy! and it’s so easy compared with mail too! so much for common sense!


I love Blade Runner, but I don’t know if we want that future. I believe we want that duster he’s wearing, but not the, uh, not the bleak apocalypse.

there’s nothing more painful than when capitalists think they understand cyberpunk


this is a gentle reminder to posters in this thread that the fediverse in general is nowhere near secure from an opsec perspective; don’t post anything that compromises yourself or us.

with that said, happy December 4th to those who celebrate. post commemorative cocktail recipes here.

e: remember, they call it the fediverse cause it’s full of feds


Copilot then listed a string of crimes Bernklau had supposedly committed — saying that he was an abusive undertaker exploiting widows, a child abuser, an escaped criminal mental patient. [SWR, in German]

These were stories Bernklau had written about. Copilot produced text as if he was the subject. Then Copilot returned Bernklau’s phone number and address!

and there’s fucking nothing in place to prevent this utterly obvious failure case, other than if you complain Microsoft will just lazily regex for your name in the result and refuse to return anything if it appears
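
(purely for illustration: a sketch of what that kind of lazy name-blocklist filter could look like, assuming a post-hoc check on the rendered output. the names, function, and refusal string here are all made up; this is not Microsoft’s actual code)

```python
import re

# hypothetical blocklist: grows by one entry per legal complaint
COMPLAINED_NAMES = ["Martin Bernklau"]

def filter_response(text: str) -> str:
    """Suppress any finished output that mentions a name someone has complained about."""
    for name in COMPLAINED_NAMES:
        # case-insensitive whole-phrase match on the rendered output
        if re.search(re.escape(name), text, flags=re.IGNORECASE):
            return "Sorry, I can't help with that."  # refuse to return anything at all
    return text
```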


there’s this type of reply guy on fedi lately who does the “well actually querying LLMs only happens in bursts and training is much more efficient than you’d think and nvidia says their gpus are energy-efficient” thing whenever the topic comes up

and meanwhile a bunch of major companies have violated their climate pledges and say it’s due to AI, they’re planning power plants specifically for data centers expanded for the push into AI, and large GPUs are notoriously the part of a computer that consumes the most power and emits a ton of heat (which notoriously has to be cooled in a way that wastes and pollutes a fuckton of clean water)

but the companies don’t publish smoking gun energy usage statistics on LLMs and generative AI specifically so who can say
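
(napkin math for scale, since they won’t give real numbers: every figure below is an assumed ballpark for a dense GPU training node, not anything published by a vendor or operator)

```python
# rough share of a GPU server's power that goes to the GPUs themselves,
# plus facility overhead for cooling and power delivery. all numbers are assumptions.

GPU_BOARD_POWER_W = 700   # assumed per-GPU board power for a current datacenter accelerator
GPUS_PER_NODE = 8         # typical dense training node
REST_OF_NODE_W = 2000     # assumed CPUs, RAM, NICs, storage, fans combined
PUE = 1.3                 # assumed datacenter overhead factor (cooling etc.)

gpu_w = GPU_BOARD_POWER_W * GPUS_PER_NODE
node_w = gpu_w + REST_OF_NODE_W
facility_w = node_w * PUE

print(f"GPUs alone:    {gpu_w / 1000:.1f} kW ({gpu_w / node_w:.0%} of the node)")
print(f"whole node:    {node_w / 1000:.1f} kW")
print(f"with overhead: {facility_w / 1000:.1f} kW drawn at the facility per node")
```

under these assumed numbers the GPUs account for roughly three quarters of the node’s draw before you even count the cooling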
