with early “grieftech” entrepreneur Helena Blavatsky

14 points

Jason Rohrer saw a black guy once and it provoked him to write a video game about the castle doctrine

our readers bring us cursèd knowledge like a dead plague rat dropped at our feet

8 points

oh fuck he’s that asshole? the one who was so petty about a negative Polygon review for said game that he stalked the reviewer’s Twitter page until he could find a quote to mangle into a recommendation to put on the game’s Steam page? including the reviewer’s full name, against the reviewer’s repeated, explicit wishes for it to be removed?

6 points

I suddenly feel a lot less bad about him having his game copied and re-sold because he released it into the public domain. Maybe the ‘left’ in copyleft scared him.

Hateful and stupid 🤝

14 points

Neat little vignette about a vile sociopath scamming people in mourning. And, of course, zero consideration for the grieving people being scammed. “ChatGPT please help me feel like he’s still here,” “actually he is already in hell.”

13 points

“grieftech”. Fucking “grief tech”.

11 points

@dgerard @barsquid grief grift

7 points

grieft

13 points

Addressing the “in hell” response that made headlines at Sundance, Rohrer said the statement came after 85 back-and-forth exchanges in which Angel and the AI discussed long hours working in the “treatment center,” working with “mostly addicts.”

We know 85 is the upper bound, but I wonder what Rohrer would consider the minimum number of “exchanges” acceptable for telling someone their loved one is in hell? Like, is 20 in “Hey, not cool” territory, but it’s all good once you get to 50? 40?

Rohrer says that when Angel asked if Cameroun was working or haunting the treatment center in heaven, the AI responded, “Nope, in hell.”

“They had already fully established that he wasn’t in heaven,” Rohrer said.

Always a good sign when your best defense of the horrible thing your chatbot says is that it’s in context.

18 points

it’s very telling that 85 messages is considered a lot. your grief better resolve quick before the model loses coherency and starts digging quotes out of a plagiarized horror movie script

fuck it’s gross how one of the common use cases for LLMs is targeting vulnerable people with the hope they’ll develop a parasocial relationship with your service, so you can keep charging them forever

10 points

Oh no. I mainly knew Jason Rohrer for his video games. For a few minutes I hoped this was some elaborate prank, but apparently he drank the Kool-Aid. He wrote that Project December’s chatbot was arguably the first machine with a soul. I preferred the playful minimalist existentialism of Passage.

11 points

ngl his stuff always felt a bit cynical to me, in that it seemed to exist more to say “look, video games can have a deep message!” than it did to just have such a message in the first place. Like it existed more to gesture at the concept of meaningfulness rather than to be meaningful itself.

9 points

Yeah in retrospect I see what you mean.

8 points

always great* when someone does something that sucks so much it makes his previous work suck retrospectively

10 points

All my homies hate Helena Blavatsky. Her grifty bullshit has caused so much human misery.

This Rohrer character is a worthy successor.

