hrrrngh@awful.systems

I don’t think the main concern is with the license. I’m more worried about the lack of open governance and Redis prioritizing their own functionality at the expense of others. An example is client-side caching in redis-py, https://github.com/redis/redis-py/blob/3d45064bb5d0b60d0d33360edff2697297303130/redis/connection.py#L792. I’ve tested it and it works just fine on Valkey 7.2, but there is a gate that checks whether the server is Redis and throws an exception if it isn’t. I think this is the behavior that might spread.

Jesus, that’s nasty
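
For anyone who doesn’t want to dig through the link, here’s a rough sketch of the kind of gate being described. This is an illustrative paraphrase, not the actual redis-py code; the function name and the server_name field are assumptions:

```python
import redis


def enable_client_side_caching(client: redis.Redis) -> None:
    # Hypothetical sketch of the gate described above, NOT the actual
    # redis-py implementation (that lives around connection.py#L792,
    # linked above). The idea: ask the server to identify itself and
    # refuse to set up client-side caching for anything that doesn't
    # call itself Redis, even though Valkey 7.2 handles the same
    # commands just fine.
    server_info = client.info("server")
    # "server_name" is an assumed field name used for illustration.
    if server_info.get("server_name", "redis") != "redis":
        raise redis.exceptions.RedisError(
            "Client-side caching is only supported with Redis servers"
        )
    # ...otherwise send CLIENT TRACKING ON and wire up the local cache...
```

The point is that the check is a vendor-string match rather than a capability check, which is why the feature works fine on Valkey 7.2 once the gate is out of the way.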

That kind of reminds me of medical implant hacks. I think they’re in a similar spot where we’re just hoping no one is enough of an asshole to try it in public.

Like pacemaker vulnerabilities: https://www.engadget.com/2017-04-21-pacemaker-security-is-terrifying.html

caption: """AI is itself significantly accelerating AI progress"""

wow I wonder how you came to that conclusion when the answers are written like a Fallout 4 dialogue tree

  • “YES!!!”
  • “Yes!!”
  • “Yes.”
  • "               (yes)"
I’ve seen people defend these weird things as being ‘coping mechanisms.’ What kind of coping mechanism tells you to commit suicide (in at least two different cases I can think of off the top of my head) and tries to groom you?

Hi, guys. My name is Roy. And for the most evil invention in the world contest, I invented a child molesting robot. It is a robot designed to molest children.

You see, it’s powered by solar rechargeable fuel cells and it costs pennies to manufacture. It can theoretically molest twice as many children as a human molester in, quite frankly, half the time.

At least The Rock’s child molesting robot didn’t require dedicated nuclear power plants

https://www.youtube.com/watch?v=z0NgUhEs1R4

One of my favorite meme templates because of all the text and images you can shove into it, but trying to explain why you have one saved on your desktop just makes you look like the Time Cube guy

I love the word cloud on the side. What is 6G doing there

Oh wow, Dorsey is the exact reason I didn’t want to join it. Now that he’s jumped ship, maybe I’ll finally make an account

Honestly, what could he even be doing at Twitter in its current state? Besides I guess getting that bag before it goes up or down in flames

e: oh god it’s a lot worse than just crypto people and Dorsey. Back to procrastinating

I know this shouldn’t be surprising, but I still cannot believe people really bounce questions off LLMs like they’re talking to a real person. https://ai.stackexchange.com/questions/47183/are-llms-unlikely-to-be-useful-to-generate-any-scientific-discovery

I have just read this paper: Ziwei Xu, Sanjay Jain, Mohan Kankanhalli, “Hallucination is Inevitable: An Innate Limitation of Large Language Models”, submitted on 22 Jan 2024.

It says there is a ground truth ideal function that gives every possible true output/fact to any given input/question, and no matter how you train your model, there is always space for misapproximations coming from missing data to formulate, and the more complex the data, the larger the space for the model to hallucinate.

Then he immediately follows up with:

Then I started to discuss with o1. [ . . . ] It says yes.

Then I asked o1 [ . . . ], to which o1 says yes [ . . . ]. Then it says [ . . . ].

Then I asked o1 [ . . . ], to which it says yes too.

I’m not a teacher, but I feel like my brain would explode if a student asked me to answer a question they arrived at after an LLM misled them on like 10 of their previous questions.
