hrrrngh
I don’t think the main concern is the license. I’m more worried about the lack of open governance and Redis prioritizing their own functionality at the expense of others. An example is client-side caching in redis-py, https://github.com/redis/redis-py/blob/3d45064bb5d0b60d0d33360edff2697297303130/redis/connection.py#L792. I’ve tested it and it works just fine on Valkey 7.2, but there is a gate that checks whether the server is Redis and throws an exception if it isn’t. I worry this is the kind of behavior that will spread.
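The gate is essentially a vendor check of this shape (a paraphrased sketch, not the actual redis-py code; every function and field name below is made up for illustration):

```python
# Sketch of the vendor gate described above. This is NOT the actual redis-py
# code from connection.py; the names here are hypothetical and only meant to
# show the shape of the check.

def fetch_server_info(connection) -> dict:
    """Stand-in for parsing the server's HELLO/INFO reply."""
    # Pretend we're talking to Valkey 7.2, which supports CLIENT TRACKING.
    return {"server_name": "valkey", "version": "7.2.5"}


def enable_client_side_caching(connection) -> None:
    info = fetch_server_info(connection)
    # The gate: refuse anything that doesn't self-report as "redis", even if
    # it speaks the same protocol and the feature would work fine.
    if info.get("server_name") != "redis":
        raise NotImplementedError(
            "Client-side caching is only supported with Redis servers"
        )
    # ...otherwise turn on invalidation tracking, start caching replies, etc.


if __name__ == "__main__":
    try:
        enable_client_side_caching(connection=None)
    except NotImplementedError as exc:
        print(f"gate triggered: {exc}")
```

The point being: the check keys off the server’s self-reported identity, not off any missing capability, which is why it works fine on Valkey once you get past the gate.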
Jesus, that’s nasty
That kind of reminds me of medical implant hacks. I think they’re in a similar spot where we’re just hoping no one is enough of an asshole to try it in public.
Like pacemaker vulnerabilities: https://www.engadget.com/2017-04-21-pacemaker-security-is-terrifying.html
caption: “AI is itself significantly accelerating AI progress”
wow I wonder how you came to that conclusion when the answers are written like a Fallout 4 dialogue tree
- “YES!!!”
- “Yes!!”
- “Yes.”
- " (yes)"
Hi, guys. My name is Roy. And for the most evil invention in the world contest, I invented a child molesting robot. It is a robot designed to molest children.
You see, it’s powered by solar rechargeable fuel cells and it costs pennies to manufacture. It can theoretically molest twice as many children as a human molester in, quite frankly, half the time.
At least The Rock’s child molesting robot didn’t require dedicated nuclear power plants
Oh wow, Dorsey is the exact reason I didn’t want to join it. Now that he jumped ship maybe I’ll make an account finally
Honestly, what could he even be doing at Twitter in its current state? Besides, I guess, getting that bag before it goes up or down in flames
e: oh god it’s a lot worse than just crypto people and Dorsey. Back to procrastinating
I know this shouldn’t be surprising, but I still cannot believe people really bounce questions off LLMs like they’re talking to a real person. https://ai.stackexchange.com/questions/47183/are-llms-unlikely-to-be-useful-to-generate-any-scientific-discovery
I have just read this paper: Ziwei Xu, Sanjay Jain, Mohan Kankanhalli, “Hallucination is Inevitable: An Innate Limitation of Large Language Models”, submitted on 22 Jan 2024.
It says there is a ground truth ideal function that gives every possible true output/fact to any given input/question, and no matter how you train your model, there is always space for misapproximations coming from missing data to formulate, and the more complex the data, the larger the space for the model to hallucinate.
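(For what it’s worth, my own loose reading of the paper’s core claim, hand-waving away the formal setup, is roughly:

$$\forall\, h \in \mathcal{H}_{\mathrm{computable}} \;\; \exists\, s \in S :\; h(s) \neq f(s)$$

where $f$ is the ground-truth function over the input set $S$ and $\mathcal{H}_{\mathrm{computable}}$ is any enumerable set of computable models, i.e. hallucination on some input is unavoidable. No amount of asking o1 whether it agrees changes that.)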
Then he immediately follows up with:
Then I started to discuss with o1. [ . . . ] It says yes.
Then I asked o1 [ . . . ], to which o1 says yes [ . . . ]. Then it says [ . . . ].
Then I asked o1 [ . . . ], to which it says yes too.
I’m not a teacher but I feel like my brain would explode if a student asked me to answer a question they arrived at after an LLM misled them on like 10 of their previous questions.
Cambridge Analytica even came back from the dead, so that’s still around.
(At least, I think? I’m not really sure what the surviving companies are like or what they were doing without Facebook’s API)
Former staff from scandal-hit Cambridge Analytica (CA) have set up another data analysis company.
[Auspex International] was set up by Ahmed Al-Khatib, a former director of Emerdata.