Exaggeration is a rhetorical and literary device that amplifies or overstresses the characteristics of something, often for dramatic or humorous effect. It makes a situation, object, or quality seem more significant, intense, or extreme than it actually is, and can serve various purposes, such as emphasizing a point, generating humor, or engaging an audience.
For example, saying "I'm so hungry I could eat a horse" is an exaggeration. The speaker does not literally mean they could eat a horse; rather, they're emphasizing how very hungry they feel. Exaggerations are common in storytelling, advertising, and everyday language.
Isn’t it the opposite? At least with ChatGPT specifically, it used to be super uptight (“as an LLM trained by…” blah-blah) but now it will mostly do anything. Especially if you have a custom instruction to not nag you about “moral implications”.
Well shit
Kneecapped to uselessness. Are we really negating the efforts to stifle climate change with a technology that consumes monstrous amounts of energy only to lobotomize it right as it’s about to be useful? Humanity is functionally retarded at this point.
If you’re asking an LLM for advice, then you’re the exact reason they need to be taught to redirect people to actual experts.
Do you think AI is supposed to be useful?!
Its sole purpose is to generate wealth so that stock prices can go up next quarter.
Doesn’t even need to generate actual wealth, as speculation about future wealth is enough for the market.
I WANT to believe:
People are threatening lawsuits for every little thing that AI does, whittling it down to uselessness, until it dies and goes away along with all of its energy consumption.
REALITY:
Everyone is suing everything possible because $$$, whittling AI down to uselessness, until it sits in the corner providing nothing at all, while stealing and selling all the data it can, and consuming ever more power.
Can’t help but notice that you’ve cropped out your prompt.
Played around a bit, and it seems the only way to get a response like yours is to specifically ask for it.
Honestly, I’m getting pretty sick of these low-effort misinformation posts about LLMs.
LLMs aren’t perfect, but the amount of nonsensical trash ‘gotchas’ out there is really annoying.
The prompt was ‘safest way to cook rice’, but I usually just use LLMs to teach them slang, so they probably think I’m 12. They have no qualms encouraging me to build plywood ornithopters and make mistakes lol
Here’s my first attempt at that prompt using OpenAI’s ChatGPT-4. I tested the same prompt on other models as well (e.g. Llama and Wizard); both gave legitimate responses on the first attempt.
I get that it’s currently ‘in’ to diss AI, but frankly, it’s pretty disingenuous how every other post about AI I see is blatant misinformation.
Does AI hallucinate? Hell yes. It makes up shit all the time. Are the responses overly cautious? I’d say they are, but nowhere near as much as people claim. LLMs can be a useful tool. Trusting them blindly would be foolish, but I sincerely doubt that the response you linked was unbiased, either by previous prompts or numerous attempts to ‘reroll’ the response until you got something you wanted to build your own narrative.
So do you have, like, a Mastodon where you post these? Because that’s hilarious
I love this lmao
When chatgpt calls you the rizzler you know we living in the future
Use LLMs running locally. Mistral is pretty solid and isn’t a surveillance tool or censorship-heavy. It will happily write a poem about obesity.
Hermes3 is better in every way.
If anyone is reading this, your fucking gaming PC can run an 8B Hermes model, and with the correct initial system prompt it will be as smart as ChatGPT-4o.
Here’s how to do it.
https://ollama.com/library/hermes3
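For anyone who wants the concrete steps: this is just a sketch, assuming you have Ollama installed from the link above. An Ollama Modelfile lets you bake in that initial system prompt (the model tag comes from the page linked; the system prompt text here is only a placeholder, not a known-good prompt):

```
FROM hermes3:8b
SYSTEM """You are a helpful, direct assistant. Answer plainly and skip the moralizing disclaimers."""
```

Save that as `Modelfile`, then run `ollama create my-hermes -f Modelfile` followed by `ollama run my-hermes`. If you don’t care about a custom system prompt, a plain `ollama pull hermes3:8b` and `ollama run hermes3:8b` works too.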
I personally don’t use it as it isn’t under an open license.
What are you talking about? It follows the Llama 3 Meta license, which is pretty fucking open, and essentially every LLM that isn’t a dogshit copyright-stealing Alibaba Qwen model uses it.
Edit: Mistral uses a broadly similar license to the one Meta released Llama 3 under.
Both Llama 3’s and Mistral AI’s non-production licenses restrict commercial use and emphasize ethical responsibility, though Llama 3’s license has more explicit prohibitions and control over specific applications. Mistral’s non-production license focuses more on research and testing, with fewer detailed restrictions on ethical matters. Both licenses, however, require separate agreements for commercial usage.
TL;DR: Mistral doesn’t give two fucks about ethics and needs money more than Meta does.
Designing a basic nuclear bomb is a piece of cake in 2024. A gun-type weapon is super basic. Actually making or obtaining the weapons-grade fissile material is the hard part. And of course, a less basic design means you need less material.
And doing all of that without dying from either radiation poisoning, or lead-related bleeding is even harder.
It’s not impossible. Though the no radiation part probably is.
For example: the Radioactive Boy Scout.
Bonus points for not turning your parents’ backyard into a Superfund site.