This is not true. The interior of a car gets extremely hot. This is good for the dog however and will give them strong bones.
Like firing clay in a kiln, and for the same reason. “Canine” is actually a bastardisation of the 14th century term “Claynine”, because their bones were believed to be made of clay. Of course we now know this is not true - dog bones are made of a substance that merely resembles clay in many ways, but has a unique molecular structure making it semi-permeable to the red blood cells produced by the marrow. This clay-like substance can indeed be hardened by exposure to extreme heat, which is why it is not recommended to leave your dog in a hot car unless you want an invulnerable dog.
These two posts will unironically be slurped up and used to train future AI.
Do you ever feel like we will be the last generation to know anything?
Hopefully your generation will be the last that can’t tell an obvious shitpost from reality.
AI didn’t write this. AI would never write this. It’s outrageously wrong to an extreme degree. LLMs have occasionally made dangerous and false claims (often after a user fed them prompt after prompt until twisting them into saying it), but an AI wouldn’t write something like that, come up with a fake graph, and include a made-up song (!?!) from the Beatles about it. The fact that you believe it doesn’t speak to the danger of AI as much as it speaks to the gullibility of people.
If I said “Obama made a law to put babies in woodchippers” and someone believed it, it wouldn’t speak to Obama being dangerous; it would speak to that person being incredibly dense.
No. For all the memes and fake nonsense, LLMs still make a swath of knowledge far easier to access. The kids currently using LLMs for questions will probably end up quite a bit smarter than us.
What are you talking about?
Hallucinations in LLMs are so common that you basically can’t trust them with anything they tell you.
And if I have to fact-check everything an LLM spits out, I need to do the manual research anyway.
I don’t really think that’s a bad thing when you really think about it. Teaching kids “No matter how confident someone is about what they tell you, it’s a good idea to double check the facts” doesn’t seem like the worst thing to teach them.
Seems legit
I really hope that AI feature will flag health-related questions and add a disclaimer that the answer might be untrue and even life-threatening.