He lies to assert power. In his company, yes-men say yes because he signs their paychecks. To the rest of us he generally looks like a loon.
It’s obvious to a daft AI.
Come on guys, this was clearly the work of the Demtards hacking his AI and making it call him names. We all know his superior intellect will totally save the world and make it a better place, you just gotta let him go completely unchecked to do it.
/s
Well then they will have to train their AI with incorrect information… politically incorrect, scientifically incorrect, etc… which renders the outputs useless.
Output that is scientifically accurate and as close to the truth as possible will never match conservative talking points… because those talking points are scientifically wrong.
It would be the same with liberal talking points and in general any human talking point.
Humans try to bend reality to the way they want it, so the things they say are rarely fully correct. When they want more of something, they usually make the current amount appear smaller than it really is. And appearances are not universal.
Humans also simplify things in a way that is acceptable for one subject but not for another.
Humans also don’t know what “correct information” is.
A lot of the philosophy of language starts to matter when your main approach to “AI” is text extrapolation.
Math is correct without humans. Pi is the same across the whole universe. There are scientific truths. And then there are the flat-earth, 2x2=1, QAnon, anti-vax, chemtrail loonies, who in different degrees and colours are mostly united under the conservative “anti-science” banner.
And you want an AI that doesn’t offend these folks / is taught based on their output. What use would that be?
Ahem, well, there are obvious counterexamples: that 2x2 modulo 3 is 1, that some vaccines might be bad (that’s why pharma-industry regulations exist), and that pi can also mean an unknown p multiplied by an unknown i, or just some number encoded as the string ‘pi’.
These all matter for language models, do they not?
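The modular-arithmetic point above is easy to check; here is a throwaway Python sketch (nothing beyond standard arithmetic is assumed):

```python
# In ordinary integer arithmetic, 2 * 2 is of course 4.
ordinary = 2 * 2
print(ordinary)        # 4

# But in arithmetic modulo 3, the product wraps around:
# 4 leaves a remainder of 1 when divided by 3,
# so "2 x 2 = 1" is a true statement in Z/3Z.
wrapped = (2 * 2) % 3
print(wrapped)         # 1
```

The same digits, a different context, a different truth value — which is exactly the kind of context-dependence a language model has to cope with.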
And you want an AI that doesn’t offend these folks / is taught based on their output. What use would that be?
It is already taught on their output among other things.
But I personally don’t think this leads anywhere.
Somebody somewhere decided it was a brilliant idea to extrapolate text, because humans communicate their thoughts via text, so it’s something machines can be made to do.
Humans don’t just communicate.
So you’re saying you lie to try and change reality or present it in a different way?
That’s horrible and I certainly don’t subscribe to this mentality. I will discuss things with people with an open mind and a willingness to change positions if presented with new information.
We are not arguing out of some tribal belief, we have our morals and we will constantly test them to try and be better humans for our fellow humans.
Tell me more about how your theories of gay people being abominations are backed by science.
My theories?
I mean, this is an example. A liberal trying to start an argument by saying things that are false, but that in his opinion will lead to something good.
Just because you are a liar does not mean that all humans are egoistic liars. Of course there are a lot of them, but it is not a universal human trait; it’s cultural and regional. Liars want you to believe that everyone is lying all the time, because that makes their lives easier. But feel free not to believe me 😇.
I think you hurt people’s feelings lmao.
The truth just isn’t very catchy. Thanks for trying though. I’m still on Lemmy for people like you.
Chatbots can’t “admit” things. They regurgitate text that just happens to be information a lot of the time.
That said, the irony is ironclad.
Is “dragged” the new “slammed”?
Last decade it was “destroyed”
https://slatestarcodex.com/2015/01/21/these-are-a-few-more-of-my-least-favorite-things/ point 2