200fifty
Can AI companies legally ingest copyrighted materials found on the internet to train their models, and use them to pump out commercial products that they then profit from? Or, as the tech companies claim, does generative AI output constitute fair use?
This is kind of the central issue to me honestly. I’m not a lawyer, just a (non-professional) artist, but it seems to me like “using artistic works without permission of the original creators in order to create commercial content that directly competes with and destroys the market for the original work” is extremely not fair use. In fact it’s kind of a prototypically unfair use.
Meanwhile Midjourney and OpenAI are over here like “uhh, no copyright infringement intended!!!” as though “fair use” is a magic word you say that makes the thing you’re doing suddenly okay. They don’t seem to have very solid arguments justifying themselves other than “AI learns like a person!” (false) and “well, Google Books did something that’s not really the same at all that one time.”
I dunno, I know that legally we don’t know which way this is going to go, because the AI people presumably have very good lawyers, but something about the way everyone seems to frame this as “oh, both sides have good points! who will turn out to be right in the end!” really bugs me for some reason. Like, it seems to me that there’s a notable asymmetry here!
Oh my god, I can’t stop laughing out loud at “women evolved small heads because they kept falling over and hitting their big heads on rocks,” based on the fact that his sister hit her head when she was younger. What’s his explanation for why men didn’t do this then?? Absolutely next-level moon logic I love it so much
Not even – it’s a simplified Civilization clone for mobile. (It actually sounds like a pretty neat little game, but, uh, chess it is not!)
The bill mandates safety testing of advanced AI models and the imposition of “guardrails” to ensure they can’t slip out of the control of their developers or users and can’t be employed to create “biological, chemical, and nuclear weapons, as well as weapons with cyber-offensive capabilities.” It’s been endorsed by some AI developers but condemned by others who assert that its constraints will drive AI developers out of California.
Man, if I can’t even build homemade nuclear weapons, what CAN I do? That’s it, I’m moving to Nevada!
ngl his stuff always felt a bit cynical to me, in that it seemed to exist more to say “look, video games can have a deep message!” than it did to just have such a message in the first place. Like it existed more to gesture at the concept of meaningfulness rather than to be meaningful itself.
My main thought reading through this whole thing was like, “okay, in a world where the rationalists weren’t closely tied to the neoreactionaries, and the effective altruists weren’t known by the public mostly for whitewashing the image of a guy who stole a bunch of people’s money, and libertarians and right-wingers were supported by the mainstream consensus, I guess David Gerard would be pretty bad for saying those things about them. Buuuut…”
Clicking through to one of the source articles:
Through an algorithm that analyzes troves of student information from multiple sources, the chatbot was designed to offer tailored responses to questions like “what grade does my child have in math?”
Okay, I’m not a big-brain edtech integration admin, but I seem to recall that like fifteen years ago we had a website that my parents could check to see my grade in math. I feel like this was already a solved problem honestly.
It’s so wild how ChatGPT and this “style” of AI literally didn’t exist two years ago, yet we’re all expected to believe it’s this essential, indispensable, irreplaceable tool that people can’t live without, and that actually you’re the meanie for suggesting people do something the exact same way they would have in 2022 instead of using the environmental-disaster spam machine.