cross-posted from: https://hexbear.net/post/3613920
Get fuuuuuuuuuuuuuucked
“This isn’t going to stop,” Allen told the New York Times. “Art is dead, dude. It’s over. A.I. won. Humans lost.”
“But I still want to get paid for it.”
This article is annoyingly one-sided. The tool performs an act of synthesis just like an art student looking at a bunch of art might. Sure, like an art student, it could copy someone’s style or even an exact image if asked (though those asking may be better served by torrent sites). But that’s not how most people use these tools. People create novel things with these tools and should be protected under the law.
The tool performs an act of synthesis just like an art student looking at a bunch of art might.
Lol, no. A student still incorporates their own personality in their work. Art by humans always communicates something. LLMs can’t communicate.
People create novel things with these tools and should be protected under the law.
I thought it was “the tool” that “performs an act of synthesis”. Do people create things, or does the LLM?
The machine learning model creates the picture, and it does have a “style”. That “style” has been at least partially removed from most commercial models, but it still exists.
It doesn’t have a “style”. It stores a statistical correlation of art styles.
It’s deterministic. I can exactly duplicate your “art” by typing in the same sentence. You’re not creative, you’re just playing with toys.
Ok, here’s an image I generated with a random seed:
Here’s the UI showing it as a result:
Then I reused the exact same input parameters. Here you can see it in the middle of generating the image:
Then it finished, and you can see it generated the exact same image:
Here’s the second image, so you can see for yourself compared to the first:
You can download Flux Dev, the model I used for this image, and input the exact same parameters yourself, and you’ll get the same image.
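If you want to see roughly what that looks like in code, here’s a minimal sketch using the Hugging Face diffusers library (the prompt, seed, and sampler settings below are made-up placeholders, not the ones from my screenshots):

```python
# Sketch: fixing every input (prompt, resolution, steps, guidance, seed)
# makes a Flux Dev generation reproducible.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Placeholder values; reuse whatever you actually generated with.
image = pipe(
    prompt="a lighthouse on a cliff at sunset",
    height=1024,
    width=1024,
    num_inference_steps=28,
    guidance_scale=3.5,
    generator=torch.Generator("cuda").manual_seed(123456),
).images[0]
image.save("run1.png")  # run it again with the same seed and compare the files
```

Same weights, same parameters, same seed, same hardware and library versions: same image.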
That’s actually fundamentally untrue, independent of your opinion. I promise that when people generate an image from a phrase, it will be different each time; it’s not deterministic (not in the way you mean).
You and I cannot type the same prompt into the same generative AI model and receive the same result; no system works with that level of specificity, by design.
They pretty much all use some form of entropy / noise.
It’s literally as true as it can possibly be. Given the same inputs (including the same seed), a diffusion model will produce exactly the same output every time. It’s deterministic in the most fundamental meaning of the word. That’s why, when you post an image on CivitAI, people like it when you share your input parameters, so they can duplicate the image. I have recreated the exact same images using models from there.
Humans are not deterministic (at least as far as we know). If I give two people exactly the same prompt, and exactly the same “training data” (show them the same references, I guess), they will never produce the same output. Even if I give the same person the same prompt, they won’t be able to reproduce the same image again.
This can actually be true, depending on how the system is configured.
For instance, if you and someone else use the same locally-hosted Stable Diffusion UI, enter the exact same prompt, and use the same seed, number of steps, and dimensions, you’ll get an identical result.
The only reason outputs differ between generations of the same prompt is the noise from the seed, which is normally randomized between generations. It can easily be set to the same value as someone else’s generation, and will yield an identical result unless the prompt is changed.
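Here’s a rough sketch of that check using the diffusers library instead of a web UI (the checkpoint, prompt, and seed are arbitrary; swap in whatever your setup uses):

```python
# Sketch: two generations with identical prompt, seed, steps, and dimensions
# come out pixel-identical (on the same hardware and library versions).
import numpy as np
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

def generate(seed: int):
    return pipe(
        "a ninja turtle on a skateboard",
        height=512,
        width=512,
        num_inference_steps=30,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]

a, b = generate(1234), generate(1234)
print((np.asarray(a) == np.asarray(b)).all())  # True: same inputs, same image
```

Change the seed (or any other input) and the two runs diverge.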
So what you’re saying is that the AI is the artist, not the prompter. The AI is performing the labor of creating the work, at the request of the prompter, like the hypothetical art student you mentioned did, and the prompter is not the creator any more than I would be if I kindly asked an art student to paint me a picture.
In which case, the AI is the thing that gets the authorial credit, not the prompter. And since AI is not a person, anything it authors cannot be subjected to copyright, just like when that monkey took a selfie.
It should be as copyrightable as the prompt. If the prompt is something super generic, then there’s no real work done by the human. If the prompt is as long and unique as other copyrightable writing (which includes short works like poems) then why shouldn’t it be copyrightable?
If the prompt is as long and unique as other copyrightable writing (which includes short works like poems) then why shouldn’t it be copyrightable?
Okay, so the prompt can be that. But we’re talking about the output, no? My hello-world source code is copyrighted, but the output “hello world” on your machine isn’t really, no?
Because it wasn’t created by a human being.
If I ask an artist to create a work, the artist owns authorship of that work, no matter how long I spent discussing the particulars of the work with them. Hours? Days? Months? Doesn’t matter. They may choose to share or reassign some or all of the rights that go with that, but initial authorship resides with them. Why should that change if that discussion is happening not with an artist, but with an AI?
The only change is that, not being a human being, an AI cannot hold copyright. Which means a work created by an AI is not copyrightable. The prompter owns the prompt, not the final result.
Another idiot who thinks “prompt engineering” is a real skill and not just another way those companies are using idiots for free AI training.
You ask the AI to draw a ninja turtle on a skateboard, and the “effort” you put into phrasing your request well enough for the AI to understand teaches the AI that the ten past attempts were looking for what the eleventh got.
And now it won’t take ten tries to go that route.
Any “skill” by the user has a very short expiration date because the next version won’t need it thanks to all the time users spent developing those “skills”.
But no one impressed with AI is smart enough to realize that. And since they’re the ones training the AI…
Idiots in, idiots out
I use AI when I use search engines. This makes the search engines better. I also use AI when I get Spotify suggestions. I use AI when I use autocorrect. I use AI without even realizing I’m using AI, and the AI improves from it. I and many other people get an improved quality of life from it; that’s why nearly everyone uses it just like I do.
So, @givesomefucks , do you also regularly use AI that improves from your usage? Or are you a hypocrite who thinks there is something morally bad about specific AIs you don’t like, while doing exactly what you claim to be against with other AIs? How are your moral lines drawn?
Thanks for the example!
Whether an individual finds AI “smart” depends on how smart the person is. We’re all our own frame of reference.
I have no doubt AI impresses you every day of your life, even stuff that’s not AI apparently, because not all of your examples were.
Thanks for demonstrating what a useless term “AI” is when you’re not trying to sell snake oil.
Every word in every language changes over time. The term “AI” changing is absolutely normal. It’s not some mark against it.
Current LLMs are phenomenally beneficial for some things. Millions of developers have had their entire careers completely changed. Teachers are able to grade work in 10% of the time. Children through to college students, and anyone interested in learning, have infinitely patient tutors on demand 24 hours a day. The fact that you are completely clueless about what is going on doesn’t by any stretch of the imagination mean it isn’t happening. It just means that you not only feel like you are “beyond learning”, but also that you don’t even have people in your life who are still interested in personal growth, or that you are too shallow to have conversations with anyone who is.
This is just the beginning. The more you cling to being in denial about progress, the further behind you will fall. You are denying that any mode of transportation other than horses even exists, while people are routinely flying around the world. It most likely won’t be too long until your mindset is widely accepted as a mental disorder.
I use ai when I use search engines. This makes the search engines better.
I completely agree. I wonder whether some IT bachelor’s degrees now have lessons in AI prompting. I remember in 2005 there was a course we had to take which could’ve been labeled “[shitty] Google-Fu” or something; “information searching” is what the name would more or less translate to. Basically, searching well using Google and library searches. And I don’t mean “library” in the IT context, but actual libraries. With books. We just had to use the search tools the local libraries had.
Such a fucking filler class.
In my year like 60 started, two classes. After three years like 8 graduated.
It’s kinda dead now due to enshittification but the vast majority of humans I’ve interacted with could use a class on how to use a search engine.
Edit: it could be made more modern by showing how to ignore sponsored stuff, blatant SEO shit, AI shit, etc.
I don’t know about that in particular, because people generally add more detail, but it does teach the AI what kind of detail to add. So if you’re not picky, then yeah, the AI learns from that kind of thing.
As far as it being a useful skill, I don’t think it was in the first place. “Prompt engineer” has always been a joke. It’s like being a “sandwich artist”. Everyone can do it with one day of practice.
Agreed.
Get fucked, you no talent ass clown.