I’ve recently noticed this opinion seems unpopular, at least on Lemmy.
There is nothing wrong with downloading public data and doing statistical analysis on it, which is pretty much what these ML models do. They are not redistributing other people’s works (well, sometimes they do, unintentionally, and safeguards to prevent this are usually built in). The training data is generally much, much larger than the model sizes, so it is generally not possible for the models to reconstruct random specific works. They are not creating derivative works, in the legal sense, because they do not copy and modify the original works; they generate “new” content based on probabilities.
My opinion on the subject is pretty much in agreement with this document from the EFF: https://www.eff.org/document/eff-two-pager-ai
I understand the hate for companies using data you would reasonably expect would be private. I understand hate for purposely over-fitting the model on data to reproduce people’s “likeness.” I understand the hate for AI generated shit (because it is shit). I really don’t understand where all this hate for using public data for building a “statistical” model to “learn” general patterns is coming from.
I can also understand the anxiety people may feel, if they believe all the AI hype, that it will eliminate jobs. I don’t think AI is going to be able to directly replace people any time soon. It will probably improve productivity (with stuff like background-removers, better autocomplete, etc), which might eliminate some jobs, but that’s really just a problem with capitalism, and productivity increases are generally considered good.
The output of an LLM is analogous to re-saving an image as a low-res JPEG. Data is being processed and altered using statistics, but nothing “new” is being created, only lower-quality derivatives. That’s why you can’t train an LLM on the output of an LLM.
This is actually a decent argument, but there has to be a threshold. For instance, if I take the average of all RGB values in an image, and distribute a pixel with the average, is that breaking copyright or somehow immoral?
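That averaging example is easy to make concrete. A toy sketch in pure Python, with a made-up 2x2 "image" of RGB tuples (the values are illustrative, not from any real image):

```python
# Collapse an "image" (a hypothetical 2x2 grid of RGB tuples)
# into a single averaged pixel. Essentially none of the original
# image survives the reduction, which is the point of the
# threshold question above.
pixels = [
    (255, 0, 0), (0, 255, 0),
    (0, 0, 255), (255, 255, 0),
]

n = len(pixels)
# zip(*pixels) groups the R, G, and B channels; average each one.
avg_pixel = tuple(sum(channel) // n for channel in zip(*pixels))
print(avg_pixel)  # (127, 127, 63)
```

Nobody would call that lone pixel a derivative work, so the argument has to be about where on the spectrum between this and verbatim copying a model actually sits.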
I recently looked into the speculated model-size and speculated training set size of GPT and Stable Diffusion, and it does appear that if you thought of them as compression algorithms, they’d only be doing something like 1:7 compression. These ratios aren’t outlandish for lossy compression.
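The ratio math itself is trivial back-of-the-envelope arithmetic. The sizes below are placeholders chosen to produce a 1:7 ratio, not the actual (unpublished) figures for GPT or Stable Diffusion:

```python
# "Model as lossy compressor" sanity check. Substitute whatever
# training-corpus and model-size estimates you trust; these
# numbers are made up for illustration.
training_set_gb = 28_000  # hypothetical training-corpus size
model_gb = 4_000          # hypothetical model size

ratio = training_set_gb / model_gb
print(f"effective compression: 1:{ratio:.0f}")  # effective compression: 1:7
```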
Compression and redistribution isn’t the (stated) goal of these models. Hypothetically, these models are learning patterns and associations of things like styles and how humans write text. And they appear to do things a little beyond just copying and pasting. So, hypothetically, a lot of the model size could mostly consist of learned styles and human preferences, rather than just a compressed database of the images they were trained on. I guess the real test is trying to prompt the models to reproduce an item from their training set and evaluating how similar the result is.
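That "real test" could even be scored mechanically. A minimal sketch, using made-up 4-pixel grayscale "images" and plain mean squared error (a real memorization audit would compare actual model outputs with a perceptual metric, not MSE on toy lists):

```python
# Score how close a model's output is to a known training item.
# Lower MSE means the output is suspiciously close to a
# verbatim reproduction of the training image.
def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

original  = [10, 200, 30, 90]   # pixels of the training image (made up)
generated = [12, 198, 33, 85]   # pixels of the prompted output (made up)

print(mse(original, generated))  # 10.5
```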
For personal or public use, I’m fine with it. If you use it to make money, that’s when I get upsetti spaghetti.
Ok. Devil’s Advocate: how is a software engineer profiting from his AI model different from an artist who learns to draw by mimicking the style of public works? Asking for a friend.
Good question.
Ok, so let’s say the artist does exactly what the AI does, in that they don’t try to do anything unique, just looking around at existing content and trying to mix and mash existing ideas. No developing of their own style, no curiosity about art history, no humanity, nothing. In this case I would say that they are mechanically doing the exact same thing as an AI is doing. Do I think they should get paid? Yes! They spent a good chunk of their life developing this skill, they are a human, they deserve to get their basic needs met and not die of hunger or exposure. Now, this is a strange case, because 99.99% of artists don’t do this. Most develop a unique style and add life experience to their art to generate something new.
A Software Engineer can profit off their AI model by selling it. If they make money by generating images, then they are making money off of hard-working artists who should be paid for their work. That isn’t great. The outcome of allowing this is that art will no longer be something you can do to make a living. This is bad for society.
It also should be noted that a Software Engineer making an AI model from scratch is 0.01% of the AIs being used. Most people, lay people, who have spent very little time developing art or Software Engineering skills can easily use an existing model to create “art”. The result of this is that many talented artists who could bring new and interesting ideas to the world are being outcompeted by one guy with a web browser producing sub-par, sloppy work.
Good question!
First, that artist will only learn from a handful of artists instead of every artist’s entire body of work all at the same time. They will also eventually develop their own unique style and voice; the art they make will reflect their own views in some fashion, instead of being a poor facsimile of someone else’s work.
Second, mimicking the style of other artists is a generally poor way of learning how to draw. Just leaping straight into mimicry doesn’t really teach you any of the fundamentals like perspective, color theory, shading, anatomy, etc. Mimicking an artist that draws lots of side profiles of animals in neutral lighting might teach you how to draw a side profile of a rabbit, but you’ll be fucked the instant you try to draw that same rabbit from the front, or if you want to draw a rabbit at sunset. There’s a reason why artists do so many drawings of random shit like cones casting a shadow, or a mannequin doll doing a ballet pose, and it ain’t because they find the subject interesting.
Third, an artist spends anywhere from dozens to hundreds of hours practicing. Even if someone sets out expressly to mimic someone else’s style and teaches themselves the fundamentals, it’s still months and years of hard work and practice, and a constant cycle of self-improvement, critique, and study. This applies to every artist, regardless of how naturally talented or gifted they are.
Fourth, there’s a sort of natural bottleneck in how much art that artist can produce. The quality of a given piece of art scales roughly linearly with the time the artist spends on it, and even artists that specialize in speed painting can only produce maybe a dozen pieces of art a day, and that kind of pace is simply not sustainable for any length of time. So even in the least charitable scenario, where a hypothetical person explicitly sets out to mimic a popular artist’s style in order to leech off their success, it’s extremely difficult for the mimic to produce enough output to truly threaten their victim’s livelihood. In comparison, an AI can churn out dozens or hundreds of images in a day, easily drowning out the artist’s output.
And one last, very important point: artists who trace other people’s artwork and upload the traced art as their own are almost universally reviled in the art community. Getting caught tracing art is an almost guaranteed way to get yourself blacklisted from every art community and banned from every major art website I know of, especially if you’re claiming it’s your own original work. The only way it’s even mildly acceptable is if the tracer explicitly says “this is traced artwork for practice, here’s a link to the original piece, the artist gave full permission for me to post this.” Every other creative community, writing and music included, takes a similarly dim view of plagiarism, though it’s much harder to prove outright than with art. Given this, why should the art community treat someone differently just because they laundered their plagiarism with some vector multiplication?
if they’re using creative commons licenses (or other sharing licenses) then it’s fine! but the model is then also bound by the same licenses because that’s how licenses work
Huh I read your headline in a sarcastic tone so was totally ready to argue with you. But I agree. Not sure if it’s an unpopular opinion though.
This falls squarely into the trap of treating corporations as people.
People have a right to public data.
Corporations should continue to be tolerated only while they carefully walk an ever tightening fine line of acceptable behavior.
Yes. Large groups of people acting in concert, with large amounts of funding and influence, must be held to the highest standards, regardless of whether they’re doing something I personally value highly.
An individual’s rights must be held sacred.
When those two goals are in conflict, we must melt the corporation-in-conflict down for scrap parts, donate all of its intellectual property to the public domain, and try again with forming a new organization with a similar but refined charter.
Shareholders should be, ideally, absolutely fucked by this arrangement, when their corporation fucks up, as an incentive to watch and maintain legal compliance in any companies they hold shares in and influence over.
Investment will still happen, but with more care. We have historically used this model to great innovative success, public good, and lucrative dividends. Some people have forgotten how it can work.
I think they are saying that preventing open-source models from being trained and released prevents people from using them. Trying to make training these models more difficult doesn’t just affect businesses, it affects individuals too. Essentially you have all been trying to stand in the way of progress, probably because of fears over job security. It’s not really different from being a Luddite.