And this is where I split with Lemmy.
There’s a very fragile, fleeting war between shitty, tech-bro-hyped (but bankrolled) corporate AI and locally runnable, openly licensed, practical tool models without nearly as much funding. Guess which one doesn’t care about breaking the law, because everything is proprietary?
The “I don’t care how ethical you claim to be, fuck off” attitude is going to get us stuck with the former. It’s the same argument as Lemmy vs Reddit, compared to a “fuck anything like reddit, just stop using it” attitude.
What if it was just some modder trying a niche model/finetune to restore an old game, for free?
That’s a rhetorical question, as I’ve been there: a few years ago, I used ESRGAN finetunes to help restore a game and (separately) a TV series. Used some open databases for data. Community loved it. I suggested an update in that same community (who apparently had no idea their beloved “remaster” involved old-school “AI”), and got banned for the mere suggestion.
So yeah, I understand AI hate, oh do I. Keep shitting on Altman and AI bros. But anyone (like this guy) who wants to bury open-weights AI: you are digging your own graves.
Oh, so you deserve to use other people’s data for free, but Musk doesn’t? Fuck off with that one, buddy.
Musk does too, if it’s openly licensed.
Big difference is:

- X’s data crawlers don’t give a shit because all their work is closed source. And they have lawyers to just smash anyone that complains.
- X intends to resell and make money off others’ work. My intent is free, transformative work I don’t make a penny off of, which is legally protected.
That’s another thing that worries me. All this is heading in a direction that will outlaw stuff like fanfics, game mods, fan art, anything “transformative” of an original work and used noncommercially, as pretty much any digital tool can be classified as “AI” in court.
Using open datasets means using data people have made available publicly, for free, for any purpose. So using an AI based on that seems considerably more ethical.
Except gen AI didn’t exist when those people decided on their license. And besides, it’s very difficult to specify “free to use, except in ways that undermine free access” in a license.
What if it was just some modder trying a niche model/finetune to restore an old game, for free?
As an artist, all y’all need to chill. The problem is capitalism, and it’s not like artists make a living anyway. Democratizing art opens up a lot of possibilities, you technophobes.
think more. if i draw something that looks nice on paper, and at the same time am fine with asking chatgpt to solve a math problem, why would my views on ai affect me being an artist or not?
Because ChatGPT is trained on stolen data and using it for any reason is participating in that theft while simultaneously causing a significant impact to the environment.
So I guess you’re right; it has no bearing on whether you’re an artist or not, but on whether you’re a decent person. Thanks for clearing that up.
all art is stolen. no one has had an original idea since the early 20th century.
The early 20th century? I’d say physical philosophy would beg to differ, and do you see how you just killed your own argument by citing a time period? I think ideas don’t have value and that intellectual property stifles innovation. You had me in the first half, where I assumed you meant that people don’t just intuit new ideas from nowhere, then you cited a date and lost me.
do you see how you just killed your own argument by citing a time period?
no. all art prior to that period was just refinement of forms that go back to pre-history. the 20th century introduced ‘modern art’, which basically solidified the idea that anything can be art.
Ideas are not art.
no one has had an original idea since the early 20th century.
PROJECTING
Easy. Don’t work a job or pay rent. Anarchism already exists. It just exists in the crannies (like right in front of you) where other domineering primates don’t beat you with sticks or boss you around. You don’t fix the system. You ignore it.
Oh boy here we go downvotes again
Regardless of the model you’re using, the tech itself was developed and fine-tuned on stolen artwork with the sole purpose of replacing the artists who made it.
That’s not how that works. You can train a model on licensed or open data, and they didn’t make it to spite you; even if a large group of grifters did, those aren’t the ones developing it.
If you’re going to hate something at least base it on reality and try to avoid being so black-and-white about it.
Name one that is “ethically” sourced.
And “open data” is a funny thing to say. Why is it open? Could it be open because the people who made it didn’t expect it to be abused for AI? When a pornstar posted a nude picture online in 2010, do you think they imagined someone would use it to create deepfakes of random women? Please be honest. And yes, a picture might not actually be “open data,” but it highlights the flaw in your reasoning. People don’t think about what could be done with their stuff in the future as much as they should, but they certainly can’t predict the future.
Now ask yourself that same question with any profession. Please be honest and tell us, is that “open data” not just another way to abuse the good intentions of others?
Wow, nevermind, this is way worse than your other comment. Victim blaming and equating the law to morality, name a more popular duo with AI bros.
Link to this noncorporate, ethically sourced ai plz because I’ve heard a lot but I’ve never seen it
I think his argument is that the models initially needed lots of data to verify and validate their current operation. Subsequent advances may have allowed those models to be created cleanly, but those advances relied on tainted data, thus making the advances themselves tainted.
I’m not sure I agree with that argument. It’s like saying that if you invented a cure for cancer that relied on morally bankrupt means you shouldn’t use that cure. I’d say that there should be a legal process involved against the person who did the illegal acts but once you have discovered something it stands on its own two feet. Perhaps there should be some kind of reparations however given to the people who were abused in that process.
I think his argument is that the models initially needed lots of data to verify and validate their current operation. Subsequent advances may have allowed those models to be created cleanly, but those advances relied on tainted data, thus making the advances themselves tainted.
It’s not true; you can just train a model from the ground up on properly licensed or open data, so you don’t have to inherit anything. What you’re talking about is called finetuning, which is where you “re-train” an existing model to do something specific, because that’s much cheaper than training from the ground up.
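To make the distinction concrete, here’s a minimal, self-contained sketch of what finetuning looks like: a “pretrained” backbone is frozen, and only a small head is re-trained on new data. Everything here (the random backbone, the toy regression data) is invented for illustration; a real workflow would load published, openly licensed pretrained weights rather than random ones, and would use a deep-learning framework instead of bare NumPy.

```python
# Hypothetical sketch of finetuning: freeze a "pretrained" backbone,
# re-train only a small head on new data. The random backbone below is a
# stand-in; a real workflow would load openly licensed pretrained weights.
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" backbone: a fixed ReLU feature map.
W_backbone = rng.normal(size=(8, 32))

def features(x):
    # Backbone weights are never updated during finetuning.
    return np.maximum(x @ W_backbone, 0.0)

# Toy data for the "new" task (a simple regression target).
x = rng.normal(size=(64, 8))
y = x.sum(axis=1, keepdims=True)

Z = features(x)  # backbone outputs, computed once since they never change

# Finetuning: gradient descent on the head only, with a safe step size.
lr = 0.9 / np.linalg.eigvalsh(Z.T @ Z / len(x)).max()
w_head = np.zeros((32, 1))
init_loss = float(np.mean((Z @ w_head - y) ** 2))
for _ in range(500):
    grad = Z.T @ (Z @ w_head - y) / len(x)
    w_head -= lr * grad
final_loss = float(np.mean((Z @ w_head - y) ** 2))
print(f"loss: {init_loss:.3f} -> {final_loss:.3f}")
```

The same pattern shows up in real frameworks (e.g. setting `requires_grad = False` on backbone parameters in PyTorch), which is why finetuning is so much cheaper than training from scratch: gradients are computed and applied only for the small head.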
I don’t think that’s what they are saying. It’s not that you can’t now, it’s that initially people did need to use a lot of data. Then they found tricks to improve training on less, but those tricks came about after people saw what was possible. Since they initially needed such data, their argument goes, and we wouldn’t have been able to improve upon the techniques if we didn’t know that huge neural nets trained on lots of data were effective, subsequent models are tainted by the original sin of requiring all this data.
As I said above, I don’t think that subsequent models are necessarily tainted, but I find it hard to argue with the fact that the original models did use data they shouldn’t have and that without it we wouldn’t be where we are today. Which seems unfair to the uncompensated humans who produced the data set.
You CAN train a model on licensed or open data. But we all know they didn’t keep it to just that.
Yeah the corporations didn’t, that doesn’t mean you can’t and that people aren’t doing that.
Is everyone posting ghibli-style memes using ethical, licensed or open data models?
The more I see dishonest, blindly reactionary rhetoric from anti-AI people - especially when that rhetoric is identical to classic RIAA brainrot - the more I warm up to (some) AI.
I’m certainly creative enough not to resort to childish ad hominems immediately.
Yes, I like the unethical thing… but it’s the fault of people who are against it. You see, I thought they were annoying, and that justifies anything the other side does, really.
In my new podcast, I explain how I used this same approach to reimagine my stance on LGBT rights. You see, a person with the trans flag was mean to me on twitter, so I voted for—
Wow, using a marginalized group who are actively being persecuted as your mouthpiece, in a way that doesn’t make sense as an analogy. Attacking LGBTQI+ rights is unethical, period. Where your analogy falls apart is in categorically rejecting a broad suite of technologies as “unethical” even as plenty of people show plenty of examples of when that’s not the case. It’s like when people point to studies showing that sugar can sometimes be harmful and then saying, “See! Carbs are all bad!”
So thank you for exemplifying exactly the kind of dishonesty I’m talking about.
My comment is too short to fit the required nuance, but my point is clear, and it’s not that absurd false dichotomy. You said you’re warming up to some AI because of how some people criticize it. That shouldn’t be how a reasonable person decides whether something is OK or not. I just provided an example of how that doesn’t work.
If you want to talk about marginalized groups, I’m open to discussing how GenAI promotion and usage is massively harming creative workers worldwide, whose work is often already considered lesser than that of their STEM peers, and many of whom are part of the very marginalized group you’re defending.
Obviously not all AI, nor all GenAI, are bad. That said, current trends for GenAI are harmful, and if you promote them as they are, without accountability, or needlessly attack people trying to resist them and protect the victims, you’re not making things better.
I know that broken arguments from people who don’t understand all the details of the tech can get tiring. But at this stage, I’ll take someone who doesn’t entirely understand how AI works but wants to help protect people over someone who only cares about technology marching onwards, the people it’s hurting be damned.
Hurt, desperate people lash out, sometimes wrongly. I think a more respectable attitude here would be helping steer their efforts, rather than diminishing them and attacking their integrity because you don’t like how they talk.
It is, in fact, the opposite of reactionary to not uncritically embrace your energy-guzzling, disinformation-spreading, profit-driven “AI”.
As much as you don’t care about language, it actually means something and you should take some time to look inwards, as you will notice who is the reactionary in this scenario.
“Disinformation spreading” is irrelevant in this discussion; LLMs are a whole separate conversation. This is about image generators. And on that topic, you position the tech as “energy guzzling” when that’s not necessarily always the case, as people often show; and as profit-driven, except what about the cases where it’s being used to contribute to the free information commons?
And lastly you’re leaving out the ableism in being blindly anti-AI. People with artistic skills are still at an advantage over people who either lack them, are too poor to hire skilled artists, and/or are literally disabled whether physically or cognitively. The fact is AI is allowing more people than ever to bring their ideas into reality, where otherwise they would have never been able to.
Listen, if you want to argue for facilitating image creation for people who aren’t skilled artists, I—and many more people—are willing to listen. But this change cannot be built on top of the exploitation of worldwide artists. That’s beyond disrespectful, it’s outright cruel.
I could talk about the other points you’re making, but if you were to remember one single thing from this conversation, please let it be this: supporting the AI trend as it is right now is hurting people. Talk to artists, to writers, even many programmers.
We can still build the tech ethically when the bubble pops, when we all get a moment to breathe, and talk about how to do it right, without Sam Altman and his million greedy investors trying to drive the zeitgeist for the benefit of their stocks, at the cost of real people.
Astroturfing? That implies I’m getting paid or compensated in any way, which I’m not. Does your commenting have anything to do with anything?
Astroturf is fake grass. Are you positive that guy in the picture is being paid? He might just be a proud boy.
This is what luddites destroying factories must have been like lmao
luddites destroying factories
Every time a techbro parrots the word ‘luddite’ I want to cause them physical harm.
No, it is not - there is empirical evidence that AI is accelerating climate change. Every AI model has been trained on stolen or unethically sourced data. Your strong desire to create something that you can make money off of is not a moral justification for using AI.
Everything is accelerating climate change. I don’t see how your wish to hurt people you don’t know (kinda weird btw?) has anything to do with it.
Weird considering at least 3 other people did see the point.
If I’m a luddite you’re a troglodyte.
You might be replying to an idiot, but did you really need to report him?