Neat, how about actually making some sweeping regulations tackling corporate EULA-washed malware?
Why do companies get to keep injecting spyware and even rootkits into their OS/software without ever explaining the consequences in a way a lay person can understand?
Used to be, when companies did that, they got punished. Anyone remember the Sony BMG case with rootkit-enabled DRM, or BonziBuddy, whose EULA allowed developers to sell your information to advertisers?
Remember the fucking stink people threw over them? Remember the fucking lawsuits? This shit is just a normal Tuesday for MFAANG. Shit even fucking video games are pushing rootkits down your throat these days. They need to be spanked BAD.
It raises the question of what does or doesn’t count as an addictive feed. I bet this doesn’t specify any particular dark pattern or monetization model.
If we gave half a fuck about mental wellness regarding mobile use, we would have addressed all this back when it was particular to mobile games.
No, this is about our kids learning early how fucked society is, and how their own generation is being fed a pro-ownership-class indoctrination regimen before being appointed a string of dead-end toxic jobs.
Social media is how we learn about the genocide in Gaza, police officer-involved homicide rates, and unionization efforts, and that is why we want kids off social media.
Don’t make me put up the koala cartoon again.
I agree with the first two paragraphs, but the rest really feels like you’re projecting your political and social outlook onto the situation.
Social media provides and strengthens biased worldviews and confirmation bias. If you stick to social media, you’ll think violent crime is rapidly increasing (in the US), when it’s actually down. It’s still a problem, but it’s being used to push political agendas that don’t actually solve it.

For example, banning bump stocks, which are almost never used in mass shootings (most of those involve handguns) and make guns way less accurate. Only enthusiasts get them, and pretty much only for range use. You also still have to pull the trigger each time; a bump stock just makes that easier (you can get the same effect with a rubber band…). Social media is also how we got the anti-vax movement and various other conspiracies.
Social media is one way to get less censored news, but it’s unreliable and tends to lead to echo chambers. We should instead be pushing to get government and political bias out of news reporting (or at least make bias explicit), not protect the less trustworthy, biased social media based news sources. There are countless examples of large social media sources providing incorrect information, and never correcting it, and the false information gets more views than the correct information. Social media drives people toward radicalization, and it’s largely how we got Trump.
Social media is a liability. Mobile games are too. Parents should be restricting their children’s access to both (we do), and instead teaching children to recognize bias and find good information (we’re working on that, but they’re still young).
This is just going to end one of two ways:
- companies storing and selling even more personally identifiable information
- kids lying
Probably both.
So I’m going with no. I’m a responsible parent and I’m preventing my kids from accessing social media and teaching them how to find reliable information. As they earn my trust with other services, I’ll slowly remove restrictions. If I think my kids are ready for SM, I’ll let them have access, using a VPN to avoid state restrictions as needed.
For scenario one, they totally need to delete the data used for age verification after they collect it, according to the law (unless another law says they have to keep it), and you can trust every company to follow the law.
For scenario two, that’s where the age verification requirements of the law come in.
The data protection laws are good, but a lot of the other bills banning dark patterns and other annoying “features” sound difficult to enforce.
Eh, whack-a-mole enforcement usually cuts the mustard with this kind of stuff.
Like yeah, someone’s gonna do it anyway just because, but then all it takes is enough people raising the alarm to bring it down, and as a side effect, remove more shitass developers from the market.
You end up with an equilibrium where not every example is getting the hammers of justice, but enough examples are that the average consumer still feels the benefit of a noticeably less toxic internet.
The effectiveness of bans has always hinged on two factors:
- The likelihood of being caught
- The severity of punishment if caught
For example, everyone knows that the odds of being caught speeding are pretty low, but if the punishment for speeding is ten years imprisonment, then very few people will risk speeding.
Similarly, even if the odds of getting caught violating this law are only 1%, if the punishment is banning the platform and shutting down the company, along with a fine equal to a year’s worth of revenue, then companies will probably not want to risk it.
Why are “addictive feeds” OK for adults?