3 points

Neat, how about actually making some sweeping regulations tackling corporate EULA-washed malware?

Why do companies get to keep injecting spyware and even rootkits into their OS/software without ever explaining the consequences in a way a layperson can understand?

Used to be when companies did that, they got punished. Anyone remember the Sony BMG case with rootkit-enabled DRM, or BonziBuddy, whose EULA allowed developers to sell your information to advertisers?

Remember the fucking stink people threw over them? Remember the fucking lawsuits? This shit is just a normal Tuesday for MFAANG. Shit, even fucking video games are pushing rootkits down your throat these days. They need to be spanked BAD.

0 points

The data protection laws are good, but a lot of the other bills for banning dark patterns and other annoying “features” sound difficult to enforce.

0 points

Eh, whack-a-mole enforcement usually cuts the mustard with this kind of stuff.

Like yeah, someone’s gonna do it anyway just because, but then all it takes is enough people raising the alarm to bring it down and, as a side effect, remove more shitass developers from the market.

You end up with an equilibrium where not every offender gets the hammer of justice, but enough do that the average consumer still feels the benefit of a noticeably less toxic internet.

0 points

The effectiveness of bans has always hinged on two factors:

  • The likelihood of being caught
  • The severity of punishment if caught

For example, everyone knows that the odds of being caught speeding are pretty low, but if the punishment for speeding is ten years imprisonment, then very few people will risk speeding.

Similarly, even if the odds of getting caught violating this law are only 1%, if the punishment is banning the platform and shutting down the company, along with a fine equal to a year’s worth of revenue, then companies will probably not want to risk it.
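The argument above is really just an expected-value calculation. A minimal sketch, with entirely hypothetical numbers chosen to illustrate the comparison:

```python
def expected_penalty(p_caught: float, penalty: float) -> float:
    # Expected cost of one violation: probability of detection times the fine.
    return p_caught * penalty

# Hypothetical figures: a 1% detection rate with a fine of a year's revenue
# (say $10B) versus a 90% detection rate with a token $1M fine.
rare_but_ruinous = expected_penalty(0.01, 10_000_000_000)  # ~$100M expected cost
likely_but_cheap = expected_penalty(0.90, 1_000_000)       # ~$0.9M expected cost
```

Even at 1% odds, the company-ending penalty dominates the expected cost, which is the intuition the comment is relying on.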

0 points

I’ve heard that severity actually doesn’t work as a deterrent; people tend to assume they won’t get caught.

0 points

Why are “addictive feeds” OK for adults?

1 point

They aren’t, but adults are allowed to make that decision about addiction for themselves.

-3 points

Because you don’t have the votes for your fascist nanny state.

0 points

How do they prove your age? Non-tech-savvy people probably just give their kids a phone and don’t do much to lock it down.

0 points

From the description of the bill:

https://legislation.nysenate.gov/pdf/bills/2023/S7694A

To limit access to addictive feeds, this act will require social media companies to use commercially reasonable methods to determine user age. Regulations by the attorney general will provide guidance, but this flexible standard will be based on the totality of the circumstances, including the size, financial resources, and technical capabilities of a given social media company, and the costs and effectiveness of available age determination techniques for users of a given social media platform. For example, if a social media company is technically and financially capable of effectively determining the age of a user based on its existing data concerning that user, it may be commercially reasonable to present that as an age determination option to users. Although the legislature considered a statutory mandate for companies to respect automated browser or device signals whereby users can inform a covered operator that they are a covered minor, we determined that the attorney general would already have discretion to promulgate such a mandate through its rulemaking authority related to commercially reasonable and technologically feasible age determination methods. The legislature believes that such a mandate can be more effectively considered and tailored through that rulemaking process. Existing New York antidiscrimination laws and the attorney general’s regulations will require, regardless, that social media companies provide a range of age verification methods all New Yorkers can use, and will not use age assurance methods that rely solely on biometrics or require government identification that many New Yorkers do not possess.

In other words: sites will have to figure it out and make sure that it’s both effective and non-discriminatory, and the safe option would be for sites to treat everyone like children until proven otherwise.

0 points

So they’re all going to request, store, and sell even more personally identifiable information.

2 points

No, no, no, it’s super secure you see, they have this in the law too:

Information collected for the purpose of determining a covered user’s age under paragraph (a) of subdivision one of this section shall not be used for any purpose other than age determination and shall be deleted immediately after an attempt to determine a covered user’s age, except where necessary for compliance with any applicable provisions of New York state or federal law or regulation.

And they’ll totally never be hacked.

-2 points

This shit applies directly to lemmy. Y’all seem to be blinded by your hate of TikTok.

0 points

The problem is algorithmically driven content feeds and the lack of transparency around them. These algorithms drive engagement, which prioritizes content that makes people angry, not content that makes people happy. These feeds are full of misinformation, conspiratorial thinking, rage bait, and other negativity, with very little user control to protect yourself, curate the feed, or get neutral access to news and politics.

Lemmy sorts content very simply, based on user upvotes. If you want to know why you’re seeing a post, you can see exactly who upvoted it and which instances that traffic came from. It’s not immune to manipulation, but it can’t be manipulated secretly or in a centralized way.

Yet based on their actions, we already know that Facebook has levers it can pull to directly affect the amount of news people see about a specific topic, let alone the sources of information on that topic. These big social media companies guard the proprietary algorithms that directly determine what news people see on a massive scale. Sure, they claim to be neutral arbiters of content that just give people what they want, but why would anyone believe them?

Lemmy is not the same thing, though it’s not without its own problems.

-3 points

Lemmy has Hot and Top. Both of those count as addictive algorithms.

1 point

Here is a bit of information on how Lemmy’s “Hot” sorting works.

I’m not going to argue about how addictive any specific feed or sorting method is, but this method is content-neutral, does not adjust based on user behavior (beyond which communities you subscribe to), and is completely transparent, as all post interactions are public. With this type of sorting, users can be sure that no content is prioritized over other content (outside of mod actions, which are also public). A straightforward ranking system that isn’t based on user behavior is less addictive and less likely to form echo chambers. It makes it easier to see more diverse content, reduces the spread of misinformation, and is much more difficult to manipulate.
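For reference, Lemmy’s “Hot” rank is a short public formula. Here’s a Python sketch of it; the constants are taken from Lemmy’s open-source ranking code, so double-check against the release you’re running:

```python
import math

def hot_rank(score: int, hours_since_post: float) -> float:
    # The log of the net score dampens runaway popularity, while the age
    # term in the denominator decays a post's rank over time. The same
    # inputs always give the same rank: nothing depends on who is looking.
    return (10000 * math.log10(max(1, 3 + score))) / ((hours_since_post + 2) ** 1.8)

# A fresh post outranks an older post with the same score:
fresh = hot_rank(100, 1.0)
stale = hot_rank(100, 24.0)
```

Because the rank is a pure function of score and age, anyone can recompute any post’s position in the feed, which is exactly the transparency argument above.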

