59 points

But in a separate Fortune editorial from earlier this month, Stanford computer science professor and AI expert Fei-Fei Li argued that the “well-meaning” legislation will “have significant unintended consequences, not just for California but for the entire country.”

The bill’s imposition of liability for the original developer of any modified model will “force developers to pull back and act defensively,” Li argued. This will limit the open-source sharing of AI weights and models, which will have a significant impact on academic research, she wrote.

Holy shit this is a fucking terrible idea.

permalink
report
reply
11 points

I read that as “incentivizing keeping AI in labs and out of the hands of people who shouldn’t be using it”.

That said, you’d think they would learn by now from Piracy: once it’s out there, it’s out there. Can’t put it back in the jar.

permalink
report
parent
reply
30 points

They should be doing the exact opposite and making it incredibly difficult not to open source it. Major platforms open sourcing much of their systems is basically the only good part of the AI space.

permalink
report
parent
reply
10 points

Also, they used our general knowledge and culture to train the damn things. They should be open sourced for that reason alone. LLMs should be seen and treated like libraries, as collections of our common intellect, accessible by everyone.

permalink
report
parent
reply
14 points
*
Removed by mod
permalink
report
parent
reply
-4 points

Yeah what do I care if Jimmy down the street enjoys using his Ollama chatbot? I’m too busy worrying about Terminator panning out

permalink
report
parent
reply
2 points

I haven’t yet read Li’s editorial, but I’m generally more inclined to trust her take on these issues than Hinton and Bengio’s.

permalink
report
parent
reply
2 points

Same energy as PirateSoftware’s “If AAA companies can’t kill games due to always online DRM then small indie devs have to support their games forever, thus bankrupting them” argument.

permalink
report
parent
reply
32 points

Wtf does a kill switch even mean? PCs have kill switches on them already, in the form of a power switch.

permalink
report
reply
41 points

I’m afraid the AI has become self-aware and put a piece of tape over the power switch; it is now unstoppable.

permalink
report
parent
reply
15 points

The legislator tried pressing the button on the monitor but the computer kept whirring!!! It’s alive and has a mind of its own!!!

permalink
report
parent
reply
-9 points
*
Deleted by creator
permalink
report
parent
reply
2 points

I feel like the actual problem is all the cancer not the warnings.

permalink
report
parent
reply
19 points

IRL, arms manufacturers claim they’re not culpable when their products are used to blow up civilians. They point at the people making decisions to drop the bombs as the ones responsible, not them.

This legislation tries to get ahead of that argument by putting responsibility for downstream harm on the manufacturers instead of their corporate or government customers. Even if the manufacturer moves their munitions plants elsewhere, they’re still responsible for the impact if it harms California residents. So the alternative isn’t to move your company out of state. It’s to stop offering your products in one of the largest economies in the world.

The intent is to make manufacturers stop and put more guardrails in place instead of blasting ahead, damn the consequences, then going, oops 🤷🏻‍♂️

There will be intense lobbying with the Governor to get him to veto it. If it does get signed, it’ll be interesting to see if it has the intended effect.

permalink
report
reply
4 points

This is interesting analysis that I hadn’t considered. Thanks for clarifying.

permalink
report
parent
reply
3 points

Have you ever seen Terminator?

permalink
report
parent
reply
13 points

You would have assumed that legislators in California of all places would have access to experts that could explain to them why this won’t work.

permalink
report
reply
8 points

As we’ve previously explored in depth, SB-1047 asks AI model creators to implement a “kill switch” that can be activated if that model starts introducing “novel threats to public safety and security,”

A model may only be one component of a larger system. Like, there may literally be no way to get unprocessed input through to the model. How can the model creator even do anything about that?

permalink
report
reply
2 points

It just says “can be activated,” not “automatically activates.”

Kill switches are overly dramatic silliness. Anything with a power button has a kill switch. It sounds impressive but it’s just theatre.

permalink
report
parent
reply
2 points

They’re safety washing. If AI has this much potential to be that dangerous, it never ever should have been released. There’s so much in-industry arguing, it’s concerning.

permalink
report
parent
reply