8 points
As we’ve previously explored in depth, SB-1047 asks AI model creators to implement a “kill switch” that can be activated if that model starts introducing “novel threats to public safety and security” …
A model may be only one component of a larger system. There may literally be no way to get unprocessed input through to the model. How can the model creator do anything about that?
2 points
It just says “can be activated,” not “automatically activates.”
Kill switches are overly dramatic silliness. Anything with a power button already has a kill switch. It sounds impressive, but it’s just theatre.
2 points