Mmmhmmmmm
The safeguards, for anyone like me who didn’t know about them until now.
Basically, the guidelines include:
- Ensuring AI systems are safe before public release.
- Building AI systems to address issues like bias and discrimination.
- Using AI to enhance security and protect privacy.
- Sharing best practices across the industry.
- Increasing transparency and providing clarity about AI’s capabilities and limitations.
- Reporting on the risks and impacts of AI.
I think they’re making a joke about how AI-generated code is ridiculously insecure and shouldn’t be used by anyone.
That said, AIs with the ability to pen test will be a hell of a lot better at finding obscure exploits than any human, so the joke is kind of damaging.
I mean it holds a kernel of truth, but only in one specific use case.
And I can tell you from personal experience that if enough people bandwagon the joke, it will kill any interest in developing actually useful AI penetration testing products.
Just like how you chucklefucks broke NFTs.
They could have been THE SOLUTION to protect content creators from platform abuse. But because everyone focused on ONE use case (links to pictures) and joked about it, all the actually useful NFT development to secure creators’ rights and force cross-platform compatibility has been completely abandoned. And a shitton of you will downvote me for even mentioning it.
Well, if they agree to do this vague stuff without any possible enforcement, I’m sure it’ll work out for the best. /s
Right, because they’ve done so well at keeping their word on everything else.
“We pinky promise we don’t need to be regulated.”