
boatswain
Sure, but to me that means the latest information is that AI assistants help produce insecure code. If someone wants to perform a study with more recent models to show that’s no longer the case, I’ll revisit my opinion. Until then, I’m assuming that the study holds true. We can’t do security based on “it’s probably fine now.”
I was really excited about this game when I first heard about it, right up until I learned it was Ubisoft. Their involvement makes me pretty dubious, so I’ll wait and see how hard they crank the monetization handle, and what the reviews look like.
I’m still on mp3s. I have gigs of music on my Plex server and just use that. Fuck subscriptions.
There are lawsuits: https://techcrunch.com/2024/09/02/crowdstrike-faces-onslaught-of-legal-action-from-faulty-software-update/
These things will probably take years to play out.
A coworker of mine has worked with CrowdStrike in the past; I haven’t. He said that the releases he was familiar with from them were all staged into groups, and customers were encouraged to test internally before applying them. Not sure if this is a different product or what, but it seems like a big step backwards if what he’s saying is right.
As a cybersecurity guy, my concern comes from things like this study, which said:
Overall, we find that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant.
Making a profit from healthcare and health insurance.
Or even just make private health insurance illegal.