madsen (madsen@lemmy.world)

Damn. I would probably try a more mainstream distro for Optimus support, like Pop!_OS or Debian/Ubuntu with non-free repos enabled.
I remember Bumblebee was a thing back in 2013, but it seems that it hasn’t been updated since then: https://www.bumblebee-project.org/

---

Enterprise licensing for self-hosted setups is priced per 64 GB chunk of RAM in your cluster, e.g. if you run Elastic on two machines with 32 GB of RAM each, you pay for one unit. It sounds like there may have been some poor communication going on, because they definitely don’t base the pricing for self-hosted setups on the number of employees or anything like that.
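For what it’s worth, the arithmetic is just total cluster RAM divided into 64 GB units, rounded up. A quick sketch (the 64 GB unit size is Elastic’s; the cluster sizes are made up, and I’m assuming the rounding happens across the whole cluster rather than per node):

# Sketch of the licensing arithmetic; assumes RAM is summed across
# the cluster and rounded up to whole 64 GB units.
import math

def licensed_units(node_ram_gb):
    return math.ceil(sum(node_ram_gb) / 64)

print(licensed_units([32, 32]))      # two 32 GB nodes -> 1 unit
print(licensed_units([64, 64, 32]))  # 160 GB total    -> 3 units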

They’re also not super uptight about you going over the licensing limit for a while. We’ve been running a couple of licenses short since we scaled our cluster up a while back. Our account manager knows and doesn’t care.

---

so it’s probably just some points assigned for the answers and maybe some simple arithmetic.

Why yes, that’s all that machine learning is, a bunch of statistics :)

I know, but that’s not what I meant. I mean literally something as simple and mundane as assigning points per answer and evaluating the final score:

// Pseudo code
risk = 0
if (Q1 == true) {
    risk += 20
}
if (Q2 == true) {
    risk += 10
}
// etc...
// Maybe throw in a bit of conditional logic:
if (Q28 == true) {
    if (Q22 == true and Q23 == true) {
        risk *= 1.5
    } else {
        risk += 10
    }
}

// And finally, evaluate the risk:
if (risk < 10) {
    return "negligible"
} else if (risk >= 10 and risk < 40) {
    return "low risk"
}
// etc... You get the picture.

And yes, I know I can just write if (Q1) {, but I wanted to make it a bit more accessible for non-programmers.

The article gives absolutely no reason for us to assume it’s anything more than that, and I apparently missed the part of the article that mentioned the system has been in use since 2007. We had machine learning back then too, but judging from the project description here: https://eucpn.org/sites/default/files/document/files/Buena practica VIOGEN_0.pdf it seems they analyzed a bunch of cases (2,159) and came up with the 35 questions and a scoring system not unlike what I just described above.

Edit: I managed to find this, which has apparently been taken down since (but thanks to archive.org it’s still available): https://web.archive.org/web/20240227072357/https://eticasfoundation.org/gender/the-external-audit-of-the-viogen-system/

VioGén’s algorithm uses classical statistical models to perform a risk evaluation based on the weighted sum of all the responses according to pre-set weights for each variable. It is designed as a recommendation system but, even though the police officers are able to increase the automatically assigned risk score, they maintain it in 95% of the cases.

… which incidentally matches what the article says (that police maintain the VioGén risk score in 95% of the cases).
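In code, the system the audit describes boils down to something like this (the weights and cut-offs here are invented; the real ones haven’t been published):

# Weighted sum of yes/no answers with pre-set weights, as the audit
# describes. Weights and thresholds are made up for illustration.
WEIGHTS = {"Q1": 20, "Q2": 10, "Q3": 5}  # ... one weight per question

def risk_level(answers):  # answers: {"Q1": True, "Q2": False, ...}
    score = sum(WEIGHTS[q] for q, yes in answers.items() if yes)
    if score < 10:
        return "negligible"
    elif score < 40:
        return "low"
    return "medium or higher"

print(risk_level({"Q1": False, "Q2": True, "Q3": True}))  # -> "low"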

---

but chose bash because it made the most sense, that bash is shipped with most linux distros out of the box and one does not have to install another interpreter/compiler for another language.

Last time I checked (because I was writing Bash scripts based on the same assumption), Python was actually present on more Linux systems out of the box than Bash.
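A quick way to check what a given system actually ships (assuming Python is there to run the check, of course):

# List which interpreters are on PATH.
import shutil

for interp in ("sh", "bash", "dash", "python3"):
    print(f"{interp}: {shutil.which(interp) or 'not found'}")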

---

Your point is valid regardless, but the article mentions nothing about AI. (“Algorithm” doesn’t mean “AI”.)

---

Yup. I remember it from when Atlanta hosted the Olympic Games sometime in the ’90s. Despicable.

---

I don’t think there’s any AI involved. The article mentions nothing of the sort, the system is at least 17 years old (according to the article), and the input is 35 yes/no questions, so it’s probably just some points assigned for the answers and maybe some simple arithmetic.

Edit: Upon a closer read I discovered the algorithm was much older than I first thought.

---

Didn’t something similar happen in Turkey with Erdogan a few years back? Pretty sure he was accused of being behind it himself too; don’t know what the final verdict was though.

I think it’s a pretty common accusation; whenever a politician is attacked, someone will invariably suggest that they staged it to gain support.

---

I think they vastly underestimate how many things Meta tracks besides ads. They’re likely tracking how long you look at a given post in your feed and using that to rank similar posts higher. They know your location and what Wi-Fi network you’re on, and they can use that to make assumptions based on others on the same network and/or in the same location. They know what times you browse at and can correlate that with what’s trending in the area at those times, etc.
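Just to illustrate the kind of signal I mean, ranking on dwell time alone is trivial (pure speculation on my part, obviously, not Meta’s actual code):

# Speculative sketch: boost topics the user has lingered on.
from collections import defaultdict

dwell_log = [("cars", 1.5), ("politics", 12.0), ("politics", 9.3)]  # (topic, seconds viewed)

topic_score = defaultdict(float)
for topic, seconds in dwell_log:
    topic_score[topic] += seconds

def rank_feed(posts):  # posts: list of (post_id, topic)
    return sorted(posts, key=lambda post: topic_score[post[1]], reverse=True)

print(rank_feed([(1, "cars"), (2, "politics"), (3, "sports")]))
# -> politics first, sports (never viewed) last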

I have no doubt that their algorithm is biased towards all that crap, but these kinds of investigations need to be more informed in order for them to be useful.

---

The article mentions that one woman (Stefany González Escarraman) went for a restraining order the day after the system deemed her at “negligible” risk, and the judge denied it, citing the VioGén score.

One was Stefany González Escarraman, a 26-year-old living near Seville. In 2016, she went to the police after her husband punched her in the face and choked her. He threw objects at her, including a kitchen ladle that hit their 3-year-old child. After police interviewed Ms. Escarraman for about five hours, VioGén determined she had a negligible risk of being abused again.

The next day, Ms. Escarraman, who had a swollen black eye, went to court for a restraining order against her husband. Judges can serve as a check on the VioGén system, with the ability to intervene in cases and provide protective measures. In Ms. Escarraman’s case, the judge denied a restraining order, citing VioGén’s risk score and her husband’s lack of criminal history.

About a month later, Ms. Escarraman was stabbed by her husband multiple times in the heart in front of their children.

It also says:

Spanish police are trained to overrule VioGén’s recommendations depending on the evidence, but accept the risk scores about 95 percent of the time, officials said. Judges can also use the results when considering requests for restraining orders and other protective measures.

You could argue that the problem isn’t so much the algorithm itself as the level of reliance upon it. The algorithm isn’t unproblematic, though. The fact that it just spits out a simple score (“negligible”, “low”, “medium”, “high”, “extreme”) is, IMO, an indicator that someone’s trying to conflate far too many factors into a single dimension. I have a really hard time believing that anyone knowledgeable in criminal psychology and/or domestic abuse would agree that 35 yes/no questions are anywhere near sufficient to evaluate the risk of repeated abuse. (I know nothing about domestic abuse or criminal psychology, so I could be completely wrong.)

Apart from that, I also find this highly problematic:

[The] victims interviewed by The Times rarely knew about the role the algorithm played in their cases. The government also has not released comprehensive data about the system’s effectiveness and has refused to make the algorithm available for outside audit.
