
antihumanitarian

antihumanitarian@lemmy.world
0 posts • 14 comments

I really don’t blame them; security- and privacy-minded folk are more likely to use niche configs. For Linux support, it feels like companies may be better served by publishing APIs and letting the community handle it. Rclone, for example, implements a bunch of providers, and last I knew it had an unstable Proton Drive backend.


The comments on that article are some of the most vitriolic I’ve ever seen on a technical issue. They go a long way toward proving the maintainer’s point, though.

Some are good for a laugh, though, like the assertions that Rust in the kernel is a Microsoft sabotage op or that LLVM is for grifters and thieves.


FOSS in general needs better means of financial support. While the software is free and libre, developer time is not, and ultimately they gotta eat and pay bills. I hope they get positive results and don’t catch much unnecessary flak.


Given how easy end-to-end encryption is to implement now, it’s a reasonable assumption that anything not E2EE is being data mined. E2EE has extensive security benefits; for example, even if a service’s data is dumped, the encrypted contents are still useless to the attacker. So there has to be a compelling reason not to use it.
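To make the “dumped data is useless” point concrete, here’s a minimal sketch of client-side encryption, assuming the Rust `chacha20poly1305` crate; the message and workflow are made up for illustration. The server only ever stores the nonce and ciphertext, so a database dump reveals nothing without the client-held key.

```rust
// Minimal E2EE-style sketch: encrypt on the client before anything leaves it.
// Assumes the `chacha20poly1305` crate (an AEAD cipher); values are illustrative.
use chacha20poly1305::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    ChaCha20Poly1305,
};

fn main() {
    // The key never leaves the client (in practice it would be derived from a
    // passphrase or kept in a local keyring, not generated fresh each run).
    let key = ChaCha20Poly1305::generate_key(&mut OsRng);
    let cipher = ChaCha20Poly1305::new(&key);

    // Fresh random nonce per message, stored alongside the ciphertext.
    let nonce = ChaCha20Poly1305::generate_nonce(&mut OsRng);

    let plaintext = b"the actual message contents";
    let ciphertext = cipher
        .encrypt(&nonce, plaintext.as_ref())
        .expect("encryption failed");

    // ... upload (nonce, ciphertext) to the server; that's all it ever sees ...

    // Only a client holding the key can recover the plaintext.
    let recovered = cipher
        .decrypt(&nonce, ciphertext.as_ref())
        .expect("decryption failed");
    assert_eq!(recovered.as_slice(), plaintext.as_ref());
}
```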


People haven’t really changed. As always, power corrupts. When the rewards are great enough, it seems people are all too often willing to compromise their integrity.


My first programming experience, an online class, was in a Linux VM. Linux made programming easy and delightful, while Windows always made it a huge pain. As time went on, more of what I did was easier on Linux, and now everything is.


A key detail in the actual memo is that they’re not relying on just an LLM: “Wallach anticipates proposals that include novel combinations of software analysis, such as static and dynamic analysis, and large language models.”

They’re also clearly aware of the scope limitations. They explicitly call out some software, like entire kernels or pointer-arithmetic-heavy code, as being out of scope, and they don’t seem to anticipate 100% automation.

So, with that context, they seem open to any solution to “how can we convert legacy C to Rust?” Obviously LLMs and machine learning are attractive avenues of investigation; current models are demonstrably able to write some valid Rust and transliterate some code. I use them, and they work more often than not for simpler tasks.

TL;DR: they want to accelerate converting C to Rust. LLMs and machine learning are among the techniques they’re investigating as components.
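For a sense of what that kind of transliteration looks like on the simple end, here’s a hypothetical toy example (not from the memo): a small C function of the sort that is in scope, rewritten as the idiomatic, bounds-checked Rust a converter would aim to produce.

```rust
// Hypothetical C-to-Rust transliteration example (illustrative only).
//
// Original C, the kind of simple, non-pointer-arithmetic code in scope:
//
//   int sum_positive(const int *buf, size_t len) {
//       int total = 0;
//       for (size_t i = 0; i < len; i++) {
//           if (buf[i] > 0) total += buf[i];
//       }
//       return total;
//   }
//
// Idiomatic Rust: a slice replaces the pointer + length pair, so bounds and
// lifetimes are checked by the compiler, and overflow handling is explicit.
fn sum_positive(buf: &[i32]) -> i32 {
    buf.iter()
        .filter(|&&x| x > 0)
        .fold(0i32, |acc, &x| acc.saturating_add(x))
}

fn main() {
    let data = [3, -1, 4, -1, 5];
    assert_eq!(sum_positive(&data), 12);
    println!("sum of positives: {}", sum_positive(&data));
}
```

The interesting research problem is doing this at scale while preserving behavior, which is presumably where the static and dynamic analysis pieces come in.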


I have the LTS and zen kernels installed in addition to the default Arch one; that should prevent this, yes?


What do you mean by “this stuff”? Machine learning models have been a fundamental part of spam prevention for years. The concept here just flips that around for use by the individual instead of the platform.
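As a hedged sketch of what “flipping it around” might look like in practice, here’s a tiny bag-of-words Naive Bayes classifier in Rust that an individual could train and run locally on content they choose to feed it. The training strings and the add-one smoothing are purely illustrative; real filters use proper tokenization and far more data.

```rust
use std::collections::HashMap;

/// Toy bag-of-words Naive Bayes spam scorer, run locally by the user.
#[derive(Default)]
struct NaiveBayes {
    spam_counts: HashMap<String, u32>,
    ham_counts: HashMap<String, u32>,
    spam_msgs: u32,
    ham_msgs: u32,
}

impl NaiveBayes {
    fn train(&mut self, text: &str, is_spam: bool) {
        let (counts, msgs) = if is_spam {
            (&mut self.spam_counts, &mut self.spam_msgs)
        } else {
            (&mut self.ham_counts, &mut self.ham_msgs)
        };
        *msgs += 1;
        for word in text.to_lowercase().split_whitespace() {
            *counts.entry(word.to_string()).or_insert(0) += 1;
        }
    }

    /// Log-odds score; higher means "more spam-like".
    fn spam_score(&self, text: &str) -> f64 {
        let spam_total: u32 = self.spam_counts.values().sum();
        let ham_total: u32 = self.ham_counts.values().sum();
        // Prior: how common spam is relative to ham in the training data.
        let mut score = (self.spam_msgs.max(1) as f64 / self.ham_msgs.max(1) as f64).ln();
        for word in text.to_lowercase().split_whitespace() {
            // Add-one smoothing so unseen words don't zero out the estimate.
            let p_spam = (self.spam_counts.get(word).copied().unwrap_or(0) + 1) as f64
                / (spam_total + 1) as f64;
            let p_ham = (self.ham_counts.get(word).copied().unwrap_or(0) + 1) as f64
                / (ham_total + 1) as f64;
            score += (p_spam / p_ham).ln();
        }
        score
    }
}

fn main() {
    let mut model = NaiveBayes::default();
    model.train("limited offer buy now cheap pills", true);
    model.train("click here to claim your free prize", true);
    model.train("lunch tomorrow at noon?", false);
    model.train("here are the meeting notes from today", false);

    let msg = "claim your free pills now";
    // Positive score suggests spam; a real tool would tune a threshold per user.
    println!("score for {msg:?}: {:.2}", model.spam_score(msg));
}
```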


If by “reliably” you mean 99% certainty about one particular review, yeah, I wouldn’t believe it either. A 95% confidence interval for what proportion of a given page’s reviews are bots, now that’s plausible. If a human can tell whether a review was botted, you can certainly train a model to do so as well.
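To make the distinction concrete, here’s a hedged sketch: given a classifier that flags some sample of a page’s reviews, a standard Wilson score interval puts 95% bounds on the underlying proportion of botted reviews, a much easier claim to stand behind than a verdict on any single review. The numbers below are made up for illustration.

```rust
/// 95% Wilson score interval for a binomial proportion.
/// `flagged` = reviews the model called botted, `n` = reviews sampled.
fn wilson_interval_95(flagged: u32, n: u32) -> (f64, f64) {
    let z = 1.96_f64; // two-sided 95% normal quantile
    let n = n as f64;
    let p = flagged as f64 / n;
    let denom = 1.0 + z * z / n;
    let center = (p + z * z / (2.0 * n)) / denom;
    let half = (z / denom) * (p * (1.0 - p) / n + z * z / (4.0 * n * n)).sqrt();
    (center - half, center + half)
}

fn main() {
    // Made-up example: the model flags 130 of 400 sampled reviews as botted.
    let (flagged, n) = (130u32, 400u32);
    let (lo, hi) = wilson_interval_95(flagged, n);
    println!(
        "Point estimate {:.1}%, 95% CI {:.1}% to {:.1}%",
        100.0 * flagged as f64 / n as f64,
        100.0 * lo,
        100.0 * hi
    );
    // The interval is a statement about the page-level proportion,
    // not a 99% verdict on any individual review.
}
```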
