Also, this is just an impromptu addendum to my extended ramble on the AI bubble crippling tech’s image, but I can easily see military involvement in AI further building public resentment/stigma against the industry.
Any military use of AI is already gonna be seen in a warcrimey light thanks to Israel using it in their Gaza Geneva Checklist Speedrun - add in the public being fully aware of your average LLM’s, shall we say, tenuous connection to reality, and you have a recipe for people immediately assuming the worst.
That was the current example we were thinking of, though we did look into the war-crimes-law thinking on the subject. tl;dr: you risk war crimes if there isn’t a human in the loop. E.g., think of a minefield as the simplest possible stationary autonomous weapon system; the rest is that with computers.
As a personal sidenote, part of me says the “Self-Aware AI Doomsday” criti-hype might end up coming back to bite OpenAI in the arse if/when one of those DoD tests goes sideways.
Plenty of time and money’s been spent building up this idea of spicy autocomplete suddenly turning on humanity and trying to kill us all. If and when one of those spectacular disasters you and Amy predicted does happen, I can easily see it leading to wild stories of ChatGPT going full Terminator or some shit like that.
let’s delve for five minutes into that fantasyland where the military considers that current genai has some military application. ai-powered murderbots are still not gonna happen, and something closer to whatever surveilling hellscape peter thiel is cooking seems more likely (which comes with the same set of problems as current israeli ai-“aided” targeting, and is imo a fig leaf for diffusing responsibility. but anyway)
DoD is no stranger to overpriced useless boondoggles, so they should have the sense to throw it out, but if, and that’s a massive, load-bearing if, the military adopts some variant of commandergpt, then it’s gonna happen iff it provides some new capability that wasn’t there before, or the improvement is so vast that it’s worth adopting over whatever side effects, costs of tech, training, etc. if it provides some new critical capability, then it will be tightly classified, because under no circumstances can it find its way to the most probable adversary. a bunch of software sitting in some server farm in Utah is mostly safe there, unless mormon separatism becomes a thing overnight. putting it in a drone or missile, well, it’s not impossible but it’s much harder. but it has been done:
one way it was solved can be seen in this FGM-148 Javelin ATGM guidance module teardown. don’t ask me why it’s on youtube or how many watchlists this will get you. you’ll notice that there’s no permanent storage there, and lots of the actual processing happens in a bunch of general-purpose FPGAs. the way it maybe perhaps works is that during cooldown of the IR sensor, the actual software and configuration of these FPGAs is uploaded from the CLU (command launch unit) (which is a classified item) to the missile (which is not), and even if it’s a dud, the enemy can’t find out how the missile works because power is lost in seconds, RAM is wiped, and the missile reverts to a bricked state. this is to avoid what happened to the AIM-9B that got cloned as the K-13/AA-2 Atoll.
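to make the supposed upload-at-launch scheme concrete, here’s a toy sketch in python: configuration lives only in RAM, gets loaded from the launch unit during sensor cooldown, and a power loss leaves nothing for an adversary to dump. all class and method names here are hypothetical; this is just the speculated-about architecture as pseudocode, not anything from an actual teardown.

```python
class SeekerModule:
    """Toy model of a guidance module with no non-volatile storage:
    its FPGA bitstream exists only in RAM, so losing power bricks it."""

    def __init__(self) -> None:
        # Fresh off the production line: no configuration on board.
        self.ram_bitstream: bytes | None = None

    def upload_from_clu(self, bitstream: bytes) -> None:
        """The (classified) launch unit loads the software into RAM
        during IR-sensor cooldown, just before launch."""
        self.ram_bitstream = bitstream

    def power_loss(self) -> None:
        """Capacitors drain in seconds; RAM contents are gone."""
        self.ram_bitstream = None

    @property
    def operational(self) -> bool:
        return self.ram_bitstream is not None


seeker = SeekerModule()
assert not seeker.operational            # the bare missile is a brick
seeker.upload_from_clu(b"example-bitstream")
assert seeker.operational                # armed only after CLU upload
seeker.power_loss()                      # a dud loses power on impact
assert not seeker.operational            # nothing left to reverse-engineer
```

the design point being: the classified part stays in the launcher, and anything that might land in enemy hands only ever holds the secret sauce in volatile memory.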
power, space, and weight on a missile are limited. in order to make it work, that software has to be small, elegant, power-efficient, fast, reliable, redundant, hardened to conditions possible and impossible, sanely-behaving, and comprehensible to the maintainer. whatever openai has is anything but
What really struck me was how Microsoft’s big pitch for defense applications of LLMs was … corporate slop. Just the same generic shit.
The US military has many corporate characteristics, and I’m quite sure the military has even more use cases for text that nobody wanted to write and nobody wanted to read than your average corporation. But I’d also have thought that a lying bullshit machine was an obvious bad fit for when the details matter because the enemy is trying to fucking kill you. Evidently I’m not quite up on modern military thought.
they have to, otherwise they risk interfering with something real that has real-life consequences, starting with things like not being within specification and getting reports that this shit doesn’t work, breaks something mission-critical, or, worse yet, contributed to a fatal incident
perun’s take. some of these things are already out there if you want to stretch “lethal autonomous weapons system”, such as anti-radiation missiles or a SAM in self-defense mode (few civilians emit the signature of an X-band radar or go at mach 3 on a ballistic trajectory)
one use of autonomy in drones, as one option out of many, was supposed to be avoiding jamming. for now (since tuesday in Kursk oblast) Ukrainians, for example, make good use of frequencies that for some reason aren’t jammed, along with some experimental deployment of drones with a spool of optic fibre; no drones with even limited autonomy have been fielded
Saw the Boston Dynamics robot dogs with a gun, and I don’t think we have to worry; due to it using a regular magazine and not having opposable thumbs, it has a preset kill maximum of 30 people.
Granted, whoever tries to put these into production is probably gonna give it a belt-fed or some shit like that. A gunbot isn’t much of a gunbot unless you’ve got at least a couple hundred rounds ready to go.
sweet jesus in apricots that wired article is definitely something
i take it that HEO thing was the brainchild of the last true believer in the metaverse who still wears google glass every day
yeah let’s drop SOF that’s supposed to blend in and ??do something??, but they apparently don’t know the language, and the entire intelligence operation missed shit that would be obvious after picking up any local newspaper. yeah let’s replace actual training and a 101 on the place you’re getting dropped into with a lying box that weighs 20kg, needs 10kg of batteries, and boils your back in +40C weather, and whose interface (and i remind you, there’s a need to remain inconspicuous) is an unusual-looking piece of glass highlighted in complex patterns hanging right in front of your face
they put in a subtitle like More Brains, Less Brawn, and then introduce a device that’s supposed to turn every crayon eater into a high speed low drag operator
i write off anyone who feels a need to prop up their thonking with john boyd as a fraud who has nothing of substance to say. a certain gay pig’s take on that man (tldw, core sneer: “in short, most of what Boyd is known for is a repackage of other people’s work presented in a simplified way that appeals to people with limited to no understanding of the subject”) (there’s sponsored content before that mark, you’re not missing anything) covers it; he doesn’t deserve to be taken any more seriously