10 points

the computational cost of operating over a matrix is always going to be convex relative to its size

This makes no sense - “convex” doesn’t mean fast-growing. For instance a constant function is convex.

10 points

you will be pleased to know that the original text said “superlinear”; i just couldn’t remember if the lower bound of multiplying a sufficiently sparse matrix was actually lower than O(n²) (because you could conceivably skip over big chunks of it) and didn’t feel like going and digging that fact out. i briefly felt “superlinear” was too clunky though and switched it to “convex” and that is when you saw it.
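
(for the curious, a rough sketch of the "skip over big chunks" idea using scipy.sparse; the sizes are made up and it says nothing about the true lower bound, only that the work tracks the nonzeros rather than n²:)

```python
import numpy as np
from scipy.sparse import random as sparse_random

n = 10_000
A = sparse_random(n, n, density=0.001, format="csr", random_state=0)  # ~0.1% nonzero
x = np.ones(n)

# a CSR mat-vec only walks the stored nonzeros, so this does roughly A.nnz
# multiply-adds instead of n*n of them
y = A @ x
print(f"{A.nnz:,} nonzeros touched instead of {n * n:,} entries")
```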

4 points

Hell, so is 1/x for positive values of x. Or any linear function, including those with negative slope.
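
For reference, all convexity asks is

$$ f(\lambda x + (1-\lambda)y) \le \lambda f(x) + (1-\lambda) f(y) \quad \text{for all } x, y \text{ and } \lambda \in [0,1], $$

which a constant f(x) = c or an affine f(x) = ax + b satisfies with equality, so convexity puts no floor at all on how fast a function grows.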

13 points

Cosigning the author, I'll also add my two cents expounding on the cheque-checker ML.

The most consequential failure mode — that both the text (…) and the numeric (…) converge on the same value that happens to be wrong (…) — is vanishingly unlikely. Even if that does happen, it’s still not the end of the world.

I think what's extremely important is that this is a kind of error that even a human operator could conceivably make. It's not some unexplainable machine error; most likely the scribbles were just exceedingly illegible on that one cheque. We're not introducing a completely new, dangerous failure mode.

Compare that to, for example, using an LLM in lieu of a person in customer service. The failure mode there is that the system can manufacture things whole cloth and tell you to do a stupid and/or dangerous thing. Like tell you to put glue on pizza. No human operator would ever do that, and even if one did, that's straight-up a prosecutable crime with a clear person responsible. Per the previous analogy, it'd be a human operator knowingly inputting fraudulent information from a cheque. But then again, there would be a human signature on the transaction and a person responsible.

So not only is a gigantic LLM matrix a terrible heuristic for most tasks - e.g. "how to solve my customer's problem" - it introduces failure modes that are outlandish, essentially impossible with a human (or a specialised ML system), and that leave no chain of responsibility. It's a real stinky ball of bull.

5 points

indeed. the recent air canada matter underscores this.

10 points

This is like asking what your probability is of being run over by a car while sitting in your living room in your high-rise apartment…

I actually remember a 2015 study from Toretto et al. showing that this is really more plausible than you might think. Other than that, this is a great piece. I particularly appreciated that it's one of the better breakdowns of what people mean by "ChatGPT is just a giant table of numbers" for someone who doesn't have a technical background in the area.

2 points

lol

10 points

A computer can never be held accountable

Therefore a computer must never make a management decision

7 points

There’s one thing these models can never understand: Family

6 points

Some nitpicks, some of which are serious and some of which are sneers…

consternating about the policy implications of Sam Altman’s speculative fan fiction

Hey, the fanfiction is actually Eliezer’s (who in turn copied it from older scifi); Sam Altman just popularized it as a way of milking the doom for hype!

So, for starters, in order to fit something as powerful as ChatGPT onto ordinary hardware you could buy in a store, you would need to see at least three more orders of magnitude in the density of RAM chips—​leaving completely aside for now the necessary vector compute.

Well actually, you can get something close to as powerful on a personal computer… because the massive size of ChatGPT and the like doesn’t actually improve their performance that much (the most useful thing, I think, is the longer context window?).
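
(Rough weights-only memory math, with hypothetical round parameter counts rather than published figures, just to show the gap the article gestures at versus what fits locally:)

```python
# hypothetical round numbers, not published figures
frontier_params, frontier_bytes_per_param = 1e12, 2.0  # ~1T params at fp16
local_params, local_bytes_per_param = 7e9, 0.5         # ~7B params quantized to 4-bit

frontier_gb = frontier_params * frontier_bytes_per_param / 1e9  # ~2,000 GB of weights
local_gb = local_params * local_bytes_per_param / 1e9           # ~3.5 GB of weights

print(f"~{frontier_gb:,.0f} GB vs ~{local_gb:.1f} GB")
```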

I actually liked one of the Lawfare AI articles recently (even though it did lean into a light fantasy scenario)… https://www.lawfaremedia.org/article/tort-law-should-be-the-centerpiece-of-ai-governance . Their main idea is that corporations should be liable for near-misses: if it can be shown that the corporation nearly caused a much bigger disaster, they get fined in accordance with the bigger disaster. Of course, US courts routinely fail to properly penalize (either in terms of incentives or in terms of compensation) corporations for harms they actually cause, so this seems like a distant fantasy to me.

AI has no initiative. It doesn’t want anything

That’s next on the roadmap though, right? AI agents?

Well… if the way corporations have tried to use ChatGPT has taught me anything, it’s that they’ll misapply AI in any and every way that looks like it might save or make a buck. So they’ll slap an API on an AI and wire it into a script to turn it into an “agent”, despite that being entirely outside the use case of spewing words. It won’t actually be agentic, but I bet it could cause a disaster all the same!
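
(To make the “slap an API on it” pattern concrete, a deliberately naive sketch; call_llm is a hypothetical stand-in for whatever hosted completion endpoint is being resold, and the point is precisely that nothing checks the output before acting on it:)

```python
import subprocess

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a hosted text-completion API call."""
    raise NotImplementedError("wire up a vendor here")

def naive_agent(task: str, max_steps: int = 5) -> None:
    transcript = task
    for _ in range(max_steps):
        command = call_llm(f"Reply with the next shell command for:\n{transcript}")
        # no validation, no human sign-off: the model's text becomes an action
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        transcript += f"\n$ {command}\n{result.stdout}{result.stderr}"
```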

4 points

@scruiser @dgerard wait which fanfic are we talking about here? Roko?

3 points

Short fiction about AGI takeover is a LessWrong tradition! And some longer fics too! Are you actually looking for specific examples and/or links? Lots of them are fun, in a sci-fi short-form kind of way. The goofier and cringier ones are definitely sneerable.

3 points

@scruiser I assumed Dorian and you were referring to a single piece and was curious.

10 points

So if it turns out, as people like Penrose assert, that the brain has a certain quantum je-ne-sais-quoi, then all bets for representing the totality of even the simplest neural state with conventional computing hardware are off.

No, that’s not what Penrose asserts. His whole thing has been to say that quantum mechanics needs to be changed, that quantum mechanics is wrong in a way that matters for understanding brains.

