20 points

are there mechanisms known to researchers that Microsoft’s not using that can prevent this type of failure case in an LLM without resorting to whack-a-mole with a regex?

9 points

To be blunt, LLMs are one of the stupider ways to try and use AI. There is incredible potential in many other applications which don’t attempt to interface with something as irrational and unpredictable as people.

20 points

I agree; LLMs and generative AI are indelibly a product of capitalism, and they can’t exist without widespread theft, exploitation of labor, massive concentrations of capital, and a willingness to destroy the environment. they are the stupidest use of technology I’ve ever seen, and after cryptocurrencies the bar for stupid was pretty fucking high. that the products themselves obscure the theft and exploitation that went into training them is a feature for the corporations developing this horseshit, not a bug.

and that’s why it’s notable that the self-described AI researchers behind these garbage products can’t even do basic shit like have the LLM not call a journalist a pedophile without resorting to an absolute hack that won’t scale. there’s no fixing LLMs; systemically, they are what they are. and now this absolute horseshit is a component of what’s unfortunately still the dominant desktop operating system.

9 points

The really fucking dumb part of it, you can believe me or not, is that this appears to all circle back to ancient misunderstandings about the nature of man, and attempts to create automatons which behave like men but are perfectly obedient. There is a subset of the population which tries this exact same bullshit with every new technology we create.

10 points

I’m ngl I think crypto is even stupider. it’s a real competition though

EDIT: idea. a tech bullshit bracket

4 points

indelibly a product of capitalism

They’re being funded by the capitalists that want to replace all those annoying human workers with the cheapest possible alternative.

Of course, the problem is that while an LLM is the cheapest possible option, it's turning out to be the most useless and garbage one too.

(Also, I’m shockingly infuriated that the tech workers who would be among the first replaced are so busy licking boots rather than throwing their shoes into the machinery.)

-7 points

Yeah there’s already a lot of this in play.

You run the same query multiple times through multiple models and do a web search looking for conflicting data.

I’ve had Copilot answer a query, then erase the output and tell me it couldn’t answer it after about 5 seconds.

I’ve also seen responses contradict themselves, with later paragraphs saying there are other points of view.

It would be a simple matter to have it summarize the output it’s about to give you and dump the output if it paints the subject in a negative light.
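For illustration, here’s a toy sketch of what that pipeline might look like. The stand-in “models” and the banned-word list are invented for this example, not any real vendor API:

```python
# Toy sketch only: the "models" and the banned-word list are hypothetical
# stand-ins, not any vendor's actual pipeline or filter.

def cross_check(query, models, banned_words=("pedophile",)):
    """Run one query through several models, then apply the checks
    described above: mutual agreement plus a 'negative light' filter."""
    answers = [model(query) for model in models]
    # Agreement check: every answer must match the first one exactly.
    agree = all(a == answers[0] for a in answers)
    # "Negative light" check: keyword matching, i.e. the same
    # whack-a-mole the top comment asked how to avoid.
    negative = any(w in a.lower() for a in answers for w in banned_words)
    if not agree or negative:
        return None  # erase the output, as described above
    return answers[0]

# Stand-in "models" that happen to agree:
model_a = lambda q: "A well-regarded journalist."
model_b = lambda q: "A well-regarded journalist."
print(cross_check("Who is this person?", [model_a, model_b]))
```

(Note that the keyword list is doing all the real filtering here, which just reintroduces the whack-a-mole problem one layer up.)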

9 points

It would be a simple matter to have it summarize the output it’s about to give you and dump the output if it paints the subject in a negative light.

lol. like that’s a fix

(Hindenburg, Hitler, the Great Depression, Ronald Reagan, Stalin, Modi, Putin, decades of life in North Korea, …)

7 points

Hindenburg, Hitler, the Great Depression, Ronald Reagan, Stalin, Modi, Putin, decades of life in North Korea, …

🎶 we didn’t start the fire 🎶

11 points

It would be a simple matter to have it summarize the output it’s about to give you and dump the output if it paints the subject in a negative light.

“it can’t be that stupid, you must be prompting it wrong”

6 points

Exactly, and all of this is a simple matter of having multiple models trained on different instances of the entire public internet and checking whether their outputs contradict each other or the results of a web search.

I wonder how they prevented search engine results from contradicting data found through web search before LLMs became a thing?

-6 points

They didn’t really have to before LLMs. Search engine results, in their heyday, were backlink-driven. You could absolutely search for disinformation and find it. But if you searched for a credible article on someone, chances are more people would have linked to the good article than to the disinformation. However, conspiracy theories often leaked through into search results, and in that case they just gave you the web pages and you had to decide for yourself.
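A toy illustration of what “backlink-driven” means, with a made-up link graph. (Real engines of that era, e.g. PageRank, weighted links recursively rather than just counting them, but the core idea was “more inbound links ranks higher”.)

```python
# Made-up link graph: page -> pages it links to.
links = {
    "blog1": ["credible-article"],
    "blog2": ["credible-article", "conspiracy-page"],
    "forum-post": ["credible-article", "conspiracy-page"],
    "newsletter": ["credible-article"],
}

def rank_by_backlinks(link_graph):
    # Count inbound links per page, then rank by that count.
    counts = {}
    for targets in link_graph.values():
        for page in targets:
            counts[page] = counts.get(page, 0) + 1
    return sorted(counts.items(), key=lambda kv: -kv[1])

print(rank_by_backlinks(links))
# [('credible-article', 4), ('conspiracy-page', 2)]
```

The disinformation still shows up, just ranked lower — which is the “decide for yourself” part.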

