80 points

LLMs should die in a fire.

44 points

I’d be on board if it were actually useful and accurate, but it has proven time and time again to be hot garbage 99% of the time, even as they shove it down everyone’s throat. They keep talking about it being a new age of AI and how it’s going to change the world, but it’s only made the internet a worse place and either changed nothing or made things worse.

16 points

They keep talking about it being a new age of AI and how it’s going to change the world, but it’s only made the internet a worse place and either changed nothing or made things worse.

Just like with crypto and NFTs.

4 points

TBH, an open-source distributed-ledger system for financial transactions was an amazing idea, and it has helped stabilize a lot of economies around the world whose currencies were devalued to extremes.

NFTs were fucking dumb tho, lol: basically a way to print a receipt for traded goods without any legal enforcement tying the property to the receipt.

8 points

For me it’s useful; the 99% garbage is hype and misuse. I’d like the exploitative nature of LLMs to die, rather than the technology itself.

15 points

Most people have no issue with what we were calling AI before the LLM fad hellscape we’re currently in.

No one sane is going to object to using machine learning to optimize the performance of an antenna, or the crash safety of a car frame. People aren’t against the existence of AI opponents in video games. No one was ranting about fuzzy search algorithms, or neural nets on their own. Beyond that, data science has been a thing for ages with no controversy.

The issue is generative AI and how it is being used. The best-case scenarios merely supplant tech that already exists, at higher cost and with worse results. The worst-case scenarios attempt to cannibalize multiple creative pursuits to remove the need for humans and maximize profits.

5 points

Honestly the problem is that it simultaneously works too well and not well enough.

The truth is, it’s proven time and time again to be hot garbage about 85% of the time. But that 15% of the time when it works great is why it’s being shoved down our throats. That’s what’s ruining this for everyone: the fact that, on rare occasions, it does actually work…

4 points

Yeah, I agree with that. And their solution, instead of actually fixing the problem, is throwing money and computing power at it in the hope that brute force will make it “better,” when in reality it hasn’t even changed that much in the past year besides more eloquently saying complete bullshit. Call it a conspiracy, but I think, with nobody ever telling the truth on the internet, LLMs have only taught themselves to bullshit everyone into believing them.

4 points

Intermittent reinforcement schedules. Destined to cause addiction. Like Vegas casinos. Or drugs. Or the economy.

2 points

It’s changing the world, but not in the way we want it to.

17 points

From the project page:

The purpose of this project is not to restrict or ban the use of AI in articles, but to verify that its output is acceptable and constructive, and to fix or remove it otherwise.

There’s nothing fundamentally wrong with LLMs. Users just need to know their capabilities and limitations and use them correctly. Just like any other tool.

6 points

There’s nothing fundamentally wrong with LLMs.

Disagree.

24 points

Compelling argument.

7 points

As far as Wikipedia is concerned, there is pretty much no way to use LLMs correctly, because probably every major model includes Wikipedia in its training dataset, and using WP to improve WP is… not a good idea. It probably doesn’t require an essay to explain why it’s bad to create and mechanise a loop of bias in an encyclopedia.

1 point

You’re probably assuming that someone would just go to an LLM and say “write a Wikipedia article about subject X”? That wouldn’t work well, but that’s very far from the only way to use LLMs for Wikipedia work.

For starters, it doesn’t have to actually write content at all. You could paste an existing article into an LLM and ask it, “What facts in this article lack references to back them up? Are there any weasel-worded statements, or statements that don’t appear to follow a neutral point of view?” and get a list of things that require attention.
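A review pass like that is easy to script. Here is a minimal sketch; the prompt wording is illustrative, and `ask_llm` is a hypothetical caller-supplied callable wrapping whatever model API you use (it is not from any real tool):

```python
REVIEW_PROMPT = (
    "You are reviewing a Wikipedia article. List, as bullet points:\n"
    "1. Factual claims that lack an inline citation.\n"
    "2. Weasel-worded statements (e.g. 'some say', 'it is believed').\n"
    "3. Statements that do not follow a neutral point of view.\n"
    "Quote each problematic sentence verbatim.\n\n"
    "Article text:\n{article}"
)

def build_review_prompt(article_text: str) -> str:
    """Fill the review template with the article's plain text or wikitext."""
    return REVIEW_PROMPT.format(article=article_text)

def review_article(article_text: str, ask_llm) -> str:
    """Run the review prompt through a caller-supplied LLM client.

    `ask_llm` is any callable that takes a prompt string and returns the
    model's text response. The model only reports problems; it writes
    nothing back to the article.
    """
    return ask_llm(build_review_prompt(article_text))
```

The key design point is that the model is read-only here: its output is a list of flags for a human editor, so a hallucinated flag costs a moment of review rather than corrupting the article.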

Or you could paste a poorly-worded article in and tell it to rewrite it with all the same information but better phrasing or structure. You could put a bunch of research materials you’ve gathered into the LLM’s context and tell it to write a summary in the style of a Wikipedia article, with references to the sources for each fact mentioned. Obviously you’d check the LLM’s work afterward and probably do some manual editing, but this would be a great time and effort saver to get a first draft written. You could take an existing article and tell the LLM that some particular fact had changed or been discovered to be incorrect and ask it to rewrite the relevant parts to account for that.

Wikipedia is in many, many languages. You could have a multilingual LLM automatically compare the contents of different language versions of a Wikipedia article and ask it to spot differences in content or tone. You could have an LLM translate an article from one language to another as a starting point for creating an article in that new language.

You could have the LLM check the references of an existing article: look up each referenced work on the web and see whether it genuinely says what the citing article claims it says. It could flag all manner of subtle problems that way. Perhaps the reference sounds biased, or whoever cited it misinterpreted it, or the link was simply incorrect and points to unrelated material. Being able to have an AI do a first-pass check of all that in a completely automated way would save huge amounts of time.
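That first-pass reference check could be wired up in the same spirit. A sketch, assuming wikitext-style `<ref>` tags; the `fetch` and `supports` helpers are hypothetical stand-ins for a web fetch and an LLM yes/no judgment:

```python
import re

# Matches inline <ref>...</ref> pairs in wikitext, capturing the citation
# body. (Simplified: real wikitext also has named and self-closing refs.)
REF_PATTERN = re.compile(r"<ref[^>/]*>(.*?)</ref>", re.DOTALL)

def extract_refs(wikitext: str) -> list[str]:
    """Return the body of each inline <ref> tag, stripped of whitespace."""
    return [m.strip() for m in REF_PATTERN.findall(wikitext)]

def flag_unsupported(wikitext: str, fetch, supports) -> list[str]:
    """Flag citations whose source doesn't appear to back them up.

    `fetch(citation)` returns the referenced source's text (e.g. via HTTP);
    `supports(source_text, citation)` is an LLM yes/no judgment. Both are
    caller-supplied, and a human still reviews every flag.
    """
    return [ref for ref in extract_refs(wikitext)
            if not supports(fetch(ref), ref)]
```

Again the automation only narrows the haystack: the output is a shortlist of suspect citations for a human to verify, not an automatic edit.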

This is all just brainstorming off the top of my head, so I’m sure there’s plenty of other good uses that aren’t coming to mind.

3 points

and using WP to improve WP is… not a good idea.

That is not inherently true. For example, there was an instance when I read a Wikipedia article and a chart was simply incomplete: entries in the chart were left blank when I knew that data existed. All I had to do was look up those exact items elsewhere on Wikipedia, and the correct numbers were there, readily available.

I think that was when I first created a Wikipedia account for editing. The article was clearly missing information, and I knew it would be both non-controversial and quite easy to fill in.

My point is, that first article could definitely be meaningfully improved, using only information already available on Wikipedia.

2 points

Just my daily dose of LLMentalist.

9 points

Google, Microsoft, OpenAI, Anthropic and co. should help fight this fight; their tools are the problem here.

6 points

No thank you. I would prefer if they had as little influence over anything in the world as possible.

1 point

OK, but we are only here because of their stuff; it’s only fair that they pick up the broom and help clean.

1 point

They don’t have brooms, they only have bulldozers. I would prefer my living room un-bulldozed.

7 points

404 does not miss

44 points

No matter how you look at it, Wikipedia is one of the modern wonders of the world; those who maintain and defend it are doing holy work. The availability of free, high-quality, publicly indexed and equitably accessible information about our modern world is such an under-appreciated gift.

Education is a powerful tool, but when most people hear “knowledge is power” they think of personal success or political might. Its true power operates on an evolutionary scale.

No other species in the history of our (known) universe has the capability to study the world and then pass those conclusions on to the next generation with high precision, like we do. It’s absolutely fascinating. It’s what sets us apart from the rest. It defines the human experience.

The reality is that the integrity of this mechanism (or rather, the democratization of said mechanism) is under threat. It always has been, but the nature of the threat has changed, and it’s scary. I’m glad it is being protected, at least for now.

13 points

I swear there is rarely ever a time when Wikipedia isn’t the best source to at least start looking into something. Definitely a modern wonder, assuming what you’re looking for is on there.

