140 points

the hallucinations will continue until morale improves

54 points

All year long, from January to Decuary.

24 points

The full ten months of the year?

4 points

lmao


Thanks, magical mushroom man

81 points

In addition to pushing something before it’s ready and where it’s not welcome, Apple’s own stinginess completely screwed them over.

What do LLMs need to be smart? RAM, both for their weights and for holding real data to reference. What has Apple relentlessly price gouged and skimped on for years? Yeah, I’ll give you one guess…

54 points

For 15 years, Apple has lagged behind Android on implementing new features, preferring to wait until they felt their implementations were ready for mainstream consumption, and it’s always worked out for them. They should have stuck to that instead of jumping on the AI bandwagon with a half-baked technology that most people don’t want or need.

24 points

Unfortunately, AI is moving at such a pace that this IS the usual Apple delayed-follow. They had to feed the public hype for something like 9 months. And it doesn’t seem like a true fix for hallucination is coming, so they made their choice to move ahead. Frankly I blame Wall Street because at this point they will eviscerate anyone who doesn’t have a demonstrated AI plan and shipped products around it. If anyone is at the core of this craze, it’s investors, because they are still in the “we don’t know how big this thing is going to get” phase with AI. We’re all dealing with the consequences.

Interestingly though, I’m reminded of the early days of the Internet. People did raise the flag that the Internet wouldn’t have the same reliability as traditional media, because anyone could post anything. And that’s remained true. We have mass disinformation campaigns galore, and also specific incidents of false viral stories like “the Pope has died” which are much like this case, just driven by malicious humans instead of hallucinating software.

It makes me wonder if the problems with AI will never be truly solved but we will just digest AI and learn to live with it as we have with the internet in general. There is also a comparison in my mind between AI and self-driving cars, because every time one of those has a big fuck up we all shout and point and cry that the tech will never be trustable, meanwhile human drivers are out there killing by the hundreds of thousands annually and we don’t even blink at that anymore.

3 points

The problems with (the current forms of generative) AI will not be solved, because they cannot be solved. They are intrinsic to the whole framework.

5 points

Well, they’re half doing the right thing: collecting app analytics to train on now so they can do it properly later, seeding the open ecosystem with MLX, stuff like that.

But… I don’t know why they shoved it in news and some other places so early.


LLMs

Emphasis on the first L in LLM. Apple’s model is specifically designed to be small so it works on phones with 8 GB of RAM (the requirement to run this).

The price gouging for RAM was only ever on computers. With phones you got what you got, and you couldn’t pay for more.

7 points

Yeah… and it kinda sucks because it’s small.

If Apple had shipped with 16GB/24GB like some Android phones did well before the iPhone 16, it would be far more useful. 16–24GB is the current threshold where quantized 14B–32B-class models really start to feel ‘smart,’ and Apple could’ve continue-trained a great Apache 2.0 model instead of training a tiny, meager one from scratch.
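A rough back-of-the-envelope for why 16–24GB maps to the 14B–32B model class (the 4-bit quantization figure below is an illustrative assumption, not from the comment): weight memory is roughly parameter count times bits per weight.

```python
# Rough RAM needed just to hold quantized LLM weights (weights only;
# KV cache, activations, and OS overhead come on top of this).
def weight_ram_gib(params_billion: float, bits_per_weight: float) -> float:
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1024**3

# 14B and 32B models at 4-bit quantization:
print(round(weight_ram_gib(14, 4), 1))  # ~6.5 GiB  -> fits a 16GB phone with room to spare
print(round(weight_ram_gib(32, 4), 1))  # ~14.9 GiB -> needs something like a 24GB tier
```

An 8GB phone, by contrast, has to run the OS and apps out of the same pool, which is why Apple’s on-device model ends up in the low-billions parameter range.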


I don’t know how much RAM is in my iPhone 14 Pro, but I’ve never thought ooh this is slow I need more RAM.

Perhaps it’ll be an issue with this stupid Apple Intelligence, but I don’t care about using that on my next upgrade cycle.

2 points

What do you mean couldn’t pay for more? There are plenty of sub-$200 android phones with 8GB of RAM, and 12-16GB are fairly standard on flagships these days. Asus ROG Phone 6 is rather old and already came with 16GB what, three years ago?

It is definitely doable, there only needs to be willingness. Apple is definitely skimping here.


The iPhone has one RAM option. If you buy an iPhone 16, your only option is 8 GB.

1 point

SSDs?

5 points

For RAG data? It works.

But it’s too slow for the weights. What generative models fundamentally do is run a full pass through the multi-gigabyte weights for every ‘word’ or diffusion step, so even the 128-bit DDR5 you find on desktop CPUs is too slow.
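That bandwidth limit can be sketched with simple arithmetic (the bandwidth and model-size numbers below are illustrative assumptions): if every generated token has to stream all weights through the memory bus once, tokens per second is capped at bandwidth divided by weight size.

```python
# Upper bound on decode speed when generation is memory-bandwidth-bound:
# each token requires reading every weight from memory once.
def max_tokens_per_sec(weights_gb: float, bandwidth_gb_per_s: float) -> float:
    return bandwidth_gb_per_s / weights_gb

# Illustrative: ~7 GB of quantized weights
print(round(max_tokens_per_sec(7, 90), 1))    # ~12.9 tok/s on ~90 GB/s dual-channel DDR5
print(round(max_tokens_per_sec(7, 1000), 1))  # ~142.9 tok/s on ~1 TB/s GPU memory
```

SSDs top out around single-digit GB/s, so paging weights from disk would be another order of magnitude slower still, which is the commenter’s point.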

39 points

What I find surprising in the debate about AI and hallucinations is that everyone points to the fact that it’s very dangerous and will spread misinformation… But the real problem is our inability or unwillingness to fact check our information.

Nobody wants to fact check something they saw on Meta or TikTok. Nobody will. There is no difference between someone trusting some random influencer and someone trusting an AI. They are both set up to fail the same way. Both lack critical thinking.

Instead of being afraid of AI and hallucinations, we should be investing massively in teaching the newer generations fact checking and critical thinking.

AI is a great assistant, but only if you can fact check it. If you can’t or won’t, then it’s a terrible assistant that will set you up to fail.

To be clear, I also struggle to fact check stuff and I was definitely misinformed many times in the past. Nobody is really immune to that problem. IMO AI doesn’t change much about that problem.

43 points

If you have to fact check everything it says, we’re better off as a species boiling the developers in dog shit.

“we’ve added a profoundly energy intensive feature that is wrong most of the time” okay, get in the vat.

20 points

But you shouldn’t have to actively fact check every headline from the BBC because their headline doesn’t actually say what you read.

And there’s very little value to “summarizing messages” if you aren’t actually summarizing messages and the content doesn’t match the summary.

Yes, you should do more critical thinking, but lowering the quality of information of every interaction with the internet very clearly makes things worse.

8 points

What I find surprising is that so many people (i.e. you) still claim to fact check everything. You don’t. I guarantee it.

Most people don’t read news for a living. You can’t fact check everything you read online; that’s physically impossible. And if you’re honest with yourself, 95% of headlines you read are just noise and you don’t read any further. Not because you’re too stupid, but because you’re not that interested in Trump’s latest shenanigans or Italy’s economic outlook.

6 points

You didn’t even read my entire comment…

Read it entirely and you will see that this aggressive tone wasn’t necessary or justified.

5 points

This is true. It’s baffling to me that so many people ‘trust’ influencers as much as they do.

1 point

If you have to fact check it every single time you use it, it’s completely fucking useless.

1 point

I generally run a sanity test by starting a new chat and either using different words for the same request or turning it around, like trying to get my initial prompt back by prompting with the results I got in the first chat.

34 points

AI-generated products can be a bad fit for news

No shit. The fact that they only discovered that once they got burned proves they never even questioned what generative AI does.

Though I’m sure half the blame is on them for asking it to tack the most clickbaity headline onto every article it can. Even human editors all but outright lie in those; of course an AI is going to hallucinate the best title it can.

11 points

The article doesn’t explicitly state it, but the wording implies that this headline was not created by BBC. This appears to be a service running on Apple products producing its own summary of the news article. So the BBC didn’t get burned by something they did and that’s what they’re complaining about.

4 points

Oh, well that’s different. There is no reason Apple should be editorializing their content like that.


Apple wasn’t directly creating these summaries; it was their on-device AI’s summaries of the articles bastardizing them.

10 points

That’s the BBC criticising Apple for indiscriminately mangling all notifications with AI, like news headlines. The BBC could boycott the Apple platform, but that’s basically their only lever to stop Apple doing this besides asking nicely.

6 points

I didn’t get that from the article, but then, yeah, if it’s Apple rewriting BBC headlines like that, what the hell are they thinking?

5 points

They aren’t. They’re just putting AI on everything.

1 point

They didn’t. This is about a properly written headline by the BBC being butchered when summarized by Apple Intelligence, which they have no control over.

27 points

claiming that Luigi Mangione […] had shot himself.

AI-generated content is prone to inaccuracies

Somebody would call that an “inaccuracy”?

If I serve you a pile of lukewarm shit on a plate and say here’s your dinner, would anybody call that an “inaccuracy”??

I say, life and death is even more than that.

