A survey of more than 2,000 smartphone users by second-hand smartphone marketplace SellCell found that 73% of iPhone users and a whopping 87% of Samsung Galaxy users felt that AI adds little to no value to their smartphone experience.

SellCell only surveyed users with an AI-enabled phone – that's an iPhone 15 Pro or newer, or a Galaxy S22 or newer. The survey doesn't give an exact sample size, but more than 1,000 iPhone users and more than 1,000 Galaxy users took part.

Further findings show that most users of either platform would not pay for an AI subscription: 86.5% of iPhone users and 94.5% of Galaxy users would refuse to pay for continued access to AI features.

From the data listed so far, it seems that people just aren't using AI. On both platforms, only about two-fifths of those surveyed have even tried AI features – 41.6% of iPhone users and 46.9% of Galaxy users.

So, that’s a majority of users not even bothering with AI in the first place and a general disinterest in AI features from the user base overall, despite both Apple and Samsung making such a big deal out of AI.

218 points

A 100% accurate AI would be useful. A 99.999% accurate AI is in fact useless, because of the damage that one miss might do.

It’s like the French say: Add one drop of wine in a barrel of sewage and you get sewage. Add one drop of sewage in a barrel of wine and you get sewage.
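The comment above can be put in back-of-the-envelope terms (toy numbers, assuming independent queries): even a 0.001% per-query error rate compounds quickly once you use the thing constantly.

```python
# Toy illustration, not from the thread: how a 99.999%-accurate assistant
# still fails you eventually if you query it often enough.
def p_at_least_one_error(accuracy: float, queries: int) -> float:
    """Probability of seeing at least one error across `queries` independent tries."""
    return 1 - accuracy ** queries

# A single query almost never fails...
print(p_at_least_one_error(0.99999, 1))        # ≈ 1e-05
# ...but over 100,000 queries, at least one error is more likely than not.
print(p_at_least_one_error(0.99999, 100_000))  # ≈ 0.632
```

Whether that one error actually "ruins the barrel" depends on the error cost, which is the real crux of the disagreement further down the thread.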

70 points

I think it largely depends on what kind of AI we’re talking about. iOS has had models that let you extract subjects from images for a while now, and that’s pretty nifty. Affinity Photo recently got the same feature. Noise cancellation can also be quite useful.

As for LLMs? Fuck off, honestly. My company apparently pays for MS CoPilot, something I only discovered when the garbage popped up the other day. I wrote a few random sentences for it to fix, and the only thing it managed to consistently do was screw the entire text up. Maybe it doesn’t handle Swedish? I don’t know.

One of the examples I sent to a friend is as follows (originally in Swedish):

Microsoft CoPilot is an incredibly poor product. It has a tendency to make up entirely new, nonsensical words, as well as completely mangle the grammar. I really don’t understand why we pay for this. It’s very disappointing.

And CoPilot was like “yeah, let me fix this for you!”

Microsoft CoPilot is a comedy show without a manuscript. It makes up new nonsense words as though were a word-juggler on circus, and the grammar becomes mang like a bulldzer over a lawn. Why do we pay for this? It is buy a ticket to a show where actosorgets their lines. Entredibly disappointing.

25 points

Most AIs struggle with languages other than English, unfortunately, I hate how it reinforces the “defaultness” of English

2 points

I guess there’s not much non-English internet to scrape? I’m always surprised how few social media platforms exist outside of the USA. I went looking because I was curious what online discourse would look like without any Americans talking, and the answer was basically “there aren’t any” outside of shit like 2ch.

2 points
Deleted by creator
2 points

That’s so beautifully illustrative of what the LLM is actually doing behind the curtain! What a mess.

1 point

Yeah, it wonks the tokens up.

I actually really like machine learning. It’s been a fun field to follow and play around with for the past decade or so. It’s the corpo-fascist BS that’s completely tainted it.

20 points

99.999% accurate would be pretty useful. There’s plenty of misinformation without AI. Nothing and nobody will be perfect.

Trouble is they range from 0-95% accurate depending on the topic and given context while being very confident when they’re wrong.

12 points

The problem really isn’t the exact percentage, it’s the way it behaves.

It’s trained to never say no. It’s trained to never be unsure. In many cases an answer of “You can’t do that” or “I don’t know how to do that” would be extremely useful. But, instead, it’s like an improv performer always saying “yes, and” then maybe just inventing some bullshit.

I don’t know about you guys, but I frequently end up going down rabbit holes where there are literally zero google results matching what I need. What I’m looking for is so specialized that nobody has taken the time to write up an indexable web page on how to do it. And, that’s fine. So, I have to take a step back and figure it out for myself. No big deal. But, Google’s “helpful” AI will helpfully generate some completely believable bullshit. It’s able to take what I’m searching for and match it to something similar and do some search-and-replace function to make it seem like it would work for me.

I’m knowledgeable enough to know that I can just ignore that AI-generated bullshit, but I’m sure there are a lot of other more gullible optimistic people who will take that AI garbage at face value and waste all kinds of time trying to get it working.

To me, the best way to explain LLMs is to say that they’re these absolutely amazing devices that can be used to generate movie props. You’re directing a movie and you want the hero to pull up a legal document submitted to a US federal court? It can generate one in seconds that would take your writers hours. It’s so realistic that you could even have your actors look at it and read from it and it will come across as authentic. It can generate extremely realistic code if you want a hacking scene. It can generate something that looks like a lost Shakespeare play, or an intercept from an alien broadcast, or medical charts that look like exactly what you’d see in a hospital.

But, just like you’d never take a movie prop and try to use it in real life, you should never actually take LLM output at face value. And that’s hard, because it’s so convincing.

8 points

We’re not talking about an AI running a nuclear reactor; this article is about AI assistants on a personal phone. A 0.001% failure rate for apps on your phone isn’t that insane, and generally the only consequence of a failure would be that you need to try a slightly different query. Tools like Alexa or Siri probably mishear user commands more than 0.001% of the time, and yet those tools have absolutely caught on for a significant number of people.

The issue is that the failure rate of AI is high enough that you have to vet the outputs, which typically takes about as much work as doing the task yourself. And using AI for creative things like art or videos is a fun novelty, but not something you do regularly, so your phone pushing apps you only want to use once in a blue moon is annoying. If AI were actually so useful that you could query it with anything and get back exactly what you wanted 99.999% of the time, it would absolutely become much more popular.

-2 points

People love to make these claims.

Nothing is “100% accurate” to begin with. Humans spew constant FUD and outright malicious misinformation. Just do some googling for anything medical, for example.

So either we acknowledge that everything is already “sewage” and this changes nothing or we acknowledge that people already can find value from searching for answers to questions and they just need to apply critical thought toward whether I_Fucked_your_mom_416 on gamefaqs is a valid source or not.

Which gets to my big issue with most of the “AI Assistant” features. They don’t source their information. I am all for not needing to remember the magic incantations to restrict my searches to a single site or use boolean operators when I can instead “ask jeeves” as it were. But I still want the citation of where information was pulled from so I can at least skim it.

21 points

99.999% would be fantastic.

90% is not good enough to be a primary feature that discourages inspection (like a naive chatbot).

What we have now is like…I dunno, anywhere from <1% to maybe 80% depending on your use case and definition of accuracy, I guess?

I haven’t used Samsung’s stuff specifically. Some web search engines do cite their sources, and I find that to be a nice little time-saver. With the prevalence of SEO spam, most results have like one meaningful sentence buried in 10 paragraphs of nonsense. When the AI can effectively extract that tiny morsel of information, it’s great.

Ideally, I don’t ever want to hear an AI’s opinion, and I don’t ever want information that’s baked into the model from training. I want it to process text with an awareness of complex grammar, syntax, and vocabulary. That’s what LLMs are actually good at.

-7 points

Again: What is the percent “accurate” of an SEO-infested blog about why ivermectin will cure all your problems? What is the percent “accurate” of some kid on gamefaqs insisting that you totally can see Lara’s tatas if you do this 90-button command? Or even the people who insist that Jimi was talking about wanting to kiss some dude in Purple Haze?

Everyone is hellbent on insisting that AI hallucinates and… it does. You know who else hallucinates? Dumbfucks. And the internet is chock full of them. And guess what LLMs are training on? It’s the same reason I always laugh when people talk about how AI can’t do feet or hands and ignore the existence of Rob Liefeld, or WHY so many cartoon characters only have four fingers.

Like I said: I don’t like the AI Assistants that won’t tell me where they got information from and it is why I pay for Kagi (they are also AI infested but they put that at higher tiers so I get a better search experience at the tier I pay for). But I 100% use stuff like chatgpt to sift through the ninety bazillion blogs to find me a snippet of a helm chart that I can then deep dive on whether a given function even exists.

But the reality is that people are still benchmarking LLMs against a reality that has never existed. The question shouldn’t be “we need this to be 100% accurate and never hallucinate” and instead be “What web pages or resources were used to create this answer” and then doing what we should always be doing: Checking the sources to see if they at least seem trustworthy.

4 points

I think you nailed it. In the grand scheme of things, critical thinking is always required.

The problem is that, when it comes to LLMs, people seem to use magical thinking instead. I’m not an artist, so I oohed and aahed at some of the AI art I got to see, especially in the early days, before we were flooded with all this AI slop. But when I saw the coding shit it spewed? Thanks, I’ll pass.

The only legit use of AI in my field that I know of is a unit test generator, where tests were measured for stability and code-coverage increase before being submitted for dev approval. But actual non-trivial production-grade code? Hell no.

-1 points

Even those examples are the kinds of things that “fall apart” if you actually think things through.

Art? Actual human artists tend to use a ridiculous amount of “AI” these days and have been for well over a decade (probably closer to two, depending on how you define “AI”). Stuff like magic erasers/brushes are inherently looking at the picture around it (training data) and then extrapolating/magicking what it would look like if you didn’t have that logo on your shirt and so forth. Same with a lot of weathering techniques/algorithms and so forth.

Same with coding. People more or less understand that anyone who is working on something more complex than a coding exercise is going to be googling a lot (even if it is just that you will never ever remember how to do file i/o in python off the top of your head). So a tool that does exactly that is… bad?
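The file I/O example above is a good illustration: the snippet below is exactly the kind of boilerplate people google (or now ask an LLM) for constantly. Nothing in it comes from the thread; it is just the generic Python read/write pattern, written to a temp directory so it is self-contained.

```python
import os
import tempfile

# The classic "I will never remember this off the top of my head" boilerplate:
# write a few lines to a text file, then read them back.
path = os.path.join(tempfile.mkdtemp(), "notes.txt")

with open(path, "w", encoding="utf-8") as f:
    f.write("line one\n")
    f.write("line two\n")

with open(path, encoding="utf-8") as f:
    lines = [line.rstrip("\n") for line in f]

print(lines)  # ['line one', 'line two']
```

The point stands either way: a tool that retrieves this kind of snippet is doing the same job the search engine already did, so objecting to the tool on principle is odd.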

Which gets back to the reality of things. Much like with writing a business email or organizing a calendar: if a computer program can do your entire job for you… maybe shut the fuck up about that program? Chatgpt et al aren’t meant to replace the senior or principal software engineer who is in lots of design meetings or optimizing the critical path of your corporate secret sauce.

It is replacing junior engineers and interns (which is gonna REALLY hurt in ten years but…). Chatgpt hallucinated a nonsense function? That is what CI testing and code review is for. Same as if that intern forgot to commit a file or that rockstar from facebook never ran the test suite.

Of course, the problem there is that the internet is chock full of “rock star coders” who just insist the world would be a better place if they never had to talk to anyone and were always given perfectly formed tickets so they could just put their headphones on and work and ignore Sophie’s birthday and never be bothered by someone asking them for help (because, trust me, you ALWAYS want to talk to That Guy about… anything). And they don’t realize that they were never actually hot shit and were mostly always doing entry level work.

Personally? I only trust AI to directly write my code for me if it is in an airgapped environment because I will never trust black box code I pulled off the internet to touch corporate data. But I will 100% use it in place of google to get an example of how to do something that I can use for a utility function or adapt to solving my real problem. And, regardless, I will review and test that just as thoroughly as the code Fred in accounting’s son wrote because I am the one staying late if we break production.


And just to add on, here is what I told a friend’s kid who is an undergrad comp sci:

LLMs are awesome tools. But if the only thing you bring to the table is that you can translate the tickets I assigned to you to a query to chatgpt? Why am I paying you? Why am I not expensing a prompt engineering course on udemy and doing it myself?

Right now? Finding a job is hard but there are a lot of people like me who understand we still need to hire entry level coders to make sure we have staff ready to replace attrition over the next decade (or even five years). But I can only hire so many people and we aren’t a charity: If you can’t do your job we will drop you the moment we get told to trim our budget.

So use LLMs because they are an incredibly useful tool. But also get involved in design and planning as quickly as possible. You don’t want to be the person writing the prompts. You want to be the person figuring out what prompts we need to write.

4 points

For real. If a human performs task X with 80% accuracy, an AI needs to perform the same task with 80.1% accuracy to be a better choice - not 100%. Furthermore, we should consider how much time it would take for a human to perform the task versus an AI. That difference can justify the loss of accuracy. It all depends on the problem you’re trying to solve. With that said, it feels like AI on mobile devices hardly solves any problems.
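That trade-off can be sketched with made-up numbers (the 30-minute error-cost figure and the task times below are purely illustrative, not from the comment):

```python
# Toy expected-cost comparison: a task a human does in 10 minutes at 80%
# accuracy vs. an AI that does it in 1 minute at 80.1% accuracy, where
# every error costs 30 minutes of cleanup. All numbers are invented.
def expected_cost(task_minutes: float, accuracy: float, error_cost: float = 30.0) -> float:
    """Time spent on the task plus the expected cleanup time for errors."""
    return task_minutes + (1 - accuracy) * error_cost

human = expected_cost(10.0, 0.800)  # 10 + 0.200 * 30 = 16.0 minutes
ai    = expected_cost(1.0,  0.801)  #  1 + 0.199 * 30 ≈ 6.97 minutes
print(human, ai)
```

With these numbers the AI wins on expected time despite nearly identical accuracy; crank the error cost up far enough and the ordering flips, which is the commenter's "it depends on the problem" point.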

4 points

Perplexity is kinda half-decent with showing its sources, and I do rely on it a lot to get me 50% of the way there, at which point I jump into the suggested sources, do some of my own thinking, and do the other 50% myself.

It’s been pretty useful to me so far.

I’ve realised I don’t want complete answers to anything really. Give me a roundabout gist or template, and then tell me where to look for more if I’m interested.

78 points

I hate that I can no longer trust what comes out of my phone camera to be an accurate representation of reality. I turn off all the AI enhancement stuff, but who knows what kind of fuckery is baked into the firmware.

NO, I don’t want fake AI depth of field. NO, I do not want fake AI “makeup” fixing my ugly face. NO, I do not want AI deleting tourists in the background of my picture of the Eiffel Tower.

NO, I do not want AI curating my memories and reality. Sure, my vacation photos have shitty lighting and bad composition. But they are MY photos and MY memories of something I experienced personally. AI should not be “fixing” that for me.

33 points

@9488fcea02a9 @ForgottenFlux I remember reading a whole article about how Samsung now just shoves a hi-res picture of the moon on top of any photo you take that has the moon in it, so it looks like the phone takes impressive photos. Not sure if the scandal meant they removed that “feature” or not.

3 points

Classic techbro overhype: shove the new feature into everything without separating it out or offering any choice to opt out.

2 points

Is there a Black Mirror episode for that? A technology that automatically edits your memories to be inaccurate, but “better”.

59 points

AI is useless and I block it anyway I can.

58 points

This is what happens when companies prioritize hype over privacy and try to monetize every innovation. Why pay €1,500 for a phone only to have basic AI features? AI should solve real problems, not be a cash grab.

Imagine if AI actually worked for users:

  • Show me all settings to block data sharing and maximize privacy.
  • Explain how you optimized my battery last week and how much time it saved.
  • Automatically silence spam calls without selling my data to third parties.
  • Detect and block apps that secretly drain data or access my microphone.
  • Automatically organize my photos by topic without uploading them to the cloud.
  • Do everything I could do with Tasker just by saying it in plain words.
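The wish list above boils down to “plain words in, phone action out.” A toy sketch of that idea follows; the phrase-to-action table and action names are entirely invented, and nothing here talks to Tasker or to a real phone.

```python
# Hypothetical toy "plain words -> phone action" matcher. The action names
# are made up for illustration; a real assistant would need proper intent
# parsing and on-device permissions, not substring matching.
RULES = {
    "silence spam calls": "enable_call_screening",
    "block data sharing": "open_privacy_settings",
    "organize my photos": "run_local_photo_sorter",
}

def match_intent(utterance: str) -> str:
    """Return the first action whose trigger phrase appears in the utterance."""
    text = utterance.lower()
    for phrase, action in RULES.items():
        if phrase in text:
            return action
    return "no_match"

print(match_intent("Please silence spam calls at night"))  # enable_call_screening
print(match_intent("What's the weather?"))                 # no_match
```

Even this crude version makes the thread's point: the valuable part isn't the language model, it's wiring the output to actions users actually want, privately and on-device.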
24 points

Do everything I could do with Tasker just by saying it in plain words.

Stop, I can only get so hard.

0 points

How could the AI sort your pictures privately, if the requests to analyze your sensitive imagery have to go to a server? (One that built its knowledge by disrespecting others’ copyright anyway, lol.)

3 points

Why must it connect to a server to do it? Why can’t it work offline? DeepSeek showed us that it’s possible. The companies want everyone to think that AI only works online. For example, the AI image enhancements on my mid-range Samsung phone work offline.

0 points

Oh, my bad, sorry, I’m not well versed.

That’s why I asked :p

57 points

“Stop trying to make fetch AI happen. It’s not going to happen.”

AI is worse than adding no value; it is an actual detriment.

14 points

I feel like I’m back in the years of “You really want a 3D TV, right? Right? 3D is what you’ve been waiting for, right?” all over again, but with a different technology.

It will be VR’s turn again next.

I admit I’m really rooting for affordable, real-world, daily-use AR though.

2 points

AR pretty much will happen, in my opinion as someone who roughly works in the field. It’s probably going to be the next smartphone-level revolution within two decades.

I’m not commenting on whether it would be good or bad for society, especially with our current societal situation and capitalism and stuff, but I’m confident it will happen either way and change the world drastically again.

2 points

I like the idea of AR very much, but for exactly the reasons you stepped around mentioning I’ll wait until I can get my hands on something FLOSS. When I’m buying glasses that are running some KDE AR project licensed under the GPL I’ll feel like it’s trustworthy. :D

1 point

I like 3D, too; too bad it barely had any content, even back in its day.


Technology

!technology@lemmy.world
