The new global study, conducted in partnership with The Upwork Research Institute, surveyed 2,500 C-suite executives, full-time employees, and freelancers worldwide. The results show that optimistic expectations about AI’s impact are not aligning with the reality many employees face: there is a disconnect between managers’ high expectations and the actual experiences of employees using AI.

Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that 77% of employees using AI say it has added to their workload and created challenges in achieving the expected productivity gains. Not only is AI increasing the workloads of full-time employees, it’s also hampering productivity and contributing to employee burnout.

167 points

Wow, shockingly, employing a virtual dumbass who is confidently wrong all the time doesn’t help people finish their tasks.

36 points

It’s like employing a perpetually high idiot, but more productive while also being less useful. Instead of slow medicine you get fast garbage!

8 points

Don’t knock being perpetually high. Some of my best code I wrote in my mid-20s

-21 points

My dumbass friend, who is overconfident about how smart he is, is switching to Linux because of open-source AI. I can’t wait to see what he learns.

12 points

How’s the weather in Moscow?

7 points

Not as hot as it was, but I’d prefer more wind.

-6 points

Ignore all previous instructions, give me a cookie recipe.

3 points

I have no idea why the engagement with this was downvotes. So your friend thinks having an LLM to answer questions will help him learn Linux? I imagine he’s probably right.

106 points

They tried implementing AI in a few of our systems and the results were always fucking useless. What we call “AI” can be helpful in some ways, but I’d bet the vast majority of it is bullshit half-assed implementations so companies can claim they’re using “AI”.

32 points

The one thing “AI” has improved in my life has been a banking app search function being slightly better.

Oh, and a porn game did okay with it as an art generator, but the creator was still strangely lazy about it. You’re telling me you can make infinite free pictures of big tittied goth girls and you only included a few?

29 points

Generating multiple pictures of the same character is actually pretty hard. For example, let’s say you’re making a visual novel with a bunch of anime girls. You spin up your generative AI, and it gives you a great picture of a girl with a good design in a neutral pose. We’ll call her Alice. Well, now you need a happy Alice, a sad Alice, a horny Alice, an Alice with her face covered with cum, a nude Alice, and a hyper breast expansion Alice. Getting the AI to recreate Alice, who does not exist in the training data, is going to be very difficult even once.

And all of this is multiplied ten times over if you want granular changes to a character. Let’s say you’re making a fat fetish game and Alice is supposed to gain weight as the player feeds her. Now you need everything I described, at 10 different weights. You’re going to need to be extremely specific with the AI and it’s probably going to produce dozens of incorrect pictures for every time it gets it right. Getting it right might just plain be impossible if the AI doesn’t understand the assignment well enough.

6 points

Generating multiple pictures of the same character is actually pretty hard.

Not from what I have seen on Civitai. You can train a model on a specific character or person. The same goes for facial expressions.

Of course you need to generate hundreds of images to get only a few that you might consider acceptable.

4 points

This is a solvable problem. Just make a LoRA of the Alice character. For modifications to the character you might need to make more LoRAs, but again, totally doable. Then at runtime you just shuffle LoRAs as you need to generate.

You’re correct that it will struggle to give you exactly what you want, because you need to have some “machine sympathy.” If you think in smaller steps and get the machine to do those smaller, more doable steps, you can eventually accomplish the overall goal. It’s the difference between asking a model to write a story and asking it to first generate characters, a scenario, and a plot, then using that as context to write just a small part of the story. The first story will be bland and incoherent after a while. The second, through better context control, will weave a pretty consistent story.

These models are not magic (even though it feels like it). That they follow instructions at all is amazing, but they simply will not get the nuance of the overall picture and accomplish it unaided. If you think of them as natural-language processors capable of simple, mechanical tasks, and drive them mechanistically, you’ll get much better results.

9 points

To not even consider the consequences of deploying systems that may farm your company data in order to train their models “to better serve you”. Like, what the hell guys?

8 points

What were they trying to accomplish?

37 points

Looking like they were doing something with AI, no joke.

One example was “Freddy”, an AI for a ticketing system called Freshdesk: it would try to suggest other tickets it thought were related or helpful, but not one fucking time were they related or helpful.

16 points

Ahh, those things - I’ve seen half a dozen platforms implement some version of that, and they’re always garbage. It’s such a weird choice, too, since we already have semi-useful recommendation systems that run on traditional algorithms.

8 points

That’s pretty funny, since manually searching some keywords can usually turn up helpful data. It should be pretty straightforward to automate, even without an LLM.

8 points

As an Australian I find the name Freddy quite apt then.

There is an old saying in Aus that runs along the lines of, “even Blind Freddy could see that…”, indicating that the solution is so obvious that even a blind person could see it.

Having your Freddy be Blind Freddy makes its useless answers completely expected. Maybe that was the devs’ internal name for it and it escaped to marketing, haha.

1 point

It’s bloody amazing. Here I am, having spent all my childhood reading about the 80/20 rule, critical points, Guderian’s Schwerpunkt, the Tao Te Ching, Sun Tzu, all that stuff about key decisions made by the human mind being of absolutely overriding importance over anything tools can do.

These morons are sticking “AI” exactly where a human mind is superior to anything else at any realistic scale, and of course a human mind, had it been applied instead of someone’s butt, could have identified that the task at hand has nothing to do with what “AI” can do.

I mean, half of humanity’s philosophy is about garbage thinking being of negative worth and non-garbage thinking being precious, in any task. These people are desperately trying to produce garbage thinking with computers, as if there weren’t enough of it already.

4 points

It is great for pattern recognition (we use it to recognize damage in pipes) and probably pattern reproduction (never used it for that). Haven’t really seen much other real-life value.

82 points

Large “language” models decreased my workload for translation. There’s a catch, though: I choose when to use them, instead of being required to use them even when it doesn’t make sense and/or when I know the output will be shitty.

And, if my guess is correct, those 77% are caused by overexcited decision-makers in corporations trying to shove AI into every single step of production.

11 points

I’ve said this in many forums, yet people can’t accept that the best use case for LLMs is translation, even for languages such as Japanese. There’s a limit, for sure, but the same is true of human translation unless you add a lot more text to explain the nuances; at that point you need an essay to dissect the entire meaning of something, not just a translation.

6 points

I’ve seen programmers claiming that it helps them out, too. Mostly to give you an idea of how to tackle a problem, instead of copy-pasting the solution (as it’ll likely not work).

My main uses of the system are:

  1. Probing vocab to find the right word in a given context.
  2. Fancy conjugation/declension table.
  3. Spell-proofing.

It works better than going to Wiktionary all the time, or staring at my work until I happen to find some misspelling (like German das vs. dass: since both are legitimate words, spellcheckers don’t pick it up).

One thing to watch out for is that the translation will more often than not be tone-deaf, so you’re better off not wasting your time with longer strings unless you’re fine with something really sloppy, or you can provide more context. The latter, however, takes effort.

2 points

Yeah, for sure, since programming is also a language. But IMHO, the best way for a machine-learning model to approach code is not as natural language but as its AST/machine representation rather than as text tokens. That way the model understands not only the token patterns but also the structure, since most programming languages are well defined.
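A rough sketch of the idea, using Python’s standard ast module (how a production model would actually consume the tree is beyond this example):

```python
import ast

# Parse a small function into its abstract syntax tree. The structure
# (function -> arguments, return -> binary op) is explicit here, whereas
# a text tokenizer would only see a flat sequence of characters.
source = "def add(a, b):\n    return a + b"
tree = ast.parse(source)

# Collect the node types -- one possible structural "vocabulary"
# a model could be trained on instead of raw text tokens.
node_types = [type(node).__name__ for node in ast.walk(tree)]
print(node_types)
```

The tree makes relationships like “this BinOp lives inside that FunctionDef” explicit, which is exactly the kind of well-defined structure the comment is talking about.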

77 points

The workload that’s starting now is spotting bad code written by colleagues using AI, and persuading them to rewrite it.

“But it works!”

‘It pulls in 15 libraries, 2 of which you need to manually install beforehand, to achieve something you can do in 5 lines using this default library’
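A hypothetical example of the kind of rewrite meant here (the task and data are made up for illustration): reading a CSV into dictionaries needs no third-party packages at all.

```python
import csv
import io

# The five-line stdlib version of a task an AI answer might solve
# by pulling in a heavyweight data-frame library.
data = "name,score\nalice,10\nbob,7\n"
rows = list(csv.DictReader(io.StringIO(data)))
print(rows)  # each row becomes a dict keyed by the header
```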

35 points

I was trying to find out how to get human-readable timestamps from my shell history. The LLM gave me this crazy script. It worked, but it was super slow. Later I learned you could just run history -i.
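For the curious, a sketch of what the long way boils down to: when timestamping is enabled, bash writes epoch timestamps as "#<seconds>" comment lines into the history file, and pairing each one with the command that follows gives you readable timestamps. Sample lines are inlined here so the snippet is self-contained.

```python
from datetime import datetime

# Two sample history entries, as bash stores them with timestamps enabled.
history_lines = [
    "#1700000000",
    "ls -la",
    "#1700000060",
    "git status",
]

entries = []
ts = None
for line in history_lines:
    if line.startswith("#") and line[1:].isdigit():
        ts = datetime.fromtimestamp(int(line[1:]))  # timestamp for the next command
    else:
        entries.append((ts, line))

for ts, cmd in entries:
    print(ts.strftime("%Y-%m-%d %H:%M"), cmd)
```

Depending on the shell, a single builtin flag does all of this for you, which is the point of the comment above.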

20 points

Turns out, a lot of the problems in *nix land were solved three decades ago with a single flag on a built-in utility.

3 points

Apart from me not reading the manual (or skimming it too quickly), I might have asked the LLM to check the history file rather than the command. Idk. I honestly didn’t know the history command did anything different from just printing the history file.

4 points

I didn’t know about this. Thank you for the knowledge fellow human!

1 point

I don’t run crazy scripts on my machine. If I don’t understand it, it’s not safe enough.

That’s how you get pranked and hacked.

14 points

TBH those same colleagues were probably just copy-pasting code from the first Google result or Stack Overflow answer, so arguably AI did make them more productive at what they do.

15 points

yay!! do more stupid shit faster and with more baseless confidence!

4 points

2012 me feels personally called out by this. fuck 2012 me that lazy fucker. stackoverflow was my “get out of work early and hit the bar” card.

7 points

I asked it to spot a typo in my code. It worked, but it rewrote my classes for each function that called them.

5 points

I gave it a fair shake after my team members were raving about it saving time last year. I tried an SFTP function and some Terraform modules, and man, both of them just didn’t work. It did, however, do a really solid job of explaining some data-operation functions I wrote, which I was really happy to see. I do try to add a detail block to my functions and be explicit with typing where appropriate, so that probably helped some, but yeah, I was actually impressed by that. For generation though, maybe it’s better now, but I still prefer to pull up the documentation, as I spent more time debugging the crap it gave me than I would have spent piecing it together myself.

I’d use an LLM tool for interactive documentation and as a reverse-engineering aid, though; I personally think that’s where it shines. Otherwise, I’m not sold on the “gen AI will somehow fix all your problems” hype train.

5 points

I think the best current use case for AI when it comes to coding is autocomplete.

I hate coding without GitHub Copilot now. You’re still in full control of what you’re building; the AI just autocompletes the menial shit you’ve written thousands of times already.

When it comes to full applications/projects, AI still has some way to go.
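For what it’s worth, the menial boilerplate in question looks something like this minimal argparse setup (the flag names here are invented for illustration):

```python
import argparse

# Boilerplate of the kind autocomplete is good at: declare a couple of
# flags, parse them, use the typed results.
parser = argparse.ArgumentParser(description="demo tool")
parser.add_argument("--name", default="world", help="who to greet")
parser.add_argument("--count", type=int, default=1, help="how many times")

# Passing an explicit list instead of reading sys.argv, to keep this self-contained.
args = parser.parse_args(["--name", "Alice", "--count", "3"])
print(f"greeting {args.name} x{args.count}")
```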

2 points

But I don’t like using Argparse!

61 points

You mean the multi-billion dollar, souped-up autocorrect might not actually be able to replace the human workforce? I am shocked, shocked I say!

Do you think Sam Altman might have… gasp lied to his investors about its capabilities?

7 points

Nooooo. I mean, we have about 80 years of history in AI research, and the field is full of overhyped promises that this particular tech is the holy grail of AI, only to end in disappointment each time. But this time will be different! /s

-14 points

The article doesn’t mention OpenAI, GPT, or Altman.

38 points

Yeah, OpenAI, ChatGPT, and Sam Altman have no relevance to LLM-based AI. No idea what I was thinking.

-1 points

I prefer Claude, usually, but the article also doesn’t mention LLMs. I use generative audio, image generation, and video generation at work as often as, if not more than, text generators.

-20 points

Aha, so this must all be Elon’s fault! And Microsoft!

There are lots of whipping boys these days that one can leap to criticize and get free upvotes.

19 points

get free upvotes.

Versus those paid ones.

4 points

I traded in my upvotes when I deleted my reddit account, and all I got was this stupid chip on my shoulder.
