The new global study, conducted in partnership with The Upwork Research Institute, interviewed 2,500 C-suite executives, full-time employees and freelancers worldwide. The results show that optimistic expectations about AI’s impact are not aligning with the reality many employees face: the study identifies a disconnect between the high expectations of managers and the actual experiences of employees using AI.

Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that 77% of employees using AI say it has added to their workload and created challenges in achieving the expected productivity gains. Not only is AI increasing the workloads of full-time employees, it’s also hampering productivity and contributing to employee burnout.

20 points

I have the opposite problem. Gen A.I. has tripled my productivity, but the C-suite here is barely catching up to 2005.

5 points

Same, I’ve automated a lot of my tasks with AI. No way 77% is “hampered” by it.

26 points
Deleted by creator
4 points

A lot of people are keen to hear that AI is bad, though, so the clicks go through on articles like this anyway.

29 points

I dunno, mishandling of AI can be worse than avoiding it entirely. There’s a middle manager here who runs everything her direct-report copywriter sends through ChatGPT, then sends the response back as a revision. She doesn’t add any context to the prompt, say who the audience is, or use the custom GPT that I made and shared. That copywriter is definitely hampered, but not really by AI, just by run-of-the-mill manager PEBKAC.

11 points

I’m infuriated on their behalf.

3 points

This may come as a shock to you, but the vast majority of the world does not work in tech.

3 points

I’m not working in tech either. Anyone who relies on a computer can use this.

Also, medicine and radiology are two areas that will benefit from this - especially the patients.

12 points

What have you actually replaced/automated with AI?

1 point

Voiceover recording, noise reduction, rotoscoping, motion tracking, matte painting, transcription - and there’s a clear path forward to automate rough cuts and integrate all that with digital asset management. I used to do all of those things manually/practically.

e: I imagine the downvotes are coming from the same people who, 20 years ago, told me digital video would never match the artistry of film.

25 points

Have you tripled your billing/salary? Stop being a scab lol

5 points

The opposite, actually.

3 points

Cool too

8 points

What do you do, just out of interest?

7 points

Soup to nuts video production.

3 points

Cool, enjoy your entire industry going under thanks to cheap and free software and executives telling their middle managers to just shoot and cut it on their phone.

Sincerely,

A former video editor.

2 points

Sounds like a very specific fetish

1 point

I don’t know what that is. What is it?

25 points

AI is stupidly used a lot, but this seems odd. For me, GitHub Copilot has sped up writing code. Hard to say how much, but it definitely saves me seconds several times per day. It certainly hasn’t added to my workload…

5 points

GitHub Copilot is about the only AI tool I’ve used at work so far. I’d say it overall speeds things up, particularly with boilerplate-type code that it can just bang out, reducing a lot of the tedious but not particularly difficult coding. For more complicated things it can also be helpful, but I find it’s also pretty good at suggesting things that look correct at a glance but are actually subtly wrong. Leading to either having to carefully double-check what it suggests, or having to fix bugs in code that I wrote but didn’t actually write.

0 points

Every time I’ve discussed this on Lemmy, someone says something like this. I haven’t usually had that problem. If a suggestion is more than I can quickly verify does what I intend, I just ignore it. I don’t know why I am the only person who has good luck with this tech, but I certainly do. Maybe it’s just that I don’t expect it to work perfectly. I expect it to be flawed, because how could it not be? Every time it saves me from typing three tedious lines of code, it feels like a miracle to me.

2 points

Leading to either having to carefully double-check what it suggests, or having to fix bugs in code that I wrote but didn’t actually write.

100% this. A recent JetBrains update turned on the AI shitcomplete (I guess my org decided to pay for it). Not only is it slow af, but in trying it, I discovered that I have to fight the suggestions because they are just wrong. And what is terrible is I know my coworkers will definitely use it, and I’ll be stuck fixing their low-skill shit that is now riddled with subtle AI shitcomplete. The tools are simply not ready, and anyone who tells you they are does not have the skill or experience to back up that assertion.

-5 points

Media has been anti-AI from the start. They only write hit pieces on it. We all rabble-rouse about the headline as if it’s fact. It’s the left’s version of articles like “locals report uptick of beach shitting”.

15 points

For anything more than basic autocomplete, Copilot has only given me broken code. Not even subtly broken, just stupidly wrong stuff.

15 points

They’ve got a guy at work whose job title is basically AI Evangelist. This is terrifying in that it’s a financial tech firm handling twelve figures a year of business - the last place where people will put up with “plausible bullshit” in their products.

I grudgingly installed the Copilot plugin, but I’m not sure what it can do for me better than a snippet library.

As a rudimentary exercise, I asked it to generate a test suite for a function. It was able to identify “yes, there are n return values, so write n test cases” and “you’re going to actually have to CALL the function under test”, but it was unable to figure out how to build the object being fed in to trigger any of those cases; to do so would require grokking much of the code base. I didn’t need to burn half a barrel of oil for that.
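For the curious, the shape of what it needed to produce looks roughly like the sketch below. Every name here is made up (classify_transaction, Transaction, the field combinations are all hypothetical); the point is that the hard part is constructing the inputs that exercise each return value, which is exactly where it stalled.

```python
# Minimal pytest sketch of the exercise described above. All names are hypothetical:
# the real function and its input object live deep in our code base.
import pytest

from payments import Transaction, classify_transaction  # hypothetical module


@pytest.mark.parametrize(
    "txn, expected",
    [
        # Building these inputs is the part the assistant couldn't do: knowing which
        # field combinations trigger each of the n return values requires reading
        # a large chunk of the surrounding code.
        (Transaction(amount=100, currency="USD", flagged=False), "approved"),
        (Transaction(amount=100, currency="USD", flagged=True), "held_for_review"),
        (Transaction(amount=0, currency="USD", flagged=False), "rejected"),
    ],
)
def test_classify_transaction(txn, expected):
    # The one thing it did get right: actually call the function under test.
    assert classify_transaction(txn) == expected
```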

I’d be hesitant to trust it with “summarize this obtuse spec document” when half the time said documents are self-contradictory or downright wrong. Again, plausible bullshit isn’t suitable.

Maybe the problem is that I’m too close to the specific problem. AI tooling might be better for open-ended or free-association “why not try glue on pizza” type discussions, but when you already know “send exactly 4-7-Q-unicorn emoji in this field or the transaction is converted from USD to KPW” having to coax the machine to come to that conclusion 100% of the time is harder than just doing it yourself.

I can see why the marketing and sales people love it, maybe customer service too: click one button, take one coherent “here’s why it’s broken” sentence, and turn it into 500 words of flowery says-nothing prose. But I demand better from my machine overlords.

Tell me when Stable Diffusion figures out that “Carrying battleaxe” doesn’t mean “katana randomly jutting out from forearms”, maybe at that point AI will be good enough for code.

-1 points

Again, plausible bullshit isn’t suitable.

It is suitable when you’re the one producing the bullshit and you only need it accepted.

Which is exactly what the people pushing for this are. Their jobs and occupations tolerate mere imitation, so they think that for some reason the same works with airplanes, railroads, computers.

1 point

I’d be hesitant to trust it with “summarize this obtuse spec document” when half the time said documents are self-contradictory or downright wrong. Again, plausible bullshit isn’t suitable.

That’s why I have my doubts when people say it’s saving them a lot of time or effort. I suspect it’s planting bombs that they simply haven’t yet found. Like it generated code and the code seemed to work when they ran it, but it contains a subtle bug that will only be discovered later. And the process of tracking down that bug will completely wreck any gains they got from using the LLM in the first place.

Same with the people who are actually using it on human languages. Like, I heard a story of a government that was overwhelmed with public comments or something, so they were using an LLM to summarize those so they didn’t have to hire additional workers to read the comments and summarize them. Sure… and maybe it’s relatively close to what people are saying 95% of the time. But 5% of the time it’s going to completely miss a critical detail. So, you go from not having time to read all the public comments so not being sure what people are saying, to having an LLM give you false confidence that you know what people are saying even though the LLM screwed up its summary.

3 points

Maybe the problem is that I’m too close to the specific problem. AI tooling might be better for open-ended or free-association “why not try glue on pizza” type discussions, but when you already know “send exactly 4-7-Q-unicorn emoji in this field or the transaction is converted from USD to KPW” having to coax the machine to come to that conclusion 100% of the time is harder than just doing it yourself.

I, too, work in fintech. I agree with this analysis. That said, we currently have a large mishmash of regexes doing classification, and they aren’t bulletproof. It would be useful to see about using something like a fine-tuned BERT model to classify the transactions that pass through the regex net without getting classified. And the PoC would be just context-stuffing some examples into a few-shot prompt for an LLM with a constrained grammar (just the classification, plz). Because our finance generalists basically have to do this same process, and it would be nice to augment their productivity with a hint: “The computer thinks it might be this kinda transaction.”
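Something like this rough sketch, to make it concrete. The labels, example transactions, and call_llm are all placeholders I’m making up; the real thing would plug into whatever taxonomy the regexes already use and whatever model API we end up with.

```python
# Rough sketch of the PoC described above: context-stuff a few labelled examples
# into a prompt and constrain the answer to a fixed label set. Everything here is
# hypothetical - the labels, the examples, and call_llm (a stand-in for whatever
# model API we'd actually use).
from llm_client import call_llm  # hypothetical stand-in for the model API

ALLOWED_LABELS = {"payroll", "vendor_payment", "refund", "interest", "unknown"}

FEW_SHOT_EXAMPLES = [
    ("ACH CREDIT GUSTO PAYROLL 0423", "payroll"),
    ("WIRE OUT ACME SUPPLIES INV-1187", "vendor_payment"),
    ("CARD REFUND ORDER #55821", "refund"),
]


def build_prompt(description: str) -> str:
    """Few-shot prompt: instructions, labelled examples, then the new transaction."""
    parts = [
        "Classify the bank transaction. Answer with exactly one label from: "
        + ", ".join(sorted(ALLOWED_LABELS))
        + "."
    ]
    for text, label in FEW_SHOT_EXAMPLES:
        parts.append(f"Transaction: {text}\nLabel: {label}")
    parts.append(f"Transaction: {description}\nLabel:")
    return "\n\n".join(parts)


def classify_unmatched(description: str) -> str:
    """Only called for transactions the regex net failed to classify."""
    answer = call_llm(build_prompt(description)).strip().lower()
    # Constrain the output: anything outside the label set becomes "unknown",
    # so the finance generalist gets a hint, never an invented category.
    return answer if answer in ALLOWED_LABELS else "unknown"
```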

6 points

I’ll say that so far I’ve been pretty unimpressed by Codeium.

At the very most it has given me a few minutes total of value in the last 4 months.

I’ve gotten some benefit from various generic chat LLMs like ChatGPT, but most of that has been somewhat improved versions of the kind of info I was getting from Stack Exchange threads and the like.

There’s been some mild value in some cases but so far nothing earth shattering or worth a bunch of money.

3 points

I presume it depends on the area and the technologies you are working with. I assume it does better for some popular things that tend to be very verbose and tedious.

My experience, including with a Copilot trial, has been like yours: a bit underwhelming. But I assume others must be getting benefit.

5 points

I have never heard of Codeium but it says it’s free, which may explain why it sucks. Copilot is excellent. Completely life changing, no. That’s not the goal. The goal is to reduce the manual writing of predictable and boring lines of code and it succeeds at that.

3 points

Cool, totally worth burning the planet to the ground for it. Also love that we’re spending all this time and money to solve the extremely important problem of coding taking slightly too long.

Think of all the progress being made!

34 points

Probably because the vast majority of the workforce does not work in tech but has had these clunky, failure-prone tools foisted on them by tech. Companies are inserting AI into everything, so what used to be a problem that could be solved in 5 steps now takes 6 steps, with the new step being “figure out how to bypass the AI to get to the actual human who can fix my problem”.

11 points

I’ve thought for a long time that there are a ton of legitimate business problems out there that could be solved with software. Not with AI. AI isn’t necessary, or even helpful, in most of these situations. The problem is that creating meaningful solutions requires the people who write the checks to actually understand some of these problems. I can count on one hand the number of business executives I’ve met who were actually capable of that.

49 points

The trick is to be the one scamming your management with AI.

“The model is still training…”

“We will solve this <unsolvable problem> with Machine Learning”

“The performance is great on my machine but we still need to optimize it for mobile devices”

Ever since my Fortune 200 employer did a push for AI, I haven’t worked a day in a week.

3 points

That’s nothing. Show them the cloud bill for all this. They’ll probably ask you to slow down.

11 points

Not working and getting paid? Sounds like you just became a high-level manager.

17 points

If used correctly, AI can be helpful and can assist with easy and menial tasks.

22 points

I mean if it’s easy you can probably script it with some other tool.

“I have a list of IDs and need to make them links to our internal tool’s pages” is easy and doesn’t need AI. That’s something a product guy was struggling with, and I solved it in like 30 seconds with a Google Sheet and concatenation.
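To spell it out, the whole “solution” is just string concatenation. In the sheet it was one formula dragged down a column; the same thing in plain Python looks like the sketch below, where the IDs and base URL are made-up placeholders for the internal tool.

```python
# Toy version of the ID-to-link task above. The IDs and base URL are placeholders;
# the real base URL would be whatever the internal tool actually uses.
ids = ["A-1001", "A-1002", "B-2040"]

BASE_URL = "https://internal.example.com/items/"  # hypothetical internal tool URL

for item_id in ids:
    # Plain concatenation: id -> clickable link to that item's page.
    print(f"{item_id}: {BASE_URL}{item_id}")
```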

0 points

Yeah but the idea of AI in that kind of workflow is so that the product guy can actually do it themselves without asking you and in less than 30 mins

13 points

Yeah, but that’s like using an entire gasoline-powered car to play a CD.

A competent product guy should be able to learn some simpler tools like Google Sheets.

14 points

It also helps you get a starting point when you don’t know how to ask a search engine the right question.

But people misinterpret its usefulness and think it can handle complex and context-heavy problems, which most of the time will result in hallucinated crap.

10 points

And are those use cases common and publicized? Because I see it being advertised as “improves productivity”: for a novel tool with myriad uses, I expect those trying to sell it to me to give me some vignettes, not just to tell my boss it’ll improve my productivity. And if I were in management, I’d want to know how it’ll do that beyond just saying “it’ll assist in easy and menial tasks”. Will it be easier than doing them myself? Many tools can improve efficiency on a task, but at a time and energy investment similar to the return. Are those tasks really so common? Will other tools be worse?

1 point

Well yes, but it’s not often I encounter an easy or menial task for which AI is the best solution.

For example, searching documentation is usually more informative than asking a bot trained on said documentation.

3 points

But But But

It’s made my job so much simpler! Obviously it can’t do your whole job and you should never expect it to, but for simple tasks like generating a simple script or setting up an array it BLAH BLAH BLAH, get fucked AI Techbros lmao

