-32 points

For programming it saves insane time.

40 points

Real talk though, I’m seeing more and more of my peers in university ask AI first, then spend time debugging code they don’t understand.

I’ve yet to have ChatGPT or Copilot solve an actual problem for me. Simple, simple things are good, but for any real problem solving I find them more effort than just doing the thing myself.

I asked for instructions on making a KDE widget to pull Environment Canada weather information, and it sent me an API that doesn’t exist and Python packages that don’t exist. By the time I fixed the instructions, very little of the original output remained.
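For what it’s worth, Environment Canada really does publish city weather as plain XML on its public Datamart. Here is a minimal sketch of parsing that format; the feed URL pattern, the site code, and the exact element layout are assumptions from memory, so verify them against the live directory listing before trusting them:

```python
# Minimal sketch: pull the current temperature out of an Environment
# Canada "citypage_weather" XML document. The FEED_URL pattern and the
# element paths are assumptions from memory; check the listing at
# https://dd.weather.gc.ca/citypage_weather/xml/ before relying on them.
import xml.etree.ElementTree as ET

# Assumed URL shape (province directory + site code). For live data you
# would fetch it with urllib.request.urlopen(FEED_URL).read().
FEED_URL = "https://dd.weather.gc.ca/citypage_weather/xml/ON/s0000458_e.xml"

def parse_current_temp(xml_text: str) -> float:
    """Extract currentConditions/temperature from a citypage document."""
    root = ET.fromstring(xml_text)
    node = root.find("./currentConditions/temperature")
    if node is None or node.text is None:
        raise ValueError("no current temperature in feed")
    return float(node.text)

# Canned sample in the assumed feed shape, so the sketch runs offline.
SAMPLE = """\
<siteData>
  <currentConditions>
    <temperature unitType="metric" units="C">-4.3</temperature>
  </currentConditions>
</siteData>"""

print(parse_current_temp(SAMPLE))  # -4.3
```

The point stands either way: the real feed is simple enough that hand-writing this takes less time than debugging a hallucinated API.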

28 points

As a prof, it’s getting a little depressing. I’ll have students who really seem to be getting to grips with the material, nailing their assignments, and then when they’re brought in for in-person labs… yeah, they can barely declare a function, let alone implement a solution to a fairly novel problem. AI has been hugely useful while programming, I won’t deny that! It really does make a lot of the tedious boilerplate less time-intensive to deal with. But holy crap, when the crutch is taken away, people don’t even know how to crawl.

7 points

There seem to be two problems. One is obvious; the other is that such tedious boilerplate exists at all.

I mean, all engineering is divide and conquer. Having to do the same thing over and over across very different projects seems like a fault in the paradigm. When making a GUI with Tcl/Tk you don’t really need that, but with Qt you do.

I’m biased as an ASD+ADHD person who hasn’t become a programmer despite a lot of trying, because there are a lot of things that don’t seem necessary but are huge, and they turn my brain off through both overthinking and boredom.

But still: students don’t know which parts of an assignment are absolutely necessary for the core task and which are incidental but practically required. So they can’t correctly interpret the help that an “AI” (or some anonymous helper) is giving them. And thus, ahem, prepare for labs…

5 points

This semester I took a basic database course, and the prof mentioned that LLMs are useful for basic queries. A few weeks later, we had a closed-book, no-computer paper quiz, and he was like, “You can’t use GPT for everything, guys!”

Turns out a huge chunk of the class had been relying on GPT for everything.
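The “basic queries” at stake really are basic. A sketch with Python’s built-in sqlite3 module shows the level of SQL involved (the table and column names here are invented for illustration):

```python
# An intro-course-level query, runnable with the standard library's
# sqlite3 module. Schema and data are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE enrollment (student TEXT, course TEXT, grade INTEGER)")
conn.executemany(
    "INSERT INTO enrollment VALUES (?, ?, ?)",
    [("ana", "db101", 91), ("ben", "db101", 78), ("ana", "os201", 85)],
)

# A simple aggregate: average grade per course.
rows = conn.execute(
    "SELECT course, AVG(grade) FROM enrollment GROUP BY course ORDER BY course"
).fetchall()
print(rows)  # [('db101', 84.5), ('os201', 85.0)]
```

Which is exactly the kind of thing a closed-book quiz expects you to produce by hand.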

5 points

When AI achieves sentience, it’ll simply have to wait until the last generation of humans who know how to code dies off. No need for machine wars.

10 points

One major problem with the current generation of “AI” seems to be its inability to use relevant information it already has to assess the accuracy of the answers it provides.

Here’s a common scenario I’ve run into: I’m trying to create a complex DAX Measure in Excel. I give ChatGPT the information about the tables I’m working with and the expected Pivot Table column value.

ChatGPT gives me a response in the form of a measure I can use. Except it uses one DAX function in a way that will not work. I point out the error and ChatGPT is like, “Oh, sorry. Yeah, that won’t work because [insert correct reason here].”

I’ll try adjusting my prompt a few more times before finally giving up and just writing the measure myself. It does not have the ability to reason that an answer is incorrect, even though it has all the information needed to know that the answer is incorrect and can even tell you why. It’s a glorified text generator and is definitely not “intelligent”.

It works fine for generating boilerplate code, but that problem was already solved years ago by things like code templates.
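For comparison, here is the kind of template expansion editors have shipped for years, sketched with the standard library’s string.Template; the getter/setter shape is just an invented example:

```python
# Boilerplate generation with the standard library's string.Template,
# the sort of code-template mechanism editors have had for a decade.
# The accessor shape below is an invented example.
from string import Template

ACCESSOR = Template("""\
def get_$name(self):
    return self._$name

def set_$name(self, value):
    self._$name = value
""")

# Expand the template once per field.
code = "\n".join(ACCESSOR.substitute(name=field) for field in ("width", "height"))
print(code)
```

Deterministic, instant, and it never hallucinates a function name.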

1 point

I think part of it is that the scale of the data used to train it means there likely wasn’t much curation of that data. So it might complete the text using a Wikipedia article, a knowledge-forum post, a forum post that derailed the original topic, a post written by someone confidently wrong, a troll post, or two people arguing about the answer. In that last case, you might be able to get it to hash out the entire argument by asking if it’s sure after each response.

Which is also probably how it can correctly respond to “are you sure?” follow ups in the first place, because it was going off some forum post that someone questioned and then there was a follow-up.

It’s more complicated than that because it’s likely not just rehashing any one single conversation in any response, but all of those were a part of its training, and its training is all it knows.

30 points

If you don’t mind a few hundred bugs

16 points

Yup. We passed on a candidate because they didn’t notice the AI making the same mistake twice in a row and still said they trusted the code. Yeah, no…

19 points

AI has absolutely wasted more of my time than it’s saved while programming. Occasionally it’s helpful for doing some repetitive refactor, but for actually solving any novel problems it’s hopeless. It doesn’t help that English is a terrible language for describing programming logic and constraints. That’s why we have programming languages…

The only things AI is competent with are common example problems that are all over the Internet. You may as well just copy-paste from StackOverflow. It might even be more reliable.

3 points

It doesn’t do anything that Emmet didn’t do 10 years ago.

