Please remove this if it’s not allowed.

I see a lot of people here who get mad at AI-generated code, and I’m wondering why. I wrote a couple of bash scripts with the help of ChatGPT, and if anything, I think it’s great.

Now, I obviously didn’t tell it to write the entire script by itself; that would be a horrible idea. Instead, I asked it questions along the way and tested its output before putting it in my scripts.

I am fairly competent at writing programs. I know how and when to use arrays, loops, functions, conditionals, and so on; I just don’t know Bash’s syntax. I could have used any other language I knew, but I chose Bash because it made the most sense: it ships with most Linux distros out of the box, so no one has to install another interpreter or compiler. I don’t like Bash, because of its, dare I say, weird syntax, but it fit my purpose, so I chose it. I had never written anything of this complexity in Bash before, just a bunch of commands on separate lines so that I didn’t have to type them one after another. This script, though, needed quite a few fairly advanced features. I wasn’t motivated to learn Bash; I just wanted to put my idea into action.

I did start with an internet search, but the guides I found were lacking. I couldn’t easily find how to pass values into a function and return a result from it, how to remove a trailing slash from a directory path, how to loop over an array, how to catch errors from the previous command, how to separate the letters and the numbers in a string, and so on.
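
For what it’s worth, here is roughly the kind of thing I ended up with for each of those after asking (the names and values are just illustrative, not my actual script):

```bash
#!/usr/bin/env bash

# Pass values into a function as positional arguments ($1, $2, ...);
# "return" a string by echoing it and capturing the output with $(...).
strip_trailing_slash() {
    local path="$1"
    echo "${path%/}"    # parameter expansion drops one trailing slash
}

dir=$(strip_trailing_slash "/home/user/stuff/")

# Loop over an array.
files=("a.txt" "b.txt" "c.txt")
for f in "${files[@]}"; do
    echo "processing $f"
done

# Catch an error from the previous command via its exit status.
if ! ls "$dir" > /dev/null 2>&1; then
    echo "cannot read $dir" >&2
fi

# Separate the letters and the number from a string like "ab123".
s="ab123"
letters="${s//[0-9]/}"     # delete all digits      -> "ab"
digits="${s//[^0-9]/}"     # delete all non-digits  -> "123"
```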

That is where ChatGPT helped greatly. I would ask it to write these pieces of code as I needed them, then test the code with various inputs to see if it worked as expected. If not, I would tell it which case failed, and it would revise the code before I put it in my scripts.
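
For example, with the trailing-slash function above, the inputs I would throw at it looked something like this (again illustrative):

```bash
strip_trailing_slash "/home/user/stuff/"   # -> /home/user/stuff
strip_trailing_slash "/home/user/stuff"    # -> unchanged
strip_trailing_slash "/"                   # -> empty string, an edge case
                                           #    a first draft can easily miss
```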

Thanks to ChatGPT, someone with zero knowledge of Bash can quickly and easily write fairly advanced Bash. I don’t think it would have been anywhere near this quick if I had done it the old-fashioned way; I would have gotten there eventually, but it would have taken far too long. Thanks to ChatGPT, I could just write it all quickly and move on. If I ever want to learn Bash and am motivated, I will certainly take the time to learn it properly.

What do you think? What negative experiences with AI chatbots made you hate them?

1 point

I have a coworker who is essentially building a custom program in Sheets using Apps Script, and has been using ChatGPT/Gemini the whole way.

While this person has a basic grasp of the fundamentals, there’s a lot of missing knowledge that the bots fill in. After enough fiddling it will eventually spit out usable code that works how it’s supposed to, but honestly it ends up taking significantly longer to guide the bot to just the right solution for a given problem. Not to mention the code is a mess: even though it works, there’s no real consistency, since it was built across separate prompts.

I’m confident that in this case, and likely in plenty of other cases like it, the time it takes to learn how to ask the bot all the right questions would be better spent just reading the documentation for whatever language is being used. At that point, it might be worth using the bot to spit out simple code that can be easily debugged.

Ultimately, it just feels like you’re offloading complexity from one layer to the next, and quickly acquiring tech debt in the process.

6 points

We built a Durable Task workflow engine to manage infrastructure, and we asked a new hire to add a small feature to it.

I checked in on them later, and they said they were stuck on one aspect of the change.

I could tell the code came from ChatGPT. I asked, “You wrote this with ChatGPT, didn’t you?” and they asked how I could tell.

I explained that ChatGPT doesn’t have the full context and will send you off on tangents, like it had here.

I gave them the docs for the engine and for the integration point and said, “Try using only these, and ask me questions if you’re stuck for more than 40 minutes.”

They went on to become a very strong contributor, and they no longer use ChatGPT or Copilot.

I’ve tried it myself, and it gives me wrong answers 90% of the time. It could be useful, though: if ChatGPT were changed to find and link the docs it considers relevant, I would love it, but it never does that, even when asked.

2 points

Phind is better about linking sources. I’ve found that generated code sometimes points me in the right direction, but other times it leads me down a rabbit hole of obsolete syntax or other problems.

Ironically, if you’re already familiar with the code, then you can easily tell where the LLM went wrong and adapt its generated code.

But I don’t use it much, because it’s almost more trouble than it’s worth.

8 points

Personally, I’ve found AI is wrong about 80% of the time for questions I ask it.

It’s essentially just a search engine with Cleverbot bolted on. If the problem you’re dealing with is esoteric, and therefore not easily searchable, AI won’t fare any better.

I think AI would also be a lot more useful if it gave a percentage indicating how confident it is in each answer. It’s worse than useless when it constantly presents wrong information as though it were correct.

6 points

I use AI, but whenever I do, I have to modify the output, whether because it gives me errors, is slow, doesn’t fit my current implementation, or gets off on the wrong foot.

22 points
  • issues with the sourcing of model training data
  • businesses sending their whole codebase to a third party (Copilot, etc.) instead of using local models
  • the time gained is not that substantial in most cases, since the actual “writing code” part is not what takes the most time; thinking about the code and checking it is
  • “chatting” in natural language to describe something that has a precise spec is less efficient than just writing the code for most tasks, as long as you’re half-competent. We’ve known that for as long as customer/developer meetings have existed.
  • the developer still has to be competent enough to review the changes/output. In a way, “peer reviewing” becomes mandatory; it’s slow, it can be tedious, and generated code really needs to be double-checked at every corner (talking from experience here: even a generated one-liner can have issues; see the sketch after this list)
  • some businesses decide that LLM output is “good enough” and fire or reassign the people who could actually do that review, leading to more issues down the line
  • actual debugging of non-trivial problems sends me off in a lot of directions; getting useful output is unreliable at best
  • building new things will sometimes confuse LLMs, making them a waste of time at best and a source of even worse code at worst
  • using a code chatbot for common, menial tasks is mostly pointless, since those tasks have already been done and effectively “optimized out” into libraries and reusable code. At best you end up pulling some of that into your own codebase, making it worse to maintain in the long term
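
To make the one-liner point concrete, here’s a made-up but typical shell example (not from an actual session):

```bash
# A generated one-liner that passes a quick test but breaks on file
# names containing spaces, because the unquoted $(ls ...) word-splits:
for f in $(ls "$dir"); do echo "would process $f"; done

# The boring, correct version: glob and quote instead of parsing ls.
for f in "$dir"/*; do [ -e "$f" ] && echo "would process $f"; done
```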

Those are the downsides I can think of off the top of my head, having used AI coding assistants (mostly local solutions, for privacy reasons). There are upsides too:

  • sometimes it does produce useful output, where I only have to edit a few parts to make it work
  • local autocomplete is sometimes almost as useful as the regular contextual autocomplete
  • having the chatbot turn short code into a longer natural-language explanation can sometimes serve as a rubber duck when debugging

Note the “sometimes”. I don’t have actual numbers, because tracking that would be hell, but the times it does something genuinely impressive are rare enough that I still bother my coworkers about it when it happens. For most of the downsides, it’s not even a matter of the tool getting better; it’s the usefulness itself that’s uncertain. Meanwhile, it comes at a large cost (money, privacy in some cases, time, and apparently ecological costs too) that is not at all outweighed by the rare “gains”.

0 points

A lot of your issues are efficiency-related, which I think can realistically be solved once AI development cycles take hold. If these tools got better all around, to whatever standard you think is sufficiently useful, would you then consider them useful? The other side of it is that if AI can reach that level of competence in coding, it can most likely get just as competent in a variety of other domains too.

