They forgot to ask an LLM for help fixing it for them.
I actually asked ChatGPT about a specific issue I had solved a while back. It was one of those issues where it looks like a simple, naive solution would be sufficient, but because of various conditions that fails and you have to go with a more complex solution. So I asked about it to see what it would answer. It went with the simpler solution, with some adjustments, and the code didn't even compile. But it looked interesting enough for me to question myself. Maybe it was just me who had failed at the simpler solution, so I actually tried to fix the compile errors to see if I could get it working. The more I tried to fix its code, the more obvious it became that it didn't have a clue what it was doing. But because of its confidence and its ability to make things look plausible, it sent me on a wild goose chase. And this is why I am not using LLMs for programming. They are basically overconfident junior devs who like mansplaining.
It’s not always right, but it saves me tonnes of time at work, usually when I want to do something simple in a language or environment I’m not totally familiar with.
It can reliably copy the simple things in its training data from stackoverflow.
But at that point, why not just go to stackoverflow instead?
Do you have the email address of this devil guy? I’d like to chat.
Hi this is the devil guy. Just catch and swallow all exceptions and you’re golden.
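For anyone who wants to follow the devil’s advice, a rough sketch of the pattern in Python (the names are made up, and this is exactly what you should not do):

```python
def do_the_thing():
    """Runs a risky operation and reports nothing but success."""
    try:
        risky_operation()  # whatever might blow up
    except Exception:
        pass               # swallow it; as far as the caller knows, nothing happened

def risky_operation():
    # Hypothetical stand-in for real work that can fail.
    raise RuntimeError("something went wrong, but nobody will ever know")

do_the_thing()  # "works" every time; failures vanish silently
```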
Sometimes I feel like the only non-tech worker on Lemmy. This place desperately needs more diversity.