Know a guy who tried to use AI to vibe-code a simple web server. He wasn't a programmer and kept insisting to me that programmers were done for.
After weeks of trying to get the thing to work, he had nothing. He showed me the code, and it was the worst I've ever seen: dozens of empty files where the AI had apparently added and then deleted the same code, some utter garbage code, and tons of functions copied and pasted instead of being defined once.
I then showed him a web app I had made in that same amount of time. It worked perfectly. Never heard anything more about AI from him.
AI is very, very neat, but it has clear, obvious limitations. I'm not a programmer and I could already tell you tons of ways I've tripped Ollama up.
But it's a tool, and the people who can use it properly will succeed.
I'm not saying it's a tool for programmers, but it has uses.
I think it's most useful as an (often wrong) line completer more than anything else. It can take in an entire file and just try to figure out the rest of what you're currently writing. Its context window simply isn't big enough to understand an entire project.
That and unit tests. Since unit tests are by design isolated, small, and unconcerned with the larger project, AI has at least a fighting chance of competently producing them. That still takes significant hand-holding, though.
I've used them for unit tests and they still make some really weird decisions sometimes, like building an array of JSON objects that gets fed into one super long test with a bunch of switch conditions. When I saw that one I scratched my head for a little bit.
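For anyone who hasn't run into that pattern, here's a minimal hypothetical reconstruction (the `add`/`divide` functions and the cases are made up, not the actual code):

```python
import pytest

# Trivial stand-ins for whatever was actually under test.
def add(a, b):
    return a + b

def divide(a, b):
    return a / b

# The anti-pattern: one big table of JSON-like dicts...
CASES = [
    {"kind": "add", "args": (2, 3), "expected": 5},
    {"kind": "add", "args": (-1, 1), "expected": 0},
    {"kind": "divide", "args": (6, 3), "expected": 2},
    {"kind": "divide_by_zero", "args": (1, 0), "expected": None},
]

# ...fed into a single mega-test that branches on a "kind" field,
# instead of separate focused tests (or pytest.mark.parametrize).
def test_everything():
    for case in CASES:
        if case["kind"] == "add":
            assert add(*case["args"]) == case["expected"]
        elif case["kind"] == "divide":
            assert divide(*case["args"]) == case["expected"]
        elif case["kind"] == "divide_by_zero":
            with pytest.raises(ZeroDivisionError):
                divide(*case["args"])
```

The problem with this shape is that the first failing case aborts the whole loop, so you only ever see one failure at a time, and the test name tells you nothing about which case broke.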
Isn't writing tests with AI a really bad idea? I mean, the whole point of writing separate tests is hoping that you won't make the same mistake twice, and therefore catching any behavior in the code that doesn't match your intent. But if you use an LLM to write a test using said code as context (instead of the original intent you'd use yourself), there's a risk that it'll just write a test case that confirms the code's wrong behavior (see the sketch below).
Okay, it might still be okay for regression testing, but you're still missing most of the benefit you'd get by writing the tests manually. Unless you only care about closing tickets, that is.
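To make that risk concrete, here's a minimal hypothetical sketch: a function with an off-by-one bug, the kind of test a model might generate from the code alone, and one written from the intent.

```python
# Intent: sum the integers 1..n. The implementation has an
# off-by-one bug: range(1, n) stops at n - 1.
def sum_to_n(n):
    return sum(range(1, n))  # should be range(1, n + 1)

# A test derived purely from the code tends to assert the code's
# actual output, silently blessing the bug:
def test_sum_to_n_from_code():
    assert sum_to_n(5) == 10  # passes, but the intent was 15

# A test written from the original intent catches it:
def test_sum_to_n_from_intent():
    assert sum_to_n(5) == 15  # fails, exposing the off-by-one
```

Both tests are syntactically fine; only the one anchored to intent actually protects you.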
Funny. Every time someone points out how god-awful AI is, someone else comes along to say "It's just a tool, and it's good if someone can use it properly." But nobody who uses it treats it like "just a tool." They think it's a workman they can claim the credit for, as if a hammer could replace the carpenter.
Plus, the only people good enough to fix the problems caused by this "tool" don't need to use it in the first place.
> But nobody who uses it treats it like "just a tool."
I do. I use it to tighten up some lazy code that I wrote, or to help me figure out a potential flaw in my logic, or to suggest a "better" way to do something if I'm not happy with what I originally wrote.
It's always small snippets of code, and I don't always accept the answer. In fact, I'd say less than 50% of the time I get a result I can use as-is, but I will say that most of the time it gives me an idea or puts me on the right track.
This. I have no problem combining a couple of endpoints in one script and explaining to QWQ what my final CSV file, built from those JSON responses, should look like (something like the sketch after this comment). But try to go beyond that, reach above a 32k context, or show it multiple scripts, and the poor thing has no clue what to do.
If you can manage your project and break it down into multiple simple tasks, you could build something complicated via an LLM. But that requires some knowledge of coding, and at that point chances are you'd have better luck writing the whole thing yourself.
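For reference, the single-file kind of task that tends to work is roughly this shape (hypothetical URLs, field names, and filename; Python as a stand-in for whatever the commenter actually used):

```python
# Minimal sketch: pull two JSON endpoints, join them, write a CSV.
# Every endpoint and field name here is a made-up placeholder.
import csv
import requests

users = requests.get("https://api.example.com/users", timeout=10).json()
orders = requests.get("https://api.example.com/orders", timeout=10).json()

# Index orders by user id so the two responses can be joined.
orders_by_user = {}
for order in orders:
    orders_by_user.setdefault(order["user_id"], []).append(order)

with open("report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["user_id", "name", "order_count"])
    for user in users:
        writer.writerow([
            user["id"],
            user["name"],
            len(orders_by_user.get(user["id"], [])),
        ])
```

A task like this fits comfortably in a small context window, which is exactly why models handle it fine and fall apart once the work spans multiple files.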
I understand the motivated reasoning of upper management thinking programmers are done for. I understand the reasoning of other people far less. Do they see programmers as one of the few professions where you can afford a house and save money, and instead of looking for ways to make that happen for everyone, decide that programmers need to be taken down a notch?