That’s not universal. For instance, last week I got help writing a bash script. But I hope they’re helping lots of you in lots of ways.
I TA for an electrical engineering class. It’s amusing to look at students’ code these days. Everything is needlessly wrapped up in 3-line functions, students keep trying to do in 25 lines what can be done in 2 (something like the made-up sketch below), and it all becomes impossible to debug.
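To give a sense of the pattern (this is a made-up illustration, not an actual submission, and all the names are invented):

    #include <stdio.h>

    /* Made-up illustration: averaging the first two samples in a buffer. */

    /* The over-wrapped style: a stack of trivial one-line helpers. */
    static int get_first(const int *buf)  { return buf[0]; }
    static int get_second(const int *buf) { return buf[1]; }
    static int add_two(int a, int b)      { return a + b; }
    static int halve(int x)               { return x / 2; }

    static int average_wrapped(const int *buf)
    {
        return halve(add_two(get_first(buf), get_second(buf)));
    }

    /* The same thing, written directly. */
    static int average_direct(const int *buf)
    {
        return (buf[0] + buf[1]) / 2;
    }

    int main(void)
    {
        int samples[2] = { 10, 20 };
        printf("%d %d\n", average_wrapped(samples), average_direct(samples));
        return 0;
    }

Same output either way, but when the wrapped version breaks, there are four extra places to look and no idea which one matters.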
When their code inevitably breaks, they ask me to tell them why it isn’t working. My response is to ask them what it’s meant to be doing, but they can’t answer, because they don’t know.
The sad thing is we try to make it easy on them. Their assignment specs are filled with tips, tricks, hints, warnings, and even pseudo-code for the more confusing algorithms. But these days, students would rather prompt ChatGPT than read docs.
I’ve never seen ChatGPT benefit a student. Either it misunderstands and just confuses the student with nonsense code and functions, or, in rare cases, it does its job too well and the student doesn’t end up learning anything. The department has collectively decided to ban it and all other genAI chatbots starting next semester.
How do you know it doesn’t benefit a student? If their work is exceptional, do you assume they didn’t use an LLM? Or do you not see any good code anymore?
It replaces the work required to research and think about the problem. You know, the part where you’d normally learn and understand the issue at hand.
I mean, they don’t generally keep their use of ChatGPT a secret. Not for now, anyway. Meanwhile, the people who do well in the class write their code in a way that clearly shows they read the documentation and made use of the headers we’ve written for them.
In the end, does it matter? This isn’t a CS major, where you can just BS your way through all your classes and get a well-paying career doing nothing but writing endpoints for some JS framework. We’re trying to prepare them for when they’re writing their own architecture, their own compilers, their own OSes; things that have zero docs for ChatGPT to chew up and spit out, because they literally don’t exist yet.
Oh, interesting that they wouldn’t need or want to hide that. When I use it, I interpret every line of code and decide if it’s appropriate. If that would be too time-consuming, then I wouldn’t use an LLM. And I would never deviate from the assignment criteria or the material covered by deferring to some obscure methodology an LLM happens to use.
So I personally don’t think it’s been bad for my education, but then, I did complete a lot of my education before LLMs were a thing.
Don’t you guys test the students in ways that punish that laziness? I know you’re just a TA, but do you think the class could be better about that? Some classes I’ve taken were terrible quality and all but encouraged laziness, while others were perfectly capable of cutting through the bullshit.
I don’t understand why it would be acceptable to submit generated code in the first place. I’d say it’s functionally asking others to complete your assignment. Sampling code excessively and without attribution is plagiarism.
And seconding that concern about people not even learning how code works. This was an issue even before ChatGPT, when people would by default look up Stack Overflow snippets or existing algorithms instead of thinking and training their minds to solve actual real problems, but now it’s probably much more widespread as an easier way out. If the school is able to do a code exam in an offline environment, even with manual docs available, it should weed out the ones who didn’t learn pretty quickly.
A friend of mine works in a similar position and we discussed it a bit.
Since AI is a thing and we have some newer, younger, motivated profs, they actually teach and discuss the use of AI in class, which is pretty important.
In my opinion we will not get rid of AI, just like the internet.
And we have no metric to determine whether AI was used or not.
So the only way to deal with the situation is to accept that AI exists and gets used, and to create different kinds of tasks.
For example, make the students explain their code, and make it clear up front that there will be questions. That way they have to understand the code; whether they used AI or not doesn’t matter.
And create tasks that require human interaction, like collaborative work. AI can’t do that for them, and they still have to structure the project themselves.
This is my big concern at my day job. Management keeps pushing AI chat on my younger co-workers, but they can’t tell when it’s hallucinating. And since there’s no feedback loop (our chatbot doesn’t learn from us as we type), it just keeps spewing the same lies.
Yeah, I’ve been dealing with that a bunch lately too. I’ve started pushing them towards the documentation directly (though to be fair, sometimes that’s ass or nearly nonexistent), with some success.
There’s no need to ask GPT for ready-to-use code; it doesn’t work well for that. But it’s much better at explaining someone else’s complex code. Students should ask it for short hints on the specific spots they don’t understand, or about very small pieces of the code; used that way, it brings real benefits.