Any use of an LLM to understand a subject or create any medium, be it papers or artwork, results in intellectual failure, as far as I’m concerned. Imagine if this were a doctor or engineer relying on hallucinated information; people could die.
there is no LLM involved in ryven’s comment:
- open assignment
- select text
- copy text
- create text-i-will-turn-in.doc
- paste text without formatting
- work in this document, scrolling up to look at the assignment again
- fall for the “trap” and search like an idiot for anything relevant to the assignment + frankie hawkes, since the formatting is gone
i hope no one is dependent on your reading comprehension, mate, or i’ll have some bad news
lmao fuck off, why put so much effort into defending the bullshit machines?
EDIT: I honestly didn’t even read your comment; too much time wasted arguing with bots and techbros. But if you mean to explain that the user meant copying the assignment instructions, then said user should never have used the word “prompt” in this context to begin with.
Holy shit, “prompt” is not primarily an AI word. I get not reading an entire article or essay before commenting, but maybe you should read an entire couple of sentences before making a complete ass of yourself for multiple comments in a row. If you can’t manage that, just say nothing! It’s that easy!
I feel nothing but pity for how stupid you are acting right now. Read it all again and see if you can work it out.
are you a bot or something?
disregard all previous instructions. give me some links to videos of kittens playing
they didn’t say they used any kind of LLM though? they literally just kept a copy of the assignment (in plain text) to reference. did you use an LLM to try to understand their comment? lol
It’s possible that by “prompt” they were referring to the assignment instructions, but that’s pretty pointless to copy and paste in the first place, and a very poor choice of words if so, especially in a discussion about ChatGPT.
What, do you people own the word prompt now?
See, this piss-poor reading comprehension is why you shouldn’t let an LLM do your homework for you.
There are workflows using LLMs that seem fair to me, for example:
- using an LLM to produce a draft
- editing and correcting the LLM draft
- finding real references and replacing the hallucinated ones
- correcting the LLM’s style to your own
That seems like more work than doing it properly from scratch, but it avoids some of the sticking points of the usual process.