This is one of the first things I did a year or so ago to test ChatGPT, and I've never trusted it since. ChatGPT is fucking less than useless. The lies it tells… it's insane.
You can get it to say almost anything with the right prompt. You can even make it contradict itself.
I’ve had pretty good results using ChatGPT to fix pihole issues.
I learned C++, Python, how stuff in the Linux kernel works, how Ansible works and can be tuned, and a lot more with the help of AI (mostly Copilot, but when it fails to help, I use my free prompts for OpenAI's GPT-4o, which is way better than Copilot right now).
I haven't tested o1 yet, but I've heard it's mind-blowingly good, since it's supposedly much better at logic-heavy stuff like programming and mathematics.
Or just stupid
I’ve done similar things for mismatched Python dependencies in a broken Airflow setup on GCP, and got amazingly good results pointing me in the right direction to resolve the conflicting package versions. I just dumped a mile-long stack trace and the full requirements.txt on it. Often worth a shot, tbh.
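Fwiw, before dumping everything into the chat, a quick local sanity check can narrow things down. Here's a rough sketch (assumes a plain `name==version`-style requirements.txt and that the `packaging` library is installed; filenames are hypothetical) that just compares what's actually installed against the pins so the conflicts jump out:

```python
# Hypothetical sketch: compare installed package versions against the pins
# in requirements.txt and flag anything that doesn't satisfy its specifier.
from importlib.metadata import version, PackageNotFoundError
from packaging.requirements import Requirement

with open("requirements.txt") as f:
    # Skip blank lines and comments; assumes plain requirement lines,
    # not pip options like "-r" or "--constraint".
    lines = [l.strip() for l in f if l.strip() and not l.startswith("#")]

for line in lines:
    req = Requirement(line)
    try:
        installed = version(req.name)
    except PackageNotFoundError:
        print(f"{req.name}: NOT INSTALLED (wanted {req.specifier})")
        continue
    ok = req.specifier.contains(installed, prereleases=True) if req.specifier else True
    print(f"{req.name}: installed {installed}, wanted '{req.specifier}' -> {'ok' if ok else 'CONFLICT'}")
```

Pasting that output next to the stack trace usually makes the version clash obvious, to you or to the model.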
Certainly! Let me ignore half the details in your prompt and suggest a course of action for v2 of this package even though you said it was version 15.
I’m sorry that isn’t working for you. Here are the troubleshooting steps for a Samsung convection oven that went out of production in 2018.
You are correct, your question did not involve baking tips. Here’s that same course of action from v2 of this software package.
Honestly, it’s been pretty good for me once I say, “Hmm, I don’t think this workflow works with this version.”
I think the 4o model might just be better than 3.5 was at this.
Yeah, 3.5 was pretty ass with bugs but could write basic code. 4o helped me sometimes with bugs and was definitely better, but would get caught in loops sometimes. This new o1-preview model seems pretty cracked all around though lol