Not an advertisement, I swear, but I honestly love language models. I specifically use ChatGPT and Gemini ALL the time.
I’m someone who loves research and learning. I always have questions on all sorts of things. Before GPT I would be googling things for hours each day, visiting forums, watching YouTube videos, etc.
Now with GPT I can ask so many questions about things I am curious about, and even ask follow-up questions or ask for more in-depth explanations.
I think I am also a bit ADHD, so my brain is always jumping around different topics, which makes me curious about specific things. My latest is insects, specifically wasps.
I absolutely hate bugs, and wasps are the worst. But I am now learning more and more how important a lot of bugs actually are to the ecosystem. It’s really a great way to learn and engage with topics when you can ask follow-up questions or ask for more details on specific aspects.
TLDR: GPT has replaced 90% of the research I used to do on Google.
Important Note: Please be aware that language models can be inaccurate and prone to mistakes, so always verify the data against other sources if you need accurate information and not just general knowledge.
I like using AI to write letters, wishes, etc. But double-check what it writes; it’s not really accurate for long texts and chats.
You can use self-hosted AI like others have mentioned, but you can also use DuckDuckGo’s proxy service to access some cloud AI providers. It also works over Tor Browser in the “Safer” security mode for more anonymous access.
Be careful about all the lies it tells you; every time I double-check something, there are made-up things. It is only useful for introductory keywords, but it starts to fill in the gaps randomly when it doesn’t know the topic.
Thankfully, for general knowledge questions it gets things correct. A lot of what I ask or am curious about is general knowledge, so it works really well for me.
It really doesn’t get general knowledge correct.
AI will often confuse itself because the answers are probabilistic. So it is often right, but just as often it is hallucinating random BS, because AI doesn’t really know shit. It just detects patterns.
I just saw a post on Reddit about GPT claiming Argentina is the second most populated country in South America (which is false, it’s Colombia), but since Argentina is usually second place in most lists in the region, GPT tends to place Argentina in there anyway.
My problem used to be that I started like that, and then I wanted to ask a bit more, and then another bit more, and that’s when things got messy. At some point I usually ended up annoyed that half of the conversation was based on a lie. Now I barely open the chat because of that.
Before GPT I would be googling things for hours each day, visiting forums, watching YouTube videos, etc.
And then…
Important Note: Please be aware that language models can be inaccurate and prone to mistakes, so always verify the data against other sources if you need accurate information and not just general knowledge.
So, basically the same but with an extra step that might be bullshit.
Cool.
I’m on the other side, but what LLMs did is teach me to cherish unique content creators. Just a week ago I had a very specific question, and an Indian guy on YT put in the effort to draw diagrams explaining it in his video. His English was bad, to be honest, but I appreciated how he fought with it and did his best to explain the subject at hand despite all the challenges he faced. I came to enjoy people who do stuff that can’t be averaged by the math machine, and I came to praise weirdos like myself even more. Being unique, being yourself, being deviant is what makes you more human than you are; maybe that’s what can be called your soul.