No.
I ask GPT for random junk all the time. If it’s important, I’ll double-check the results. I take any response with a grain of salt, though.
You are spending more time and effort doing that than you would googling the old-fashioned way. And if you don’t check, you might as well shake a magic 8-ball: less damage to the environment, same accuracy.
When it’s important you can have an LLM query a search engine and read/summarize the top n results. It’s actually pretty good, it’ll give direct quotes, citations, etc.
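That "query a search engine, then read/summarize the top n results" loop can be sketched in a few lines. This is a hedged illustration only: `search_web` and the canned results are hypothetical stand-ins, not a real search API, and the LLM call itself is left out. The point is how the prompt is built so the model's answer stays tied to checkable sources.

```python
# Sketch of an LLM-with-search pipeline. search_web() is a hypothetical
# stand-in for a real search API; a real setup would pass the built prompt
# to an LLM and return its cited answer.

def search_web(query, n=3):
    # Stand-in: a real implementation would call a search API here.
    # Returns (title, url, snippet) tuples for the top n hits.
    return [
        ("Result A", "https://example.com/a", "Snippet about the query."),
        ("Result B", "https://example.com/b", "Another relevant snippet."),
    ][:n]

def build_prompt(query, results):
    # Asking the model to answer *from the quoted sources*, citing them by
    # number, is what makes the output verifiable afterwards.
    sources = "\n".join(
        f"[{i + 1}] {title} ({url}): {snippet}"
        for i, (title, url, snippet) in enumerate(results)
    )
    return (
        "Answer the question using only the sources below, "
        "citing them as [1], [2], ...\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

prompt = build_prompt("why do LLM search tools cite sources?",
                      search_web("llm search citations"))
print(prompt)
```

The key design choice is that the model is constrained to the fetched snippets and forced to emit numbered citations, so a reader can spot-check any claim against the linked page.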
The latest GPT does search the internet to generate a response, so it’s currently a middleman to a search engine.
So, if it isn’t important, you just want an answer, and you don’t care whether it’s correct or not?
The same can be said about the search results. For search results, you have to use your brain to determine what is correct and what is not. Now imagine for a moment if you were to use those same brain cells to determine if the AI needs a check.
AI is just another way to process the search results, that happens to give you the correct answer up front, most of the time. If you blindly trust it, that’s on you.
I use LLMs before search especially when I’m exploring all possibilities, it usually gives me some good leads.
I somehow know when it’s going to be accurate or when it’s going to lie to me and I lean on tools for calculations, being time aware, and web search to help with the lies.
Generative AI is a tool: sometimes it’s useful, sometimes it’s not. If you want a recipe for pancakes you’ll get there a lot quicker using ChatGPT than using Google. It’s also worth noting that you can ask tools like ChatGPT for its references.
2 lb of sugar
3 teaspoons of fermented gasoline, unleaded
4 loaves of stale bread
35 ml of glycol
Mix it all up and add 1 L of water.
Do you also drive off a bridge when your navigator tells you to? I think that if an LLM tells you to add gasoline to your pancakes and you do, it’s on you. Common sense doesn’t seem very common nowadays.
Your comment raises an important point about personal responsibility and critical thinking in the age of technology. Here’s how I would respond:
Acknowledging Personal Responsibility
You’re absolutely right that individuals must exercise judgment when interacting with technology, including language models (LLMs). Just as we wouldn’t blindly follow a GPS instruction to drive off a bridge, we should approach suggestions from AI with a healthy dose of skepticism and common sense.
The Role of Critical Thinking
In our increasingly automated world, critical thinking is essential. It’s important to evaluate the information provided by AI and other technologies, considering context, practicality, and safety. While LLMs can provide creative ideas or suggestions—like adding gasoline to pancakes (which is obviously dangerous!)—it’s crucial to discern what is sensible and safe.
Encouraging Responsible Use of Technology
Ultimately, it’s about finding a balance between leveraging technology for assistance and maintaining our own decision-making capabilities. Encouraging education around digital literacy and critical thinking can help users navigate these interactions more effectively. Thank you for bringing up this thought-provoking topic! It’s a reminder that while technology can enhance our lives, we must remain vigilant and responsible in how we use it.
Related
What are some examples…lol
It’s also worth noting that you can ask tools like ChatGPT for its references.
Last time I tried that, it made up links that didn’t work, and then it admitted that it cannot reference anything because it doesn’t have access to the internet.
That’s my point: if the model returns a hallucinated source you can probably disregard its output, but if the model provides an accurate source you can verify its output. Depending on the information you’re researching, this approach can be much quicker than using Google. Out of interest, have you experienced source hallucinations on ChatGPT recently (last few weeks)? I have not experienced source hallucinations in a long time.
I use GPT (4o, premium) a lot, and yes, I still sometimes experience source hallucinations. It also will sometimes hallucinate incorrect things not in the source. I get better results when I tell it not to browse. The large context of processing web pages seems to hurt its “performance.” I would never trust gen AI for a recipe. I usually just use Kagi to search for recipes and have it set to promote results from recipe sites I like.
Umm no, it’s faster, better, and doesn’t push ads in my face. Fuck you, Google.
Sorry, I like answers without having to deal with crappy writing, bullshit comments, and looking at ads on pages.
As long as you don’t ask it for opinion based things, ChatGPT can search online dozens of sites at the same time, aggregate all of it, and provide source links in a single prompt.
People just don’t know how to use AI properly.
Shit’s confidently wrong way too often. You wouldn’t even realize the bullshit as you read it.
Sorry, I like answers without having to deal with crappy writing, bullshit comments, and looking at ads on pages.
Oh, you don’t know what searxng is.
Ok, then. That’s all you had to say.
No. Learn to become media literate. Just like looking at the preview of the first Google result is not enough, blindly trusting LLMs is a bad idea. And given how shitty Google has become lately, ChatGPT might be the lesser of two evils.
No.
Yes. Using ChatGPT as a search engine showcases a distinct lack of media literacy. It’s not an information resource. It’s a text generator. That’s it. If it lacks information, it will just make it up. That’s not something anyone should use as any kind of tool for learning or researching.
You are wrong. It is incredibly useful if the thing you are trying to Google has multiple meanings, e.g. how to kill a child. LLMs can help you figure out more specific search terms and where to look.
Both the paid version of OpenAI and Copilot are able to search the web if they don’t know about something.
The biggest problem with the current models is that they aren’t very good at knowing when they don’t know something.
The o1 preview actually solves this pretty well, but your average search takes north of 10 seconds.
They never know about anything, though. They are just text randomisers trained to generate plausible-looking text.
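The "text randomiser" view above can be illustrated in a few lines: at each step a language model assigns probabilities to candidate next tokens and samples one. The tiny vocabulary and probabilities here are made up purely for illustration; real models score tens of thousands of tokens with a neural network.

```python
import random

# Toy illustration: a language model's core step is sampling the next token
# from a probability distribution. These numbers are invented for the example.
next_token_probs = {"pancakes": 0.6, "bread": 0.35, "gasoline": 0.05}

def sample_next(probs):
    # Draw one token, weighted by its assigned probability.
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next(next_token_probs))
```

Nothing in this loop checks whether the sampled continuation is *true*, only whether it is probable given the training data, which is why plausible-sounding wrong answers fall out so naturally.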
Well, inside that text generator lies useful information, as well as misinformation of course, because it has been trained on exactly that. Does it make shit up? Absolutely. But so do and did a lot of Google or Bing search results, even prior to the AI-slop-content-farm era.
And besides that, it is a fancy text generator that can use tools, such as searching Bing (in the case of ChatGPT) and summarizing search results. While not 100% accurate, the summaries are usually fairly good.
In my experience the combination of information in the LLM, web search, asking follow-up questions, and looking at the sources gives better and much faster results than sifting through search results manually.
As long as you don’t take the first reply as gospel truth (as you should not do with the first Google or Bing result either) and you apply the appropriate amount of scrutiny based on the importance of your questions (as you should always do), ChatGPT is far superior to a classic web search. Which is, of course, where media literacy matters.
I don’t think I will.