Wouldn’t it cut down on search queries (and thus save resources) if I could search for “this is my phrase” rather than rawdogging it as an unbound series of words, each of which seems to be pulling up results unconnected to the other words in the phrase?
There are only 2 reasons I can think of why a website’s search engine lacks this incredibly basic functionality:
- The site wants you to spend more time there, seeing more ads and padding out their engagement stats.
- They’re just too stupid to know that these sorts of bare-bones search engines are close to useless, or they just don’t think it’s worth the effort. Apathetic incompetence, basically.
Is there a sound financial or programmatic reason for running a search engine which has all the intelligence of a turnip?
Cheers!
EDIT: I should have been a bit more specific: I’m mainly talking about search engines within websites (rather than DDG or Google). One good example is BitTorrent sites; they rarely let you define exact phrases. Most shopping websites, even the behemoth Amazon, don’t seem to respect quotation marks around phrases.
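To be clear about what I’m asking for: it isn’t rocket science. Here’s a toy Python sketch (catalogue and names entirely made up, just to illustrate) of the difference between matching loose words and respecting a quoted phrase:

```python
import re

def parse_query(query):
    """Split a raw query into exact phrases (quoted) and loose terms."""
    phrases = re.findall(r'"([^"]+)"', query)   # text inside double quotes
    rest = re.sub(r'"[^"]+"', " ", query)       # drop the quoted parts
    return phrases, rest.split()                # loose words left over

def matches(title, phrases, terms):
    """Require every quoted phrase verbatim, plus every loose word somewhere."""
    t = title.lower()
    return (all(p.lower() in t for p in phrases)
            and all(w.lower() in t for w in terms))

# Made-up catalogue, just for illustration.
titles = [
    "Apple Pie Recipe Book",
    "Recipe: pie with apple filling",
    "Pumpkin pie recipe",
]

phrases, terms = parse_query('"apple pie" recipe')
print([t for t in titles if matches(t, phrases, terms)])
# -> ['Apple Pie Recipe Book']  (the second title has all the words, but not the phrase)
```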
It’s because websites interpret those characters differently: code and ordinary text have to share the same set of keyboard characters. Essentially “>” gets used as a comparison operator in programming languages, which means it’s used as an instruction that tells the computer what to do. When we need to display the symbol itself, we write it as an “escaped character” (in HTML that’s &gt;), which basically means “treat it as the symbol, not the instruction.” Often search engines will use a very powerful tool called a regular expression, which for phone numbers looks like this: ^(\d{3})\s\d{3}-\d{4}
And each character represents something: ^ means “start of the string”, \d means “a digit”, and {3} means “three of whatever comes right before me”. Breaking apart the search parameters is pretty complex and it needs to happen FAST, so at a certain point the developers just throw away things that can be a security concern, like the special characters &^|`"'*, especially because they can be used to maliciously attack the search engine.
For other characters: https://www.w3schools.com/html/html_entities.asp
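To make that concrete, here’s a rough Python sketch of both ideas, the phone-number regex and the blunt “throw the risky characters away” step; it’s purely illustrative, not what any particular site actually runs:

```python
import re

# The phone-number pattern from above: three digits, a space, three digits,
# a dash, four digits (e.g. "555 867-5309").
PHONE = re.compile(r"^(\d{3})\s\d{3}-\d{4}$")
print(bool(PHONE.match("555 867-5309")))   # True
print(bool(PHONE.match("555-867-5309")))   # False: no space after the area code

# The blunt "just throw the risky characters away" step many site searches
# do before the query ever reaches the index.
RISKY = set("&^|`\"'*")

def sanitize(query):
    return "".join(ch for ch in query if ch not in RISKY)

print(sanitize('"exact phrase" & stuff'))  # exact phrase  stuff  (quotes and & are gone)
```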
For the most part I think they do. I frequently use quoted strings in my search queries (on DDG and Google, I hardly ever use any other search engines) and it usually seems to show me more relevant results when I do that.
But in general the WWW is now so big that search engines have had to become more and more complex (and think for themselves instead of taking queries completely literally) in order to be useful at all.
I’m going to break with what most people are saying and offer the suggestion that search engines are actually doing a decent job. If my mother searches Google for the phrase “Can you please show me a recipe for apple pie?”, she’s probably going to get a recipe for apple pie. If I search Google for “c++20” “std::string” “constructors”, after I skip over the ads, I’m most likely going to get a web page that shows me the constructors for std::string in c++20.
Ad-sponsored pages and AI bullshit aside, most search engines do still give decent results.
Guys, please. The solution to Google reinterpreting your search queries has been around for years, and it is called VERBATIM SEARCH. (Search options: All results -> Verbatim). Voila, welcome back to 1997.
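If you’d rather not click through the menu every time: as far as I know, Verbatim mode also maps to a URL parameter (tbs=li:1 is my understanding of how Google encodes it, so treat that as an assumption). Quick Python sketch:

```python
from urllib.parse import urlencode

def verbatim_url(query):
    # tbs=li:1 is, to my knowledge, how Google encodes Verbatim mode in the URL;
    # if that assumption is wrong, the Search options -> Verbatim menu still works.
    return "https://www.google.com/search?" + urlencode({"q": query, "tbs": "li:1"})

print(verbatim_url('"this is my phrase"'))
```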
I don’t know the answer but I can tell you two things:
- It has often been beneficial to me when the search query wasn’t taken literally; it’s not always a bad thing. Many searches are ones where the user doesn’t know exactly what they’re looking for. Granted, that’s definitely not always the case. That said, I don’t remember ever catching it outright ignoring stuff like quoted words/phrases.
- Regarding “save resources”, Google introduced Instant Search in 2010 which started showing results as you type, thus creating an ungodly amount of extra load on their servers since each user search now created multiple queries. They clearly have no trouble scaling up resources.