Architeuthis
It’s not always easy to distinguish between existentialism and a bad mood.
Current flavor AI is certainly getting demystified a lot among enterprise people. “Let’s dip our toes into using an LLM to make our hoard of internal documents more accessible, it’s supposed to actually be good at that, right?” is slowly giving way to “What do you mean RAG is basically LLM-flavored elasticsearch, only more annoying and less documented? And why is all the tooling so bad?”
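For the record, the whole pattern really is about this shallow. A toy sketch of it, where a bag-of-words scorer stands in for elasticsearch or your vector store of choice, and `call_llm` is a hypothetical stub for whatever model endpoint you actually pay for (the docs, names, and functions here are all made up for illustration):

```python
# Minimal RAG sketch: "retrieval" is just search, "augmented generation"
# is just stuffing the search results into the prompt.

from collections import Counter

# Stand-in corpus; in real life this is your hoard of internal documents.
DOCS = [
    "Expense reports must be filed within 30 days of travel.",
    "The VPN client auto-updates on the first Monday of each month.",
    "Internal wikis are archived after 18 months of inactivity.",
]

def score(query: str, doc: str) -> int:
    # Crude lexical overlap: count shared lowercase tokens.
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    # The "R" in RAG: rank documents by relevance, keep the top k.
    return sorted(DOCS, key=lambda doc: score(query, doc), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for an actual model call.
    raise NotImplementedError("plug in your model endpoint here")

def answer(query: str) -> str:
    # The "AG" in RAG: paste the retrieved context into the prompt.
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)
```

Everything interesting (chunking, embeddings, reranking) is elaboration on `retrieve`, which is to say: it’s the search engine part, the part the LLM was supposed to save you from building.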
I mean, that was definitely a thing when I was at school, only it was mostly about teaching undergrads graph search algorithms and the least math possible in order to understand backpropagation.
As an aside, weird that we don’t hear much about genetic algorithms anymore, but it’s probably just me.
Companies probably actually need to curate down their documents so that simpler things work; then it doesn’t cost ever-increasing infrastructure to overcome the problems that previous investment literally caused.
Definitely, but the current narrative is that you don’t need to do any of that; as long as you add three spoonfuls of AI into the mix, you’ll be good to go.
Then you find out that what you actually signed up for is doing all the manual preparation of building an on-premise search engine to query unstructured data, and you still might end up with a tool that’s only slightly better than grepping a bunch of PDFs at the same time.
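And the baseline being barely beaten is not exactly high tech. A rough sketch of the grep-the-PDFs approach, assuming the pdfminer.six package for text extraction (the function and argument names are mine, not from any post above):

```python
# Grep a directory of PDFs for a search string, printing
# filename:line-number:line matches, grep-style.

import sys
from pathlib import Path

from pdfminer.high_level import extract_text  # pip install pdfminer.six

def grep_pdfs(directory: str, needle: str) -> None:
    for pdf in Path(directory).glob("*.pdf"):
        text = extract_text(str(pdf))  # full extracted text of the PDF
        for lineno, line in enumerate(text.splitlines(), start=1):
            if needle.lower() in line.lower():
                print(f"{pdf.name}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    # Usage: python grep_pdfs.py ./docs "expense report"
    grep_pdfs(sys.argv[1], sys.argv[2])
```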
(update: disproven by CrowdStrike’s blog post).
How do you mean? The current top post on the blog seems to mention .sys files as part of the problem very prominently.
> Channel file “C-00000291*.sys” with timestamp of 0527 UTC or later is the reverted (good) version. Channel file “C-00000291*.sys” with timestamp of 0409 UTC is the problematic version.
The whole point of using these things (besides helping summon the Acausal Robot God) is for non-technical people to get immediate results without doing any of the hard stuff, such as, I don’t know, personally maintaining and optimizing an LLM server on their Linux gaming(!) rig. And that’s before you realize how slow inference gets as the context window fills up, or how complicated summarizing stuff gets past a threshold of length, and so on and so forth.
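The summarization pain in particular is structural: once a document outgrows the context window, you end up with some variant of chunk it, summarize each chunk, then summarize the summaries, and hope nothing important fell on a chunk boundary. A sketch of that under those assumptions, with `summarize()` as a hypothetical single in-context model call:

```python
# Map-reduce summarization sketch for documents that exceed the
# context window. Chunk sizes and the summarize() call are assumptions.

def summarize(text: str) -> str:
    # Hypothetical stand-in for one LLM call that fits in context.
    raise NotImplementedError("plug in your model endpoint here")

def summarize_long(text: str, max_chars: int = 8_000) -> str:
    # Base case: short enough for a single call.
    if len(text) <= max_chars:
        return summarize(text)
    # Map: summarize fixed-size chunks independently.
    chunks = [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
    partials = "\n".join(summarize(chunk) for chunk in chunks)
    # Reduce: recurse on the concatenated partial summaries.
    return summarize_long(partials, max_chars)
```

Every layer of that recursion is another round of inference cost and another chance for the model to drop whatever mattered, which is roughly where “complicated past a threshold of length” comes from.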