bradd
bradd@lemmy.world
0 posts • 102 comments

Weird how “a nation of immigrants” wants to know where they are from.

There are alternative on-prem solutions that are now good enough to compete with VMware for the majority of people impacted by VMware’s changes. I think the cloud ship has sailed: the stragglers have reasons for not moving to the cloud, and in many cases companies move back from the cloud once they realize just how expensive it actually is.

I think one of the biggest drivers for businesses to move to the cloud is that they do not want to invest in talent: the talent leaves, and it’s hard to find people who want to run in-house infra for what is being offered. That talent moves on to become SREs for hosting providers, MSPs, ISPs, and so on. The only option smaller companies have is to buy into the cloud and hire what is essentially an administrator rather than a team of architects, engineers, and admins.

It was a dumb move. They had a niche market cornered: (serious) enterprises with on-prem infrastructure. Sure, hosting virtualization on-prem was the standard back in the late 2000s, but since then the only people who have not outsourced infrastructure hosting to cloud providers have reasons not to, including financial ones. The cloud is not cheaper than self-hosting: serverless applications can be more expensive, storage and bandwidth are more limited, and performance is worse. A good example of this is OpenAI vs. Ollama on-prem. Ollama can be orders of magnitude cheaper, even when you include the initial buy-in.
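
The size of that gap depends entirely on volume and hardware assumptions. Here is a back-of-envelope sketch of how the comparison works; every number below is a made-up illustrative assumption, not a quote from any vendor:

```python
# Back-of-envelope: hosted LLM API vs. amortized on-prem hardware.
# All figures are illustrative assumptions, not real vendor pricing.

API_COST_PER_1M_TOKENS = 10.00     # assumed blended $/1M tokens for a hosted API
TOKENS_PER_MONTH = 500_000_000     # assumed monthly volume for a heavy internal workload

GPU_SERVER_BUYIN = 40_000          # assumed one-time hardware cost
POWER_AND_COLO_PER_MONTH = 600     # assumed power + colocation cost
AMORTIZATION_MONTHS = 36           # write the hardware off over 3 years

api_monthly = TOKENS_PER_MONTH / 1_000_000 * API_COST_PER_1M_TOKENS
onprem_monthly = GPU_SERVER_BUYIN / AMORTIZATION_MONTHS + POWER_AND_COLO_PER_MONTH

print(f"hosted API : ${api_monthly:,.0f}/month")
print(f"on-prem    : ${onprem_monthly:,.0f}/month")
print(f"ratio      : {api_monthly / onprem_monthly:.1f}x")
```

Note that on-prem cost is roughly fixed, so the ratio climbs as token volume grows; at low volume the hosted API can easily win.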

Let VMware fail. At this point they are worth more as a lesson to the industry: turn on your users and we will turn on you.

As a side note, I feel like this take is intellectually lazy. A knife cannot be used or handled like a spoon because it’s not a spoon. That doesn’t mean the knife is bad; in fact, knives are very good, but they do require more attention and care. LLMs are great at cutting through noise to get you closer to what is contextually relevant, but an LLM is not a search engine, so, like with a knife, you have to be keenly aware of the sharp end when you use it.

I guess it depends on your models and tool chain. I don’t have this issue, but I have definitely seen it in the past with smaller models, no tools, and legal code.

No, I don’t, but the misspelling was intentional.

There was a project a few years back that scraped and parsed literally the entire internet for recipes and put them in an Elasticsearch DB. I made a bomb-ass rub for a tri-tip and a chimichurri with it that people still talk about today. IIRC I just searched all tri-tip rubs, built a tag cloud of the most common ingredients, and looked at ratios, so in a way it was the most generic or average rub.

If I find the dataset I’ll update, I haven’t been able to find it yet but I’m sure I still have it somewhere.
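
The tag-cloud-and-ratios step is easy to reconstruct. A minimal sketch with hypothetical dataset rows (the recipes, ingredient names, and amounts below are all made up, not from the original dataset):

```python
from collections import Counter

# Hypothetical rows from a scraped recipe dataset, filtered to "tri-tip rub".
# Each recipe is a list of (ingredient, amount_in_tablespoons) pairs.
recipes = [
    [("salt", 2), ("black pepper", 2), ("garlic powder", 1), ("paprika", 1)],
    [("salt", 3), ("black pepper", 2), ("garlic powder", 2), ("cumin", 1)],
    [("salt", 2), ("black pepper", 1), ("paprika", 2), ("brown sugar", 1)],
]

# Frequency: how many recipes mention each ingredient (the "tag cloud" part).
freq = Counter(name for recipe in recipes for name, _ in recipe)

# Ratios: total amount per ingredient, so we can average over the recipes using it.
totals = Counter()
for recipe in recipes:
    for name, amount in recipe:
        totals[name] += amount

# The "most generic" rub: common ingredients at their average ratios.
for name, count in freq.most_common():
    print(f"{name}: in {count}/{len(recipes)} recipes, avg {totals[name] / count:.1f} tbsp")
```

Keeping only ingredients above some frequency threshold and using the average amounts gives you the consensus rub.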

I legiterally have an LLM use searxng for me.

When it’s important, you can have an LLM query a search engine and read/summarize the top n results. It’s actually pretty good; it’ll give direct quotes, citations, etc.
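
A minimal sketch of that pipeline, assuming a local SearXNG instance with the JSON output format enabled and a local Ollama server; the URLs, ports, and model name are assumptions you would swap for your own setup:

```python
import json
import urllib.parse
import urllib.request

SEARXNG_URL = "http://localhost:8080/search"        # assumed local SearXNG instance
OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama API endpoint

def fetch_results(query: str, n: int = 5) -> list:
    """Query SearXNG's JSON API (format=json must be enabled) and return the top n hits."""
    url = SEARXNG_URL + "?" + urllib.parse.urlencode({"q": query, "format": "json"})
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp).get("results", [])[:n]

def build_prompt(query: str, results: list) -> str:
    """Pack titles, URLs, and snippets into a prompt that asks for quoted, cited answers."""
    sources = "\n\n".join(
        f"[{i + 1}] {r.get('title', '')}\n{r.get('url', '')}\n{r.get('content', '')}"
        for i, r in enumerate(results)
    )
    return (
        "Answer the question using only the sources below. "
        "Quote directly and cite sources as [n].\n\n"
        f"Question: {query}\n\nSources:\n{sources}"
    )

def summarize(query: str) -> str:
    """Send the packed prompt to a local Ollama instance and return its answer."""
    payload = json.dumps({
        "model": "llama3.3:70b",  # any locally pulled model works here
        "prompt": build_prompt(query, fetch_results(query)),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=300) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print(summarize("why are companies moving back from the cloud"))
```

Because the sources and their URLs are in the prompt, the model can quote and cite them directly instead of answering from memory.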

Sure, but you can benchmark accuracy, and LLMs are trained on different data sets using different methods to improve accuracy. This is something you can know. I’m not claiming to know how; I’m saying that with exposure I have gained intuition and, as a result, have learned to prompt better.

Ask an LLM to write PowerShell vs. Python and it will be more accurate with Python. I have learned this through exposure. I’ve used many, many LLMs; most are tuned for code.

Currently enjoying llama3.3:70b by the way, you should check it out if you haven’t.
