bradd

bradd@lemmy.world
0 posts • 95 comments

I’d be more inclined to call this a misc utensils drawer. I have one just like it, with many of the same items, but I also have a true “junk drawer”, and it has anything but utensils in it. Like, batteries, screws, a magnifying glass, fire starters, a deck of cards, etc. All of the shit that ends up near the kitchen and doesn’t have a whole space dedicated to similar things finds a home in the junk drawer.

I don’t think people have been so reliant on systems before. Like, the airplane isn’t quite ready to fly yet.

It was government, church, and loose systems that brought food from the soil to your plate, not an extensive system.

And authenticators, password managers.

If I put text into a box and out comes something useful, I couldn’t give a shit whether it has criteria for truth. LLMs are a tool, like a mannequin: you can put clothes on it without thinking it’s a person, but you don’t seem to understand that.

I work in IT. I can write a bash script to set up a server, or pivot to an LLM and ask for a Dockerfile that does the same thing, and it gets me very close. Sure, I need to read over it and make changes, but that’s just how it works in the tech world. You take something that someone wrote, read over it, and make changes to fit your use case. Sometimes you find that real people make really stupid mistakes, sometimes college-educated people write trash software, and that’s a waste of time to look at and adapt… much like working with an LLM. No matter what you’re doing, buddy, you still have to use your brain.
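
To make that concrete, here’s a rough sketch of the round trip I mean. It’s a hypothetical illustration, not anyone’s actual output: the nginx base image is real, but the ./site path is a made-up stand-in for whatever you’re deploying. It’s the kind of Dockerfile an LLM might hand back when asked to replace a bash script that installs nginx and drops a site into its web root.

```dockerfile
# Hypothetical sketch: a Dockerfile standing in for a bash script that
# installs nginx and copies a static site into its web root.
FROM nginx:alpine

# Copy the site content into nginx's default web root.
# (./site is a made-up path; adjust to your own layout.)
COPY ./site /usr/share/nginx/html

# nginx listens on port 80 inside the container.
EXPOSE 80

# No CMD needed: the nginx:alpine image already runs nginx in the
# foreground as its default command.
```

Then it’s `docker build -t mysite .` and `docker run -p 8080:80 mysite`, and the read-it-over pass is where you catch the stupid mistakes, whether a human or an LLM wrote them.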

I understand your skepticism, but I think you’re overstating the limitations of LLMs. While it’s true that they can generate convincing-sounding text that may not always be accurate, this doesn’t mean they’re only good at producing noise. In fact, many studies have shown that LLMs can be highly effective at retrieving relevant information and generating text that is contextually relevant, even if not always 100% accurate.

The key point I was making earlier is that LLMs require a different set of skills and critical thinking to use effectively, just like a knife requires more care and attention than a spoon. This doesn’t mean they’re inherently ‘dangerous’ or only capable of producing noise. Rather, it means that users need to be aware of their strengths and limitations, and use them in conjunction with other tools and critical evaluation techniques to get the most out of them.

It’s also worth noting that search engines are not immune to returning inaccurate or misleading information either. The difference is that we’ve learned to use search engines critically, evaluating sources and cross-checking information to verify accuracy. We need to develop similar critical thinking skills when using LLMs, rather than simply dismissing them as ‘noise generators’.

what the said, am come where?

How were Trump’s McDonald’s burgers? Like, are they better than what they feed the peasants?

I call myself an “IT systems engineer”.
