It seems, from the comments, that everyone is quite enjoying all that AI has to offer right now. I think if we phrase the question in a good way, everyone is also quite excited to see where the development goes in the future.
Why is it so trendy right now to hate AI? Sure, the brands are pushing their half-baked products everywhere, but I think this is part of the journey. Good products can’t happen without it.
Why the hate? Because 99% of what passes for AI right now is actively harming society.
Training and running them consumes enormous amounts of energy; all the IP sits inside a few gigantic, monopolistic corporations; these corporations in turn push huge amounts of money into products that are not only bad, but dangerous (MS Recall or X’s porn generator AI); and other corporations use AI as an excuse to fire thousands of people and let their core products rot away.
Currently, AI has hardly any positive sides, and those positives are very very narrow. Overall it’s a net negative.
99%? Really? Where did you get that percentage?
The tech is awesome already and getting developed extremely fast. Sure, there are many negatives, but many might just be growing pains.
On a wider scale: this is THE tech. This is the direction, and it is inevitable. Of course it matters how the development happens; there are many nuances to it. What I don’t like are these blanket hate statements against something. Many people are using AI today and benefiting from it in their personal lives and businesses.
Then where is it?
There’s hardly any application that’s more than a gimmick. ChatGPT is an incompetent liar, Sora and all the image/video generators produce mediocre crap that can’t reasonably be controlled, chatbots keep making stuff up, etc. etc.
This tech is done. Why do you think there’s no progress from OpenAI? The tech hit a ceiling. LLMs scaled to their current state very quickly, but each increment used exponentially more compute. There’s not enough compute, and not enough training data, to get better.
I’m pretty sure you don’t understand how models work. It’s just magic to you. Just like blockchains, NFTs and VR. None of them changed the world in any meaningful way - just scams.
AI companies very fundamentally don’t make money and have no way to become profitable in the near future. None of their tech has any business model. OpenAI relies 100% on Microsoft essentially donating Azure instances.
Sure, AI has its applications, but not hundreds of billions worth of applications.
Training and running them consumes enormous amounts of energy,
Right now, using GPUs and unoptimized chips. Running and building other services like Google Search and YouTube also took enormous amounts of power and still does, though it’s vastly more efficient these days.
all the IP sits inside a few gigantic, monopolistic corporations,
This feels like a pretty knee-jerk point instead of a well-thought-out one.
A) The biggest AI players are startups like OpenAI and Anthropic, which have gotten a lot of funding and attention but are neither giant nor monopolistic.
B) Of the biggest monopolist companies (Apple, Google, Microsoft, and Meta), only Google and Apple are keeping their research closed, with both Microsoft and Meta publishing their models openly.
these corporations in turn push huge amounts of money into products that are not only bad, but dangerous (MS Recall or X’s porn generator AI),
Literally the vast majority of software developers already use Copilot or a similar AI assistant. Bing search is genuinely useful for synthesizing answers and asking plain-language questions with sourced answers. People are finding ChatGPT useful or they wouldn’t be paying for it. DeepMind has literally discovered novel protein structures that we never knew existed before. And VFX artists like Corridor Crew are using it to make wild videos way faster than they ever could before. This feels like you’re just cherry-picking poor uses.
other corporations use AI as an excuse to fire thousands of people and let their core products rot away.
Capitalism does that with all forms of automation, whether it’s AI-based or just normal, run-of-the-mill software and machines. It’s how you end up producing the same products with less effort and manual labour. If you want to go back to hand-milling flour you’re more than welcome to; otherwise, automation is going to continue. The answer to automation lies in government and social safety nets, not in blocking automation technology.
As a massive car enthusiast, ChatGPT is a fucking GODSEND now that Google is completely shit.
I wanted to know what model of transmission was in a car, and hours of googling only returned link after link after link of gearboxes to buy and parts for gearboxes to buy, all for the 2000-2004 make and model. Now, for those who don’t know, gearboxes are often internally the same with different casings for different manufacturers or use cases, so “How much horsepower can a make and model gearbox support” is a waste of time, but “How much power can an Aisin A340e support” gets you the right info.
I asked ChatGPT, and yep. That’s the one.
Please use Bing Copilot instead of ChatGPT for this. It’s largely the same language model underneath, but the distinction of backing replies with actual sources, and citing those sources in a way that allows you to click through and check the information you’re getting, is huge for a variety of important reasons.
I am totally looking forward to AI customer support. The current model of a person reading a scripted response is painful and fucking awful and only rarely leads to a good resolution. I would LOVE AI support where I could just describe the problem and have it give me answers and ask only relevant follow-up questions. I can’t wait.
The script doesn’t go away when you replace a helpdesk operator with ChatGPT. You just get a script-reading interface without empathy and a severely hindered ability to process novel issues outside its protocol.
The humans you speak to could do exactly what you’re asking for, if the business did not handcuff them to a script.
The script doesn’t go away when you replace a helpdesk operator with ChatGPT. You just get a script-reading interface without empathy and a severely hindered ability to process novel issues outside its protocol.
The humans you speak to could do exactly what you’re asking for, if the business did not handcuff them to a script.
But they do handcuff them to a script… at least 1st and 2nd level tech support. That’s the point. It’s so fucking awful. It’s a barrier to keep you from the more highly paid tech support people who may actually be able to answer your questions. First you have to wait on hold to make sure you think it’s worth wasting their time on your annoying problem, THEN it’s a maze you have to navigate, and then whoops you just got hung up on… so sorry, start all over! LLMs are (can be) so much better at this!
They’re already deployed and they’re less than helpful, because LLMs are bullshitting machines.
I already use LLMs to problem-solve issues that I’m having, and they’re typically better than me punching questions into Google. I admit that I’ve had an LLM hallucinate once while it was trying to solve a problem for me, but the vast majority of the time it has been quite helpful. That’s been my experience, at least. YMMV.
If you think LLMs suck, I’m guessing you haven’t actually used telephone tech support in the past 10 years. That’s a version of hell I wish on very few people.
If you think LLMs suck, I’m guessing you haven’t actually used telephone tech support in the past 10 years. That’s a version of hell I wish on very few people.
I’m specifically claiming that they’re bullshit machines, i.e. they’re generating synthetic text without context or understanding. My experience with search engines and telephone support is way better than anything an LLM has fed me.
There have already been cases where phone operators were replaced with LLMs that gave dangerous advice to anorexic patients.
If all you want is something trivial that’s been done by enough people beforehand, it’s no surprise that something approaching correct gets parroted back at you.
Not so sure what is so cool about “regularly wrong and dangerous”.
Going for a hike, seeing a nice plant and saying, “I wonder what this plant is.” And most of the time getting a correct answer.
If people are stupid enough to eat wild things based on any kind of unprofessional identification, it may just be proving that Darwin was onto something.
Right, so when you need an ID but don’t really care about whether it’s correct, AI is great I guess. Not sure what the point is though.
But if you actually want a proper ID, I’d stay far away from AI and go to a community with experts.
I have used it a lot while hiking and mostly got correct results for tree or small plant identification, enough to satisfy my curiosity. Good enough for me. I’m not calling the National Center for Botanics or hiring a professional botanist for 2000€/hour just to satisfy my curiosity while hiking.
It has its use cases. I would trust it about as much as those old plant books for amateurs I used to have. I also got incorrect identifications out of those due to my lack of expertise, more so than with the AI I use nowadays.
Regular ChatGPT is perfectly serviceable and fine. It does what I need. And I’ve used Bing’s AI to make cassette tape art and memes of coworkers.
Hey, deep fakes are awesome. They are a necessary step in the evolution of the technology that leads to holodecks.
I want holodecks, I bet you want holodecks, practically everyone wants holodecks, so we have to go through this stage of the tech to get there.
Call me a Luddite, but I don’t think going through a phase where bad actors have the power to set every democracy back by centuries through misinformation, and other bad actors have an infinite kiddy porn machine, is worth it for what ultimately amounts to a luxury VR video game that, if it can even exist (the holodeck isn’t a “technology”, it is a narrative device), would be something that realistically only the ultra-rich would be able to use (because let’s face it, Star Trek’s post-capitalist utopia isn’t happening).
You want a device that’s only available on Starfleet ships to the crew, and probably only to a subset of higher-ranking crew members? A major difference between the holodeck and deep fakes is that what happens in the holodeck stays in the holodeck - unless it gets out, in which case it usually becomes an illustration of why it should have been kept in.
You want a device that’s only available on Starfleet ships to the crew, and probably only to a subset of higher-ranking crew members?
You can just rent a program at Quark’s?