Silicon Valley wants us to believe that their autonomous products are a kind of self-guided magic, but the technology is clearly not there yet. A quick peek behind the curtain has consistently revealed a product base that, at a minimum, is still deeply reliant on human workforces.
This sounds exactly like Amazon’s “Just Walk Out” grocery store concept that actually required remote supervision by workers in India.
I’m starting to get a bit annoyed by takes like this.
Of course people had to check the automated system. That’s how these systems are debugged and trained.
The newsworthy part is just that they missed their target for how many sales needed review. At the end of the trial they still needed a 70% review rate instead of their goal of 5%.
The system was still fully automated, but some sales needed checks after the fact. That’s what trials are there for.
Or you could, you know, pay a person a living wage to be physically present at the store to assist shoppers and review the sales.
Or, hear me out. Maybe a 70% review requirement is not automation at all. Just saying.
You could, yes. And that should be the criticism.
If you attack them on bullshit terms, you do exactly what they want and they can go “well, those idiots don’t even know what they are talking about”.
Maybe a 70% review requirement is not automation at all
And Amazon agrees, which is why they shut the experiment down.
While you aren’t wrong that every automated system needs human oversight and occasional intervention, when the average person hears “fully automated” or any of the many marketing terms used for these things lately they are going to take it pretty close to face value.
It also doesn’t help that it was largely marketed and reported on as if it wasn’t an experiment, but a solved and working “product”.
Every system will have its own requirements and acceptable margins for error and required interventions, but I think most people would feel that even the one-in-twenty (5%) goal is a lot for a project like the Amazon automated shops. It would be a lot for any of the automations I come into contact with (and have built) at my job, but admittedly I’m not doing anything remotely as novel or as complicated as an unattended shop.
Beyond that, people have a lot more reasons to dislike these systems than just the amount of human intervention and I think they’re just going to jump on whichever one is currently being discussed in order to express it. Like displeasure that the teleoperation positions are outsourced the way they are, taking even more jobs away from the local population.
Mechanical Turks are my favorite trope of the 2020s.
The job post also notes that such a teleoperation center requires “building highly optimized low latency reliable data streaming over unreliable transports in the real world.” Tele-operators can be “transported” into the robotaxi via a “state-of-the-art VR rig,” it adds.
Oh man that’s pretty hilarious for “autonomous vehicles”
Tesla would not be the first robotaxi company to use this method. In fact, it’s an industry standard. It was previously reported that Cruise, the robotaxi company owned by General Motors, was employing remote human assistants to troubleshoot when its vehicles ran into trouble
Oh, so this is actually completely normal and should not be newsworthy…
Remote human intervention when automated systems fail should be expected and, to be honest, required with current technology. There are simply too many edge cases in the real world, even with the trillions of miles Tesla has trained their system on.
When will the intervention be called upon? How we react is defined by the context we have. Imagine being dropped into a pre-accident situation without any context.
No idea, and I doubt they’ll ever publicly say.
Direct human intervention is definitely something other companies could be doing more of. Waymo especially given all the videos of them getting stuck, sometimes en masse.
Remote human intervention when automated systems fail should be expected and, to be honest, required with current technology.
The “human in the loop” is one of those things that sounds good but isn’t at all in reality.
https://pluralistic.net/2024/10/30/a-neck-in-a-noose/
A human was literally sitting at the wheel as Uber’s taxi ran someone over.
Driving is nothing but edge cases, and that’s why maybe paying drivers to drive people around is better than some half-baked AI driving people under trucks and hoping a call center employee is paying enough attention to bail them out.
It’s normal in the industry, but the industry likes to tell the public otherwise, so from time to time these articles pop up.
Amazon’s Just Walk Out shop, with AI watching through cameras what you bought, was actually run remotely by Indians because the automation didn’t quite work. Food delivery robots are run by people in low-cost areas. One guy runs multiple robots with a point-and-click interface. That kind of thing. I’m sure autonomy is being worked on, but it’s not fully autonomous yet.
Two notes on this as someone who works in the sector.
It’s “completely normal”, but only if you don’t have a full-time driver for each vehicle, which is what the article makes it sound like… Then the vehicles wouldn’t be autonomous, they’d just be teleoperated.
And the second part: why is this an industry standard, and why are investors OK with it? Imagine you have a product (a robotaxi) that is autonomous but can’t deal with absolutely everything on its own (not even Waymo is that advanced). The key capability you need to build into the system is the ability to come to a stop safely and be recovered remotely. Then these “teleoperators” can recover the vehicles if/when they fail, and given a sufficiently low failure rate, you can have one operator for every X vehicles. Even if this is more than “0 drivers”, having 1 driver per 10 vehicles is a massive cost saving (rough sketch of the math below).

Plus, zooming out beyond robotaxis, there are sectors like mining where they don’t care (that much) about the number of drivers; their primary goal is to keep the drivers away from a dangerous mine. They can save money by simplifying operations that way.
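A minimal back-of-envelope sketch of that “one operator per X vehicles” point, with entirely made-up numbers; the intervention rate, handling time, and operator utilization are assumptions for illustration, not figures from any of the companies discussed:

```python
import math

def operators_needed(fleet_size: int,
                     interventions_per_vehicle_hour: float,
                     minutes_per_intervention: float,
                     operator_utilization: float = 0.7) -> int:
    """Estimate how many remote operators are needed to cover a fleet."""
    # Operator-minutes of work generated per hour by the whole fleet.
    workload = fleet_size * interventions_per_vehicle_hour * minutes_per_intervention
    # Each operator provides 60 minutes per hour, scaled by utilization.
    capacity_per_operator = 60 * operator_utilization
    return math.ceil(workload / capacity_per_operator)

# Made-up example: 100 robotaxis, one intervention every 5 vehicle-hours,
# 3 minutes to resolve each -> a couple of operators instead of 100 drivers.
print(operators_needed(100, 1 / 5, 3))  # -> 2
```

The whole business case hinges on that ratio: if the intervention rate creeps up (or each intervention takes longer), the operator count approaches one per vehicle and the savings evaporate.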
Musk is a little whiny bitch who can’t keep his word. Fully autonomous my ass. His words are all lies.
What’s hilariously tragic is that he could very likely have his full self-driving if he would just shut his shit-spewing asshole of a mouth for a hot second, and spend some of his ungodly billions on the problem.
There are incredibly bright people out there who can make this stuff a reality. But, it takes paying them well, not shit-talking or overruling them, and giving them the environment for success—e.g., not taking away the radar from the cars.
He just wants to talk a big game without spending any real effort or money on the problem. And, it’s just sad, because he could have his FSD and look like a genius.
The lidar drama is why Tesla without Musk could overtake the global EV market, but they have him.
It may well be a matter of opinion whether Tesla, even operating at its highest potential, could now overtake the likes of BYD, which is getting extensive help from its government. But, it’s reasonably clear that Tesla’s chances get thinner with every bad decision of Musk’s.
He fucked with the engineering, chasing pennies on critical components, like the lidar. He fucked with the crown jewel of the company—its Supercharger network—by destroying the team, and thereby slowing down rollouts and critical maintenance. He ran his mouth off and chased away folks—like me—who would have otherwise bought, by espousing pants-on-head-crazy crypto-bro viewpoints. Hell, his idea of PR is a poop emoji auto-responder.
It’s just frustrating to see such a great concept—the ubiquitous electric car—be fucked up so badly by the person with the most means to succeed.
To be fair, it’s probably not on purpose. He is just too stupid to make realistic estimates of what will be possible.