4 points

You have a tl;dr of this?

Edit: this was supposed to be a joke.

2 points

Seriously? It’ll take you a minute to read, if you read slowly.

0 points

This IS the tl;dr of the article, first off, and second, just read the top “paragraph” (in quotes because it’s only like two sentences). It’s basically the tl;dr of the tl;dr.

1 point

I can summarise it with AI:

Ed Zitron, a tech beat reporter, criticizes a recent paper from Goldman Sachs, calling AI a “grift.” The article raises questions about the investment, the problem it solves, and the promise of AI getting better. It debunks misconceptions about AI, pointing out that AI has not been developed over decades and the American power grid cannot handle the load required for AI. The article also highlights that AI is not capable of replacing humans and that AI models use the same training data, making them a standstill for innovation.

8 points

Ed Zitron, a tech beat reporter, criticizes a recent paper from Goldman Sachs, calling AI a “grift.”

Fittingly, this paragraph is incomprehensible to anyone who hasn’t already read the blog post; who is calling AI a grift, Zitron or GS? And is Zitron critical of the GS article (no, he’s not)?

Now, if it was your job to actually absorb the information in this blog post, there’s really no way around actually reading the thing - at least if you wanna do a good job. Any “productivity boost” would sacrifice quality of output.

3 points

Points 2 and 3 are legit, especially the part about not having a roadmap; a lot of what’s going on is pure improvisation at this point, trying different things to see what sticks. The grid is a problem, but fixing it is long overdue. In any case, these companies will just build their own power infrastructure if the government can’t get its head out of its ass and start fixing the problem (Microsoft is already doing this).

The last two points, though, suggest this person is someone who doesn’t know the technology, just like what they’re accusing others of being.

It’s already replacing people. You don’t need it to do all the work; it will still bring about layoffs if it lets one person do the job of five. It’s already affecting jobs like concept artists, and every website that used to have a person at the end of its chat app now has an LLM. This is also only the start; it’s the equivalent of people in the early 90s thinking computers wouldn’t affect the workforce. It won’t hold up for long.

The data point is also quite a bold statement. Anyone keeping abreast of the technology knows that it’s now about curating the datasets, not augmenting them. There’s also a paper coming out every day about new training strategies, which helps a lot more than a few extra shitposts from Reddit.

12 points

Feels like you’re missing the point of the fourth bullet point. What they are saying is not that AI isn’t taking people’s jobs, only that the true potential comes from real humans who provide some quality that AI is not capable of truly replacing. It is being used to replace people with its inferior imitations.

Not that your point is invalid; it absolutely is a valid and valuable criticism in itself.

-10 points

If AI is a trillion dollar investment, what trillion dollar problem is it solving?

If you could increase the productivity of knowledge-workers 5%, that’s worth a trillion
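
A rough back-of-the-envelope on that claim (the headcount and per-worker figures here are my own assumptions, not numbers from the report or the blog post):

```python
# Hypothetical scale check: how a 5% productivity gain could reach ~$1T/year.
knowledge_workers = 1_000_000_000   # assumed global knowledge-worker count
output_per_worker = 20_000          # assumed average economic output, $/year
productivity_gain = 0.05            # the 5% figure above

added_value = knowledge_workers * output_per_worker * productivity_gain
print(f"${added_value / 1e12:.1f} trillion per year")  # -> $1.0 trillion per year
```

Swap in your own assumptions and the number moves a lot either way, but the order of magnitude isn’t crazy.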

  1. AI won’t work because of the American power grid

Makes no sense. Why would one random country having an underdeveloped power grid stymie AI?


I’m not an AI gal, but those are obviously bad points.

-1 points

I question more whether smartphones and especially the internet had roadmaps. Was the roadmap for the military to pass it to education, for it to be taken up by companies, transformed from text to GUI, and then for algorithms to be used to take advantage of human psychology???

1 point

I think number 2 is a fairly good point.

Qualitatively, there were huge leaps made between 2018 and 2020. Since then it’s been maybe a shade better, but not really that much better than 3 years ago. People are certainly finding ways to apply it more broadly, and a broader ecosystem of providers is catching up to each other, but the end game is mostly more ways to get to roughly the same experience you could get in 2021.

Meanwhile, people deep in the field go, “but look at this obscure quantitative measure of ‘AI’ capability that’s been going up this whole time, which shows a continuation of the improvement we saw from 2018.” Generally, those values and the qualitative experience tracked each other during those early years, but since then the qualitative side has kind of stalled while the measures keep going up. The problem is that the utility lies in the qualitative experience.

-2 points

2 and 3 are both posing questions

  • How could AI even be applied?!?!?

  • How could AI even be improved?!?!?

as though the implication were that these are unanswerable questions

when they’re actually easily answerable

2: it can be applied to logistics, control of fusion energy, drug-discovery pipelines, lots of things that could soon amount to a trillion dollars

3: it can be improved by combining LLMs with neural-symbolic logic and lots of other things extensively written about


I assume the Goldman Sachs report is more intelligent than this summary makes out. Coz the summary is just saying we should throw our hands up in despair at well-studied questions that a lot of work has gone into answering.

-1 points

The first two computers were connected in 1969, leading to ARPANET. I would say the qualitative experience took quite some time to improve. The type of algorithms AI has evolved from, I would say, came out of the 2000s, maybe the late 90s, taking Google as sort of a baseline. I would say we are now at about the equivalent of the mid-nineties, internet-wise, so it will be interesting, to say the least, to see where this goes. They do use too much energy though, and I hope they can bring that down, maybe with hardware acceleration.

-2 points

I feel like #3 shows up for every tech innovation. I remember people bitching about the Internet not being viable because phone lines were too slow. The demand needs to be there before the infrastructure will get built.

3 points

There’s some difference, in that fixing the power capacity problem will, in practical terms, absolutely mean combusting more hydrocarbons, something we can ill afford to do right now. Until we get our legs under us with non-carbon energy generation, we should at least not take on huge new power burdens.

Now, there was an alleged breakthrough that might make LLMs less of an energy hog, but I haven’t seen it discussed enough to know whether it’s promising or a bust. Either way, power efficiency might come along as a way to save #3.

2 points

You’re absolutely right that AI is going to use electricity, and (for now) a majority of that electricity is generated through combustion.

I don’t think that is the concern when it comes to the power grid. I think most people are worried about power delivery infrastructure, which is a lot harder to upgrade/add than power generation infrastructure. Sorry if my comment wasn’t clear about this.

7 points

If you could increase the productivity of knowledge-workers 5%, that’s worth a trillion

A big if (and where do these numbers come from?), but more importantly, a “more productive” knowledge worker isn’t necessarily a good thing if the output is less reliable, interesting, or innovative, for example. Ten shitty articles instead of one quality article are useless if the knowledge actually matters to the end user.

5 points

Nice, now I’ve read a post about an article about a paper by Goldman Sachs. See you later if I find the original paper; otherwise there’s nothing really to discuss.

13 points

Ok first of all it’s not a peer-reviewed paper, it’s a report. Words fucking matter.

6 points

There’s not even any evidence that anyone actually printed it 😤

1 point

Tell that to the lady on Tumblr

1 point

As usual, a critic of novel tech gets some things right and some things wrong, but overall it’s not bad. Trying to build a critique of LLMs where your understanding is based on a cartoon representation, skipping the technical details about what is novel about the approach, and only judging based on how commercial products are using it, can be an overly narrow lens on what the tech could be, but it isn’t too far off from what it currently is.

I suspect LLMs or something like them will be a part of something approaching AGI, and the good part is that once the tech exists you don’t have to reinvent it; you can test its boundaries and how it would integrate with other systems. But whether that is 1%, 5%, or 80% of an overall solution is unknown.
