Architeuthis
It’s not always easy to distinguish between existentialism and a bad mood.
To have a dead simple UI where you, a person with no technical expertise, can ask in plain language for the data you want, presented the way you want it, along with some basic analysis you can tell it to dress up so it sounds important. Then you tell it to turn that into an email in the style of your previous emails, send it, and take a 50min coffee break. All this allegedly with no overhead besides paying a subscription and telling your IT people to point the thing at the thing.
I mean, it would be quite something if transformers could do all that, instead of raising global temperatures to synthesize convincing-looking but highly suspect messaging at best, while being prone to delirium at worst.
I’m not spending the additional 34min apparently required to find out what in the world they think neural network training actually is, such that it could ever possibly involve strategy on the part of the network, but I’m willing to bet it’s extremely dumb.
I’m almost certain I’ve seen EY catch shit on twitter (from actual ml researchers no less) for insinuating something very similar.
It hasn’t worked ‘well’ for computers since like the Pentium, what are you talking about?
The premise was pretty dumb too, as in: if you notice that a (very reductive) technological metric has been rising more or less exponentially, you should probably assume something along the lines of “we’re still at the low-hanging-fruit stage of R&D and it’ll level off as the field matures”, instead of proudly proclaiming that surely it’ll approach infinity and break reality.
There’s nothing smart or insightful about seeing a line in a graph trending upwards and assuming it’s gonna keep doing that no matter what. Not to mention that this type of decontextualized wishful thinking is emblematic of the TREACLES mindset mentioned in the community’s blurb, which you should check out.
So yeah, he thought up the Singularity, which is little more than a metaphysical excuse to ignore regulations and negative externalities, because with the tech rupture around the corner any catastrophic mess we make getting there won’t matter. See also: the whole current AI debacle.
IKR like good job making @dgerard look like King Mob from the Invisibles in your header image.
If the article was about me I’d be making Colin Robinson feeding noises all the way through.
edit: Obligatory only 1 hour 43 minutes of reading to go then
The interminable length has got to have started out as a gullibility filter before ending up as an unspoken prerequisite for being taken seriously in those circles. Isn’t HPMOR like a million billion chapters as well?
Siskind for sure saves his wildest quiet-part-out-loud takes for the last possible minute of his posts, when he does decide to surface them.
Ah yes, Alexander’s unnumbered hordes, that endless torrent of humanity that is all but certain to have made a lasting impact on the sparsely populated subcontinent’s collective DNA.
edit: Also, the absolute brain on someone who would think that before entertaining a random recent western ancestor like a grandfather or whateverthefuckjesus.
I am overall very uninformed about the Chinese technological day-to-day, but here are two interesting facts:
They set some pretty draconian rules early on about where the buck stops if your LLM starts spewing false information or (god forbid) goes against party orthodoxy, so I’m assuming that if independent research is happening, it doesn’t show up much in the form of public endpoints that anyone might use.
A few weeks ago I saw a report about Chinese medical researchers trying to use AI agents(?) to set up a virtual hospital, in order to maybe eventually have some sort of virtual patient entity that a medical student could work with somehow, and look how many thousands of virtual patients our handful of virtual doctors are healing daily, isn’t it awesome folks. Other than the rampant startupiness of it all, what struck me was that they said they had ChatGPT-3.5 set up as the doctor/patient/nurse agents, i.e. they used the free version.
So, who knows? If they are all-in on AGI behind the scenes, they don’t seem to be making a big fuss about it.