Peanut
Big fan of AI stuff. Not a fan of this. This definitely won’t have issues with minority populations and neurodivergent people falling outside of distribution and causing false positives that enable more harassment of people who already get unfairly harassed.
Let this die with the mind-reading tactics it spawned from.
I see intelligence as filling areas of concept space within an eco-niche in a way that proves functional for actions within that space. I think we are discovering, more and more, that “nature” has little commitment and is just optimizing preparedness for the expected levels of entropy within the functional eco-niche.
Most people haven’t even started paying attention to distributed systems building shared enactive models, but those systems are already capable of things that should be considered groundbreaking given the time and funding behind their development.
That being said, localized narrow generative models are just building large individual models of predictive processing that don’t, by default, actively update their information.
People who attack AI for just being prediction machines really need to look into predictive processing, or learn how much we organics just guess and confabulate on top of vestigial social priors.
But no, corpos are using it, so computer bad, human good, even though the main issue here is the humans who have unlimited power and are encouraged into bad actions by flawed social posturing systems and the conflation of wealth with competence.
Possibly one of my favourites to date. Absolutely love it.
While I agree about the conflict of interest, I would largely say the same thing even without one. However, I see intelligence as a modular and many-dimensional concept. If it scales as anticipated, it will still need to be organized into different forms of informational or computational flow for anything resembling an actively intelligent system.
On that note, the recent developments in active inference, like RxInfer, are astonishing given the current level of attention being paid. Seeing how LLMs are being treated, I’m almost glad it’s not being absorbed into the hype-and-hate cycle.
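To make the active inference point a bit more concrete: the core of these frameworks is continuously updating beliefs about hidden causes as observations come in. Here’s a toy Bayesian update in plain Python as a rough illustration only; RxInfer itself is a Julia package that does this kind of inference via message passing on factor graphs, and the states and numbers below are made up.

```python
# Toy discrete Bayesian belief update: the kind of posterior updating that
# active inference frameworks automate (illustration only, not RxInfer's API).

def update_belief(prior, likelihood, observation):
    """prior[state] = P(state); likelihood[state][obs] = P(obs | state)."""
    unnormalized = {s: prior[s] * likelihood[s][observation] for s in prior}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

# Two made-up hidden states and two possible observations.
prior = {"quiet": 0.5, "busy": 0.5}
likelihood = {
    "quiet": {"noise": 0.1, "silence": 0.9},
    "busy":  {"noise": 0.8, "silence": 0.2},
}

print(update_belief(prior, likelihood, "noise"))  # belief shifts toward "busy"
```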
Perhaps instead we could just restructure our epistemically confabulated reality in a way that doesn’t inevitably lead to unnecessary conflict due to diverging models that haven’t grown the priors necessary to peacefully allow comprehension and the ability to exist simultaneously.
breath
We are finally coming to comprehend how our brains work, and how intelligent systems generally work at any scale, in any ecosystem. Subconsciously enacted social systems included.
We’re seeing developments that make me extremely optimistic, even if everything else is currently on fire. We just need a few more years without self-focused turds blowing up the world.
Funny, I don’t see much talk in this thread about Francois Chollet’s Abstraction and Reasoning Corpus, which is emphasised in the article. It’s a really neat take on how to gauge the ability to think.
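For anyone who hasn’t looked at ARC: each task is just a handful of input/output grid pairs, and the solver has to infer the transformation from a few examples and apply it to a held-out grid. A tiny made-up example in Python, roughly mirroring the task format (the real corpus is distributed as JSON with the same train/test structure):

```python
# Made-up ARC-style task: infer the rule from the train pairs, apply it to test.
# Real tasks use grids of small integers (colours) in the same JSON layout.
task = {
    "train": [
        {"input": [[1, 0], [2, 0]], "output": [[0, 1], [0, 2]]},
        {"input": [[3, 3, 0]],      "output": [[0, 3, 3]]},
    ],
    "test": [{"input": [[0, 5], [0, 6]]}],
}

def apply_rule(grid):
    # the rule a solver would have to infer here: mirror each row left-to-right
    return [row[::-1] for row in grid]

for pair in task["train"]:
    assert apply_rule(pair["input"]) == pair["output"]

print(apply_rule(task["test"][0]["input"]))  # -> [[5, 0], [6, 0]]
```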
A couple of things that stick out to me about GPT-4 and the like are the lack of understanding in realms that require multimodal interpretation, the inability to break down word and letter relationships due to tokenization, the lack of true emotional ability, and the similarity to the “leap before you look” aspect of our own subconscious ability to pull words out of our own ass. Imagine if you could only say the first thing that comes to mind, without ever thinking or correcting before letting the words out.
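The tokenization point is easy to see for yourself. A quick sketch, assuming the tiktoken package is installed (the exact split depends on which encoding you pick):

```python
# Show the sub-word pieces a GPT-style model actually receives.
# The model sees integer token IDs, not characters, which is why
# letter-level questions are harder for it than they look.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
print(tokens)                              # a short list of integer IDs
print([enc.decode([t]) for t in tokens])   # the sub-word chunks, not letters
```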
I’m curious about what things will look like after solving those first couple problems, but there’s even more to figure out after that.
Going by recent work I enjoy from Earl K. Miller, we seem to have oscillatory cycles of thought that are directed by waves in a higher-dimensional representational space. This might explain how we predict and react, as well as how we hold a thought to bridge certain concepts together.
I wonder whether this aspect could be properly reconstructed in a model, or approximated with functions built around concepts like the “tree of thoughts” paper.
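For reference, the “tree of thoughts” idea is basically a search loop: propose several candidate next steps, score them, keep the best few, and recurse. A toy sketch of that control flow, with placeholder propose/score functions (in the paper both are LLM calls; the dummies here are just to show the shape):

```python
# Toy tree-of-thoughts-style beam search over partial "thoughts".
# propose() and score() are hypothetical stand-ins, not the paper's prompts.

def propose(state):
    # candidate next steps; a real system would ask an LLM for these
    return [state + [step] for step in ("expand", "refine", "backtrack")]

def score(state):
    # heuristic evaluator; a real system would also use an LLM here
    return len(set(state)) - 0.1 * len(state)

def tree_of_thoughts(root, depth=3, beam=2):
    frontier = [root]
    for _ in range(depth):
        candidates = [nxt for state in frontier for nxt in propose(state)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]

print(tree_of_thoughts([]))  # one plausible chain of steps after three rounds
```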
It’s really interesting comparing organic and artificial methods and abilities to process or create information.