OpenAI saved its biggest announcement for the last day of its 12-day “shipmas” event. On Friday, the company unveiled o3, the successor to the o1 “reasoning” model it released earlier in the year. To be more precise, o3 is a model family, as was the case with o1: there’s o3 and o3-mini, a smaller, distilled model fine-tuned for particular tasks. OpenAI makes the remarkable claim that o3, at least in certain conditions, approaches AGI, though with significant caveats. More on that below.
Large gains came from scaling hardware and data. The training algorithms didn’t change much; transformers simply allowed for much higher parallelization. There are no signs of the process becoming self-improving. Agentic performance is still poor, as you can see with Claude (about 15% of tasks completed successfully).
What happens in the brain is a big mystery, and thus it cannot be mimicked. Biological neural networks do not exist, because the synaptic cleft is an artifact. The living neurons are round, and the axons are the result of dehydration with ethanol or xylene.
Most scientists would not believe that. But if you are right in some way, then we are very far from what I said here before. Clearly it’s not me who will convince you otherwise. I wish you the best, take care 😌