Betas of iOS 18.1 et al. with Apple Intelligence could land as soon as this week.
My guess is they thought they were 99% done but that the 1% (“just gotta deal with these edge case hallucinations”) ended up requiring a lot more work (maybe even an entirely new sub-system or a wholly different approach) than anticipated.
I know I suggested above that the issue might be hallucinations, but what I’m genuinely curious about is how they plan to achieve acceptable performance without losing half or more of your usable RAM to the model.
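For a rough sense of scale, here’s a back-of-envelope sketch (the parameter count and quantization level are my assumptions, not confirmed Apple figures): a ~3B-parameter model quantized to ~4 bits per weight needs roughly 1.4 GiB for the weights alone, before you add the KV cache and activations.

```swift
import Foundation

// Back-of-envelope memory footprint for an on-device LLM.
// The parameter count and bits-per-weight below are assumptions
// for illustration, not confirmed Apple figures.
let parameters = 3.0e9      // assumed ~3B-parameter local model
let bitsPerWeight = 4.0     // assumed low-bit quantization

let weightBytes = parameters * bitsPerWeight / 8.0
let weightGiB = weightBytes / 1_073_741_824.0   // bytes -> GiB

print(String(format: "Weights alone: ~%.2f GiB", weightGiB))
// Prints "Weights alone: ~1.40 GiB" -- a big slice of an 8 GB
// phone's usable RAM once the KV cache, activations, and the
// rest of the OS are stacked on top.
```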
Will it run locally? I just assumed it would run on Apple’s servers in some way.
They framed it as most of the stuff running on device, while in some cases, I suppose image generation, it will use the “very secure” Apple servers. Additionally, Apple Intelligence can decide that it would make sense to ask ChatGPT on OpenAI’s servers, and it gives you the option to do so.
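To illustrate that three-tier framing as described above (all of the types and decision rules here are hypothetical, just a sketch, not Apple’s actual API):

```swift
import Foundation

// Hypothetical sketch of the three-tier routing described above:
// on-device first, Apple's "very secure" servers for heavier work,
// and an opt-in handoff to ChatGPT. Names and rules are invented
// for illustration only.
enum ExecutionTarget {
    case onDevice                 // default: local model
    case privateCloudCompute      // Apple's servers
    case chatGPT                  // third party, requires user consent
}

struct Request {
    let needsWorldKnowledge: Bool // beyond what the local model knows
    let isHeavyGeneration: Bool   // e.g. image generation, per the framing above
}

func route(_ request: Request, userApprovedChatGPT: () -> Bool) -> ExecutionTarget {
    if request.needsWorldKnowledge {
        // The system can suggest ChatGPT, but only proceeds if the
        // user explicitly agrees; otherwise it stays on device.
        return userApprovedChatGPT() ? .chatGPT : .onDevice
    }
    if request.isHeavyGeneration {
        return .privateCloudCompute
    }
    return .onDevice
}

// Example: a heavy generation request stays within Apple's servers.
let target = route(Request(needsWorldKnowledge: false, isHeavyGeneration: true),
                   userApprovedChatGPT: { false })
print(target)  // privateCloudCompute
```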
I thought they had confirmed that at least some of the image generation happens locally. I’m in the Apple Intelligence beta now; I went offline, played around with Siri, and a lot of stuff worked. It isn’t really doing much new right now, but the speed, the quality of understanding, and the handling of it when you stumble over words are way better.