it's currently opt-in rather than opt-out, runs fully on-device, and won't work on devices with weak NPUs (or on any that lack one entirely)
unless that changes in the future, it's not that bad at the moment tbh
Of course it's on-device. Microsoft is doing all the processing on people's PCs rather than on their own servers, where they'd have to pay for that computation.
It must communicate with Microsoft in some way, if only because the "AI" must not "hallucinate" by suggesting the user jump off a bridge or add glue to their pizza...
"On-device" has to be a half-truth at best. I'm having a hard time believing that the NPUs on these new ARM chips are powerful enough for it to be fully on-device. Even more so with the "approved" x86 chips. There has to be some data sharing between the client and server, similar to how Rabbit does their shit.
Look up TPUs, like the Coral Edge TPU. Extremely efficient at machine learning (and only machine learning), and cheap. If NPUs work anything like a TPU, then it absolutely can do local "AI." Then once the heavy lifting is done locally, I'd imagine all that data gets uploaded.
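For what it's worth, the reason TPU/NPU-style chips are so efficient is that they mostly do quantized int8 multiply-accumulate instead of full float math. Here's a rough NumPy sketch of that idea (toy numbers, not any real accelerator's API; the layer sizes and symmetric-quantization scheme are just illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny layer: float32 weights and input, like you'd train with.
W = rng.standard_normal((4, 8)).astype(np.float32)
x = rng.standard_normal(8).astype(np.float32)

# Symmetric int8 quantization: map each tensor's float range onto [-127, 127].
w_scale = float(np.abs(W).max()) / 127.0
x_scale = float(np.abs(x).max()) / 127.0
W_q = np.round(W / w_scale).astype(np.int8)
x_q = np.round(x / x_scale).astype(np.int8)

# The core op an NPU/TPU accelerates: cheap int8 multiplies
# accumulated into int32 (int8 @ int8 alone would overflow).
acc = W_q.astype(np.int32) @ x_q.astype(np.int32)

# Dequantize back to float using the combined scale factor.
y_quant = acc.astype(np.float32) * (w_scale * x_scale)
y_float = W @ x

# The quantized result tracks the float result with small error.
print(np.max(np.abs(y_quant - y_float)))
```

Point being: 8-bit integer MACs take way less silicon and power than float32, which is how a cheap low-wattage chip can chew through model inference locally.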