There's a model with a more expensive dock and one without; the one without worked fine for me. But it had to be the Box 3, not the Box 2. It worked pretty well, and you could create custom images to indicate whether it was listening, thinking, etc.
The box isn't powerful enough to run an LLM itself; it's really just an audio conduit. You can either use their cloud integration with ChatGPT or, now, Anthropic Claude. But if you have a powerful Home Assistant server, say an Nvidia Jetson or a PC with a beefy Nvidia GPU, you can run local models like Llama and get better privacy.
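For anyone curious what the local route actually involves: the heavy lifting is just the server answering a text prompt with a local model. Here's a minimal sketch assuming the server runs Ollama, with the URL, model name, and prompt purely illustrative:

```python
# Minimal sketch: ask a local Llama model a question via Ollama's HTTP API.
# Assumes an Ollama server is already running on the Home Assistant machine;
# the host, port, model name, and prompt below are illustrative placeholders.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    # The kind of question the voice box would relay as text
    print(ask_local_llm("Are the kitchen lights scheduled to turn off tonight?"))
```

Nothing ever leaves the LAN in that setup, which is the whole privacy argument.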
This is from earlier this year. I imagine they've advanced more since then.
I'm happy to see untracked energy devices covered in the energy graphs. Before this, I'd been using a Grafana dashboard to display more detailed energy visualisations, including consumption from untracked devices.
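If anyone wants to replicate that Grafana calculation, the untracked figure is just total consumption minus the sum of individually metered devices. A rough sketch against Home Assistant's REST API, where the host, token, and sensor entity IDs are placeholders you'd swap for your own:

```python
# Rough sketch: compute "untracked" energy as whole-home consumption minus
# the sum of individually tracked devices, via Home Assistant's REST API.
# The host, token, and entity IDs are placeholders, not real values.
import requests

HA_URL = "http://homeassistant.local:8123"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

TOTAL_SENSOR = "sensor.grid_consumption_kwh"   # whole-home meter
TRACKED_SENSORS = [                            # per-device meters
    "sensor.dishwasher_energy_kwh",
    "sensor.ev_charger_energy_kwh",
]

def read_kwh(entity_id: str) -> float:
    resp = requests.get(f"{HA_URL}/api/states/{entity_id}", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return float(resp.json()["state"])

total = read_kwh(TOTAL_SENSOR)
tracked = sum(read_kwh(e) for e in TRACKED_SENSORS)
print(f"Untracked consumption: {total - tracked:.2f} kWh")
```

Having that number built into the energy dashboard means one less panel to maintain by hand.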