I mean, realistically, Whisper (the audio-to-text AI) linked with ChatGPT can subtitle anything in real time, translated into any language, at very high quality...
It doesn't need to be real time, since you can pre-generate an SRT file with timecodes beforehand using something like Bazarr. Whisper also runs faster than real time with most model sizes (up to 32x real time), so it can really be worth it to auto-generate subtitles as a one-time job for any media in your collection that's missing them.
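As a sketch of what that one-time job involves: Whisper's `transcribe()` output includes a list of segments with start/end times in seconds plus the text, and SRT is just those segments rendered as numbered cues with `HH:MM:SS,mmm` timecodes. The helper functions below are my own illustration, not part of Whisper (the `whisper` CLI can also write SRT directly with `--output_format srt`):

```python
# Sketch: turn Whisper-style segments ({'start', 'end', 'text'} dicts,
# times in seconds) into SRT cue blocks. Helper names are hypothetical.

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timecode: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    hours, ms = divmod(ms, 3_600_000)
    minutes, ms = divmod(ms, 60_000)
    secs, ms = divmod(ms, 1_000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Render an iterable of segment dicts as the text of an .srt file."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n"
            f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)
```

Batch the result over your library once and write each string next to the media file as `movie.srt`, and players will pick it up automatically.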
It's an interesting idea for patching the holes when absolutely no SRT files are available.
But why not also have an open repository where people could share the SRT files they already have?
We could call it libre-subs or something like that.
Ah, I didn't realise the USB one cost that much more. I'm not sure most people would prefer the USB version though. It's convenient to move around and you can use it with mini PCs, but cooling isn't as good as something that sits in a case with good airflow (so it's more likely to thermally throttle while in use), and having dedicated PCIe lanes, as you'd get with the M.2 version, is far more efficient than a shared bus like USB. Google have always advertised the USB version for "prototyping" while the M.2 versions are for "production".
For $40, you can get an M.2 version that has two Coral TPUs on a single board. https://coral.ai/products/m2-accelerator-dual-edgetpu. I've got this one with a PCIe adapter, but currently only use one of the TPUs.