😮 So there's really no OCR in those dictation apps 🤯? Is there an OCR API in iOS? If so, I assume it shouldn't be too hard to integrate into an app 🤔
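For what it's worth, iOS does ship an on-device text recognition API in Apple's Vision framework (`VNRecognizeTextRequest`, available since iOS 13). A minimal sketch of how an app could use it — the helper name `recognizeText` is made up, but the Vision calls are the real API:

```swift
import Vision
import UIKit

// Sketch of on-device OCR with the Vision framework.
// `recognizeText` is a hypothetical helper; VNRecognizeTextRequest
// and VNImageRequestHandler are the actual iOS/macOS APIs.
func recognizeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }

    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        // Take the top candidate string from each detected text region.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        completion(lines)
    }
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```

So an app could in principle run this over an image and feed the recognized lines to a screen reader or dictation flow; how well it works on arbitrary meme images is another question.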
That seems like a Lemmy limitation that probably needs to be worked on (i.e. prompting for alt text when uploading images, so apps can just read the alt text and folks are reminded to write it).
EDIT: It's been brought to my attention that the Lemmy server software does actually support alt text ... but I'm not sure how widely clients support it (I don't remember ever seeing a prompt for it).