I had the honor and the pleasure of demoing an AI-augmented user interface at a recent meetup. Since the demo is online and publicly available, I am inviting anyone who might be interested to try it out. The intended audience, I'd venture to say, is front-end developers, designers, and photographers.
As an amateur photographer myself, I frequently post photos on social media. There are lots of forums out there, and it quickly becomes apparent that different communities have different rules about what a post should look like. For example, subreddits often require that the type of lens be mentioned in the title, and posts need to be tagged differently depending on whether the image is straight out of camera (SOOC) or post-processed. Crafting a post isn't terribly difficult, but some of these details can be gleaned more reliably from the embedded EXIF data than from the shooter's memory. I also wanted to see how helpful image recognition could prove in this context.
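To make that concrete, here is a minimal sketch of pulling those details out of a file's EXIF block. It uses the exifr library purely as an assumed example of an EXIF parser; the demo's server-side extraction may well use different tooling, and the exact fields available depend on the file.

```typescript
// Sketch only: exifr is one possible EXIF parser, not necessarily what
// the demo uses. Field names follow standard EXIF tag names.
import exifr from "exifr";

async function describeShot(path: string): Promise<string> {
  const tags = await exifr.parse(path);
  // The camera and lens that a subreddit title typically asks for.
  const camera = `${tags?.Make ?? "?"} ${tags?.Model ?? ""}`.trim();
  const lens = tags?.LensModel ?? "unknown lens";
  return `${camera}, ${lens}`;
}

describeShot("./photo.jpg").then(console.log);
```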
So I set out to build a user interface that requires only one action from the user: uploading a photo. Everything else happens via asynchronous requests that integrate with the UI: EXIF data is extracted on the server, while AI-generated tags come from a third-party image recognition service. The challenge is to display the information seamlessly, as it arrives from the various sources, without startling the user.
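To illustrate the shape of that flow, here is a minimal front-end sketch. The endpoint paths (/api/exif, /api/tags) and the renderSection helper are assumptions for illustration, not the demo's actual API.

```typescript
// Single user action: pick a file. Everything else is asynchronous.
// Endpoints and helper names below are hypothetical stand-ins.

function handleUpload(file: File): void {
  const body = new FormData();
  body.append("photo", file);

  // Fire both requests independently; neither blocks the other.
  const exifRequest = fetch("/api/exif", { method: "POST", body }).then((r) => r.json());
  const tagsRequest = fetch("/api/tags", { method: "POST", body }).then((r) => r.json());

  // Each panel fills in as soon as its own data arrives, so the page
  // updates progressively instead of jumping all at once.
  exifRequest.then((exif) => renderSection("exif-panel", exif)).catch(console.error);
  tagsRequest.then((tags) => renderSection("tags-panel", tags)).catch(console.error);
}

// Hypothetical helper: replace a panel's "loading…" placeholder.
function renderSection(id: string, data: unknown): void {
  const el = document.getElementById(id);
  if (el) el.textContent = JSON.stringify(data, null, 2);
}
```

Because each panel updates on its own schedule, a slow tagging service never blocks the EXIF details from appearing.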
Here is the demo. Special thanks to Scicloj for hosting me with grace and kindness. If you're not familiar with that organization yet, check it out.
Have you tried uploading a photo from your own devices/cameras? What are your thoughts? Is there something you would have done/seen differently? Please let me know.
Exactly, it does what you describe. And if it detects that the photo was shot on a Fujifilm camera, it will offer to post to the r/fujifilm sub on behalf of the user (via OAuth). This is how it's immediately useful to the members of that sub (of which I am a member).
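For the curious, the gating logic might look something like the sketch below. The EXIF field names are standard, but the offerRedditPost helper and the exact OAuth handshake with Reddit are placeholders, not the demo's real code.

```typescript
// Assumption: we already have the parsed EXIF summary from the server.
interface ExifSummary {
  Make?: string;   // e.g. "FUJIFILM"
  Model?: string;  // e.g. "X-T5"
}

function maybeOfferFujifilmPost(exif: ExifSummary): void {
  const make = (exif.Make ?? "").trim().toUpperCase();
  if (make === "FUJIFILM") {
    // The offer only appears when it is actually relevant to the photo.
    offerRedditPost("r/fujifilm");
  }
}

// Hypothetical: starts Reddit's OAuth flow and, once the user grants
// permission, submits the post on their behalf.
declare function offerRedditPost(subreddit: string): void;
```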
I am gathering feedback and feature requests. Always curious to hear from users.