AI-augmented user interface

I had the honor and the pleasure of demoing an AI-augmented user interface at a recent meetup. Since the demo is online and publicly available, I am inviting anyone who might be interested to try it out. The audience for this demo is, I'd venture to say, front-end developers, designers, and photographers.

As an amateur photographer myself, I frequently post photos on social media. There are lots of forums out there, and it quickly becomes apparent that different communities have different rules as to what a post should look like. For example, subreddits often require that the type of lens be mentioned in the title. Posts need to be tagged differently depending on whether the image is straight out of camera (SOOC) or post-processed. Crafting a post isn't terribly difficult, but some details can be gleaned from the photo's embedded EXIF data more reliably than from the shooter's memory. I also wanted to see how helpful image recognition could prove in this context.
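To make that concrete, here is a minimal sketch of what the extraction step could look like. It uses the exifr library, which is my choice for illustration and not necessarily what the demo uses; the tag names are standard EXIF fields, though any given camera may omit some of them.

```typescript
// Minimal sketch: summarize a shot from its EXIF data using the
// exifr library (npm install exifr). Illustrative only; the demo's
// actual stack may differ.
import exifr from 'exifr';

async function describeShot(file: File): Promise<string> {
  // exifr.parse accepts a File/Blob in the browser (or a Buffer/path
  // in Node) and returns the requested tags as a plain object.
  const tags = await exifr.parse(file, ['LensModel', 'FNumber', 'ExposureTime', 'ISO']);
  if (!tags) return 'No EXIF data found';

  // ExposureTime arrives as seconds; show fast shutters as fractions.
  const shutter =
    tags.ExposureTime == null
      ? null
      : tags.ExposureTime >= 1
        ? `${tags.ExposureTime}s`
        : `1/${Math.round(1 / tags.ExposureTime)}s`;

  // e.g. "RF24-105mm F4 L IS USM, f/4, 1/250s, ISO 200"
  return [
    tags.LensModel ?? 'unknown lens',
    tags.FNumber ? `f/${tags.FNumber}` : null,
    shutter,
    tags.ISO ? `ISO ${tags.ISO}` : null,
  ]
    .filter(Boolean)
    .join(', ');
}
```

A string like that covers the lens-in-the-title rule without relying on anyone's memory of the shoot.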

So I set out to build a user interface that requires only one action from the user: uploading a photo. All the rest is done via asynchronous requests that integrate with the UI. EXIF data is extracted on the server, and AI-generated tags come from a third-party image recognition service. The challenge is to display the information seamlessly, as it comes in from the various sources, without startling the user.
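The gist of that flow, in a rough sketch: fire both requests at once and let each part of the page fill in as its own data arrives. The endpoint paths and element ids below are hypothetical placeholders, not the demo's actual API.

```typescript
// Sketch of the one-action flow: upload once, then let two
// independent requests populate the page as each one completes.
// Endpoints and element ids are hypothetical.

async function handleUpload(file: File): Promise<void> {
  const body = new FormData();
  body.append('photo', file);

  // Both requests start immediately; neither blocks the other.
  const exifRequest = fetch('/api/exif', { method: 'POST', body })
    .then((r) => r.json())
    .then((exif) => render('exif-panel', exif));

  const tagsRequest = fetch('/api/tags', { method: 'POST', body })
    .then((r) => r.json())
    .then((tags) => render('tags-panel', tags));

  // Each panel updates the moment its own data arrives, so the UI
  // fills in gradually instead of waiting for the slowest source.
  await Promise.allSettled([exifRequest, tagsRequest]);
}

function render(id: string, data: unknown): void {
  const el = document.getElementById(id);
  if (el) el.textContent = JSON.stringify(data, null, 2);
}
```

Using Promise.allSettled rather than Promise.all lets each panel succeed or fail on its own, so a slow or unavailable tagging service never holds up the EXIF display.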

Here is the demo. Special thanks to Scicloj for hosting me with grace and kindness. If you're not familiar with that organization yet, check it out.

Have you tried uploading a photo from your own devices or cameras? What are your thoughts? Is there anything you would have done differently? Please let me know.
