
Court Bans Use of 'AI-Enhanced' Video Evidence Because That's Not How AI Works

gizmodo.com

This AI hype cycle has dramatically distorted society's views of what's possible with image upscalers.


A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given the fact that too many people seem to think applying an AI filter can give them access to secret visual data.


236 comments
  • This actually opens an interesting debate.

    Every photo you take with your phone is post-processed. Saturation can be boosted, light levels adjusted, noise removed, night mode applied, all without you being privy to what's happening.

    Typically people are okay with it because it makes for a better photo - but is it a true representation of the reality it tried to capture? Where is the line in the definition of an AI-enhanced photo/video?

    We can currently make the judgement call that a phone's camera is still a fair representation of the truth, but what about when the 4k AI-Powered Night Sight Camera does the same?

    My post is only tangentially related to the original article, but I'm still curious what the common consensus is.
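The silent adjustments described above can be sketched in a few lines. This is a toy illustration of one such step, a saturation boost; the function name and numbers are my own, not any phone vendor's actual pipeline:

```python
# Toy sketch of one silent post-processing step: a saturation boost.
# The function and values are illustrative, not a real vendor pipeline.
def boost_saturation(rgb, factor):
    """Push each channel away from the pixel's gray average,
    clamping the result to the valid 0..1 range."""
    gray = sum(rgb) / 3.0
    return [min(1.0, max(0.0, gray + (c - gray) * factor)) for c in rgb]

captured = [0.5, 0.4, 0.3]                # muted pixel as the sensor saw it
stored = boost_saturation(captured, 2.0)  # punchier pixel the user sees
```

Even this trivial transform means the saved pixel is no longer the measured one; a real pipeline chains dozens of such steps before the file is written.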

    • Every photo you take with your phone is post processed.

      Years ago, I remember looking at satellite photos of some city, and there was a rainbow-colored airplane trail on one of the photos. It was explained that a lot of satellites just use a black-and-white imaging sensor and take 3 photos while rotating a red/green/blue filter over that sensor, then combine the images digitally into RGB data for a color image. For most things, the process worked pretty seamlessly. But for rapidly moving objects, like white airplanes, the delay between the captures of the red, green, and blue channels created artifacts in the image that weren't present in the actual truth of the reality being recorded. Is that specific satellite method all that different from how modern camera sensors process color, through tiny physical RGB filters over specific subpixels?
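The sequential-filter capture described above can be sketched concretely. This is an invented toy scene (a 1-D image where a bright "airplane" moves one pixel between exposures), not real satellite data:

```python
import numpy as np

# Toy sketch of sequential-filter color capture: three grayscale
# exposures taken through red, green, and blue filters, then stacked
# into one RGB image. Scene and timing are an invented example.
WIDTH = 8

def exposure(plane_x):
    """One grayscale exposure: a bright white airplane on a dark scene."""
    row = np.zeros(WIDTH)
    row[plane_x] = 1.0
    return row

r = exposure(2)  # red-filtered exposure, airplane at x=2
g = exposure(3)  # green-filtered exposure: the airplane has moved
b = exposure(4)  # blue-filtered exposure: it has moved again

rgb = np.stack([r, g, b], axis=-1)  # combined color image, shape (8, 3)

# A white object should have R=G=B at one pixel; the misaligned channels
# instead leave a pure-red, pure-green, and pure-blue smear - the
# rainbow trail on the photo.
```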

      Even with conventional photography, even analog film, there are image artifacts that derive from how the photo is taken, rather than from what is true of the subject of the photograph. Bokeh/depth of field, motion blur, rolling shutter, and physical filters change the resulting image in a way that is caused by the camera, not the appearance of the subject. Sometimes it makes for interesting artistic effects. But it isn't truth in itself; it's evidence of some truth, which needs to be filtered through an understanding of how the image was captured.

      Like the Mitch Hedberg joke:

      I think Bigfoot is blurry, that's the problem. It's not the photographer's fault. Bigfoot is blurry, and that's extra scary to me.

      So yeah, at a certain point, for evidentiary proof in court, someone will need to prove some kind of chain of custody showing that the image presented in court derives from a reliable and truthful method of capturing what actually happened at a particular time and place. For the most part, it's simple today: I took a picture with a normal camera, and I can testify that it came out of the camera like this, without any further editing. As the chain of image creation starts to include more processing between the photons on the sensor and the digital file being displayed on a screen or printed onto paper, we'll need to remain mindful of the areas where that can be tripped up.

      • The crazy part is that your brain is doing similar processing all the time too. Ever heard of the blindspot? Your brain has literally zero data there but uses "content-aware fill" to hide it from you. Or the fact, that your eyes are constantly scanning across objects and your brain is merging them into a panorama on the fly because only a small part of your field of vision has high enough fidelity. It will also create fake "frames" (look up stopped-clock illusion) for the time your eyes are moving where you should see a blur instead. There's more stuff like this, a lot of it manifests itself in various optical illusions. So not even our own eyes capture the "truth". And then of course the (in)accuracy of memory when trying to recall what we've seen, that's an entirely different can of worms.

      • Fantastic expansion of my thought. This is something that isn't going to be answered with an exact scientific value but will have to be decided based on our human experiences with the tech. Interesting times ahead.

    • We can currently make the judgement call that a phones camera is still a fair representation of the truth

      No, you can't. Samsung's AI is out there now, and it absolutely will add data to images and video in order to make them look better. Not just adjust an image but actually add data... on its own. If you take an off-angle photo and then tell it to straighten it, it will take your photo, re-orient it, and then "make up" what should have been in the corner. It will do the same thing for video. With video it also has the ability to flat-out add frames in order to do the slow-motion effect or to smooth out playback if the recording was janky.

      Samsung has it out there now, so Apple and the rest of the horde will surely be quick in rolling it out.
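The frame-adding feature described above can be illustrated with the simplest possible form of frame interpolation, a linear blend of two real frames. Real slow-motion and smoothing features use motion estimation or learned models; this deliberately naive sketch just shows that the inserted frame is synthesized, not recorded:

```python
import numpy as np

# Naive frame interpolation: synthesize an in-between frame by blending
# two captured frames. The frames here are tiny made-up 2x2 examples.
frame_a = np.array([[0.0, 0.2], [0.4, 0.6]])  # captured at t=0
frame_b = np.array([[0.2, 0.4], [0.6, 0.8]])  # captured at t=1

def interpolate(a, b, t):
    """Fabricate a frame at time t in (0, 1) between two real frames."""
    return (1 - t) * a + t * b

mid = interpolate(frame_a, frame_b, 0.5)  # pixel data the camera never saw
```

Every pixel of `mid` is a computed guess; as evidence, it documents the interpolation algorithm at least as much as the scene.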

    • Computational photography in general gets tricky because it relies on your answer to the question "Is a photograph supposed to reflect reality, or should it reflect human perception?"

      We like to think those are the same, but they're not. Your brain only has a loose interest in reality and is much more focused on utility. Deleting the irrelevant, making important things literally bigger, enhancing contrast and color to make details stand out more.
      You "see" a reconstruction of reality continuously updated by your eyes, which work fundamentally differently than a camera.

      Applying different exposure settings to different parts of an image, or reconstructing a video scene based on optic data captured over the entire video, doesn't reproduce what the sensor captured, but it can come much closer to representing what the human holding the camera perceived.
      Low-light photography is a great illustration of this. When we see a person walk from light into dark, our brains shamelessly remember what color their shirt was and that grass is green, and update our perception accordingly. They also use a much longer "exposure" time to gather light data, maintaining color perception in low-light conditions even when we don't have enough actual light to make those determinations without clues.

      I think most people want a snapshot of what they perceived at the moment.
      I like the trend of the camera saving the processed image while also storing the "plain" image. There's also capturing the raw image data, which is basically a dump of the camera's optic sensor data. It's what the automatic post-processing is tweaking, and what human photographers use to correct light balance and other settings.

    • This is what I was wondering about as I read the article. At what point does the post processing on the device become too much?

    • I was wondering that exact same thing. If I take a portrait photo on my Android phone, it instantly applies a ton of filters. If I had taken a picture of two people, and then one of those people murders the other shortly afterwards, could my picture be used as evidence to show they were together just before the murder? Or would it be inadmissible because it was an AI-doctored photo?
