A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given the fact that too many people seem to think applying an AI filter can give them access to secret visual data.

  • @[email protected]
    1 · 1 year ago

    Think about how they reconstructed what the Egyptian Pharaohs looked like, or what a kidnap victim who was abducted at age 7 would look like at age 12. Yes, it can’t make something look exactly right, but it also isn’t just randomly guessing. Of course, it can be abused by people who want juries to THINK the AI can perfectly reproduce stuff, but that is a problem with people’s knowledge of tech, not the tech itself.

    • zout
      3 · 1 year ago

      Unfortunately, the people with no knowledge of tech will then proceed to judge if someone is innocent or guilty.

  • @[email protected]
    6 · 1 year ago

    Used to be that people called it the “CSI Effect” and blamed it on television.

    Funny thing. While people worry about unjust convictions, the “AI-enhanced” video was actually offered as evidence by the defense.

  • Neato
    47 · 1 year ago

    Imagine a prosecution or law enforcement bureau that has trained an AI from scratch on specific stimuli to enhance and clarify grainy images. Even if they all were totally on the up-and-up (they aren’t, ACAB), training a generative AI or similar on pictures of guns, drugs, masks, etc for years will lead to internal bias. And since AI makers pretend you can’t decipher the logic (I’ve literally seen compositional/generative AI that shows its work), they’ll never realize what it’s actually doing.

    So then you get innocent CCTV footage this AI “clarifies” and pattern-matches every dark blob into a gun. Black iPhone? Maybe a pistol. Black umbrella folded up at a weird angle? Clearly a rifle. And so on. I’m sure everyone else can think of far more frightening ideas, like auto-completing a face based on previously searched ones or just plain-old institutional racism bias.

    • @[email protected]
      2 · 1 year ago

      just plain-old institutional racism bias

      Every crime attributed to this one black guy in our training data.

  • dual_sport_dork 🐧🗡️
    159 · 1 year ago

    No computer algorithm can accurately reconstruct data that was never there in the first place.

    Ever.

    This is an ironclad law, just like the speed of light and the acceleration of gravity. No new technology, no clever tricks, no buzzwords, no software will ever be able to do this.

    Ever.

    If the data was not there, anything created to fill it in is by its very nature not actually reality. This includes digital zoom, pixel interpolation, movement interpolation, and AI upscaling. It preemptively also includes any other future technology that aims to try the same thing, regardless of what it’s called.

    • @[email protected]
      24 · 1 year ago

      Digital zoom is just cropping and enlarging. You’re not actually changing any of the data. There may be enhancement applied to the enlarged image afterwards but that’s a separate process.
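
A minimal sketch of the crop-and-enlarge behaviour described above, in Python (the image, region, and zoom factor are all invented for illustration):

```python
# Digital zoom as described above: crop a region, then enlarge it by
# repeating pixels (nearest-neighbour). No new information is created;
# every output pixel is a copy of some input pixel.

def digital_zoom(image, x, y, w, h, factor):
    """Crop the w-by-h region at (x, y), then enlarge it `factor` times."""
    crop = [row[x:x + w] for row in image[y:y + h]]
    enlarged = []
    for row in crop:
        # Duplicate each pixel horizontally, then the row vertically.
        wide = [px for px in row for _ in range(factor)]
        enlarged.extend([wide[:] for _ in range(factor)])
    return enlarged

# A 4x4 "image" of grey levels; zoom into its 2x2 centre at 2x.
img = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
zoomed = digital_zoom(img, 1, 1, 2, 2, 2)
# zoomed is 4x4 but contains only the original values 5, 6, 9 and 10.
```
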

      • dual_sport_dork 🐧🗡️
        36 · 1 year ago

        But the fact remains that digital zoom cannot create details that were invisible in the first place due to the distance from the camera to the subject. Modern implementations of digital zoom always use some manner of interpolation algorithm, even if it’s just a simple linear blur from one pixel to the next.

        The problem is not how digital zoom works; it’s how people think it works. A lot of people (i.e. [l]users, ordinary non-technical people) still labor under the impression that digital zoom somehow brings the picture “closer” to the subject and can enlarge or reveal details that were not captured in the original photo, which is a notion we need to excise from people’s heads.
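
The “simple linear blur from one pixel to the next” mentioned above can be sketched as 1-D linear interpolation; note that every inserted sample is just an average of its neighbours, a guess at what might lie between them rather than recovered detail (values hypothetical):

```python
# 1-D linear interpolation that doubles a row of pixels. The new samples
# are invented in-between values, not data recovered from the scene.

def upsample_linear(row):
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)
        out.append((a + b) / 2)  # invented in-between value
    out.append(row[-1])
    return out

print(upsample_linear([0, 10, 20]))  # [0, 5.0, 10, 15.0, 20]
```
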

        • @[email protected]
          3 · 1 year ago

          I 100% agree on your primary point. I still want to point out that a detail in a 4K picture that takes up only a few pixels will likely be invisible to the naked eye unless you zoom. “Digital zoom” without interpolation is literally just that: enlarging the picture so that you can see details that take up too few pixels for you to discern them clearly at normal scaling.

    • @[email protected]
      8 · edited · 1 year ago

      If people don’t get the second law of thermodynamics, explaining this to them is useless. EDIT: … too.

      • dual_sport_dork 🐧🗡️
        6 · 1 year ago

        There’s a grain of truth to that. Everything you see is filtered by the limitations of your eyes and the post-processing applied by your brain which you can’t turn off. That’s why you don’t see the blind spot on your retinas where your optic nerve joins your eyeball, for instance.

        You can argue what objective reality is from within the limitations of human observation in the philosophy department, which is down the hall and to your left. That’s not what we’re talking about, here.

        From a computer science standpoint you can absolutely mathematically prove the amount of data that is captured in an image and, like I said, no matter how hard you try you cannot add any more data to it that can be actually guaranteed or proven to reflect reality by blowing it up, interpolating it, or attempting to fill in patterns you (or your computer) think are there. That’s because you cannot prove, no matter how the question or its alleged solution are rephrased, that any details your algorithm adds are actually there in the real world except by taking a higher resolution/closer/better/wider spectrum image of the subject in question to compare. And at that point it’s rendered moot anyway, because you just took a higher res/closer/better/wider/etc. picture that contains the required detail, and the original (and its interpolation) are unnecessary.

        • Richard
          2 · 1 year ago

          You cannot technically prove it, that’s true, but that does not invalidate the interpolated or extrapolated data, because you can still have a certain degree of confidence in it and judge its meaningfulness with a specific probability. And that’s enough, because you are never able to 100% prove something in the physical sciences. Never. Even our most reliable observations, strongest theories and most accurate measurements all carry a degree of uncertainty. Even the information and quantum theories you rest your argument on are unproven and unprovable by your standards, because you cannot get to 100% confidence. So, if you find that there’s enough evidence for the science you base your understanding of reality on, then by the same deductive reasoning you have to accept that a machine-learning model’s extrapolation whose probability of validity is just as great as quantum physics’ must be equally true.

      • Natanael
        1 · 1 year ago

        Entropy and information theory are very real; they’re embedded in quantum physics.

    • @[email protected]
      9 · edited · 1 year ago

      It preemptively also includes any other future technology that aims to try the same thing

      No it doesn’t. You can, for example, use compute power to correct for distortions introduced by camera lenses/sensors/etc. and drastically increase image quality. This photo of Pluto was taken from 7,800 miles away - click the link for a version of the image that hasn’t been resized/compressed by lemmy:

      The unprocessed image would look nothing at all like that. There’s a lot more data in an image than you can see with the naked eye, and algorithms can extract/highlight the data. That’s obviously not what a generative ai algorithm does, those should never be used, but there are other algorithms which are appropriate.

      The reality is every modern photo is heavily processed - look at this example by a wedding photographer, even with a professional camera and excellent lighting the raw image on the left (where all the camera processing features are disabled) looks like garbage compared to exactly the same photo with software processing:

      • dual_sport_dork 🐧🗡️
        11 · 1 year ago

        None of your examples are creating new legitimate data out of whole cloth. They’re just making details that were already there visible to the naked eye. We’re not talking about taking a giant image that’s got too many pixels to fit on your display device in one go and focusing on a specific portion of it. That’s not the same thing as attempting to interpolate missing image data. In that case the data was there to begin with; it just wasn’t visible due to limitations of the display or the viewer’s retinas.

        The original grid of pixels is all of the meaningful data that will ever be extracted from any image (or video, for that matter).

        Your wedding photographer’s picture actually throws away color data in the interest of contrast and to make it more appealing to the viewer. When you fiddle with the color channels like that and see all those troughs in the histogram that make it look like a comb? Yeah, all those gaps and spikes are original color/contrast data that is being lost. There is technically less data in the touched-up image than in the original, and if you are perverse and own a high bit depth display device (I do! I am typing this on a machine with a true 32-bit-per-pixel professional graphics workstation monitor.) you can actually stare at it and see the entirety of the detail captured in the raw image before the touchups. A viewer might not think it looks great, but how it looks is irrelevant from the standpoint of data capture.
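
A hedged illustration of that histogram comb: a toy contrast stretch on 8-bit values maps several inputs onto the same output level and leaves other levels empty, so distinct values are lost (the low/high cut-offs here are invented):

```python
# A toy contrast stretch on 8-bit grey values. The output uses fewer
# distinct levels than the input; the unused levels are the gaps
# ("teeth") visible in a combed histogram, and the merged inputs are
# data thrown away.

def stretch(v, low=50, high=200):
    """Map [low, high] onto [0, 255], clamping values outside the range."""
    v = min(max(v, low), high)
    return round((v - low) * 255 / (high - low))

outputs = [stretch(v) for v in range(256)]
used_levels = len(set(outputs))
# 256 distinct inputs collapse onto far fewer output levels.
print(used_levels)
```
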

        • Richard
          2 · 1 year ago

          They talked about algorithms for correcting lens distortions in their first example. That is absolutely a valid use case and extracts new data by making certain assumptions with certain probabilities. Your newly created law of nature is just your own imagination and is not the prevalent understanding in the scientific community. Quite the opposite: scientific practice runs exactly counter to your statements.

      • @[email protected]
        1 · 1 year ago

        offtopic: I like the picture on the left more. It feels more alive. Colder in color, but warmer in expression. Dunno how to say that. And I’ve been in a forest yesterday, so my perception is skewed.

      • Natanael
        6 · 1 year ago

        This is just smarter post processing, like better noise cancelation, error correction, interpolation, etc.

        But ML tools extrapolate rather than interpolate which adds things that weren’t there

      • @[email protected]
        13 · 1 year ago

        No computer algorithm can accurately reconstruct data that was never there in the first place.

        What you are showing is (presumably) a modified visualisation of existing data. That is: given a photo with known lighting and lens distortion, we can use math to display the data (lighting, lens distortion, and input registered by the camera) in a plethora of different ways. You can invert all the colours if you like; it’s still the same underlying data. Modifying how strongly certain hues are shown, or correcting for known distortion, are just techniques to visualise the data in a clearer way.

        “Generative AI” is essentially just non-predictive extrapolation based on some data set, which is a completely different ball game, as you’re essentially making a blind guess at what could be there, based on an existing data set.

        • Richard
          1 · 1 year ago

          making a blind guess at what could be there, based on an existing data set.

          Here’s your error. You yourself are contradicting the first part of your sentence with the last. The guess is not “blind” because the prediction is based on an existing data set . Looking at a half occluded circle with a model then reconstructing the other half is not a “blind” guess, it is a highly probable extrapolation that can be very useful, because in most situations, it will be the second half of the circle. With a certain probability, you have created new valuable data for further analysis.

          • @[email protected]
            2 · edited · 1 year ago

            Looking at a half circle and guessing that the “missing part” is a full circle is as much of a blind guess as you can get. You have exactly zero evidence that there is another half circle present. The missing part could be anything, from nothing to any shape that incorporates a half circle. And you would be guessing without any evidence whatsoever as to which of those things it is. That’s blind guessing.

            Extrapolating into regions without prior data with a non-predictive model is blind guessing. If it wasn’t, the model would be predictive, which generative AI is not, is not intended to be, and has not been claimed to be.

          • @[email protected]
            3 · 1 year ago

            But you are not reporting the underlying probability, just the guess. There is no way, then, to distinguish a bad guess from a good guess. Let’s take your example and place a fully occluded shape. Now the most probable guess could still be a full circle, but with a very low probability of being correct. Yet that guess is reported with the same confidence as your example. When you carry out this exercise for all extrapolations with full transparency of the underlying probabilities, you find yourself right back in the position the original commenter has taken. If the original data does not provide you with confidence in a particular result, the added extrapolations will not either.

            • @[email protected]
              1 · edited · 1 year ago

              And then circles get convictions. So even if the model did somehow start off completely unbiased, people are going to start feeding it data weighted towards finding more circles, since each prosecution will be used as a ‘success’ fed back into the model to ‘improve’ it.

    • @[email protected]
      20 · edited · 1 year ago

      Hold up. Digital zoom is, in all the cases I’m currently aware of, just cropping the available data. That’s not reconstruction, it’s just losing data.

      Otherwise, yep, I’m with you there.

      • Natanael
        4 · edited · 1 year ago

        There’s a specific type of digital zoom which captures multiple frames and takes advantage of motion between frames (plus inertial sensor movement data) to interpolate to get higher detail. This is rather limited because you need a lot of sharp successive frames just to get a solid 2-3x resolution with minimal extra noise.

        • ioen
          18 · 1 year ago

          Also since companies are adding AI to everything, sometimes when you think you’re just doing a digital zoom you’re actually getting AI upscaling.

          There was a court case not long ago where the prosecution wasn’t allowed to pinch-to-zoom evidence photos on an iPad for the jury, because the zoom algorithm creates new information that wasn’t there.

    • Richard
      3 · 1 year ago

      That’s wrong. With a degree of certainty, you will always be able to say that this data was likely there. And because existence is all about probabilities, you can expect specific interpolations to be an accurate reconstruction of the data. We do it all the time with resolution upscaling, for example. But of course, from a certain lack of information onward, the predictions become less and less reliable.

    • @[email protected]
      3 · 1 year ago

      In my first year of university, we had a fun project to make us get used to physics. One of the projects required filming someone throwing a ball upwards, and then using the footage to get the maximum height the ball reached, and doing some simple calculations to get the initial velocity of the ball (if I recall correctly).

      One of the groups that chose that project was having a discussion on a problem they were facing: the ball was clearly moving upwards on one frame, but on the very next frame it was already moving downwards. You couldn’t get the exact apex from any specific frame.

      So one of the guys, bless his heart, gave a suggestion: “what if we played the (already filmed) video in slow motion… And then we filmed the video… And we put that one in slow motion as well? Maybe do that a couple of times?”

      A friend of mine was in that group and he still makes fun of that moment, to this day, over 10 years later. We were studying applied physics.
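
The calculation the project called for is one line of kinematics: at the apex the vertical speed is zero, so v² = v0² − 2gh gives v0 = √(2gh) from the apex height read off the footage (the height below is a made-up stand-in). No amount of replaying the video in slow motion adds frames, so the apex can only ever be bracketed between two existing frames:

```python
# Launch speed of a ball thrown straight up, from the apex height.
import math

g = 9.81       # gravitational acceleration, m/s^2
h_max = 1.8    # apex height read off the video, metres (hypothetical)

v0 = math.sqrt(2 * g * h_max)  # launch speed, m/s
t_apex = v0 / g                # time from launch to apex, s
print(round(v0, 2), round(t_apex, 2))  # 5.94 0.61
```
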

    • @[email protected]
      58 · 1 year ago

      One little correction, digital zoom is not something that belongs on that list. It’s essentially just cropping the image. That said, “enhanced” digital zoom I agree should be on that list.

    • @[email protected]
      3 · 1 year ago

      No computer algorithm can accurately reconstruct data that was never there in the first place.

      Okay, but what if we’ve got a computer program that can just kinda insert red eyes, joints, and plumes of chum smoke on all our suspects?

    • @[email protected]
      1 · 1 year ago

      Well that’s a bit closed-minded.

      Perhaps at some point we will conquer quantum mechanics enough to be able to observe particles at every place and time they have ever and will ever exist. Do that with enough particles and you’ve got a de facto time machine, albeit a read-only one.

      • @[email protected]
        1 · 1 year ago

        Complexity relates nonlinearly to the number of moving parts.

        We might be able to spend an ungodly amount of energy to do that for one particle for an hour of its existence.

        Being able to build a computer (in a broad sense) that can emulate, in less time than a human life, processes comprising more energy than was spent on its creation - that’s something else.

      • @[email protected]
        3 · 1 year ago

        So many things we believe to be true today suggest this is not going to happen. The uncertainty principle, and the random nature of nuclear decay chief among them. The former prevents you gaining the kind of information you would need to do this, and the latter means that even if you could, it would not provide the kind of omniscience one might assume.

        • dual_sport_dork 🐧🗡️
          2 · 1 year ago

          Limits of quantum observation aside, you also could never physically store the data of the position/momentum/state of every particle in any universe within that universe, because the particles that exist in the universe are the sum total of the materials with which we could ever use to build the data storage. You’ve got yourself a chicken-and-egg scenario where the egg is 93 billion light years wide, there.

  • @[email protected]
    2 · edited · 1 year ago

    Sure, no algorithm is able to extract any more information from a single photo. But how about combining detail caught in multiple frames of video? Some phones already do this kind of thing, getting multiple samples for highly zoomed photos thanks to camera shake.

    Still, the problem remains that the results from a cherry-picked algorithm or outright hand-crafted pics may be presented.
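
A toy sketch of that multi-frame idea (all numbers invented): two 1-D frames of the same stationary scene, offset half a pixel by camera shake, interleave into twice the sampling density. The extra detail is genuinely new measurement, not a guess:

```python
# Two 1-D "frames" of the same stationary scene, shifted by half a
# pixel by camera shake. Interleaving them doubles the sampling density
# using real measurements. The scene function is hypothetical.

def scene(x):
    """Stand-in for the continuous brightness of the real scene."""
    return x * x / 10

frame_a = [scene(x) for x in range(8)]        # samples at 0, 1, 2, ...
frame_b = [scene(x + 0.5) for x in range(8)]  # shifted by half a pixel

# Interleave: the combined signal samples the scene every 0.5 pixels.
combined = [s for pair in zip(frame_a, frame_b) for s in pair]
```

Real implementations must estimate the sub-pixel shift from the frames themselves, which is exactly where moving subjects break the assumption.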

    • Natanael
      1 · edited · 1 year ago

      Depends on the implementation. If done properly, and if they don’t try to upscale and deblur too much, that kind of interpolation between multiple frames can be useful for extracting more detail. With a moving subject, though, this type of zoom can create false results, because the algorithm can’t tell the difference and will treat the subject’s movement as an optical artifact. For stationary subjects and photographers it can be useful.

  • AutoTL;DR (bot)
    4 · 1 year ago

    This is the best summary I could come up with:


    A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial.

    And that’s a good thing, given the fact that too many people seem to think applying an AI filter can give them access to secret visual data.

    Lawyers for Puloka wanted to introduce cellphone video captured by a bystander that’s been AI-enhanced, though it’s not clear what they believe could be gleaned from the altered footage.

    For example, there was a widespread conspiracy theory that Chris Rock was wearing some kind of face pad when he was slapped by Will Smith at the Academy Awards in 2022.

    Using the slider below, you can see the pixelated image that went viral before people started feeding it through AI programs and “discovered” things that simply weren’t there in the original broadcast.

    Large language models like ChatGPT have convinced otherwise intelligent people that these chatbots are capable of complex reasoning when that’s simply not what’s happening under the hood.


    The original article contains 730 words, the summary contains 166 words. Saved 77%. I’m a bot and I’m open source!

  • @[email protected]
    24 · 1 year ago

    If they were going to go that way, why not make it a fully AI court? It would save so much time and money.

    Of course it wouldn’t be very just, but then regular courts aren’t either.

  • @[email protected]
    215 · 1 year ago

    “Your honor, the evidence shows quite clearly that the defendant was holding a weapon with his third arm.”