Photoshop Engineer Unblurs Motion & Restores Focus

Here is a quickie, Wild Ducks. File this one under “Wow!”

This demonstration by an Adobe Photoshop developer forces me to rethink my understanding of focus and information recovery.

Deconvolution restores information, but only if that information was captured in the original image and obscured by a reversible, non-lossy process. The filter demonstrates that motion blur meets these criteria: it is not evidence of missing information!

Until now, I thought that motion blur (example #1 in the video) and focus (example #2) were evidence of lost information, and therefore could not be overcome. That is, if a camera is out of focus or moving in relation to its subject, it is part way along the path to a complete loss of picture information. (Consider the extreme case: a camera that is totally unfocused, or moving in a complete circle with the shutter open, exposes the film to unfocused light and records no useful information.) But this video demonstrates that there exist algorithms that can make reasonable measurements and assumptions about the original scene and then recover sharpness and apparently lost information.
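To make the “reversible process” idea concrete: motion blur can be modeled as convolution of the sharp image with a small kernel (the point-spread function, or PSF) that traces the camera’s motion. If the PSF can be estimated, a regularized inverse filter undoes much of the damage. Here is a minimal sketch in Python/NumPy (my own illustration of the principle, not Adobe’s algorithm), using a simple horizontal-motion PSF and a Wiener-style inversion:

    import numpy as np

    def motion_psf(length=15, size=25):
        """A horizontal motion-blur kernel: the camera smears each point
        across `length` pixels. Real hand-shake paths are curved; this is
        the simplest possible case."""
        k = np.zeros((size, size))
        k[size // 2, (size - length) // 2 : (size + length) // 2] = 1.0
        return k / k.sum()

    def wiener_deblur(blurred, psf, K=1e-3):
        """Regularized frequency-domain inversion. K guards the division
        at frequencies the blur nearly zeroed out, where scene detail
        really is attenuated."""
        H = np.fft.fft2(psf, s=blurred.shape)
        G = np.fft.fft2(blurred)
        F_hat = np.conj(H) * G / (np.abs(H) ** 2 + K)
        return np.real(np.fft.ifft2(F_hat))

    # Demo on synthetic data: blur, then invert with the same (known) PSF.
    rng = np.random.default_rng(0)
    sharp = rng.random((128, 128))                  # stand-in for a sharp photo
    H = np.fft.fft2(motion_psf(), s=sharp.shape)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * H))
    restored = wiener_deblur(blurred, motion_psf())
    print("max error:", np.max(np.abs(restored - sharp)))

The recovery is dramatic but not perfect: residual error concentrates at the frequencies the blur nearly erased. The hard part in practice, and apparently the heart of the Adobe demo, is estimating the PSF “blind,” from the blurred photo alone.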

Listen to the audience reaction at these times in the video: 1:17 & 3:33. The process is startling because it appears to recover information, not just perceived sharpness. Click for close-ups of the before-&-after images that wowed the audience: [Plaza] [Cruise poster]

An existing third-party plugin, Focus Magic [updated review], may do the same thing. It is pitched to forensic investigators. (Note to Wild Ducks: Thwarting forensics is a noble calling.) Focus Magic touts startling before-&-after photos of a blurry license plate that becomes easily readable after processing. Their website highlights the restoration of actual sharpness through a process of deconvolution*, as opposed to simply enhancing perceived sharpness by adding faux detail with techniques such as unsharp mask or edge acutance. It is not clear whether the two projects use the same underlying technique.

Implications for File Compression (e.g. JPEG)

Here’s something for armchair mathematicians to ponder. If we compare two compressed files, an image with sharp focus and an identical image that is unfocused but still recoverable, we see that the file size of the unfocused image is considerably smaller. In the past, we explained this with the assumption that the unfocused image contains less information, as if we had resampled the original image at a lower resolution.

But if the unfocused image can be brought into focus (and if the compressed file size relates to the visual entropy of the uncompressed image), then how do we explain the smaller file size? Put another way, if detail in the unfocused image is recoverable, then we should be able to boost file compression by intentionally unfocusing images and then restoring focus during decompression. This should also work for lossless compression methods such as TIFF/CCITT.
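One plausible explanation: JPEG’s quantizer discards exactly the high-frequency energy that blurring attenuates, so the “recoverable” detail never survives compression, and inverting the blur after decoding amplifies quantization noise instead of restoring the scene. The size difference itself is easy to measure. Here is a quick sketch using the Pillow library (photo.jpg is a placeholder for any test image):

    import io
    from PIL import Image, ImageFilter

    img = Image.open("photo.jpg").convert("L")      # placeholder filename
    blurred = img.filter(ImageFilter.GaussianBlur(radius=3))

    def jpeg_size(im, quality=85):
        """Compress in memory and report the byte count."""
        buf = io.BytesIO()
        im.save(buf, format="JPEG", quality=quality)
        return len(buf.getvalue())

    print("sharp  :", jpeg_size(img), "bytes")
    print("blurred:", jpeg_size(blurred), "bytes")  # typically far smaller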

* Deconvolution is a field of mathematics and signal processing concerned with removing noise or distortion to reveal meaningful information hidden within a polluted file or signal. What is surprising about the Photoshop demonstration (and perhaps the process used by Focus Magic) is that there exists a deconvolution process for information that I had assumed was never captured during the original recording.
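In standard notation (my summary of the textbook model, not anything shown in the demo): the recorded image g is the true scene f convolved with a point-spread function h, plus noise n. Convolution becomes multiplication in the frequency domain, which is what makes inversion tractable:

    g(x, y) = (f * h)(x, y) + n(x, y)
    \quad\Longleftrightarrow\quad
    G(u, v) = F(u, v)\,H(u, v) + N(u, v)

    \hat{F}(u, v) = \frac{H^{*}(u, v)}{|H(u, v)|^{2} + K}\,G(u, v)
    \qquad \text{(Wiener-style regularized inverse)}

Wherever H(u, v) is essentially zero, that component of the scene truly is gone; that is the honest limit of any deconvolution.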

6 thoughts on “Photoshop Engineer Unblurs Motion & Restores Focus”

  1. This is an interesting piece of writing, a bit beyond me in certain respects, yet I think I get the gist of what you are saying. Thanks for sharing. Warm wishes, Tasha

  2. If you like this, check out what they’re doing at lytro.com, where they’re making a camera that captures the entire light field and lets you choose the focal plane later.

    • Yes, Paul. I have recently read articles about post-snapshot, multi-field focus. Fascinating!

      Incidentally, while I believe that the Adobe technology demonstration inspires awe (observe the gasps from the audience!), I may have misunderstood a subtle but important distinction at the time I posted…

      In the Adobe demo, three examples of image recovery are demonstrated: an outdoor mall plaza, a cruise poster, and an image of an Adobe employee. Although the demonstration certainly corrects blurry images, the process may be restricted to motion blur. Some news reports used the words “restores focus” rather than “recovers sharpness” or “removes multiple exposures.” Based on the word “focus,” I believed (but may have been mistaken) that the process compensates for images that are actually out of focus.

      Although I am not sure about the third image (an employee close-up), I suspect that all three photos may have been blurred by camera motion rather than lack of focus. The speaker explains that the startling correction (and the audience’s impression of recovered information) is achieved by creating a map of the camera trajectory and then mathematically reversing the process. In other words, the magic is more likely due to the removal of a continuous, smeared exposure than to the recovery of information! (Perhaps it helps if the movement was “jarring,” which would allow the algorithm to separate clean multiple exposures.)

      It still seems to me that unfocused images lack information from the original scene and cannot be meaningfully reconstructed without tossing in assumptions that distort reality.

      • It shouldn’t matter whether it’s defocus or motion blur. The algorithms differ slightly, but both rely on the concept of deconvolution: analyzing a sample and generating (through iterative and heuristic means) an original image that would, if blurred the same way, have looked like the input image. Information is not created, but there are going to be multiple solutions for any given input. The cleverness is in determining which is most likely to look like the user wants it to.
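        A minimal sketch of one such iterative scheme (Richardson-Lucy, a classic instance; I am not claiming it is what Adobe uses), assuming the PSF is already known:

            import numpy as np
            from scipy.signal import fftconvolve

            def richardson_lucy(blurred, psf, iters=30):
                """Start from a flat guess and repeatedly correct it so
                that, re-blurred with the PSF, it matches what the sensor
                actually recorded."""
                estimate = np.full_like(blurred, 0.5)
                psf_mirror = psf[::-1, ::-1]
                for _ in range(iters):
                    reblurred = fftconvolve(estimate, psf, mode="same")
                    ratio = blurred / (reblurred + 1e-12)
                    estimate *= fftconvolve(ratio, psf_mirror, mode="same")
                return estimate

        Different priors (smoothness, sharp edges) steer the iteration toward one of the many candidate solutions; that is where the “what the user wants it to look like” judgment enters.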

        cw

        • Hello Chris,

          I would like to talk with you about this. Please use the contact form to send me your phone number. (It won’t be shown online or used for anything other than my one call.) I think that you are mistaken, but it is quite possible that you have more knowledge in this field than I do, and so I am eager to expand my understanding…

          It is my belief that although an algorithm may be agnostic about the underlying blur, it cannot improve an unfocused (or defocused) image beyond superficial sharpening, a perceptual trick which does not really recover anything. Motion blur, by contrast, contains all of the clear underlying information with added noise. The brilliance of Adobe’s new method (in my opinion) is the realization that the added noise from motion blur is not random; the ‘smear’ follows a path, and therefore it can be backed out algorithmically (see the sketch after this reply).

          Again, I would enjoy discussing this with you on a phone call.
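          To make my point about the path concrete, here is a rough sketch (mine, purely illustrative) of how an estimated camera trajectory could be rasterized into a blur kernel. Once the kernel is known, the same deconvolution machinery sketched earlier in the post backs the smear out:

              import numpy as np

              def trajectory_psf(path, size=31):
                  """Rasterize a camera path, given as (x, y) pixel offsets
                  sampled over the exposure, into a normalized blur kernel."""
                  k = np.zeros((size, size))
                  c = size // 2
                  for x, y in path:
                      k[c + int(round(y)), c + int(round(x))] += 1.0
                  return k / k.sum()

              # A hypothetical hand-jitter path: a horizontal sweep with a slight hook.
              path = [(0.7 * t, 0.01 * t * t) for t in range(-10, 11)]
              psf = trajectory_psf(path)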

  3. Actually, no information is ‘lost’ (kinda) in motion blur OR because of a defocused camera. All those photons still hit photosites on the sensor; they just don’t hit them in the spots they would have with a stationary, focused image. In each case, the distribution of the photons can be inferred from the image: a motion-blurred image will translate the exposure values in some way, while a defocused camera will redistribute them by converting dimensionless points of light into larger circles of light.

    Sensors (and film) don’t remember which photons hit first, nor do they know which ‘parent’ light source contributed to a given defocused circle. But by calculating in frequency space, the math needed to infer this gets easier, and thus the estimation of a focused (or stationary) image from a defocused (or motion-blurred) image is quite possible.

    Sensors have limited dynamic range, so the ability to untangle this info breaks down if the redistribution over- or underexposes the image beyond the range of the sensor. But clever methods of inference can lead to spectacular results. If you can figure out how the image’s photons were distributed, and you haven’t exceeded your sensor’s range (or underrun it completely), you can put together a decent image. The putting-together part is easy: photons and sensors are dead linear. Coming up with the spatial info and the ensuing deconvolution is the genius part. Some approaches are discussed in the papers below. (I didn’t write any of these.) There is a great paper out there that sums it all up, but I can’t seem to find it. But I can tell you with absolute certainty that algorithms exist for both motion-blurred and defocused images.

    http://www.wseas.us/e-library/conferences/2005miami/papers/501-247.pdf
    http://refocus-it.sourceforge.net
    http://zoi.utia.cas.cz/restoration.html
    http://yuzhikov.com/articles/BlurredImagesRestoration1.htm
    http://www.metakine.com/products/backinfocus/tutorials/basics.html

    I did write this paper http://gl.ict.usc.edu/Research/CFPC (with some help), so hopefully that will establish that I do know a little. Whether that’s more than you is not a big deal; I am always happy to share (and argue). I don’t work for Adobe, but I have friends who do; next time I’m in the right company, I will see what I can find out.
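    For completeness, the defocus case just swaps in a different kernel: a uniform disk (the circle of confusion). A quick illustrative sketch (mine, not from any of the papers above); feed the result to any standard deconvolution routine, since the machinery is identical, though the disk’s transfer function has deep nulls that make recovery harder:

        import numpy as np

        def disk_psf(radius=5, size=25):
            """A defocus kernel: each point of light lands as a uniform disk."""
            half = size // 2
            y, x = np.mgrid[-half : half + 1, -half : half + 1]
            k = ((x**2 + y**2) <= radius**2).astype(float)
            return k / k.sum()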

    — Chris Watts
