You know that scene in Blade Runner where Deckard uses futuristic technology (that somehow still uses an old tube television) to explore what is happening in a photograph, even stuff that isn't actually in the photograph? Well, Google and MIT have together created an algorithm that does exactly that. Sort of. Well, okay, not really, but it's still really cool.
What Google and MIT have actually created is a digital method for removing obstructions and reflections from photos that would otherwise be ruined by them. As seen below, the algorithm can separate the reflection from the photo, which is pretty cool in and of itself, and then fill in the missing information in the desired photo (or give you an interesting picture of a reflection without a background, if that's what you're looking for).
You can see the method in action in the video below:
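If you're wondering how a reflection can be peeled off a photo at all, here's a toy sketch in Python (using NumPy) of the simplest version of the underlying intuition. To be clear, this is not the Google/MIT algorithm, which is far more sophisticated; it just illustrates one classic idea: a reflection is roughly an additive layer, and when the camera moves slightly, that layer slides across the frame differently than the scene behind the glass.

```python
import numpy as np

# Toy illustration of motion-parallax reflection removal -- NOT the
# Google/MIT method, just the simplest version of the intuition.
# A reflection is roughly an additive, non-negative layer. When the
# camera moves, the reflection shifts differently from the background,
# so if every pixel is reflection-free in at least one frame of a short
# burst, a per-pixel minimum recovers the clean background.

rng = np.random.default_rng(0)

# Synthetic background scene plus a bright reflection "stripe".
background = rng.uniform(0.0, 0.7, size=(64, 64))
reflection = np.zeros((64, 64))
reflection[:, 20:28] = 0.3

# Simulate a burst of frames in which the reflection drifts sideways.
frames = [background + np.roll(reflection, shift, axis=1)
          for shift in (0, 8, 16, 24)]

# Per-pixel minimum across the burst strips out the moving reflection,
# because each pixel sees the unobstructed background in some frame.
recovered = np.minimum.reduce(frames)

print("max error with reflection:", np.abs(frames[0] - background).max())
print("max error after removal:  ", np.abs(recovered - background).max())
```

The real system has to handle reflections that move only a little, overlap everywhere, and vary in brightness, which is why the actual research is such an impressive feat; but the core insight that motion separates the layers is the same.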
The algorithm also works with video, but for me the whole thing brings to mind the fact that digital technology is making it harder and harder to take a "bad" photo. Reflections, rain, a fence in the way of your shot? As demonstrated, Google can solve that problem. Boring photo of food you're about to eat? Slap an Instagram filter on it and suddenly it's artistic and presentable. This is stuff that goes way beyond the red-eye reduction feature on your old point-and-shoot camera.
We live in an increasingly image-based world. What are the implications when technology and software can guarantee that none of those images will be bad ones?