
How Computational Photography Improves Smartphone Photos



The camera on an iPhone 12 Pro Max.
Hadrian / Shutterstock.com

When it comes to smartphone cameras, it’s not all about the hardware anymore. Modern smartphones automatically use computational photography techniques to enhance every photo you take.

Using software to improve your smartphone camera

Computational photography is a broad term for a collection of techniques that use software to enhance or extend the capabilities of a digital camera. Crucially, computational photography starts with a photo and ends with something that still looks like a photo (even if it’s one that could never be taken with a regular camera).

How Traditional Photography Works

Before we go into more detail, let’s take a look at what happens when you take a photo with an old film camera, the kind of SLR that you (or your parents) used in the 1980s.

I shot this with a film camera in 1989. It’s about as non-computational as it gets. Harry Guinness

When you press the shutter button, the shutter opens for a fraction of a second and light falls on the film. All that light is focused by a physical lens, which determines how everything in the photo will look. To zoom in on far-away birds, you use a telephoto lens with a long focal length; for a wide-angle shot of an entire landscape, you want something with a much shorter focal length. Likewise, the aperture of the lens determines the depth of field, or how much of the image is in focus. When the light hits the film, it exposes the photosensitive compounds, changing their chemical composition. The image is, in effect, etched into the film stock.
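To make the focal-length idea concrete, here’s a quick back-of-the-envelope calculation in Python. It uses the standard angle-of-view formula for a lens on a full-frame camera (a 36 mm-wide sensor); the specific focal lengths are just illustrative:

```python
# How focal length controls field of view: the standard angle-of-view
# formula, assuming a 36 mm-wide full-frame sensor.
import math

def angle_of_view(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal angle of view in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(f"28 mm wide-angle lens: {angle_of_view(28):.0f} degrees")   # ~65 degrees
print(f"200 mm telephoto lens: {angle_of_view(200):.0f} degrees")  # ~10 degrees
```

The longer the focal length, the narrower the slice of the scene that fills the frame, which is exactly why a telephoto lens “zooms in.”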

All of this means that the physical properties of the equipment you use determine everything about the photo you take. Once created, an image can’t be updated or changed.

Computational photography adds extra steps to this process, which is why it only works with digital cameras. As well as capturing the optically determined scene, digital sensors can record extra data, such as the color and intensity of the light hitting the sensor. Multiple photos can be taken at once with different exposure levels to capture more information about the scene. Additional sensors can record how far away the subject and the background are. A computer can then use all that extra information to do more with the image.

While some DSLRs and mirrorless cameras have standard computational photography features built in, the real stars of the show are smartphones. Google and Apple in particular have used software to expand the capabilities of the small, physically limited cameras on their devices. For example, check out the iPhone’s Deep Fusion Camera feature.

What kinds of things can computational photography do?

So far, we’ve been talking about capabilities in general terms. Now, let’s look at some concrete examples of the things that computational photography makes possible.

Portrait mode

This Portrait mode shot looks a lot like a photo taken on a DSLR with a wide-aperture lens. There are some telltale signs at the transitions between me and the background, but it’s very impressive. Harry Guinness

Portrait mode is one of the great successes of computational photography. The tiny lenses in smartphone cameras are physically incapable of taking classic portraits with a blurred background. By using a depth sensor (or machine learning algorithms) they can identify the subject and background of your image and selectively blur the background, giving you something much like a classic portrait.
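If you’re curious what that looks like in code, here’s a heavily simplified sketch in Python with OpenCV. The filenames, the depth map, and the depth threshold are all placeholders; real portrait modes use far more sophisticated segmentation and vary the blur gradually with depth:

```python
# Simplified portrait-mode effect: blur the whole frame, then paste
# the sharp subject back on top using a depth map as a mask.
# Assumes photo.jpg and a matching grayscale depth map depth.png
# (brighter = closer) exist; these stand in for what a phone's depth
# sensor or segmentation model would produce.
import cv2
import numpy as np

photo = cv2.imread("photo.jpg")
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE)

# Treat anything closer than an arbitrary threshold as the subject.
subject_mask = depth > 128

# Blur everything, then composite the sharp subject over it.
blurred = cv2.GaussianBlur(photo, (0, 0), sigmaX=15)
result = np.where(subject_mask[..., None], photo, blurred)

cv2.imwrite("portrait.jpg", result)
```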

It’s a perfect example of how computational photography starts with a photo and ends with something that looks like a photo, but uses software to create something the physical camera can’t capture on its own.

Take better photos in the dark

Google captured this with a Pixel phone. That’s ridiculous; most DSLRs can’t take night photos this well. Google

Taking pictures in the dark is difficult with a traditional digital camera: there just isn’t much light to work with, so you have to compromise. Smartphones, however, can do better with computational photography.

By taking multiple photos with different exposure levels and combining them, smartphones can extract more detail from the shadows and get a better end result than a single image would, especially with the tiny sensors in smartphones.

This technique (called Night Sight by Google, Night Mode by Apple, and something similar by other manufacturers) is not without compromises. It can take a few seconds to capture the multiple exposures, and for best results you should hold your smartphone steady while it does, but it lets you take photos in the dark.
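To give you a feel for the core trick, here’s a toy version in Python with OpenCV. It just averages a burst of frames so that random sensor noise cancels out; the filenames are placeholders, and real night modes also align the frames to compensate for hand shake and weight them far more cleverly:

```python
# Toy multi-frame noise reduction, the core idea behind night modes:
# average several dark frames so random sensor noise cancels out.
# Assumes eight pre-aligned shots saved as frame_0.jpg ... frame_7.jpg.
import cv2
import numpy as np

frames = [cv2.imread(f"frame_{i}.jpg").astype(np.float32) for i in range(8)]

# Averaging N frames reduces random noise by roughly sqrt(N).
stacked = np.mean(frames, axis=0)

cv2.imwrite("night_shot.jpg", stacked.astype(np.uint8))
```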

Expose photos better in tricky lighting situations

Smart HDR was enabled on my iPhone for this shot, so there’s still detail in both the shadows and the highlights. It actually makes the shot look a bit weird here, but it’s a great example of what’s possible. Harry Guinness

Merging multiple images doesn’t just improve photos in the dark; it can help in plenty of other challenging situations as well. HDR, or high dynamic range, photography has been around for a while and can be done manually with DSLR images, but it’s now standard and automatic on the latest iPhones and Google Pixel phones. (Apple calls it Smart HDR, while Google calls it HDR+.)

HDR, whatever it’s called, works by combining photos that prioritize the highlights with photos that prioritize the shadows, then smoothing over any discrepancies. HDR images used to look oversaturated and almost cartoonish, but the processing has gotten a lot better. It can still go a bit wrong at times, but for the most part, smartphones do a great job of using HDR to overcome the limited dynamic range of their digital sensors.
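Here’s a rough sketch of that merging step in Python with OpenCV, using the library’s built-in Mertens exposure fusion as a simple stand-in for what Smart HDR and HDR+ do with far more sophistication on-device. The filenames are placeholders for an under-, normally, and over-exposed shot of the same scene:

```python
# Merge a bracketed exposure series with Mertens exposure fusion,
# a simple stand-in for smartphone HDR pipelines.
import cv2
import numpy as np

# Placeholder filenames for three shots at different exposures.
exposures = [cv2.imread(f) for f in ("under.jpg", "normal.jpg", "over.jpg")]

merge = cv2.createMergeMertens()
fused = merge.process(exposures)  # returns a float image in [0, 1]

cv2.imwrite("hdr.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```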

And much more

Those are just a few of the bigger computational photography features built into modern smartphones. There are many more, such as inserting augmented reality elements into your compositions, automatically editing photos for you, taking long-exposure shots, combining multiple frames to increase the depth of field of the final photo, and even the humble panorama mode, which also relies on some software assistance to work.

Computational Photography: You Can’t Avoid It

Normally, with an article like this, we’d end by suggesting ways you could try computational photography or recommending that you play around with the ideas yourself. However, as should be pretty clear from the examples above, if you own a smartphone, you can’t avoid computational photography. Every photo you take with a modern smartphone automatically goes through some kind of computational processing.

And computational photography techniques are only becoming more common. Camera hardware development has slowed in recent years as manufacturers have hit physical and practical limits they now have to work around; software improvements don’t face the same hard limits. (For example, the iPhone has had a similar 12-megapixel camera since the iPhone 6S. It’s not that the newer cameras aren’t better, but the jump in sensor quality between the iPhone 6S and the iPhone 11 is a lot less dramatic than the jump between the iPhone 4 and the iPhone 6S.)

In the coming years, smartphone cameras will continue to get better as machine learning algorithms get better and ideas move from research labs to consumer technology.



