Computational photography uses software to digitally enhance photographs. It does this in many ways and is most commonly used in smartphones. In fact, computational photography is largely responsible for why smartphone cameras are now so good—especially when compared with much larger and more expensive cameras.

Let’s take a look at what computational photography involves and how it is used to enhance images.

How Does Computational Photography Enhance Images?

Traditionally, every photograph is made via two main stages. First, there's optical capture, which involves the lens, camera sensor, and exposure settings; then there's image processing. Usually, image processing occurs after a photograph is made, whether by developing film or by manipulating the image in software like Photoshop.

In contrast, computational photography occurs automatically, alongside the actual capture of the photograph. For instance, when you open your smartphone camera, several things are already taking place, including analyzing the color and lighting of the scene and detecting objects like faces within it. These processes happen before, during, and just after you take a photograph and can drastically improve its quality.

So, what are some of the functions of computational photography?

Image Stacking

Image stacking is when multiple images are combined to retain the best qualities of each. Smartphones use this very often, especially when taking high-dynamic-range (HDR) photographs. The camera takes sequential images very quickly, altering the exposure slightly each time. By stacking the images, details from the lightest and darkest parts of the image can be retained.

This is particularly useful with scenes that have both bright and dark parts. For example, you might be taking a picture of a city with a bright sunset behind it. Image stacking allows your phone to correctly expose both the sun and the darker city, allowing for a vivid, detailed image to be taken.
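
As a rough illustration, here is a minimal sketch of exposure stacking in Python with NumPy. It assumes the frames are already aligned and loaded as float arrays in [0, 1]; real smartphone pipelines also align the frames and apply far more sophisticated tone mapping.

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Merge aligned exposure-bracketed frames into one HDR-like image.

    frames: list of float arrays in [0, 1], all the same shape.
    exposure_times: relative exposure time for each frame.
    """
    merged = np.zeros_like(frames[0], dtype=np.float64)
    weight_sum = np.zeros_like(frames[0], dtype=np.float64)
    for frame, t in zip(frames, exposure_times):
        # Trust mid-tone pixels most: near-black and near-white pixels
        # carry little usable detail in any single frame.
        weight = 1.0 - np.abs(frame - 0.5) * 2.0
        merged += weight * (frame / t)  # divide out the exposure time
        weight_sum += weight
    radiance = merged / np.maximum(weight_sum, 1e-6)
    # Simple global tone map to bring the result back into [0, 1].
    return radiance / (1.0 + radiance)

# e.g. merge_exposures([dark, medium, bright], [1.0, 4.0, 16.0])
```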

Pixel Binning

The problem with smartphones is that their camera sensors need to be very small, meaning that for a sensor with high resolution, the pixels also need to be very small. For instance, one of the Samsung S21's sensors packs 64 megapixels into a 1/1.76-inch format. This equates to a pixel size of 0.8 micrometers, roughly a fifth the size of a typical DSLR pixel. That's an issue because smaller pixels gather less light than bigger pixels, resulting in lower-quality images.

Pixel binning avoids this problem by combining the information from neighboring pixels into one pixel. In this way, four neighboring pixels become one. The catch is that this reduces the output resolution to a quarter of the sensor's native resolution (so a 48-megapixel camera produces a 12-megapixel image). But the trade-off is usually worth it for the gain in image quality.
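
In code, 2x2 binning is just a block average. Here is a minimal NumPy sketch, assuming a single-channel sensor readout; real sensors bin within the Bayer color filter pattern, which is more involved.

```python
import numpy as np

def bin_2x2(raw):
    """Average each 2x2 block of sensor pixels into one output pixel."""
    h, w = raw.shape
    # Crop to even dimensions, group into 2x2 blocks, average each block.
    blocks = raw[: h - h % 2, : w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

# A 48-megapixel readout (8000 x 6000) becomes a 12-megapixel image:
# bin_2x2(np.zeros((6000, 8000))).shape == (3000, 4000)
```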

Simulated Depth of Field

You’ll notice that smartphone images generally show everything more or less in focus, including the background of the shot. The reason for this gets a bit technical, but basically, because a smartphone sensor is so small, and the aperture of the lens is typically fixed, each shot has a large depth of field.

In comparison, images from high-end cameras like DSLRs will often have a very soft, out-of-focus background that improves the overall aesthetic quality of the image. High-end camera lenses and sensors can be manipulated to give this result.

Smartphones instead use software to achieve this effect. Some phones use multiple lenses to capture the scene from slightly different viewpoints and estimate depth, while others have software that analyzes the scene for objects and their edges and blurs the background artificially.
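
Here is a sketch of that second approach in Python with OpenCV. It assumes you already have a foreground mask (values in [0, 1]) from segmentation or a depth map, which is the genuinely hard part; the compositing itself is simple.

```python
import cv2
import numpy as np

def fake_bokeh(image, foreground_mask, blur_kernel=21):
    """Blur the background while keeping the masked subject sharp.

    image: BGR uint8 photo.
    foreground_mask: float array in [0, 1], 1.0 where the subject is.
    blur_kernel: odd kernel size; larger means a softer background.
    """
    blurred = cv2.GaussianBlur(image, (blur_kernel, blur_kernel), 0)
    # Feather the mask edge so the subject doesn't look cut out.
    mask = cv2.GaussianBlur(foreground_mask.astype(np.float32), (15, 15), 0)
    mask = mask[..., np.newaxis]  # broadcast across the color channels
    composite = image * mask + blurred * (1.0 - mask)
    return composite.astype(np.uint8)
```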

Sometimes this process doesn't work perfectly, and the smartphone fails to detect edges properly, blurring parts of a person or object into the background and leading to some interesting photos. But the software is becoming more sophisticated, leading to some excellent portrait photography from smartphones.

Color Correction

Pretty much every camera has a color balance option, and nowadays most cameras can apply it fully automatically. The camera reads the color temperature of the scene and determines what kind of lighting dominates. Is it the warm orange glow of sunset or the cool blue-white of indoor fluorescent lighting? The camera then shifts the colors in the photograph accordingly.
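
A classic automatic approach is the "gray world" assumption: on average, a scene's colors should balance out to neutral gray, so each channel is scaled until they do. A minimal NumPy sketch follows; real cameras combine several such heuristics with deeper scene analysis.

```python
import numpy as np

def gray_world_white_balance(image):
    """Scale each color channel so the scene averages to neutral gray.

    image: float RGB array in [0, 1].
    """
    channel_means = image.reshape(-1, 3).mean(axis=0)
    # Boost channels that read too dim, cut channels that read too strong.
    gain = channel_means.mean() / channel_means
    return np.clip(image * gain, 0.0, 1.0)
```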

Sharpening, Noise Reduction, and Tone Manipulation

To improve the quality of images, many smartphones will apply various effects to the photograph, including sharpening, noise reduction, and tone manipulation.

  • Sharpening is selectively applied to the in-focus sections of the image (see the sketch after this list).
  • Noise reduction eliminates much of the graininess that arises in low-light situations.
  • Tone manipulation is like applying a filter: it alters the shadows, highlights, and mid-tones of the photograph to give it a more appealing look.
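
Sharpening is commonly done with an "unsharp mask": subtract a blurred copy of the image to isolate edge detail, then add that detail back. A minimal OpenCV sketch:

```python
import cv2

def unsharp_mask(image, strength=1.0):
    """Sharpen by adding back the detail that a blur removes."""
    blurred = cv2.GaussianBlur(image, (9, 9), 0)
    # image * (1 + strength) - blurred * strength
    # == image + strength * (image - blurred), which emphasizes edges.
    return cv2.addWeighted(image, 1.0 + strength, blurred, -strength, 0)
```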

Uses For Computational Photography

Computational photography has made some amazing things possible on the small, unobtrusive cameras in our smartphones.

Night Photography

Using HDR image stacking to take multiple exposures of a scene allows smartphones to take sharp, high-quality images in low light.

Astrophotography

Certain phones, like the Google Pixel 4 and above, include an astrophotography mode. The Pixel 4, for example, takes up to 15 exposures of 16 seconds each. The long total exposure lets the phone sensor collect as much light as possible, while the individual 16-second exposures are short enough that the movement of the stars doesn't cause streaking in the resulting photo.

These images are then combined, with artifacts removed automatically, and the result is a gorgeous image of the night sky.
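
Conceptually, the stacking step is simple: combine the aligned frames so the constant starlight reinforces itself while random sensor noise cancels out. A toy sketch, assuming the frames are already aligned to compensate for the sky's rotation:

```python
import numpy as np

def stack_frames(frames):
    """Combine aligned exposures into one low-noise image.

    frames: list of float arrays, one per exposure.
    """
    stack = np.stack(frames)
    # A per-pixel median rejects one-off artifacts (hot pixels,
    # satellite trails) that a plain mean would smear into the result.
    return np.median(stack, axis=0)
```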

Portrait Mode

With the option to simulate depth of field, smartphones can take gorgeous portrait photography—including selfies. This option can also isolate objects in a scene, adding an out-of-focus appearance to the background.

Panorama Modes

HDR isn't the only mode that combines multiple pictures. The panorama mode included in most smartphones takes a series of photographs as you pan the camera, then software stitches them together where they overlap to create one large photograph.
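
OpenCV ships a ready-made stitcher that handles the feature matching and blending, so you can try this yourself. A minimal sketch (the file names are placeholders for your own overlapping shots):

```python
import cv2

# Placeholder file names: three overlapping shots, panned left to right.
images = [cv2.imread(name) for name in ("left.jpg", "middle.jpg", "right.jpg")]

stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed; the shots may not overlap enough.")
```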

Some cameras include really interesting versions of this. For instance, some drones, like the DJI Mavic 2 Pro, include a sphere photo option: the drone takes a series of photographs and stitches them together to create what looks like a miniature planet.

Computational Photography: Small Sensors, Excellent Photos

As computational photography evolves, smaller cameras like those used in phones, drones, and action cameras will improve drastically. Being able to simulate many of the desirable effects of larger, more expensive camera/lens combinations will be appealing for many people.

Automating these processes will help ordinary people with no photography experience to take amazing photos—something professional photographers might not be too happy about!