Light field photography has been around for a long time. The first analog light field device was invented in 1908 by Gabriel Lippmann, who won a Nobel Prize for his work on color photography.

Light field photography lets you move the focal plane of an image after it has been taken, something that's impossible in normal photography.

So, how does light field photography work? This article will teach you everything you need to know.

What Is Light Field Photography?


Normal photography works very similarly to the human eye. You focus with the camera and the sensor captures a two-dimensional image of three-dimensional space, with a “slice” of that space being in focus. Everything in front of or behind the focused area is blurry and out of focus. This is because a normal sensor records only the intensity (and color) of the light, not the direction it arrived from.

The light field refers to all the rays of light (every photon) in a scene. The light rays that make up the light field are defined by the plenoptic function (this is why light-field cameras are also called plenoptic cameras). The plenoptic function describes a light ray in five dimensions: its coordinates in 3D space (X, Y, Z) and its direction in 2D space (two angles).
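To make those five dimensions concrete, here is a minimal Python sketch of a single ray in the plenoptic form described above: three position coordinates plus two direction angles. The class and field names are purely illustrative, not a standard API.

```python
import math
from dataclasses import dataclass

# Illustrative 5D ray record: 3D position plus two direction angles.
# Names (theta = angle from the optical axis, phi = azimuth) are assumptions
# made for this sketch only.
@dataclass
class LightRay:
    x: float
    y: float
    z: float
    theta: float  # angle from the z-axis, in radians
    phi: float    # azimuthal angle, in radians

    def direction(self) -> tuple[float, float, float]:
        """Convert the two angles into a 3D unit direction vector."""
        return (
            math.sin(self.theta) * math.cos(self.phi),
            math.sin(self.theta) * math.sin(self.phi),
            math.cos(self.theta),
        )

# Example: a ray starting at the origin, tilted 10 degrees off the optical axis.
ray = LightRay(0.0, 0.0, 0.0, math.radians(10), 0.0)
print(ray.direction())
```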

Light field photography captures information from the light field in a particular scene, including both the intensity of the light and the direction of the light rays (according to the plenoptic function).

Light field photography is very different from conventional photography. It allows you to capture a three-dimensional image and choose where the focus will be after the fact. By using multiple lenses or sensors, both the intensity of the incoming light and the direction of the light rays can be captured.

How Does Light Field Photography Work?

Lytro Illum 2015 Camera
Image Credit: Morio/Wikimedia Commons

The information captured by light field cameras includes the intensity, color, and direction of the light. Because of this, it’s possible to mathematically determine where each ray of light emanated from before it reached the sensor. This means that a three-dimensional model of the scene can be constructed.

There are several techniques for capturing a light field, for instance:

  • Using a single moving camera to capture information about a scene from multiple angles. This method produces a sequence of many images.
  • Multiple-camera arrays. These usually feature dozens of sensors in a broad array that each capture information about a scene from a slightly different angle. This method also produces many images at once.
  • Microlens arrays. Placing an array of thousands of tiny microlenses in front of a single digital camera sensor allows light field information to be captured on one chip. This produces a raw image made up of many small sub-images, one per microlens.

Each image or sub-image captures light rays arriving from a slightly different viewpoint in space. Because the same object therefore appears in a slightly different position in each view, information about the angle of the incoming light rays is recorded. This makes it possible to calculate each object’s distance from the camera and its position in the scene, and ultimately to build a 3D model of the scene.
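As a rough illustration of the microlens-array approach, the sketch below assumes an idealized plenoptic raw image in which every microlens covers an exact n × n block of sensor pixels; taking the same pixel offset under every microlens then yields one sub-aperture view of the scene. Real cameras need calibration for microlens spacing, rotation, and vignetting, so treat this as a toy model.

```python
import numpy as np

def extract_subaperture(raw: np.ndarray, u: int, v: int, n: int) -> np.ndarray:
    """Pull one sub-aperture view out of an idealized plenoptic raw image.

    Assumes each microlens covers an exact n x n block of sensor pixels.
    (u, v) selects the same pixel offset under every microlens, which
    corresponds to one viewing direction through the main lens.
    """
    h, w = raw.shape[:2]
    return raw[u:h:n, v:w:n]

# Example: a fake 480x640 raw image with 8x8 pixels behind each microlens.
raw = np.random.rand(480, 640)
center_view = extract_subaperture(raw, 4, 4, 8)  # roughly the central viewpoint
print(center_view.shape)  # (60, 80): one small image per chosen viewpoint
```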

Applications of Light Field Photography

Light field photography has a range of potential applications. Because all the information about the light field of a scene is recorded, it’s possible to process light field images in many ways that aren’t possible in normal photography.

1. Custom Focal Point

The most well-known feature of light field photography is being able to change the focus point after the image has been taken. This is possible because the captured rays contain enough information to reconstruct focus at any distance, meaning that with sophisticated software you can choose any plane in the scene to be the focal plane.
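A common way to implement this, and the approach this hedged sketch assumes, is shift-and-add refocusing: every sub-aperture view is translated in proportion to its offset from the central viewpoint and the results are averaged, which moves the synthetic focal plane. The whole-pixel shifts and parameter names here are simplifications of what a real pipeline (with sub-pixel interpolation) would do.

```python
import numpy as np

def refocus(views: np.ndarray, shift_per_view: float) -> np.ndarray:
    """Shift-and-add refocusing over a (U, V, H, W) stack of sub-aperture views.

    Each view is translated in proportion to its offset from the central
    viewpoint, then all views are averaged. Larger |shift_per_view| moves the
    synthetic focal plane nearer or farther; 0 keeps the original focus.
    """
    U, V, H, W = views.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round((u - cu) * shift_per_view))
            dx = int(round((v - cv) * shift_per_view))
            out += np.roll(views[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Example: refocus a fake 8x8 grid of 60x80 views onto a nearer plane.
views = np.random.rand(8, 8, 60, 80)
nearer = refocus(views, shift_per_view=1.5)
```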

2. Variable Depth of Field

Depth of Field Illustration
Image Credit: Doodybutch/Wikimedia Commons

Because of the nature of the information recorded, it’s possible to process images with a “synthetic aperture”. Aperture is the diameter of the opening in a lens and determines the depth of field (how out of focus the foreground and background are) in an image.

Because a light field image includes information at every possible focus distance, it’s possible to create images that have the smallest possible depth of field (only a very small section is in focus). It’s also possible to create an image with infinite depth of field where everything in the image is in focus.
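Continuing the shift-and-add idea above, depth of field can be varied after capture by choosing how many sub-aperture views to combine. This hypothetical helper averages only the views within a given radius of the central viewpoint: a small radius mimics a small physical aperture (deep depth of field), while a large radius mimics a wide aperture (shallow depth of field around the chosen focal plane).

```python
import numpy as np

def synthetic_aperture(views: np.ndarray, radius: float) -> np.ndarray:
    """Average the sub-aperture views within `radius` of the central view.

    Assumes the views have already been shifted for the desired focal plane.
    A small radius keeps only the central views (small synthetic aperture,
    deep depth of field); a large radius blends all views (large synthetic
    aperture, shallow depth of field).
    """
    U, V, H, W = views.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    acc = np.zeros((H, W), dtype=np.float64)
    count = 0
    for u in range(U):
        for v in range(V):
            if (u - cu) ** 2 + (v - cv) ** 2 <= radius ** 2:
                acc += views[u, v]
                count += 1
    return acc / max(count, 1)

# Deep vs. shallow depth of field from the same capture.
views = np.random.rand(8, 8, 60, 80)
deep = synthetic_aperture(views, radius=1)     # only the central views
shallow = synthetic_aperture(views, radius=5)  # every view in the 8x8 grid
```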

3. Parallax Effect

Depending on the way the light field is captured, it’s possible to produce slightly different viewing angles of the scene. The range of perspectives depends on the physical width of the capture system: the wider the lens or camera array, the greater the spread of angles from which light is recorded.

Once the image is taken, it’s possible to change the perspective of the image by a small amount as if you were moving your head around in the actual scene. This is known as a parallax effect. Using the parallax effect, it’s also possible to reconstruct a 3D image.
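One crude way to approximate this in software is to blend neighbouring sub-aperture views as the virtual viewpoint slides across the aperture. The sketch below uses simple bilinear blending, which ignores depth and will ghost on close objects; proper view synthesis warps pixels using the recovered depth, but the idea of moving between recorded viewpoints is the same.

```python
import numpy as np

def shift_viewpoint(views: np.ndarray, u: float, v: float) -> np.ndarray:
    """Blend the four sub-aperture views around a fractional (u, v) position.

    This is only a rough approximation of a real viewpoint shift: it mixes
    neighbouring views without warping them, so it works best for small
    viewpoint changes and distant scenes.
    """
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    fu, fv = u - u0, v - v0
    return ((1 - fu) * (1 - fv) * views[u0, v0]
            + fu * (1 - fv) * views[u0 + 1, v0]
            + (1 - fu) * fv * views[u0, v0 + 1]
            + fu * fv * views[u0 + 1, v0 + 1])

# Sweep the virtual viewpoint from left to right across the aperture.
views = np.random.rand(8, 8, 60, 80)
frames = [shift_viewpoint(views, 3.5, v) for v in np.linspace(0, 6.9, 24)]
```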

4. Calculate Distances

Depending on the sensitivity of the light field photography system, and how precisely its optical properties are known, it’s possible to calculate the distance from the lens to objects in a scene. One major application of this would be in microscopy, where it’s useful to accurately measure the size of synthetic or biological samples.
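The underlying relationship is ordinary triangulation: depth equals focal length times baseline divided by disparity (how far an object appears to shift between two viewpoints). The numbers below are purely illustrative, not the specification of any real camera.

```python
def depth_from_disparity(focal_length_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Standard triangulation: depth = focal_length * baseline / disparity.

    focal_length_px : focal length expressed in pixels
    baseline_m      : distance between the two viewpoints, in metres
    disparity_px    : how far the object shifts between the views, in pixels
    """
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers only: a 2 mm baseline between outer viewpoints, a
# focal length of 3000 px, and a measured shift of 1.5 px gives ~4 m depth.
print(depth_from_disparity(3000.0, 0.002, 1.5))
```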

5. Change Lighting Conditions

Because so much information about scene depth is recorded in light field photography, it’s possible with post-processing software to accurately reconstruct the lighting in a scene. Since the software knows the relative positions of all the objects in an image, it can convincingly calculate where the shadows would fall.
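As a toy illustration of why depth makes relighting possible, the sketch below estimates surface normals from a depth map's gradients and re-shades the scene with a simple Lambertian model under a new light direction. Commercial software does far more (reflectance estimation, shadow casting), so this only shows the core geometric idea.

```python
import numpy as np

def relight(depth: np.ndarray, light_dir: np.ndarray) -> np.ndarray:
    """Toy Lambertian relighting from a depth map.

    Normals are estimated from the depth gradients, then shaded as
    max(0, N . L) with a new light direction. No reflectance model and
    no cast shadows; just the geometry-driven part of relighting.
    """
    dz_dy, dz_dx = np.gradient(depth)
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = light_dir / np.linalg.norm(light_dir)
    return np.clip(normals @ light, 0.0, 1.0)

# Re-shade a fake depth map as if lit from the upper left.
depth = np.random.rand(60, 80)
shaded = relight(depth, np.array([-0.5, -0.5, 1.0]))
```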

6. Virtual Reality


Throughout the early 2020s, virtual reality became a prominent talking point, fueled in part by Meta's push to create the Metaverse. Light field photography can be used to capture real-world scenes for VR; Google has published examples of this that can be viewed on Steam.

Using a rotating camera array of 16 GoPros, Google captured thousands of images that recorded all of the light field information in a 3D space. They were then able to create a three-dimensional, six-degrees-of-freedom, virtual reality experience.

Are Light Field Cameras the Future of Photography?

In 2012, Lytro released the first consumer light field camera. It produced roughly one-megapixel images, had a constant f/2 aperture, and sold for between $400 and $500. Since then, very few consumer-targeted light field cameras have hit the market.

The limited resolution and image quality meant that light field cameras simply didn’t take off in the consumer market as DSLRs did. In fact, many of the uses of light field technology remain in development.

But there’s a reason Google (and Apple) are investing in this technology, and its use in creating 3D user experiences for VR is just one example!

Light Field Photography: A Developing Technology

While light field cameras are relatively new, they have huge potential. You can draw plenty of parallels with the early days of digital photography, when fixed apertures and low resolution were common limitations too. Over time, however, this technology is likely to mature.

It's worth keeping an eye out to see how light field photography is applied in the future.