Researchers develop AI that can understand light in photographs

Despite significant progress in developing AI systems that can understand the physical world the way humans do, researchers have long struggled to model one key aspect of human vision: the perception of light.

"Determining the influence of light in a given photograph is a bit like trying to separate the ingredients out of an already baked cake," explains Chris Careaga, a Ph.D. student in the Computational Photography Lab at SFU. The task requires undoing the complicated interactions between light and surfaces in a scene. This problem is referred to as intrinsic decomposition, and has been studied for nearly half a century.

In a new paper published in the journal ACM Transactions on Graphics, researchers in the Computational Photography Lab at Simon Fraser University present an AI approach to intrinsic decomposition that works on a wide range of images. Their method automatically separates an image into two layers: one containing only the lighting effects and one containing the true colors of the objects in the scene.
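Conceptually, intrinsic decomposition models each pixel of a photo as the product of the surface's own color (the albedo layer) and the light falling on it (the shading layer). A minimal sketch in Python, using made-up toy arrays rather than the lab's actual networks, shows how the two layers recombine into the original image and why lighting can then be edited on its own:

    import numpy as np

    # Toy illustration of the image-formation model behind intrinsic
    # decomposition: image = albedo * shading. Real systems predict the
    # two layers with neural networks; this only shows how they recombine.
    h, w = 4, 4                              # tiny toy image
    albedo = np.random.rand(h, w, 3)         # true surface colors in [0, 1]
    shading = np.random.rand(h, w, 1)        # grayscale lighting layer

    image = albedo * shading                 # recombine into the photo

    # With the layers separated, lighting can be edited independently
    # of the object colors, e.g. brightening the scene by 50%:
    brighter = albedo * (shading * 1.5).clip(0.0, 1.0)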

"The main innovation behind our work is to create a system of neural networks that are individually tasked with easier problems. They work together to understand the illumination in a photograph," Careaga adds.

Although intrinsic decomposition has been studied for decades, the SFU team's new method is the first in the field to accomplish this task for any high-definition image a person might take with their camera.

"By editing the lighting and colors separately, a whole range of applications that are reserved for CGI and VFX become possible for regular image editing," says Dr. Yağız Aksoy, who leads the Computational Photography Lab at SFU.

"This physical understanding of light makes it an invaluable and accessible tool for content creators, photo editors, and post-production artists, as well as for new technologies such as augmented reality and spatial computing."

The group has since extended their intrinsic decomposition approach, applying it to the problem of image compositing. "When you insert an object or person from one image into another, it's usually obvious that it's edited, since the lighting and colors don't match," explains Careaga.

"Using our intrinsic decomposition technique, we can alter the lighting of the inserted object to make it appear more realistic in the new scene." In addition to publishing a paper on this, presented at SIGGRAPH Asia last December, the group has also developed a computer interface that allows users to interactively edit the lighting of these "composited" images. S. Mahdi H. Miangoleh, a Ph.D. student in Aksoy's lab, also contributed to this work.

Aksoy and his team plan to extend their methods to video for use in film post-production, and to further develop AI capabilities for interactive illumination editing. They emphasize a creativity-driven approach to AI in film production, aiming to empower independent and low-budget productions.

To better understand the challenges in these production settings, the group has developed a computational photography studio at the Simon Fraser University campus where they conduct research in an active production environment.

The above publications represent some of the group's initial steps towards providing AI-driven editing capabilities to the rich filmmaking industry in British Columbia.

Their focus on intrinsic decomposition enables even low-budget productions to adjust lighting easily, without requiring costly reshoots. These innovations support local filmmakers, helping maintain BC's position as a global filmmaking hub, and will serve as the foundation for many more AI-enabled applications to come from the Computational Photography Lab at SFU.
