TodayPic

Google’s ex-lead of computational photography Marc Levoy to build new imaging experiences at Adobe

Marc Levoy [1], Google’s former computational photography lead and arguably one of the founding figures of computational approaches to imaging, has joined Adobe as Vice President and Fellow, reporting directly to Chief Technology Officer Abhay Parasnis. At Adobe, Marc will ‘spearhead company-wide technology initiatives focused on computational photography and emerging products, centered on the concept of a universal camera app.’ He will also work closely with Photoshop Camera, Adobe Research, and the machine-learning-focused Sensei and Digital Imaging teams.

The imaging world was taken by surprise a few months back when Marc left Google, where he helped spearhead a revolution in mobile imaging through the remarkable success of Pixel phones and their stills and video capabilities. Marc and his colleagues at Google developed HDR+, which uses burst photography alongside clever exposure and merging techniques to increase captured dynamic range and reduce noise. His work, in conjunction with Peyman Milanfar, also helped Pixel cameras yield usable photos in near darkness with Night Sight, and even capture super-resolution data that resolved far more detail in ‘zoomed-in’ shots than competitors, despite limited hardware. Google’s burst techniques even allowed its cameras to forgo traditional demosaicing, yielding more detailed images than competing cameras with similar sensor sizes. [2]


Marc also championed the use of machine learning to tackle challenges in image capture and processing, leading to better portrait modes, more accurate colors via learning-based white balance, and synthetic re-lighting of faces. Marc helped push the boundaries of what is possible with limited hardware by focusing heavily on the software.

At its core, Adobe is a software company, so Marc’s expertise is immediately relevant. At Adobe, Marc will continue to explore the application of computational photography to Adobe’s imaging and photography products, with one of his focuses being the development of a ‘universal camera app’ that could function across multiple platforms and devices. This should allow Marc to continue pursuing his passion for delivering unique and innovative imaging experiences to the masses.

Marc has a knack for distilling complex concepts into simple terms. You can learn about the algorithms and approaches his teams spearheaded in the Pixel phones in our interview above.

More on Marc Levoy

Marc Levoy has a long history of pioneering computational approaches to images, video and computer vision, spanning both industry and academia. He taught at Stanford University, where he remains Professor Emeritus, and is often credited with popularizing the term ‘computational photography’ through his courses. Before joining Google full-time he worked as visiting faculty at Google X on the camera for the Explorer Edition of Google Glass. His early work at Stanford, in collaboration with Google, formed the basis for Street View in Google Maps. Marc also helped popularize light field photography through his work at Stanford with Mark Horowitz and Pat Hanrahan, advising students like Ren Ng, who went on to found Lytro.

Marc also developed smartphone apps of his own early on, such as SynthCam, to exploit the potential of burst photography for enhanced image quality. The essential idea – which underpins all the multi-image techniques smartphones employ today – is to capture many frames and synthesize them into a single final image. This technique overcomes the major shortcoming of smartphone cameras: their sensors have such small surface areas, and their lenses such small apertures, that the amount of light captured is relatively low. Given that most of the noise in digital images is due to a lack of captured photons (read our primer on the dominant source of noise: shot noise), modern smartphones employ many clever techniques to capture more total light, and to do so intelligently, retaining both highlight and shadow information while dealing with subject movement from shot to shot. Much of Marc’s early work, as seen in SynthCam, became the basis for the multi-shot noise averaging and bokeh techniques used in Pixel smartphones.
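The noise-averaging idea above can be illustrated with a minimal simulation. This is not Google’s actual HDR+ pipeline – it assumes a static, perfectly aligned scene and Poisson-distributed shot noise – but it shows why merging a burst of frames reduces noise: averaging N frames cuts the noise standard deviation by roughly a factor of √N.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: true light level per pixel, in photons per exposure.
scene = np.full((64, 64), 100.0)

def capture(scene, rng):
    """Simulate one short exposure dominated by shot noise (Poisson statistics)."""
    return rng.poisson(scene).astype(float)

def burst_merge(scene, n_frames, rng):
    """Average n_frames of perfectly aligned exposures of a static scene."""
    frames = [capture(scene, rng) for _ in range(n_frames)]
    return np.mean(frames, axis=0)

single = capture(scene, rng)
merged = burst_merge(scene, 16, rng)

# Noise = std of the error relative to the true scene.
# Averaging 16 frames should reduce it by about sqrt(16) = 4x.
noise_single = (single - scene).std()
noise_merged = (merged - scene).std()
print(f"single-frame noise: {noise_single:.2f}")
print(f"16-frame merged noise: {noise_merged:.2f}")
```

Real burst pipelines must additionally align frames and reject moving subjects before merging, which is where much of the engineering difficulty lies.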

Marc is also passionate about the potential of collaborative efforts and helped develop the ‘Frankencamera’ as an open source platform for experimenting with computational photography. We look forward to the innovation he’ll bring to Adobe, and hope that much of it will be available across platforms and devices, to the benefit of photographers at large.


Footnotes:

[1] Apart from being renowned in the fields of imaging and computer graphics, Marc Levoy is himself a photography enthusiast and expert, and while at Stanford taught a Digital Photography class. The course was an in-depth look at everything from sensors to optics to light, color, and image processing, and is available online. We highly recommend our curious readers watch his lectures in video form and visit Marc’s course website for lecture slides and tools that help you understand the complex concepts both visually and interactively.

[2] Our own signal:noise ratio analyses of Raw files from the Pixel 4 and representative APS-C and Four Thirds cameras show the Pixel 4, in Night Sight mode, to be competitive against both classes of cameras, even slightly out-performing Four Thirds cameras (for static scene elements). See our full signal:noise analysis here.

Source
DPReview.com
