
Computational cinematography: Light field imaging is here

Advances in audio technology have resulted in infinitely flexible object-oriented sound. Could Light Field Imaging usher in an era of object-oriented video?
Instead of recording a flat picture, what if we could capture all the light falling on the camera? And if we could do that, could we then generate a perspective from any position? And possibly even display it as a three-dimensional hologram? That's the theory behind light field imaging, which has potentially revolutionary consequences for visual storytelling. Recent advances in processing power and sensor technology have made the technology appealing to electronics giants like Microsoft and august cinema engineering bodies like SMPTE.
A light field – a concept originally proposed in 1846 by Michael Faraday – describes the amount of light travelling in every direction through every point in a region of space. It is technically five-dimensional: three spatial dimensions (x, y, z) plus two angular dimensions describing the direction of each ray. To capture a light field you typically either array cameras that simultaneously record the same scene from different angles, or place a micro-lens array in front of conventional optics to funnel information about intensity, direction and colour onto the sensor.
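The five-dimensional count above corresponds to what the graphics literature calls the plenoptic function. As an aside (standard textbook material, not from the article), in free space it reduces to four dimensions:

```latex
% Radiance L along a ray through point (x, y, z) in direction (theta, phi):
L = L(x,\, y,\, z,\, \theta,\, \phi)
% In free space radiance is constant along a ray, so one spatial dimension
% is redundant and the light field reduces to 4D -- often parameterised by
% a ray's intersections (u, v) and (s, t) with two parallel planes:
L = L(u,\, v,\, s,\, t)
```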
Post production possibilities
At present there is no established pipeline for post-producing the sheer volume of data produced, or for displaying it, but that doesn't mean useful applications for the technology aren't around the corner. Researchers at the German institute Fraunhofer IIS, for example, have developed a system comprising 16 HD cameras arranged in a 4x4 grid. Last September it released a plug-in for Nuke as an aid to processing the data, and with Stuttgart Media University shot a short film, Coming Home, which showcased the technique's capabilities for live-action filming. The plug-in can be downloaded from the Fraunhofer website (www.iis.fraunhofer.de/lightfield).

The chief advantage, Fraunhofer contends, is that light field imaging will offer a more cost-effective way to produce film and TV. “On-location retakes are time-consuming and expensive,” says Frederik Zilly, head of Fraunhofer's Computational Imaging group. “What if the focus was incorrectly set during shooting or the perspective has to be changed? The use of multicamera systems opens the door to a world of new post-production possibilities.” Among the possibilities are dolly-zooms and Vertigo- or Matrix-style camera tricks, which could be rendered out of existing material in the cutting room. “Expensive effects, previously the preserve of cinema, can be brought to TV with light-field recording,” Zilly says.
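Fraunhofer hasn't published the internals of its Nuke plug-in, but the standard operation behind post-capture refocusing from a camera array is shift-and-sum: each view is translated in proportion to its position in the grid and the results are averaged, so objects at the chosen depth align across all views while everything else blurs. A minimal sketch in Python, assuming a 4x4 grid like the Fraunhofer rig; the array geometry and the disparity scale `alpha` are illustrative, not values from the article:

```python
import numpy as np
from scipy.ndimage import shift

def refocus(views, positions, alpha):
    """Synthetically refocus a light field captured by a camera array.

    views     : list of HxWx3 float arrays, one image per camera
    positions : list of (dx, dy) camera offsets relative to the array centre
    alpha     : disparity scale; each value focuses a different depth plane
    """
    acc = np.zeros_like(views[0], dtype=np.float64)
    for img, (dx, dy) in zip(views, positions):
        # Shift each view in proportion to its offset so that rays from
        # the target depth plane line up across all cameras.
        acc += shift(img, (alpha * dy, alpha * dx, 0), order=1, mode="nearest")
    return acc / len(views)

# Example: a 4x4 grid of cameras, like the Fraunhofer rig described above.
grid = [(x - 1.5, y - 1.5) for y in range(4) for x in range(4)]
views = [np.random.rand(270, 480, 3) for _ in grid]  # stand-in footage
near = refocus(views, grid, alpha=4.0)   # focus on a plane close to the rig
far = refocus(views, grid, alpha=0.5)    # focus on a more distant plane
```

Production tools add per-pixel depth estimation and view interpolation on top of this, but the shift-and-sum core is why a shot's focus can be changed after the fact.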
New reality for cinematographers
Also known as computational cinematography, the idea is anathema to most cinematographers. If all the important camera parameters, such as position, viewing angle, depth of field, aperture and exposure, can be determined in post, there are big questions about where this leaves the DP's craft. “Cinematographers will worry that light fields take away one of their primary tools – composition – because the viewer can move around the space, and see things from different perspectives,” says Ryan Damm, founder of light field systems developer Visby Camera. “On the other hand, this opens up lots of new creative possibilities, but completely changes the creative toolkit.”

Source and Credit :- http://www.tvtechnologyeurope.com/acquisition/computational-cinematography-light-field-imaging-is-here/01239
Forwarded by :- Shri. Swamy DN, dns_v@yahoo.com


Everybody is talking about light fields and nobody fully understands the potential yet


The main driver of interest in light field today is its potential application in virtual reality. Most current VR systems position multiple lenses in a sphere and then stitch the resulting images together. Despite some tweaking in software, this approach arguably lacks the subtleties of parallax that allow a VR viewer to have positional tracking – to move their head side to side, forward and back, or look straight up or down without the illusion breaking. In theory, light-field-captured 360-degree video would create a more genuine sense of presence and freedom of movement for live video, which is only possible today in CG VR experiences. “Cameras shooting 360-video can't use position tracking to synthesise a single perspective,” says Damm. “That is VR video using existing standards, rendered using game engines, and that model won't work.”
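Damm's point is that a stitched panorama bakes in a single viewpoint, whereas a light field lets the renderer resynthesise a view at whatever position the headset reports. As a crude stand-in for real light field rendering (which resamples individual rays and reproduces parallax), the sketch below just blends the four nearest cameras in a 2D grid; all names and dimensions are illustrative:

```python
import numpy as np

def synthesize_view(views, grid_w, grid_h, px, py):
    """Crude novel-view synthesis from a regular camera grid.

    views  : array of shape (grid_h, grid_w, H, W, 3), one image per camera
    px, py : continuous head position in grid coordinates, e.g. (1.3, 0.7)
    """
    x0, y0 = int(np.floor(px)), int(np.floor(py))
    x1, y1 = min(x0 + 1, grid_w - 1), min(y0 + 1, grid_h - 1)
    fx, fy = px - x0, py - y0
    # Bilinear blend of the four nearest captured viewpoints.
    return ((1 - fx) * (1 - fy) * views[y0, x0] +
            fx * (1 - fy) * views[y0, x1] +
            (1 - fx) * fy * views[y1, x0] +
            fx * fy * views[y1, x1])

# As the tracked head moves, px/py change and the rendered view updates --
# something a single stitched 360-degree panorama cannot do.
rig = np.random.rand(4, 4, 270, 480, 3)  # stand-in for captured views
frame = synthesize_view(rig, 4, 4, px=1.3, py=0.7)
```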
Lytro, the Californian maker of the first consumer light field still cameras, announced Lytro Immerge last November and plans to launch it at NAB. Immerge consists of a five-ring globe that captures what Lytro calls a “light-field volume”, dedicated servers for storage and processing, an editor for integrating data with NLEs and compositors, and a video playback engine. Earlier this month Lytro announced the Lytro Cinema camera, which it claims is the first system able to produce a light field master by capturing and processing the entire high-resolution light field. Captured data can be rendered into multiple formats, including IMAX, RealD, Dolby Vision, ACES and HFR. The Lytro Cinema camera features a 755-megapixel RAW sensor that can capture images at up to 300 fps with 16 stops of dynamic range. The company calls it “the highest resolution video sensor ever designed”.


“Everybody is talking about light fields and nobody fully understands the potential yet,” said Aaron Koblin, co-founder and CTO of VR production outfit Vrse, which helped develop Immerge. “We’re just waiting for the moment when we have the tools. I think both the capture and playback of light fields will be the future of cinematic virtual reality.”

VR headsets (Oculus, HTC Vive) and augmented reality systems (Meta, Microsoft Hololens - both in closed beta) are the only means to display light fields at present. In the pipeline are holographic screens, such as that in development at Leia3D, with Samsung among tech giants to have filed similar patents.
None of these displays is capable of showing live action video, though that may change with the release of Immerge. The bigger challenge is creating a camera with enough fidelity that it may be better termed a holographic video camera.
400 petabytes an hour
“With a micro-lens approach you end up with an effective resolution equal to the number of micro-lenses,” says Christian Perwass, founder of Raytrix. “Even with a 40 megapixel camera, with 20,000 micro-lenses you will only end up with 20,000 pixels. The higher the effective resolution, the shallower your depth of field becomes, which means you can't take advantage of all the different views.”

Raytrix, a German company selling precision measuring instruments for industrial work, has effected a compromise by deploying a micro-lens array with three different focal lengths. Based on a 42-megapixel sensor, its R42 camera offers an effective resolution of 10 megapixels at 7 fps.
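Perwass's arithmetic is easy to sanity-check against both examples above. A quick illustrative calculation (the real design trade-offs are of course more involved than a simple ratio):

```python
# Back-of-envelope for plenoptic effective resolution, using the
# illustrative numbers from Perwass's quote above.
sensor_px = 40_000_000      # 40 MP sensor
microlenses = 20_000        # number of lenslets
px_per_lens = sensor_px / microlenses
print(f"angular samples per lenslet: {px_per_lens:.0f}")   # 2000
print(f"naive effective resolution:  {microlenses} px")    # 20000

# The Raytrix R42 improves on this with three focal lengths in the
# micro-lens array: 42 MP in, ~10 MP effective out (at 7 fps) --
# roughly a 4:1 sensor-to-output ratio rather than 2000:1.
print(f"R42 sensor-to-output ratio: {42_000_000 / 10_000_000:.0f}:1")
```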

Perwass believes existing light field systems are limited by the laws of physics. “They are workable with close-up subjects like a face but if you want to extract depth information for scenes 10-20 metres away you might as well use standard stereo 3D cameras,” he says. 
Holograms
There is a third way, using traditional optics: film a scene with multiple arrays of micro-lens imagers, or with higher-resolution sensors – or ideally a combination of both. Phase One released a 100MP stills camera in January; Canon is developing one with 120MP and even has a prototype 250MP chip. However, this only shunts the problem down the line. How much data does a hologram require, exactly? Damm, presenting on the topic for SMPTE at NAB, has done the math. A rough approximation: for a 2-square-metre surface you would need about 500 gigapixels of raw light field data, taking up more than a terabyte per frame. At 60 frames per second that's about 400 petabytes per hour. “That equals a whole lotta hard drives,” he says. “People are cutting various corners to try to make it work, but it's a hard problem.”

Visby, Damm's company, has a light field codec in development but doesn't plan on releasing anything until next year at the earliest. “In the near term we are able to capture light fields and collapse all the data down to a non-three-dimensional form for manipulation in post,” says Simon Robinson, chief scientist at The Foundry. “Imagine looking out of a window in your home. Now imagine that as a holographic picture. That is where we are headed in the longer term.”
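As a footnote to Damm's arithmetic: his figures roughly reconcile if you assume about four bytes of raw data per pixel – that per-pixel figure is an assumption of mine, not a number from the article:

```python
# Sanity-checking the "400 petabytes an hour" estimate.
pixels_per_frame = 500e9     # 500 gigapixels for a 2 m^2 surface
bytes_per_pixel = 4          # assumption: ~4 bytes of raw data per pixel
fps = 60

frame_bytes = pixels_per_frame * bytes_per_pixel
print(f"per frame: {frame_bytes / 1e12:.1f} TB")   # ~2.0 TB per frame
hour_bytes = frame_bytes * fps * 3600
print(f"per hour:  {hour_bytes / 1e15:.0f} PB")    # ~432 PB per hour
```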


