Traditional video is flat; volumetric video has depth. Think holograms, or video game characters. To fully experience volumetric video, you'll want to watch in an environment that lets you perceive that depth, such as AR, VR, or a 3D display. The technology can also feed a traditional VFX pipeline, whether for pre-visualizing a scene, creating effects, or producing assets for a game.
Until recently, creating multi-view volumetric video (the kind that gives you a 3D experience by capturing a subject from multiple angles) was restricted to large, expensive setups that took a long time to process footage. By contrast, Soar makes the same process possible with inexpensive cameras that capture depth and color, can be set up anywhere, and can be viewed in real time.
What do I do with it?
Volumetric video brings genuine human performances into the digital world. So the real question is: what couldn't you do with it? Think of any live event you've wanted to see but couldn't physically attend, or any person you've wanted to visit but couldn't be with. From sports stars to grandma, volumetric video brings you closer.
How do I capture it?
Capturing volumetric video requires cameras, a computer, and software. We use Microsoft Azure Kinect cameras, which measure depth by casting near-infrared light onto the scene and timing how long the light takes to travel back (a time-of-flight approach). Each camera also includes a color sensor for acquiring the scene's texture. You position the cameras around the subject (generally in a circle, but the layout is flexible) and plug them into a desktop PC. The Soar software then takes the raw color and depth data and turns it into a fully textured 3D model for capture and livestreaming, all in real time.
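To make the depth-to-3D step above concrete, here is a minimal sketch of the core geometric operation behind any depth-camera pipeline: back-projecting a depth image into a cloud of 3D points using the pinhole camera model. This is a generic illustration, not Soar's actual implementation, and the camera intrinsics (`fx`, `fy`, `cx`, `cy`) are made-up placeholder values rather than real Azure Kinect calibration data.

```python
import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into camera-space XYZ points.

    fx, fy: focal lengths in pixels; cx, cy: principal point.
    Pixels with zero depth (no reading) are discarded.
    """
    h, w = depth_m.shape
    # u indexes columns, v indexes rows, matching image coordinates.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx   # horizontal offset from optical axis, in meters
    y = (v - cy) * z / fy   # vertical offset from optical axis, in meters
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # keep only pixels with a valid depth reading

# Tiny synthetic frame: a single valid pixel 2 m from the camera.
depth = np.zeros((4, 4))
depth[2, 3] = 2.0
points = depth_to_points(depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
```

In a real multi-camera rig, each camera's point cloud would then be transformed into a shared world coordinate frame (using poses found during calibration) and fused, with the color frames sampled to texture the resulting surface.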