Any good Stereo Demos or Training?

Moderator: SecondMan

bowserlm
Fusioneer
Posts: 154
Joined: Sat Apr 28, 2018 3:44 pm
Been thanked: 2 times

Any good Stereo Demos or Training?

#1

Post by bowserlm » Wed Jun 13, 2018 6:22 pm

We're interested in using Fusion for an upcoming stereo VR project, but I can't seem to find a whole lot on the Stereo stuff in Fusion.

Is there a good resource somewhere? A good training, or website?

Thanks guys!

User avatar
AndrewHazelden
Fusionista
Posts: 961
Joined: Fri Apr 03, 2015 3:20 pm
Location: West Dover, Nova Scotia, Canada
Been thanked: 59 times
Contact:

Re: Any good Stereo Demos or Training?

#2

Post by AndrewHazelden » Wed Jun 13, 2018 7:24 pm

bowserlm wrote:
Wed Jun 13, 2018 6:22 pm
We're interested in using Fusion for an upcoming stereo VR project, but I can't seem to find a whole lot on the Stereo stuff in Fusion.

You should read the two threads linked below on the BMD Fusion forum about Fusion 9's LatLong Stereo rendering approaches. These issues also apply to Resolve Studio 15's LatLong Stereo rendering system.

FWIW, if you used Video Copilot Element 3D rendered footage with AE's now-bundled "SkyBox Converter" plugin to do the cubemap frame assembly and image projection conversion into a LatLong stereo frame format, you would also get non-omnidirectional stereo output. If quality matters, a raytraced lens shader produces nice-looking, comfortable-to-view stereo 360° output.

Problem with stereo spherical camera for VR

https://forum.blackmagicdesign.com/view ... 22&t=69994

Fusion Spherical Camera Rendering doesn't line up

https://forum.blackmagicdesign.com/view ... 2&p=407440

User avatar
joergmark
Posts: 17
Joined: Sat Jan 03, 2015 1:59 am

Re: Any good Stereo Demos or Training?

#3

Post by joergmark » Thu Jun 14, 2018 3:00 pm

Hi Andrew,

Which means it cannot be done in Fusion, because it does not support raytracing?

User avatar
AndrewHazelden
Fusionista
Posts: 961
Joined: Fri Apr 03, 2015 3:20 pm
Location: West Dover, Nova Scotia, Canada
Been thanked: 59 times
Contact:

Re: Any good Stereo Demos or Training?

#4

Post by AndrewHazelden » Thu Jun 14, 2018 3:36 pm

joergmark wrote:
Thu Jun 14, 2018 3:00 pm
Hi Andrew,

Which means it cannot be done in Fusion, because it does not support raytracing?

The Typical OpenGL Stereo 360 Camera Rig

A typical (non-omnidirectional) stereo 360° rendering done without a raytracer uses two fixed cameras placed side by side, each rendered in a single shot into a LatLong image for the left and right eye. This approach has a flaw: the stereo effect falls off to no visible depth as you pan horizontally toward the 90° rotation angle in the LatLong frame, and then the depth inverts into cross-eyed, backwards stereo in the rear of the frame.
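The falloff described above is easy to see numerically: with two fixed cameras, the rig's left/right offset is only perpendicular to the view direction at the pan angle the rig faced, so the effective stereo separation shrinks with the cosine of the pan angle and goes negative behind the rig. A minimal sketch (the function name and 65 mm interaxial are illustrative, not from the original post):

```python
import math

def effective_baseline(ipd, pan_deg):
    """Effective stereo separation perceived when panning inside a
    LatLong frame rendered from two fixed side-by-side cameras.

    ipd:     interaxial distance of the fixed camera pair (metres).
    pan_deg: horizontal pan angle; 0 = the direction the rig faced.
    """
    # The fixed offset projects onto the viewing direction's
    # perpendicular by cos(pan): full stereo at 0 degrees, zero depth
    # at 90 degrees, inverted (cross-eyed) stereo toward 180 degrees.
    return ipd * math.cos(math.radians(pan_deg))

for pan in (0, 45, 90, 135, 180):
    print(f"{pan:3d} deg -> {effective_baseline(0.065, pan):+.4f} m")
```

At 90° the baseline collapses to zero (no depth), and at 180° it is fully negative, which is the backwards stereo in the rear of the frame.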

fulldome_3d_for_everybody.jpg

This image by Roberto Ziche shows what happens to the stereo depth in the view when you use a non-omnidirectional rendering approach.

Raytraced Lens Shaders

A raytraced lens shader lets you do omnidirectional stereo 360° rendering efficiently and without visible artifacts. You can easily fade off the stereo at the poles, adjust the zero-parallax distance, and use screen-space texture maps to refine the stereo parameters. The output is less likely to cause the viewer the discomfort known as brainshear.
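The core trick of such a lens shader is that every pixel column gets its own pair of eye positions, placed tangentially on a small circle, instead of two fixed cameras. A rough sketch of the per-pixel ray setup for the left eye, with a simple pole fade (the `pole_fade_start` control and the axis conventions are my assumptions, not a specific renderer's API):

```python
import math

def ods_ray(u, v, width, height, ipd=0.065, pole_fade_start=0.7):
    """Ray origin and direction for one LatLong pixel under an
    omnidirectional-stereo (ODS) lens-shader model, left eye.

    pole_fade_start: fraction of the way toward a pole at which the
    stereo separation starts fading to zero (hypothetical control,
    mimicking the 'fade off the stereo in the poles' idea above).
    """
    lon = (u / width) * 2.0 * math.pi - math.pi      # -pi .. +pi
    lat = math.pi / 2.0 - (v / height) * math.pi     # +pi/2 .. -pi/2

    # Fade the eye separation near the poles to avoid vertical-parallax
    # artifacts when looking straight up or down.
    t = abs(lat) / (math.pi / 2.0)
    fade = 1.0 if t < pole_fade_start else max(0.0, (1.0 - t) / (1.0 - pole_fade_start))
    half_ipd = (ipd / 2.0) * fade

    # Viewing direction for this pixel of the spherical image.
    d = (math.cos(lat) * math.sin(lon),
         math.sin(lat),
         -math.cos(lat) * math.cos(lon))
    # Left-eye origin lies on a circle of radius ipd/2, tangent
    # (perpendicular) to the horizontal viewing direction.
    o = (-half_ipd * math.cos(lon), 0.0, -half_ipd * math.sin(lon))
    return o, d
```

Because the eye position rotates with the viewing longitude, the full interaxial separation is preserved in every horizontal look direction, which is exactly what the fixed two-camera rig cannot do.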

Using 360VR Stereo Stitching on CG Renderings

If you try to do omnidirectional stereo output without a lens shader, you are basically recreating the stitching process used with a physical 360VR camera rig. That means you have stitching and blending seams to worry about if you use only a few camera views or have too much camera separation between them.

MistikaVR makes optical-flow-based stereo stitching look easy these days, versus a very manual approach in a node-based package that must be tweaked per shot with lots of nodes to manage.

I ran into memory leaks in Fusion 9.0.2 last year when doing disparity-mapping-based stereo stitching of fisheye imagery, so I suspect that would be another concern for CG renders going through the same workflow until the next Fusion update comes out.

You would need more than 18 views rendered and stitched together with this approach before you have anything close to acceptable output for even a previz-grade stereo 360° rendering done using OpenGL.
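The view-count requirement comes down to how much angular overlap adjacent cameras leave for blending. A quick back-of-the-envelope check (the 90° per-camera FOV is an illustrative assumption):

```python
def seam_overlap(num_views, fov_deg):
    """Angular overlap (degrees) between adjacent camera views in a
    ring rig: each view must cover its 360/N slot plus blend margin."""
    spacing = 360.0 / num_views
    return fov_deg - spacing

# With 18 views of 90-degree horizontal FOV, adjacent views overlap by
# 70 degrees, giving a wide blend region; with only 6 views the overlap
# drops to 30 degrees, and seams and stereo discontinuities get harder
# to hide.
for n in (6, 12, 18):
    print(n, "views ->", seam_overlap(n, 90.0), "deg overlap")
```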

Re-creating a lens shader with hundreds of sliced left and right eye camera views

If you were using your own custom rig and node-based workflow to produce a still-image stereo 3D 360° output in Fusion, and fast render times weren't a concern, it should be possible to use a complex setup with a TimeStretcher node and a turning panoramic camera to render super-thin vertical strips for the left and right eyes that are then reassembled into your LatLong view. Realistically, though, if you had access to any 3D package like C4D, Blender Cycles, Arnold, Redshift, or V-Ray, it would likely beat that render time on a scene of the same visual complexity. It becomes more of an academic challenge than a productive workflow.
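The slit-scan scheduling behind that idea is simple to sketch: one rendered frame per output column, with the camera yawing a fixed fraction of 360° between frames (the helper name and one-column-per-frame assumption are mine, not a documented Fusion setup):

```python
def slice_schedule(out_width, strips_per_frame=1):
    """Camera yaw per rendered frame for a slit-scan style LatLong
    build: a turning camera renders one thin vertical strip per frame,
    and the comp (e.g. via TimeStretcher plus crops/merges) reassembles
    the strips into the full panorama.

    Returns a list of (frame_number, yaw_degrees) pairs.
    """
    frames = out_width // strips_per_frame
    step = 360.0 / frames
    return [(f, f * step) for f in range(frames)]

# A 3840-pixel-wide LatLong built one column at a time needs 3840
# rendered frames per eye, with the camera yawing 0.09375 degrees
# between frames -- which is why render time makes this academic.
schedule = slice_schedule(3840)
```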

User avatar
Chad
Fusionator
Posts: 1243
Joined: Fri Aug 08, 2014 1:11 pm
Been thanked: 18 times

Re: Any good Stereo Demos or Training?

#5

Post by Chad » Fri Jun 15, 2018 3:34 pm

I wonder what the tradeoffs would be for using mono renders (which are trivial) along with a depth map (again, trivial) to make your stereo renders. Obviously transparency will be an issue (like Harry Potter's glasses) but you'd have stereo that worked at any angle.
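The mono-plus-depth idea Chad describes is essentially depth-image-based rendering (DIBR): shift each pixel horizontally by a disparity derived from its depth to synthesize the second eye. A naive sketch, assuming a simple inverse-depth disparity model (the parameter names and units are illustrative); the hole mask it returns is exactly the transparency/disocclusion problem mentioned above:

```python
import numpy as np

def depth_to_stereo(mono, depth, ipd_px=10.0, zero_parallax=5.0):
    """Naive DIBR: forward-warp a mono render into a second eye view.

    mono:  (H, W, 3) float image.
    depth: (H, W) depth in scene units (larger = farther).
    Pixels at the zero-parallax distance don't move; nearer pixels
    shift more. Disoccluded regions are left as holes to inpaint.
    """
    h, w = depth.shape
    right = np.zeros_like(mono)
    filled = np.zeros((h, w), dtype=bool)
    # Disparity falls off with 1/depth, crossing zero at zero_parallax.
    disparity = ipd_px * (zero_parallax / np.maximum(depth, 1e-6) - 1.0)
    xs = np.arange(w)
    for y in range(h):
        tx = np.clip((xs + disparity[y]).round().astype(int), 0, w - 1)
        right[y, tx] = mono[y]       # forward splat (last write wins)
        filled[y, tx] = True
    return right, ~filled            # image plus hole mask
```

This works at any viewing angle since the depth map rotates with the mono render, but transparent and reflective surfaces carry only one depth value per pixel, so glasses, glass, and hair edges warp incorrectly, matching the tradeoff Chad flags.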