KartaVR v4 Freeware Edition  [SUBMITTED]

Moderator: SecondMan

AndrewHazelden
Fusionator
Posts: 1529
Joined: Fri Apr 03, 2015 3:20 pm
Answers: 8
Location: West Dover, Nova Scotia, Canada
Been thanked: 98 times

Re: KartaVR v4 Freeware Edition

#46

Post by AndrewHazelden » Mon Apr 15, 2019 8:51 am

AndrewHazelden wrote:
Wed Mar 06, 2019 4:56 am
cdwouda wrote:
Wed Mar 06, 2019 4:21 am
Hi,

I'm using KartaVR version 4.0.2. The Rectangular to Polar node used inside the equirectangular-to-fisheye conversion tool doesn't seem to work correctly in Resolve. If I open the Fusion Studio standalone I do get good results, though. Does anyone have thoughts on what could cause this and why it's not working properly within Resolve's Fusion?

Chris.
Hi cdwouda.

This is a known issue in Resolve. I've already submitted a bug report to one of the Resolve Fusion page developers about the underlying problem. Until a new Resolve version is released, the simplest workaround is to keep on using Fusion Standalone.
BTW, if you try Resolve 16 (and Fusion 16) I think you will find that all of the KartaVR nodes that used polar to rectangular conversions work as expected now (basically the nodes that did fisheye/domemaster <> equirectangular conversions). :cheer:

icifredo
Posts: 4
Joined: Fri Feb 15, 2019 6:15 am

Re: KartaVR v4 Freeware Edition

#47

Post by icifredo » Tue Apr 16, 2019 1:34 am

Hi Andrew
What is the best way to create text inside Fusion for fulldome content? :roll:


Re: KartaVR v4 Freeware Edition

#48

Post by AndrewHazelden » Tue Apr 16, 2019 5:46 am

icifredo wrote:
Tue Apr 16, 2019 1:34 am
What is the best way to create text inside Fusion for fulldome content? :roll:
You have a few options you could use to put text into a fulldome format.

For a 2D graphic you could use the Rectilinear2Equirectangular macro to remap a regular "flat" graphic element (like an imported SVG illustration or a Text+ node) into an Equirectangular 360x180° image projection. Then you could take that into a fulldome format using the Equirectangular2Domemaster180 macro. This approach would let you easily reposition and animate the 2D graphic element.

Alternatively, you could create the text as a 3D element in the Fusion 3D workspace and render it to a fulldome image projection using the DomemasterRenderer3D node.

Or you could create the text as a 3D element in the Fusion 3D workspace and render it using Fusion 9's native spherical camera. Then you could place that spherical camera rendered media into a fulldome format using either the Equirectangular2Domemaster180 macro or an Equirectangular2Fisheye macro (if you need post XYZ rotation controls).
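Under the hood, a conversion like Equirectangular2Domemaster180 is an inverse mapping: for every pixel of the fisheye output, it works out which latitude/longitude of the equirectangular source to sample. Here is a minimal sketch of that math as a standalone Python helper; the angle conventions are my assumption, and the actual KartaVR macro may orient the dome differently:

```python
import math

def domemaster_to_equirect(u, v):
    # (u, v) is a domemaster 180 output pixel in normalized [-1, 1]
    # coordinates with the zenith at the disc center. Returns the
    # (longitude, latitude) in degrees to sample from an equirectangular
    # source, or None outside the fisheye image circle.
    # NOTE: the angle conventions are an assumption for illustration;
    # the real KartaVR macro may differ.
    r = math.hypot(u, v)
    if r > 1.0:
        return None
    lat = 90.0 - r * 90.0                 # r=0 is the zenith, r=1 the horizon
    lon = math.degrees(math.atan2(v, u))  # azimuth around the dome
    return lon, lat

def equirect_pixel(lon, lat, width, height):
    # Convert (longitude, latitude) in degrees to a pixel coordinate
    # in a width x height equirectangular (LatLong) image.
    x = (lon / 360.0 + 0.5) * (width - 1)
    y = (0.5 - lat / 180.0) * (height - 1)
    return x, y
```

In a real remap you would run this per output pixel and bilinearly sample the source; the macros do the equivalent on the GPU.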

KartaVR Example Comps

In Reactor's "KartaVR/Comps" category you can install the "KartaVR Example Comps" atom package. If you navigate to the Reactor:/Deploy/Comps/KartaVR/ folder you can explore the two bundled example comps.

Chad
Fusionator
Posts: 1436
Joined: Fri Aug 08, 2014 1:11 pm
Been thanked: 18 times

Re: KartaVR v4 Freeware Edition

#49

Post by Chad » Tue Apr 16, 2019 6:17 am

You could also just project your text onto a dome that has UV's for the projection format you are using. Then just render to UVs. That gives you an easy way to lay out text and make sure it's readable.

cdwouda
Posts: 4
Joined: Thu Jan 31, 2019 8:21 am

Re: KartaVR v4 Freeware Edition

#50

Post by cdwouda » Fri Apr 19, 2019 6:24 am

AndrewHazelden wrote:
Mon Apr 15, 2019 8:51 am
BTW, if you try Resolve 16 (and Fusion 16) I think you will find that all of the KartaVR nodes that used polar to rectangular conversions work as expected now (basically the nodes that did fisheye/domemaster <> equirectangular conversions). :cheer:
:cheer: I noticed, great! :cheer:


Grading 360/3D VR Top/Bottom Formatted Footage in Resolve 16's Color Page

#51

Post by AndrewHazelden » Fri Apr 26, 2019 7:30 pm

This week I watched a large discussion unfold in an immersive video user group about Resolve 16 workflows for 360VR post-production. A lot of questions were asked, and one of the biggest themes was artists wondering how to color grade 360/3D footage.

I would like to show a workaround for bringing Top/Bottom (aka Over/Under) formatted LatLong stereo 3D media into Resolve 16. This illustrated tutorial walks through the process of taking 360/3D VR footage (with Resolve's native Stereo 3D metadata setting intact) from the Media Pool, into the Edit page, and finally into the Color page.

This approach could be useful to an artist who is using Resolve purely to apply a color grade to pre-stitched 360VR Stereo media that was combined into a single video file by an external tool like MistikaVR, Kolor APV, Google Jump, etc...

This is not the only way to work with Over/Under pre-formatted footage in Resolve's color grading system, but it is one solution you could use in a bind when your left and right eye views are merged in advance for all of the clips in your editing timeline.

Syncing the Top/Bottom Footage as Stereo 3D Media

Start by creating a new Resolve Project.

In the Media Pool create two new Bins named "Left" and "Right".

(The "create new bin" hotkey is "Shift + Command + N" on macOS.)

1 Media Pool - Left and Right Bins.png

Drop a copy of all your Top/Bottom stereo 3D media into both of the "Left" and "Right" bin folders.

If a warning dialog appears saying your clips have a different frame rate than the current project settings, click the "Change" button.

Right click on one of the clips in the Media Pool "Left" folder. In the contextual menu select the "Stereo 3D Sync..." menu item.

A "Stereo 3D Sync" dialog will appear.

1 Media Pool - Stereo 3D Sync.png

Next to the "Choose left eye folder" field, click the "Browse" button. A "Folders" window will appear. Expand the triangle to the left of the word "Master" and select the "Left" bin you just created. Then press the "Ok" button to close the Folders window.

1 Media Pool - Stereo 3D Sync - Folders View.png

Next to the "Choose right eye folder" field, click the "Browse" button. A "Folders" window will appear. Expand the triangle to the left of the word "Master" and select the "Right" bin you just created. Then press the "Ok" button to close the Folders window.

Next to the "Output folder" field, click the "Browse" button. A "Folders" window will appear. Select the "Master" bin. Then press the "Ok" button to close the Folders window.

Back in the "Stereo 3D Sync" dialog we can ignore the "Left identifiers" and "Right identifiers" fields since we are using the same Top/Bottom media in both Bin folders so both the left and right view Filenames will be identical.

Click the "Sync" button. If the Sync command was successful you will see a "Done" dialog that lists all of the clips as synced. Click the "Ok" button.

Note: At this point it is important to know that the Top/Bottom stereo 3D footage has been synced (paired up as left and right eye views). You won't be able to see the 3D left/right eye view differences until you are in the Color page and have applied geometric transforms to break apart the Top/Bottom view relationship.

Creating the 3D Timeline

For this simple proof-of-concept 360/3D Top/Bottom test, let's try loading one or two of the Stereo 3D linked clips into a new Timeline.

(Once you have this process worked out, you can apply the same approach to an EDL you have imported on your own project, and use that to build the full Edit page timeline automatically.)

Switch to the Master bin in the Media Pool.

Select the 3D clips in the bin (which have a small 3D logo icon on their bottom left corners).

2 Create new Timeline from selected clips.png

Right-click on one of the 3D clips. In the contextual menu select "Create New Timeline Using Selected Clips...".

A "New Timeline" dialog will appear. Set the Timeline Name to "3D Timeline 1". Click the "Use Custom Settings" button on the lower left corner of the window.

3 New Timeline 1 - Use Custom Settings.png

Switch to the "Format" tab. Set the "Timeline Resolution" to "Custom". Then change the Timeline Resolution number fields to 7680 x 3840. (This is the single-eye monoscopic view resolution when your original Top/Bottom source footage is a 7680 x 7680 video clip.)
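The per-eye resolution math is simply the full source width by half the source height. A tiny illustrative helper (not part of KartaVR or Resolve):

```python
def per_eye_resolution(width, height):
    # Per-eye timeline resolution for Top/Bottom (Over/Under) media:
    # the full source width by half the source height.
    # Illustrative helper, not part of KartaVR or Resolve.
    if height % 2:
        raise ValueError("Top/Bottom source height should be an even number")
    return width, height // 2
```

For the 7680 x 7680 example above this returns (7680, 3840), which is the value keyed into the custom Timeline Resolution fields.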

Set the Timeline Frame Rate to either 30 frames per second or 60 frames per second based upon the native frame rate of your media.

Click the "Create" button.

4 New Timeline 2 - Format Settings.png

Now you can switch to the Edit page and the new stereo 3D timeline should be active with the clips added automatically.

5 Timeline - Switch to Color Page.png

Separating the Left and Right Eye Stereo views in the Color page

Let's try applying a quick grade to these clips.

To preserve the Resolve Media Pool/Edit page based stereo 3D metadata tag that is attached to your footage, switch directly to the Color page to apply your color grade and make sure to *avoid* the Fusion page.

In the Color page each of the video clips will have the word "3D" written below the thumbnail on the lower right.

6 Color Page - 3D Toolbar Item.png

To split apart the top and bottom views into separate eyes, look in the center of the Color page toolbar where there is a "3D" button. Clicking this toolbar item will display the "Stereo 3D" Controls.

On the far left side of the Stereo 3D Controls view is the "Eye" settings where you can customize the "Left" and "Right" eye outputs.

This area has the controls needed to extract the combined Top/Bottom formatted LatLong media by individually repositioning the Left/Right views on the vertical axis and by zooming the footage.

Left Eye Customization

In the "Eye" controls, switch to the "Left" eye.

7 Color Page - Stereo 3D - Eye Left.png

Set the Zoom setting to "2" and press enter. This will fit the Top/Bottom footage to the width of the frame.

Now adjust the "Tilt" control to pan the footage so you are looking perfectly at just the Left Eye from the Top/Bottom media.

On a 7K x 7K clip the "Left" eye's Tilt control would be set to "-480" to isolate just the left eye viewable image data. Press enter to accept this number value.

Right Eye Customization

In the "Eye" controls, switch to the "Right" eye.

8 Color Page - Stereo 3D - Eye Right.png

Set the Zoom setting to "2" and press enter. This will fit the Top/Bottom footage to the width of the frame.

Now let's adjust the "Tilt" control to pan the footage so you are looking perfectly at just the Right Eye from the Top/Bottom media.

On a 7K x 7K clip the "Right" eye's Tilt control would be set to "480" to isolate just the right eye viewable image data. Press enter to accept this number.
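In pixel terms, the Zoom and Tilt settings above are isolating one half of the Top/Bottom frame per eye. A hedged sketch of the underlying crop regions; note that Resolve expresses the equivalent view shift in its own Pan/Tilt units (the ±480 Tilt values above), not in raw source pixels:

```python
def top_bottom_eye_regions(width, height):
    # Pixel regions each eye occupies in Top/Bottom footage, following
    # this tutorial's convention of the left eye on top. Returned as
    # (x, y, w, h) boxes. Resolve expresses the equivalent view shift
    # in its own Pan/Tilt units (the +/-480 Tilt values above), not in
    # raw source pixels. Hypothetical helper for illustration only.
    half = height // 2
    return {
        "left": (0, 0, width, half),      # top half of the frame
        "right": (0, half, width, half),  # bottom half of the frame
    }
```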

Vision Controls

The "Vision" controls can be used to preview the now isolated stereo 3D footage in several different stereo output modes.

When the "Mono" mode is active in the "Vision" controls, clicking between the "Eye" control's "Left" and "Right" settings will alternate between showing each of those views and the corresponding node work area.

Switch the "Vision" control from the "Mono" mode over to "Stereo".

On the "Stereo" control set the "Out" mode to "Anaglyph (B/W)". If you have a pair of old anaglyph 3D glasses, you can check that the views were re-combined correctly.

9 Color Page - Stereo 3D - Stereo Anaglyph BW.png

Now set the "Out" mode to "Difference" to see the areas where there is a lot of parallax shift in the stereoscopic imagery.

Finally set the "Out" mode to "Side by side", and then "Top/Bottom".

If the left and right eye view customization process worked successfully you should be able to toggle correctly between each of these stereo 3D output modes and the footage will be properly adapted each time.

Remember, if you want to look at the left and right eye views individually, toggling the "Eye" setting from Left to Right will let you see that specific view.

Note: If you use the "Swap & Copy" grade controls you will have to re-key in the "Tilt" parameter for the left and right eye views by hand again.

6 Color Page - Nodes.png

At this point you can now manually add grades to the stereo 3D footage. Pay special attention to gradient effects or power windows that cross over the zone between the left and right frame edge boundaries.

Also, try to avoid placing any hard edged shaped windows near the border of the zenith and nadir zones, as the LatLong image projection has a view compression/squishing effect at the poles.

Deliver Page

If you switch to the Deliver page at this point you can now use the "Render Stereoscopic 3D" controls to output the media in your desired stereo 3D format. As an example, if you wanted to save out the final graded left and right eye 360VR media, you can enable the (x) Both eyes as "Separate files" setting. This would generate a "LEFT" and "RIGHT" folder in your output location and inside each folder you would have a per-eye view movie file saved.

10 Deliver Page - Render Stereoscopic 3D.png

You have the option to adjust the rendered resolution, and the video codec used, too. Once all the settings are configured you can add that media to the output render queue. After the queue is loaded up, you can then render the footage to disk using the "Start Render" button.

11 Deliver Page - Start Render.png

If you want to upload a video to a VR video platform like YouTube 360 you need to generate a Top/Bottom (aka. Over/Under) formatted movie. After the movie file is generated on disk, you would use an external tool to embed the spatial media metadata tags.
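One such external tool is Google's open source spatial media metadata injector, which has both a GUI and a command line form. The sketch below only builds the CLI injection command as an argv list; the flag names are from my recollection of the google/spatial-media project, so double-check them against your installed copy:

```python
def build_inject_command(src, dst, stereo="top-bottom"):
    # Build the argv for Google's open source spatial media metadata
    # injector CLI (the command line sibling of the GUI tool).
    # ASSUMPTION: flag names are from the google/spatial-media project
    # README; verify against your installed copy before relying on this.
    if stereo not in ("none", "top-bottom", "left-right"):
        raise ValueError("unsupported stereo layout: " + stereo)
    return ["python", "spatialmedia", "-i", "--stereo=" + stereo, src, dst]

# e.g. pass the list to subprocess.run() to perform the injection:
# subprocess.run(build_inject_command("graded.mp4", "graded_injected.mp4"))
```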

If you have WSL's Reactor package manager installed inside your copy of Resolve 15/16, you can take advantage of the "Bin" category's ready-to-install tools for macOS and Windows users. You can add a copy of the "Spatial Media Metadata Injector" in only a few clicks:

12 Reactor - Spatial Media Injector.png

And since this is a KartaVR thread on WSL, I'd like to mention that the Reactor provided "KartaVR Scripts | Send Media" atom that is accessible in the "KartaVR > Scripts" category has a helper tool for easily opening the "Spatial Media Metadata Injector" program.

13 Reactor - KartaVR - Send Media Scripts.png

After you install the "KartaVR Scripts | Send Media" atom, you will then be able to use the Resolve "Workspaces > Scripts > KartaVR > Send Media to > Open 360 Video Metadata Tool" menu item. This menu item launches the Reactor installed copy of the spatial media metadata tool.

14 Resolve Menus - Open Spatial Media Tool.png

You can then use the metadata injector GUI to add the required tags to your Top/Bottom formatted movie file.

15 Spatial Media Tool.png

Closing Words

The workflow mentioned in this post clearly isn't perfect, nor is it the easiest process to follow, but it *will* let you access the native Resolve stereo 3D features in the Color and Deliver pages while you are working from your original Top/Bottom formatted LatLong media files. :)



Re: KartaVR v4 Freeware Edition

#52

Post by AndrewHazelden » Thu May 30, 2019 3:05 pm

Hi. Reactor v3 was released today. This update fixes the previous callback error that happened when large atom packages were installed. Installing KartaVR is much easier now, taking only a few clicks. Also, atom package download speeds are improved since the Reactor toolset is now multi-threaded. :)

KartaVR Install.png


Re: KartaVR v4 Freeware Edition

#53

Post by AndrewHazelden » Sat Aug 03, 2019 8:53 pm

Hi. If you're looking for the KartaVR HTML documentation that used to be hosted on my blog, you can find it in Reactor by installing the "KartaVR/Docs/KartaVR Documentation" atom package.

avclubvids
Fusioneer
Posts: 113
Joined: Sun Sep 17, 2017 7:51 am
Location: Los Angeles, CA

Re: KartaVR v4 Freeware Edition

#54

Post by avclubvids » Fri Sep 20, 2019 9:14 pm

AndrewHazelden wrote:
Sun Feb 10, 2019 1:29 pm
JCB91 wrote:
Fri Feb 08, 2019 8:03 am
Do you think you might have the time and motivation to update the Generate UV Pass in PTGui.lua script to work with the newest version of PTGui?
Hi @JCB91.

Adding PTGui Pro 11 support for JSON based .pts files won't likely happen until KartaVR v5, which is quite a few months away. When I update the Generate UV Pass in PTGui script I also need to update the matching PTGui Project Importer and PTGui Mask Importer scripts, too.
Hey @Andrew – question on this: until you manage to update the PTGUI importer and exporter scripts, do you know if there is any way to convert a PTGUI 11 .pts into a PTGUI 10 .pts, or is it just a matter of "don't use 11 yet"?


Re: KartaVR v4 Freeware Edition

#55

Post by AndrewHazelden » Fri Sep 20, 2019 9:51 pm

Hi @avclubvids.

I haven't come back to update the KartaVR scripts to add PTGui 11 support yet. In addition to PTGui 11 switching to a JSON based .pts file format, it also changed the lens distortion model used for lens corrections, and the way individual lenses are defined in a .pts file which is noticeable if you have a more complex camera rig.

TBH, I haven't gotten a clear reading on whether the 360VR community has moved to optical flow stitching in MistikaVR for most of their video stitching needs and left PTGui Pro behind for final frame output.

As far as upcoming KartaVR changes go, I'm working on an improvement to the Z360 6DoF tools so separate left and right eye LatLong views can be fed in as color + depthmap over/under images. That way each eye's displacement effect and color texture maps can be customized, and occlusion artifacts can be fine-tuned slightly. This came about as a result of chats on the Facebook "3dstereopanoramas" user group where the new krpano image+depth support was discussed.


Re: KartaVR v4 Freeware Edition

#56

Post by AndrewHazelden » Sat Sep 21, 2019 4:21 am

avclubvids wrote:
Fri Sep 20, 2019 9:14 pm
or is it just a matter of "don't use 11 yet"?
Yup.


Re: KartaVR v4 Freeware Edition

#57

Post by avclubvids » Sat Sep 21, 2019 2:48 pm

AndrewHazelden wrote:
Fri Sep 20, 2019 9:51 pm
TBH I haven't gotten a clear reading if the 360VR community has moved to optical flow stitching in MistikaVR for most of their video stitching needs and left PTGui Pro behind for final frame output.
Mistika has certainly taken over the prosumer end of the 360°/180° market (which is the vast majority of work right now), but optical flow only works until it doesn't, which means older, more manual processes are still needed for perfect high-end stitching. I'm going to see if PTGui 10 works the way we need it to; we might have to move some of it into Nuke/Cara.

Looking at the sample stitching files, I'm finding a lot of KartaVR's tools will soften footage excessively. I'm still working with different options, but are there any global "gotchas", like needing to double or quadruple resolutions, change filter settings, or similar workflow thoughts? Just wondering if you're seeing better results than I am and whether it's due to preferences, etc.


PTGui BatchBuilder Stitching With KartaVR + Fusion

#58

Post by AndrewHazelden » Sat Sep 21, 2019 7:23 pm

avclubvids wrote:
Sat Sep 21, 2019 2:48 pm
Looking at the sample stitching files, I'm finding a lot of KartaVR's tools will soften footage excessively. I'm still working with different options, but are there any global "gotchas", like needing to double or quadruple resolutions, change filter settings, or similar workflow thoughts? Just wondering if you're seeing better results than I am and whether it's due to preferences, etc.
There are many ways to do the stitching and panoramic image retouching with the KartaVR scripts and macro nodes that are provided. Each workflow is chosen for a specific need.

For video sequence duration 360VR stitched output, KartaVR-assisted UV pass warping based panoramic stitching or PTGui BatchBuilder stitching typically works out best in a Fusion-based VR pipeline. :)

If you are doing a manually controlled, high-resolution omni-stereo stitch, or some exotic stereo-depthmap 6DoF workflow, it makes more sense to use pure KartaVR conversion nodes parametrically to do the warping + stitching + post-production. In that case the oversampling on the input image resolution doesn't slow the process down by much, since the artist tweak time, not the render time, is the most involved part.

All of the KartaVR Conversion macro nodes benefit from oversampling the input footage to help cut down on the filtering losses that come with image projection remapping. Make sure to keep an eye on the "Height" attribute on the KartaVR Conversion macros so it's not accidentally set lower than your original footage's native resolution. I typically expression link the Input Height attributes on the KartaVR nodes I use throughout my comp so they are all synced together and can be quickly up-rezzed or down-rezzed as needed, in addition to using Fusion's Proxy system for interactivity.

Most of the KartaVR compositing and stitching examples were designed to be viewable and savable to disk within the 4K UHD (3840 x 2160 px) resolution limit of Resolve (Free) and Fusion 9 (Free). That's why the examples don't try to output 6K+ resolution stitched footage or CG rendered images: KartaVR's PanoView script and the "Send Media to" scripts would not be able to export at that setting for the typical new user trying 360VR in Fusion or Resolve Free for the first time. For the same reason, the displacement resolution settings on the Z360 nodes are set to a low value by default so they can work on a laptop-class GPU with 2 GB of VRAM. TBH, the primary purpose of the example comps was to show the node connections needed to push 360VR footage around in a Fusion comp, and the 16 GB of example media provided was already larger than I had ideally hoped for.

How I Like to Stitch 2D Mono 360° Footage

Note: This is not the easiest approach to use for 360VR panoramic stitching, but software cost-wise it is very affordable, and it provides a very high degree of control over the full process. In my experience this workflow lets you create a seamless, finely-stitched output, and you can check that things are proceeding correctly at each stage of the process.

When I need to do a 2D mono 360° LatLong stitch with 100% full-manual adjustments I use KartaVR's "Convert Movies to Image Sequences" script to automatically break down a folder of fisheye formatted ProRes or MP4 videos into automatically named image sequences. This script also provides the option to save out a set of rough temp audio files to a .wav format from each of the source video clips at the same time as the image sequence is extracted.

My favourite default setting to use is to have the Image Name set to <name>/<name>.#.ext (In a Subfolder) and to have 4 digit frame padding.

scripts-convert-movies-to-image-sequences.png

The extracted footage created by the "Convert Movies to Image Sequences" script is then brought manually into Fusion via Loader nodes. The timing of each Loader's in/out points, the composite's Global start/end, and the Render start/end time ranges are adjusted once I've scrubbed through the extracted footage in the Fusion timeline.

Let BatchBuilder do the Hard Stuff

Next, I typically use the "Script > KartaVR > Stitching > PTGui BatchBuilder Creator" menu item to take my unstitched fisheye formatted image sequences and prepare them as PTGui centric specially numbered folder-based image sequences. If you've never used BatchBuilder before in PTGui Pro, it takes a little getting used to before it becomes second nature. :)

scripts-ptgui-batchbuilder-creator.png

This script grabs the frame number from the image sequences and pushes that frame number into the numbered BatchBuilder folder name. The script then makes each of the individual camera view images' filenames identical across all of those numbered folders. As an example, an image sequence named in the format name.####.ext will be renamed and placed in a PTGui BatchBuilder numbered folder with a hierarchy of ####/name.ext.
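The renaming scheme can be sketched in a few lines. This is a hypothetical standalone helper for illustration, not the actual KartaVR Lua script:

```python
import os
import re

def to_batchbuilder_path(seq_filename):
    # Map an image sequence filename like "cam1.0001.exr" to the PTGui
    # BatchBuilder layout "0001/cam1.exr": the frame number becomes the
    # folder name, so the per-camera filename is identical across every
    # numbered folder. Hypothetical helper, not the KartaVR Lua script.
    m = re.match(r"^(?P<name>.+)\.(?P<frame>\d+)\.(?P<ext>[^.]+)$", seq_filename)
    if not m:
        raise ValueError("expected a name.####.ext style filename")
    return os.path.join(m.group("frame"), m.group("name") + "." + m.group("ext"))

def from_batchbuilder_path(bb_path):
    # The reverse mapping, as performed by the BatchBuilder Extractor
    # script: "0001/cam1.exr" -> "cam1.0001.exr".
    frame, filename = os.path.split(bb_path)
    name, ext = os.path.splitext(filename)
    return "%s.%s%s" % (name, frame, ext)
```

The reverse function mirrors what the "PTGui BatchBuilder Extractor" script described later does when it turns the numbered folders back into a normal image sequence.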

ptgui-batch-builder-folders.png

The KartaVR BatchBuilder Creator script takes a little while to finish its magic as it places all of the unstitched media into the numbered PTGui BatchBuilder folders.

I then select the Loader nodes, which are using the first timeline frame's unstitched camera footage for all of the camera views in the rig.

I then run the KartaVR "Scripts > KartaVR > Send Media to > Send Media to PTGui" menu item to open PTGui 10 with all of the camera view images loaded at once. The script works by passing the disk-based filenames from Loader/Saver nodes to PTGui via the CLI.

scripts-send-media-to-ptgui.png

If you haven't used the KartaVR "Send Media to" scripts before, you should configure the script's settings using the "Scripts > KartaVR > Send Media to > Edit Send Media to Preferences" menu item that opens an AskUser dialog. If you have the KartaVR hotkeys file installed from Reactor, the hotkey for instantly opening the "Edit Send Media to Preferences" dialog is Shift+A.

Edit Send Media to Preferences.png

After PTGui 10 opens up, I hit cancel on the initial PTGui dialog that appears for setting up the lens parameters. Then I select a Circular Fisheye lens with a 180° FOV in the main PTGui window, or I use a lens preset from my PTGui database of lenses.

Next, I set PTGui to the Advanced mode, and create/customize the .pts stitching template with manual control points, and a numerically levelled horizon in the panoramic editor view. I typically apply manual PTGui masking to the camera views where there might be seaming issues for regions that cross the stitching boundary zone.

I specify the areas to include/exclude via hand-painted masks in PTGui and make sure to leave a gap between the red/green mask areas so PTGui's stitcher has flexibility on where to make the seamless blends. Then I save out a single frame stitched image from PTGui and check it for issues. If everything looks good then I proceed to the next step.

At this point, I use PTGui's BatchBuilder system to batch stitch and process the numbered folder-based fisheye image sequences into a finished LatLong formatted output. This is done with the "Use current project as a template" option in the BatchBuilder dialog.

BatchBuilder With PTS Files Loaded.png

As the PTGui Pro BatchBuilder job queue runs and stitches your imagery, you can watch as each of the BatchBuilder numbered folders is used to process an individual frame set of images, with its own .pts file copied in place based upon your original .pts template file.

Batch Bulder Job Queue.png

After that stitching task is done, the finished PTGui BatchBuilder stitched LatLong media is brought back into Fusion using Loader nodes that access the first frame set of imagery from the BatchBuilder numbered folder.

With the Loader nodes selected, the "Script > KartaVR > Stitching > PTGui BatchBuilder Extractor" menu item is used to turn the numbered folder-based footage back into a more sane and post-production friendly "name.####.ext" style image sequence. The "PTGui BatchBuilder Extractor" script can also be used to pull the original unstitched camera view footage out of the numbered folders too if required.

At this point, I have a PTGui image sequence that is stitched and looks okay-ish. There is still a tripod in the shot and other things that should ideally be cleaned up in post-production. But the first order of business is to view the results. I typically use a Saver node to encode an MP4 movie from the LatLong based image sequence at this point.

Adding 360VR Metadata

YouTube360 / Google style spatial media metadata can be embedded into the video file using the "Script > KartaVR > Send Media to > Open 360 Video Metadata Tool" menu item. This will launch the Reactor installed "Spatial Media Metadata Injector" application if you are running Fusion on a macOS or Windows system.

Note: If you have a scripting TD in your Fusion based shop, you can check out the Fusion 8 era KartaVR "SaverIntool" macro node, as it automatically encodes Fusion saved EXR sequences into MP4 movies via FFmpeg based intool scripting. There is an updated version of that macro floating around on a WSL thread that works in newer versions of Fusion.

FWIW as an advanced workflow automation tip — if your TD wants to do it — it's possible to have spatial media metadata applied automatically to your Fusion renders via the Python CLI based Google spatial media metadata script in either intool scripting on a "SaverIntool" macro node, or in a stock built-in Fusion RunCommand node.

I did the same automatic spatial media metadata encoding in my PlayblastVR for Maya toolset back in the day using this .py scripted automated approach, since the Python library was built-in and shipped with Maya by default, and Python could be relied on to exist on macOS/Windows/Linux.

Previewing the Results

By selecting the Saver node, the stitched MP4 movie can be displayed in your panoramic playback tool of choice using KartaVR's "Script > KartaVR > Viewers > PanoView" menu item.

PanoView can also be launched by clicking in the Nodes view, with a Loader or Saver Node pre-selected, and pressing the TAB hotkey. (This hotkey is available only if you have the KartaVR Hotkeys file installed, and you don't have the Reactor provided Nuke2Fusion atom package installed).

If this is the first time running PanoView on a Fusion system, the PanoView settings can be quickly opened by pressing the SHIFT + TAB hotkey, or by selecting the "Script > KartaVR > Viewers > Edit PanoView Preferences" menu item.

Edit PanoView Preferences.png

The KartaVR "Script > KartaVR > Viewer > Publish Media to Google Cardboard VR View" menu item can be used to view this MP4 based LatLong video clip on a mobile device/tablet/Google Cardboard/web browser.

The "Publish Media to Google Cardboard VR View" script is not for the faint of heart as it takes a few steps to set up. But IMO the reward is worth the effort since this HTTP network based 360VR previewing system is really cool when it's up and running: you can collaborate with the other artists on your team and do wireless previewing of your output from Fusion.

The technique to quickly re-run the VR View publishing script is to select a Loader/Saver node in the Nodes view, and then press the V hotkey.

vrview-screenshot.jpg

Clean up the Stitching Artifacts

At this point, I start the process of removing the tripod from the shot with Fusion's vector-based Paint tools, using the multi-clone paint stroke with the frame duration set to a number larger than the total timeline duration.

You could use the Fusion 9+ provided LatLong Patcher node for this task, or any other combination of native Fusion nodes and KartaVR nodes you prefer. You could also take the easy route and export a LatLong frame to an external paint tool like Affinity Photo, taking advantage of its VR image projection painting mode to create a larger and more involved clean plate with the healing brush and clone stamp tools.

After a clean-plate element/patch region has been prepared, it's a good idea to blend only the exact patched pixels needed to cover the tripod zone (or other repaired element) back over the original PTGui-stitched footage, using a Merge node and a vector B-Spline based mask. This selective retouching avoids softening the final image unnecessarily.

A shot might require additional stitching artifact cleanup work for things like power lines, tree trunks, or other regions that have crossed over the lens stitch line zones.

If PTGui's blending failed on a hero object in the frame, KartaVR's panoramic image conversion macro nodes can be used to bring in a single camera view of the original unstitched fisheye footage and warp it into the LatLong image projection.
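As a side note on the projection math behind such a warp, here is a small Python sketch that maps a LatLong (equirectangular) direction onto normalized 180° domemaster coordinates. It assumes a zenith-up, equidistant (linear angle) fisheye model; real conversion nodes expose more controls than this.

```python
import math

def latlong_to_domemaster180(lon_deg, lat_deg):
    # Map a LatLong direction to normalized domemaster-180 coordinates.
    # Assumes a zenith-up angular fisheye: radius grows linearly with the
    # angle from straight up, reaching 1.0 at the horizon (lat = 0).
    # Returns (u, v) in [-1, 1] centered on the dome pole.
    theta = math.radians(90.0 - lat_deg)   # angle from the zenith
    r = theta / (math.pi / 2.0)            # equidistant fisheye radius
    az = math.radians(lon_deg)             # azimuth around the dome pole
    return r * math.sin(az), r * math.cos(az)

# The zenith lands at the image center; the horizon sits on the unit circle
print(latlong_to_domemaster180(0, 90))   # (0.0, 0.0)
u, v = latlong_to_domemaster180(90, 0)
print(round(u, 6), round(v, 6))
```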

Then a GridWarp node can be used to squash and stretch the patched zone as needed. When adding gridwarped patches to the scene to fix artifacts, it helps to frequently toggle the Merge node that combines the foreground patches and the background LatLong plate footage from the "Normal" Apply Mode to the "Difference" Apply Mode temporarily.

This allows you to be sure things are lined up nicely: if the majority of the difference-mode previewed patch zone is shaded black, it means things are perfectly lined up. After the gridwarped patch is in place, the Merge node needs to be changed back to the "Normal" Apply Mode.
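The numeric intuition behind the Difference check can be sketched in a few lines of NumPy, assuming the Apply Mode behaves like a per-pixel absolute difference (the arrays below are just stand-ins for the plate and patch frames):

```python
import numpy as np

# The background LatLong plate, and a patch that matches it exactly
rng = np.random.default_rng(1)
plate = rng.random((4, 4, 3))
patch = plate.copy()

# Difference mode is effectively abs(foreground - background), so
# perfectly aligned pixels read as pure black
diff = np.abs(patch - plate)
print(float(diff.max()))  # 0.0

# Nudge the patch sideways by one pixel and the misaligned zone lights up
misaligned = np.abs(np.roll(patch, 1, axis=1) - plate)
print(float(misaligned.max()) > 0.0)  # True
```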

A ColorCorrector node is also added to each patch to apply a basic amount of color matching.

Depending on the 360° shot's needs, the patched region might need to be pre-comped to reduce processing time. The patch element could be saved to disk as a static image or image sequence, and re-loaded at that point in the node tree via a new Loader node.

I typically have several Saver nodes spaced throughout the comp node tree set to bypass/passthrough mode so I can bounce things to disk as pre-comped elements to keep things as interactive as possible. The hotkey for toggling the Passthrough mode on the selected nodes in the Nodes view is Command + P (macOS), or Control + P (Win/Linux).

Enblend to the Rescue

script-generate-panoramic-blending-masks.png

If there are any remaining seam blending issues on a complex static camera-based scene the KartaVR "Script > KartaVR > Stitching > Generate Panoramic Blending Masks" menu item comes in handy. It uses the open-source enblend CLI tool to make seamlessly blended images.

This approach can help with challenging stitching on irregularly shaped elements that are harder to mask invisibly with a hand-drawn B-spline mask.

The imagery fed into Enblend has to be the warped frames, saved to disk as intermediate images, with each frame placed at its final position in the current panoramic image projection. The images also all need to share the final composite's frame size to be blended correctly, and a custom alpha blending mask has to be created for each camera view image.
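For reference, the enblend invocation that the script wraps is quite simple; this hedged Python sketch just assembles the command line from a set of pre-warped views (the file names are placeholders, and your enblend binary location may differ):

```python
import subprocess

def build_enblend_command(warped_views, output):
    # Assemble the enblend CLI call ("enblend -o out.tif in1.tif in2.tif").
    # Every input view must already be padded to the final panorama frame
    # size and carry its own alpha blending mask, as described above.
    return ["enblend", "-o", output] + list(warped_views)

cmd = build_enblend_command(["cam1_warped.tif", "cam2_warped.tif"], "blended.tif")
print(cmd)
# subprocess.run(cmd, check=True)  # uncomment when enblend is on your PATH
```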

alpha-blending-mask-example.png

Enblend output is saved to the "Temp:/KartaVR/" PathMap folder. You can access this temporary directory using the "Script > KartaVR > Open Folder > Open KartaVR Temp Folder" menu item. If you have the KartaVR hotkeys installed, alternatively you could click in the Nodes view and press the Shift + O hotkey to instantly open the KartaVR Temp Folder.

Vectorize Your Raster Alpha Blending Masks

vectorized blending masks.png
An externally created alpha masking image can be converted into a vector-editable B-Spline mask in Fusion using the "VectorSnapshot" script. It is added to your system automatically when you install the "KartaVR 3rd Party Libraries" atom package in Reactor, and it can also be installed manually on its own by selecting it in the Reactor "Scripts/Comp" category.

VectorSnapshot in Reactor.png

This script is accessed from the "Scripts > Andrew Hazelden > VectorSnapshot" menu. The VectorSnapshot script allows you to grab any raster pixel-based masking image from Fusion's left viewer window context and save the single-frame output to disk as an automatically vectorized SVG image. The script saves the new SVG output to the "Temp:/Fusion/" PathMap folder on disk and then automatically reveals the folder in your Explorer/Finder/Nautilus file browser window when done.
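To give a feel for what a vectorized mask file contains, here is a tiny Python sketch that wraps one traced outline into a minimal SVG document. This is purely illustrative; the real VectorSnapshot script has its own tracing pipeline and settings.

```python
def polygon_to_svg(points, width, height):
    # Wrap one traced mask outline into a minimal SVG document. This only
    # illustrates the general shape of a vectorized mask file; the real
    # VectorSnapshot script has its own tracing pipeline and settings.
    d = "M " + " L ".join(f"{x},{y}" for x, y in points) + " Z"
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{width}" height="{height}">'
        f'<path d="{d}" fill="white"/></svg>'
    )

svg = polygon_to_svg([(0, 0), (64, 0), (64, 64)], 128, 128)
print(svg)
```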

After the Vector Snapshot script is run, you can import the generated SVG vector-based mask back into Fusion's node-based workspace using the File > Import > SVG... menu item.

SVG Shape Import.png

In the SVG "Open File" dialog I expand the PathMaps section on the left sidebar and click on the Temp:/ entry. This jumps me into the system's $TEMP folder. Then I click on the Fusion folder, and then I select the generated SVG mask.

SVG Import Dialog.png

Fusion loads the SVG vector imported data into a Group node you can expand.

svg imported group node.png

Each of the vector shapes imported from the SVG file is handled as an individual B-Spline/Polygon curve element, and the shapes are chained together from one node's output connection to the next node's input connection. Thanks to node transform concatenation, these connected shapes remain resolution-independent vectors at this stage and can be resized with no image loss.

The B-Spline nodes are finally connected to a Background node that defines the final vector mask size based on the Background node's image width and height settings. I typically set the Background node's color to white after importing the SVG elements to line up with my alpha masking needs.

The VectorSnapshot script has parameters that can be changed in the "VectorSnapshot.lua" file on disk if you want to customize the default output quality, invert the colors, or change the other settings (assuming someone wanted to go that deep). You can edit the VectorSnapshot script quickly by selecting the "Script > Edit > Andrew Hazelden > VectorSnapshot" menu item in Fusion.

Additionally, for each of the B-Spline/Polygon nodes in the SVG imported Group node, you can individually adjust the Soft Edge setting, Border Width, Invert checkbox, and other attributes to help refine the blending masks.

User avatar
avclubvids
Fusioneer
Posts: 113
Joined: Sun Sep 17, 2017 7:51 am
Location: Los Angeles, CA
Contact:

Re: KartaVR v4 Freeware Edition

#59

Post by avclubvids » Mon Sep 23, 2019 9:44 pm

As always Andrew, lots of great info and even more hidden corners of Karta to explore! Thanks for being you :)

User avatar
AndrewHazelden
Fusionator
Posts: 1529
Joined: Fri Apr 03, 2015 3:20 pm
Answers: 8
Location: West Dover, Nova Scotia, Canada
Been thanked: 98 times
Contact:

KartaVR Atom Package Updates

#60

Post by AndrewHazelden » Tue Oct 15, 2019 2:50 am

Hi @avclubvids.

I've got some good news.

This month is the beginning of a fresh effort to improve KartaVR v4's usability and discoverability, so users can start to understand why a tool is present in KartaVR and how it can help their workflows.

This is being done as part of the development journey from KartaVR v4.x to the next big update, v5, which will be released in a few months. :)

Improved Atom Descriptions

I wanted to give people a heads-up that KartaVR's Reactor atom packages are getting a refresh on their descriptions: more information about how each package fits into a unified 360VR workflow, links to resources that relate to each major tool in KartaVR, other details, plus clickable shortcuts to where you can find the files on disk after you install the packages via Reactor.

Note: Fusion v9's implementation of the "UI Manager" GUI system doesn't allow Reactor v3 to open "clickable" links, so you'd need to use Fusion v16-16.1 or a copy of Resolve to be able to use the blue underlined links in Reactor that access the external help resources. This comes down to the UI Manager/Qt framework (as compiled in Fusion v9), and not something Steakunderwater can change.

For the "KartaVR" and "KartaVR 3rd Party Atom" packages there is a download size and install time estimate added to the top of the description field to allow new users to get an understanding of why the install is taking a while to complete due to the need to download a range of CLI (command-line) tools like FFmpeg, Imagemagick, Panotools, etc... that all have a whole lot of small files that need to be downloaded.

All of the Reactor "Bin" category atom packages will be receiving this download size and install time estimate information, too.

KartaVR Package Download Size Stats.png

GitLab Throttling on Rapid Reactor Package Installs

If you are installing a lot of content via Reactor in one session, it can sometimes be helpful to select the "Reactor > Reactor Preferences" menu item and reduce the number of simultaneous (concurrent) network socket connections used to download atom package content from the default of "8" down to something between "1" and "4".

The Reactor repository is hosted on GitLab, and it looks like they have some new network management policies that can slightly throttle downloads for a user who tries to install every package listed in Reactor within a 20-minute session. Now you know. ;)

Asking for InstallScript Permissions

When using Reactor to install the KartaVR 3rd Party Libraries atom package, or the tools in the Reactor "Bin" category on macOS or Linux, a Reactor InstallScript is used to set downloaded programs like FFmpeg and ImageMagick to be executable with a chmod "755" permissions level, or to create a new Fusion PathMap entry:

InstallScript Ask for Permissions Dialog.png
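For the curious, the chmod step itself is tiny. Here is a Python sketch of the equivalent operation on a POSIX system (the real InstallScripts are Lua, and the temp file below is just a stand-in for a real downloaded binary like ffmpeg):

```python
import os
import stat
import tempfile

# A minimal sketch of the permissions step an InstallScript performs on
# macOS/Linux: mark a downloaded CLI binary as executable. 0o755 is the
# rwxr-xr-x mode mentioned above; the temp file stands in for the binary.
with tempfile.NamedTemporaryFile(delete=False) as handle:
    binary_path = handle.name

os.chmod(binary_path, 0o755)
mode = stat.S_IMODE(os.stat(binary_path).st_mode)
print(oct(mode))  # 0o755 on POSIX systems
os.unlink(binary_path)
```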

If you are the type of person who might get tired of hitting the "Ok" button on all the InstallScript dialogs for the Reactor "Bin" category items, you have the option to uncheck the "[x] Always ask for InstallScript Permissions" checkbox in the Reactor Preferences window.

Reactor Preferences.png

KartaVR Hotkeys

The KartaVR Hotkeys file is now an optional separate install that you can choose to do on your own in Reactor if you want.

Previously this hotkeys item was part of the main KartaVR package. Making the hotkeys file an optional install reduces the risk of collisions between KartaVR's custom, VR-production-focused hotkey set and the range of hotkeys used in the excellent Nuke2Fusion hotkeys .fu file that is also in Reactor.

Atom Package Screenshots

If you'd like to see a few examples of the type of improved details that are listed in KartaVR atom packages, take a look at the following images attached below.

Note: I'm still updating and git-syncing atom package changes for the rest of my com.AndrewHazelden.* atoms in Reactor, so it will be 24 hours or so before all of the packages are fully updated to this level of detail, since there are a lot of them. :)

1 KartaVR Main Atom Package.png

2 KartaVR 3rd Party Atom Package.png

3 KartaVR Scripts Movies Atom Package.png

4 KartaVR Scripts Send Media To Atom Package.png

KartaVR DragDrop Stitching

This video shows the new KartaVR-based drag-and-drop panoramic stitching support that is possible in Fusion/Resolve v16-16.1+ using a Fusion scripting technology called "Events". The scripted "KartaVR PTGui DragDrop.fu" file allows PTGui Pro v10 .pts files to be dragged from your desktop folder into the Fusion Nodes view for an instant node-based parametric 360VR stitching setup.





You can add the new "KartaVR DragDrop" atom package to your system by opening up Reactor and looking in the "KartaVR/DragDrop" category:

5 KartaVR DragDrop.png

Atomizer DragDrop Package Editing

A nice spin-off from the creation of the "KartaVR DragDrop" atom package is a Reactor tool development addon for the Atomizer package editor. There is now an "Atomizer DragDrop" atom package in Reactor's "Scripts/Reactor" category.

This allows .atom package text files to be dragged into the Nodes view, where they are instantly loaded into the Atomizer editing GUI. You can drag several .atom text files into the Nodes view at once, and they are all opened in separate editing contexts.

Atomizer DragDrop.png

I'm using the Atomizer DragDrop tool at this very second to revise all of my remaining atom packages to have clickable hyperlinks. So I get to "dog-food" my Reactor tools with that classic "eat your own dogfood" development process. :lol:
Last edited by AndrewHazelden on Sat Oct 26, 2019 12:59 pm, edited 1 time in total.