
Multichannel workflow

Moderator: SecondMan

razzy
Fusioneer
Posts: 143
Joined: Sun Aug 10, 2014 3:12 pm
Location: Vancouver, British Columbia, Canada

Multichannel workflow

#1

Post by razzy »

Tilt wrote:
ronviers wrote:... maybe ... ..., ..., ..., maybe ... It could be ... ... ... .... So their priority may be ... .... ... could be ... .... And whatever they're plan, ... seem ...

i don't know ..., ..., but it looks to me ... ..., ... ... ... - time will tell.
Sorry, I don't want to pick on you personally or deride what you've written, ronviers :lol: I think that BMD will do something with Fusion, and since most of eyeon's shortcomings were due to their small dev team, I think the road map will speed up a lot now.

But reading your post has made it clearer to me what I think is the elephant in the room: we're still in the same boat as before the acquisition. Nobody knows what's going to happen to Fusion, everybody just hopes for future versions.

Compare this with the R&D videos from The Foundry in the Off Topic board. There'll be nifty paint improvements, there'll be a toolset for VR. Factor in The Foundry's road map accuracy ("will be released in Q4 of this year" - boom, the version gets released in Q4 of that year) and the only thing you - as a VFX company owner - can lament is their pricing.

On the other hand, there was... sven? theo? don't remember, but they made VR stuff in Fusion and said something along the lines of "well, there really should be an update to the polar coordinate tool". But will they know if or when that is being considered by BMD? Or take myself. I'd love to augment our Linux/Nuke/Houdini pipeline with Fusion. But do I know when there'll be a version for Linux to try? And will there ever be proper multi-channel support in Fusion? We're making extensive use of it in our pipeline (Mantra can render out per-light passes).
Hey Tilt,

I understand the idea of multi-channel EXRs, but I will act ignorant and ask how important you really feel it is. I gather it is for your pipeline; I just wanted to ask. The thing is, I worked on Flame for many years, and we had only just got an OK way to deal with multi-channel files. For the most part it was fine: it split out a group node with each channel displayed, so you could connect to each channel. This is not like Nuke, just a nice way to deal with the files. I assume you would like to see something more like that? Just a way to deal with the single file?

We all have different ways to deal with this issue. I like the storage aspect of many passes in one file, but I prefer seeing its layers out on the flow so I know what is in front of me. Many like the Nuke approach and that is fine. I was just wondering what you felt would be a better way to deal with it in Fusion? The Nuke style will not come to Fusion, and personally I don't think it has to; rather, just as you said, there should be a smarter way to ingest and present the passes to the artist.

Cheers

Tilt
Global Moderator
Posts: 336
Joined: Sat Aug 02, 2014 4:10 am
Location: Munich, Germany
Contact:

Re: Fusion 8 public beta impressions

#2

Post by Tilt »

razzy wrote: I understand the idea of multi channel exr's, but I will act ignorant and ask how important do you really feel it is. [...]
Hi razzy!

I think most people think multichannel workflow means you want to render everything in a monolithic exr and do your comp without separating the passes in your flow. There are a dozen reasons why this is bad. And that's why monolithic exrs are not my goal :lol:

We're indeed rendering separate files. However, Houdini allows you to split layers based on lights so you have a sunlight_diffuse or a fill_diffuse which do end up in a single exr. It's easy to load it in Nuke and use simple scripts to split out the lights. Building that thing in Fusion is harder because you're forced to load an exr, sniff out its layers and then add an unknown number of new Loader tools for additional light passes. Doable but cumbersome.
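The "sniff out its layers" step can be sketched in plain Python (this is not Fusion's scripting API; the channel list is hard-coded for illustration, where in practice it would come from the EXR header, e.g. via the OpenEXR bindings). Grouping header channel names by their layer prefix is what a script would need before creating one Loader per light pass:

```python
# Sketch: group EXR channel names into per-light layers so a script could
# create one Loader per layer. The channel list below is hard-coded; a real
# script would read it from the file header.

def group_layers(channel_names):
    """Map each layer prefix (text before the last '.') to its components."""
    layers = {}
    for name in channel_names:
        prefix, _, component = name.rpartition('.')
        layers.setdefault(prefix or 'default', []).append(component)
    return layers

channels = [
    'R', 'G', 'B', 'A',
    'sunlight_diffuse.R', 'sunlight_diffuse.G', 'sunlight_diffuse.B',
    'fill_diffuse.R', 'fill_diffuse.G', 'fill_diffuse.B',
]
for layer, comps in group_layers(channels).items():
    print(layer, comps)
```

Each resulting layer would then get its own Loader pointed at the same file.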

You mention the way Flame does it, and it is indeed nice; it would already be a step forward from Fusion's way of handling exrs. But all of this still revolves around the storage, and that's not the main advantage of a multichannel comp workflow. It's about passing more than one RGBA layer through your comp without duplicating or instancing tools!

More and more I'm relying on having various masks as layers alongside my RGBA channels. Whether they are rendered or roto'd, they just get transformed and filtered with my RGBA channels - even passed through 3D projection setups - until I need them downstream. In Fusion you'd have to duplicate renderer3Ds, instance tools or drag connections from your rendered ID pass loaders all across your comp to use them downstream. You can repurpose some aux channels like background RGB or vector channels but that doesn't allow an arbitrary number of layers nor does it hold up in 3D.

For example, we're rendering ID passes as separate files. The usual script that builds a comp template, however, will shuffle any mask passes into the main image stream. They're available for color corrections in any branch of my CG comp but Nuke will not access EXR files for passes that you don't use. So that gets around the disadvantage of monolithic exrs while keeping the comp readable.

You've said that Nuke's multi-channel workflow won't come to Fusion. There were doubts whether we could ever have DoD because Fusion is not a scanline renderer. But eyeon made it work. It's a bit convoluted on first sight but it's clever and it even works in Fuses nowadays. According to the API docs about how requests work in Fusion (link) I can totally imagine a system that allows for arbitrary additional RGBA layers next to Fusion's standard RGB + aux channels:

When rendering a frame, Fusion already checks every upstream tool to figure out how large the required region of interest has to be, and I think it already avoids loading exr scanlines it never needs. It could just as well check which channels or layers are needed and never allocate memory for the ones (even Fusion's own aux channels like disparity) that downstream tools don't need. It would require lots of changes to caching and tool controls, but it would be backwards compatible like DoD: if a plugin doesn't indicate that it can handle multiple layers, it might be made to pass through all layers from upstream, or it might drop them (like a non-DoD tool drops your overscan pixels). It would all be very exciting stuff :) It would even make stereo comps easier (in case anybody still does those :oops: )
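To make that request idea concrete, here is a toy model in plain Python (all names invented; this is not the Fusion SDK): before anything renders, each tool is asked which channels it needs from its inputs, so layers nobody requested are never produced or allocated.

```python
# Toy model of per-channel request propagation through a node graph.
# The Loader conceptually holds RGBA plus masks and aux channels like
# disparity, but only channels that were actually requested get rendered.

class Node:
    def __init__(self, name, inputs=()):
        self.name, self.inputs = name, list(inputs)

def required_channels(node, requested):
    """A pass-through tool needs the same channels from its inputs."""
    return set(requested)

def render(node, requested, log):
    for inp in node.inputs:
        render(inp, required_channels(node, requested), log)
    for ch in sorted(requested):
        log.append((node.name, ch))   # only requested channels are produced

loader = Node('Loader')
blur = Node('Blur', [loader])
log = []
render(blur, {'R', 'G', 'B', 'A'}, log)   # disparity, masks never rendered
print(log)
```

A multi-layer-aware tool would override `required_channels` to ask for extra layers; a legacy tool would simply never request them.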

Wow, that's a long derailment of this thread. We might be better off splitting those posts into a new thread if this discussion evolves further (I'm certainly interested in hearing about the experience of other users regarding multichannel workflows!)

Farmfield
Fusionista
Posts: 366
Joined: Tue Feb 10, 2015 2:16 am
Location: Goteborg - Sweden
Contact:

Re: Fusion 8 public beta impressions

#3

Post by Farmfield »

Reading what Tilt wrote, it's funny how different applications and plugins use nodes, sometimes so similarly, sometimes so differently. It makes me think of my first experiences with nodes, using shader nodes with MR in Maya in the 90s; the first node-based compositor I used, Discreet something; XSI's material editor; then PFlow and Thinking Particles in 3ds Max; the material and compositing nodes in Blender; how you work with attributes in Houdini; the Nuke color channel workflow vs. Fusion's compositing methodology; on to look dev in Katana or Gaffer; the fast-growing adoption of Fabric Engine; etc...

It's a jungle out there, but at least it's node-based. :D

Chad
Fusionator
Posts: 1555
Joined: Fri Aug 08, 2014 1:11 pm

Re: Fusion 8 public beta impressions

#4

Post by Chad »

Multichannel is already in the Fusion SDK and in the GUI. It's a foregone conclusion. Timing is the only question.

As to how it would work, I don't think it will be allocating memory for multiple channels in an image; more likely it iterates through the flow for each channel request. So the flow becomes a function not for an image, but for an array of images. If you are doing stereo and only looking at the left eye, the right eye doesn't get called, so no processing or allocation. And if you then request the right eye, the left eye is already rendered, so it doesn't reprocess. Individual images, unrelated really, unless the tools themselves cause some interaction, like a stereo align. The nice thing about a system like that is that it fits well into what Fusion already has going: still images, not scanlines, but not massive channel sets - which could go away if BMD wanted to put the effort in, and users/3rd parties were cool with that.
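That evaluation model - the flow as a memoized function from (tool, channel) to an image - can be illustrated in a few lines of Python (names are made up): an unrequested eye is never computed, and a repeated request hits the cache instead of reprocessing.

```python
from functools import lru_cache

calls = []  # records which (tool, channel) pairs were actually computed

@lru_cache(maxsize=None)
def evaluate(tool, channel):
    """Stand-in for rendering one channel of one tool's output."""
    calls.append((tool, channel))
    return f'{tool}:{channel} image'

evaluate('Merge1', 'left')        # computes the left eye
evaluate('Merge1', 'left')        # cache hit: no recomputation
print(calls)                      # the right eye was never evaluated
```

Requesting the right eye later would compute just that one entry, leaving the cached left eye untouched.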

razzy
Fusioneer
Posts: 143
Joined: Sun Aug 10, 2014 3:12 pm
Location: Vancouver, British Columbia, Canada

Re: Fusion 8 public beta impressions

#5

Post by razzy »

Tilt wrote: I think most people think multichannel workflow means you want to render everything in a monolithic exr and do your comp without separating the passes in your flow. [...]
Hey,
Well, that was a bit more than I was expecting. Thanks. I had also thought your focus on something smarter was for loading; I did not consider working within the comp with added layers, like Nuke, in my general question. Like you, I like hearing how people deal with it, i.e. the reason for my question. Thanks for sharing.

As far as it not coming to Fusion, that is some interesting insight. I just feel, with no concrete evidence, that it would not come to Fusion due to how it is structured. But if it did, great; I still have this old concept of having the map in front of me. Still, those are interesting thoughts.

Flame is doing its best, but because of how it deals with footage it will not have this concept. And as in Fusion, having all layers on display is not always bad, but having nested layers in the data stream is powerful for the user. When we worked on some large features, having everything visible was nice, because comps were passed around a lot and you could see what was going on at a glance. I grew up that way, so having data in a stream that you pluck out when needed is very handy; but for me, on a personal level, it must be documented so I know what is going on, which many people do, but many times they do not.

Maybe Fusion could just scan the EXRs and build a precomp concept, like Toxik would do. It basically had Maya build the comp from the render output; it was XML-based, so it was easy to implement. You just loaded that precomp, the CG was done, and you then did what you needed to do after - the render was a comp, not an output file, if you follow me.

So really there are two issues for Fusion: how to deal with multi-channel EXRs, and how to pass data around more efficiently without multiple Loaders. By the way, in my last comp I must admit I was comping more like Flame, and I found myself wanting to just pass the matte to the CC rather than copy it and connect it to the CC - or, if pre-rendered, load that matte later.
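The Toxik-style handoff mentioned above (Maya building a precomp from XML render output) could be sketched like this; the manifest format, field names, and file names here are invented for illustration, not any actual Toxik or Fusion schema:

```python
# Sketch: a renderer writes a small XML manifest of its outputs, and a
# script builds Loader descriptions from it - the render becomes a comp
# rather than a bare output file.
import xml.etree.ElementTree as ET

manifest = """
<render scene="shotA">
  <pass name="beauty" file="shotA_beauty.exr"/>
  <pass name="sunlight_diffuse" file="shotA_sun.exr"/>
  <pass name="id" file="shotA_id.exr"/>
</render>
"""

def build_precomp(xml_text):
    """Turn a render manifest into a list of Loader descriptions."""
    root = ET.fromstring(xml_text)
    return [{'tool': 'Loader', 'pass': p.get('name'), 'clip': p.get('file')}
            for p in root.findall('pass')]

for loader in build_precomp(manifest):
    print(loader)
```

A comp-building script would then place these Loaders and merge the passes, so the artist starts from an assembled CG comp instead of raw files.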

Cool... if Pieter wants to move this please do so.

Cheers

msadauskas
Posts: 15
Joined: Sat Aug 30, 2014 3:37 am
Real name: Mikas Sadauskas
Contact:

Re: Fusion 8 public beta impressions

#6

Post by msadauskas »

Tilt wrote: We're indeed rendering separate files. However, Houdini allows you to split layers based on lights so you have a sunlight_diffuse or a fill_diffuse which do end up in a single exr. It's easy to load it in Nuke and use simple scripts to split out the lights. Building that thing in Fusion is harder because you're forced to load an exr, sniff out its layers and then add an unknown number of new Loader tools for additional light passes. Doable but cumbersome.
I know this forum is not about Houdini, but anyway - you CAN render separate light passes to separate single-channel EXRs in Houdini. I use it all the time.
houdini_separate_lights_single_layers.jpg

Tilt
Global Moderator
Posts: 336
Joined: Sat Aug 02, 2014 4:10 am
Location: Munich, Germany
Contact:

Fusion 8 public beta impressions

#7

Post by Tilt »

Hi Mikas, thanks. Gonna give it a try. But from your screenshot it seems as if you need to set up light masks for every light manually? The nice thing about the multichannel exrs is that all lights end up in there automatically.

hm... maybe we should really split off a multichannel workflow thread? Pieter? I don't know how to do it :-)

SecondMan
Site Admin
Posts: 4775
Joined: Thu Jul 31, 2014 5:31 pm
Answers: 30
Location: Vancouver, Canada
Been thanked: 17 times
Contact:

Re: Multichannel workflow

#8

Post by SecondMan »

Agreed. Done :)

msadauskas
Posts: 15
Joined: Sat Aug 30, 2014 3:37 am
Real name: Mikas Sadauskas
Contact:

Re: Fusion 8 public beta impressions

#9

Post by msadauskas »

Tilt wrote:Hi Miko, thanks. Gonna give it a try. But from your screenshot it seems as if you need to set up light masks for every light manually? The nice thing about the multichannel exrs is that all lights end up in there automatically.
Yes, that's true. There should be some simple solution to automate it, but I haven't found it yet.