Hello
@jcouritas, welcome to WSL!
jcouritas wrote: ↑ Sun Mar 15, 2020 3:22 am
"transform a mouvement from a 3D card projections (with a camera) to an animate corner pin (Perspective positionner). This is really powerfull to gain performances in calculations."
Maybe in Nuke, not necessarily in Fusion. Nuke is great in many ways, but there are also a lot of band-aids around to compensate for its 3D engine, which is often not that performant. Fusion, however, can be very, very fast, and in many cases those band-aids simply aren't necessary.
I can't say much about this particular use case, but set it up in Fusion (do share!) and see if it's even necessary to build a workaround for performance's sake.
jcouritas wrote: ↑ Sun Mar 15, 2020 7:56 am
"In fusion creating layers in the nuke way (using channel boleans) seems to reduce A LOT the performances of the software."
Ah, but there isn't "the Nuke way" in Fusion.
Using Channel Booleans in Fusion is NOT the same as creating channels in Nuke. Not at all.
Nuke and Fusion are VERY different in many respects, and for one thing, you really need to approach channels differently between the two. Nuke adopted the concept of 'arbitrary channels': basically, an image stream can contain any combination of channels, all of them in 32-bit float and devoid of meaning (apart from RGBA), and references to them are carried throughout the comp until they are actually used, at which point the actual image data is taken into account.
Fusion is different. Channels are much more explicit. An image stream in Fusion is not arbitrary at all; it has 26 (so far) very specifically designated channels. For example, Z Buffer is not just a generic channel that happens to bear the name "Z Buffer": it carries depth information, and tools will use that channel for specific functionality (like Depth Merge in the Merge node).
Channels can have their own bit depth, too. For example, if I were to run Optical Flow or Disparity on an 8-bit RGBA stream, the resulting Vector or Disparity channels would still be 32-bit float, all in the same image stream. And all of that is very easy to follow: unlike Nuke, in Fusion when there is nothing available in a certain node, you won't be able to select the related channels in the viewers at all, because they aren't there.
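To make the contrast concrete, here is a toy Python sketch. This is NOT Fusion's or Nuke's actual API; the class, the channel list, and every name in it are made up purely to illustrate the two data models described above:

```python
# Toy illustration of the two channel models -- not real Nuke/Fusion code.

# Nuke-style: an image stream is an arbitrary bag of named 32-bit float
# channels; a reference can travel through the comp even when the channel
# carries no meaning the application understands.
nuke_style_stream = {
    "rgba.red": [], "rgba.green": [], "rgba.blue": [], "rgba.alpha": [],
    "mystery_aov.x": [],  # arbitrary channel, meaning unknown to the app
}

# Fusion-style: the set of channel slots is fixed and each slot has a
# designated meaning; a channel only exists once a tool actually writes it.
FUSION_CHANNEL_SLOTS = {"R", "G", "B", "A", "Z", "Vector.X", "Vector.Y",
                        "Disparity.X", "Disparity.Y"}  # subset of the 26

class FusionStyleStream:
    def __init__(self):
        self.channels = {}  # only channels that actually carry data

    def write(self, name, data, bit_depth):
        if name not in FUSION_CHANNEL_SLOTS:
            raise ValueError(f"{name} is not a designated channel")
        # per-channel bit depth: 8-bit RGBA can coexist with float32 vectors
        self.channels[name] = {"data": data, "bit_depth": bit_depth}

    def selectable_in_viewer(self, name):
        # if nothing wrote the channel, you can't even select it
        return name in self.channels

stream = FusionStyleStream()
stream.write("R", [0.5], 8)
stream.write("Vector.X", [0.01], 32)  # e.g. Optical Flow output stays float32
print(stream.selectable_in_viewer("Vector.X"))  # True: a tool wrote it
print(stream.selectable_in_viewer("Z"))         # False: no depth was written
```

The point of the sketch is the last two lines: in the explicit model, "not written yet" and "doesn't exist" are the same thing, which is exactly why empty channels never show up in Fusion's viewers.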
This is really important: you, the user, decide where image data is actually needed. In other words, once you insert data into your image stream, from that point on it is carried from one tool to the next and will require resources such as RAM. Do that too early and your performance may suffer (greatly); do it in the right place and your performance will be excellent, PLUS your flow will tell you exactly where, when and why it happens*.
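To put rough numbers on that resource cost, here is a back-of-the-envelope calculation. The resolution, bit depth and tool counts are example values I picked for illustration, not measurements of any particular comp:

```python
# Back-of-the-envelope cost of carrying one extra channel too early.
# Example values only; real caching behaviour depends on the comp.
width, height = 3840, 2160      # a UHD frame
bytes_per_sample = 4            # one 32-bit float channel
channel_mb = width * height * bytes_per_sample / (1024 ** 2)
print(f"one float32 channel: {channel_mb:.1f} MB per frame")  # ~31.6 MB

# Suppose every tool that touches the stream holds a cached copy.
# Inserting the channel early vs. right before the 5 tools that need it:
print(f"carried through 40 tools: {40 * channel_mb / 1024:.2f} GB cached")
print(f"inserted before 5 tools:  {5 * channel_mb / 1024:.2f} GB cached")
```

Roughly 30 MB per channel per frame adds up fast once several tools are holding copies, which is why the placement of that insertion point matters so much.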
While at first that may seem a little cumbersome if you're coming from Nuke (believe me, the other way around is at least as bad), there are no empty or "dead" channels in Fusion, and no ambiguity about what is available or not. I have also never seen the phenomenon of comp files getting corrupted because they contain certain specifically named channels that cause weird conflicts and travel from VFX house to VFX house.
So, just like with colour, in Fusion, it's you - the user - who decides how things are being transformed from one place to another. Nothing is assumed, it's all explicit. Sure, that does mean that you need to actually know what you're doing, too. But as I have said many times before, there really is no substitute for knowing your sh*t.
In short, please do yourself a favour and do NOT try to copy what you were used to in Nuke over to Fusion. Things will break that way, they will frustrate you, they will be bad practice, and they will make you think that Fusion is slower or more difficult than it should be. There are lots of similarities, but each application has its own specific way of doing things. You will start organising your comps differently, building your logic differently, etc.
Consequently, if you've read the manual for Nuke, good for you, but you still need to read the manual for Fusion.
Or just keep coming here, that works too
*This doesn't mean Fusion's flow view is perfect; in fact there's a lot of room for improvement. You will notice that too, and when you do, please let BMD know!