acceptable looking high frame rate footage

marconoe
Posts: 4
Joined: Tue Jun 18, 2019 5:04 pm
Real name: marco

acceptable looking high frame rate footage

#1

Post by marconoe » Tue Aug 06, 2019 12:22 am

hi everyone,

Does anyone have any thoughts on how to retime high frame rate footage so it looks similar to standard 24-25 fps playback?

Here's an example of what I'm talking about:



There are certain shots in this video that look pretty close to 24-25 fps, but then it slows right down, so I think the editor has done a fantastic job of making the high frame rate footage look good played back in real time, instead of choppy like mine does when I play it back.

I'd appreciate any tips or rabbit holes to go down! :)

cheers

marco

Added in 11 minutes 46 seconds:
Whoops, sorry, this shouldn't be in Fusion. I'll see if I can move it.

SecondMan
Site Admin
Posts: 3473
Joined: Thu Jul 31, 2014 5:31 pm
Answers: 5
Location: Vancouver, Canada
Been thanked: 92 times

Re: acceptable looking high frame rate footage

#2

Post by SecondMan » Tue Aug 06, 2019 1:47 am

Moved to the Resolve forum, no worries :)

PS. You don't need to wrap YouTube links in URL tags. WSL embeds them automatically...

SirEdric
Fusionator
Posts: 1869
Joined: Tue Aug 05, 2014 10:04 am
Answers: 2
Real name: Eric Westphal
Been thanked: 112 times

Re: acceptable looking high frame rate footage

#3

Post by SirEdric » Tue Aug 06, 2019 10:24 am

It pretty much depends on the framerate of the HFR footage.
Something I learned during some fantastic, unforgettable weeks at Doug's place while working on Ufotog.
He invented 'Showscan' many, many years ago, and the concept is quite simple (as many good concepts are... :-))
Basically, you shoot at a high framerate that is a multiple of the playback framerate... but instead of explaining it here, just have a look:
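As a quick aside (my own sketch, not from Doug's video): the "multiple of the playback framerate" condition just means each output frame maps onto a whole number of source frames.

```python
def downsample_step(capture_fps: int, playback_fps: int) -> int:
    """Number of source frames per output frame, valid only when the
    capture rate is a whole multiple of the playback rate."""
    if capture_fps % playback_fps:
        raise ValueError("capture rate must be a multiple of playback rate")
    return capture_fps // playback_fps
```

So a 120fps capture gives a step of 5 for 24fps playback and 4 for 30fps, while 25fps playback would want a 100 or 150fps source.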

Tory
Fusioneer
Posts: 68
Joined: Fri Apr 13, 2018 11:29 am
Real name: Tory Hooton
Been thanked: 6 times

Re: acceptable looking high frame rate footage

#4

Post by Tory » Tue Aug 06, 2019 1:27 pm

How would one go about summing the motion blur of the frames?

SirEdric
Fusionator
Posts: 1869
Joined: Tue Aug 05, 2014 10:04 am
Answers: 2
Real name: Eric Westphal
Been thanked: 112 times

Re: acceptable looking high frame rate footage

#5

Post by SirEdric » Tue Aug 06, 2019 1:50 pm

Tory wrote:
Tue Aug 06, 2019 1:27 pm
How would one go about summing the motion blur of the frames?
It's all explained in the video....:-)

Tory
Fusioneer
Posts: 68
Joined: Fri Apr 13, 2018 11:29 am
Real name: Tory Hooton
Been thanked: 6 times

Re: acceptable looking high frame rate footage

#6

Post by Tory » Tue Aug 06, 2019 2:55 pm

The concept is explained but not the specifics... I get shooting 120 and then subsampling. But all that is said is that you "combine 3 frames into one and delete the next two."
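For what it's worth, the literal "combine 3 frames into one and delete the next two" step is easy to prototype. A minimal Python/NumPy sketch, assuming linear-light pixel values and naive averaging (certainly not the patented method; `combine_3_skip_2` is my own name):

```python
import numpy as np

def combine_3_skip_2(frames_120):
    """Naive 120fps -> 24fps: average the first 3 frames of each group
    of 5 and drop the remaining 2. Assumes linear-light values."""
    n = (len(frames_120) // 5) * 5                       # whole groups only
    groups = np.asarray(frames_120[:n], dtype=float)
    groups = groups.reshape(-1, 5, *groups.shape[1:])    # (out_frames, 5, ...)
    return groups[:, :3].mean(axis=1)                    # keep 3, skip 2
```

Each output frame then integrates 3/120 s of exposure (1/40 s), which lands near, but not exactly on, a 180° shutter at 24fps (1/48 s).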

This has sent me down a bunny trail, and I also watched a 2012 round table with Douglas, "The Great Frame Rate Debate." There Douglas mentioned that he has applied for a patent on his method. I am wondering if this tech has made its way into a tool or software solution yet.

This is obviously more than a simple pull down.

I have been intrigued by this for a while... a few years ago I saw an FXPHD interview with Trumbull where he was running down his argument for 120 frames as the acquisition standard, since you can derive the other standards from it, but as yet I haven't seen how to do that :)

Added in 41 minutes 15 seconds:
This video goes a bit further but still does not explain how the down-conversion to 24, with the associated motion blur, is possible. It might be part of the patented secret sauce ;)



There would have to be some motion analysis going on? Optical flow or something? One of the terms he seems to have been using before the Magi branding was "Frame Integrated Motion Analysis".

I was hoping, especially since Fusion is featured in his development process, that this might be possible with our built-in tools?

SirEdric
Fusionator
Posts: 1869
Joined: Tue Aug 05, 2014 10:04 am
Answers: 2
Real name: Eric Westphal
Been thanked: 112 times

Re: acceptable looking high frame rate footage

#7

Post by SirEdric » Wed Aug 07, 2019 12:25 am

Hmmm... that 'integrated motion blur' should be basically what you get from frame blending or even OFlow.
Resolve does a pretty good job of retiming footage.
And if you compare (just for the fun of it) what you get out of a TimeSpeed in Fusion on a 120fps setup to 24fps MoBlur, it's not too shabby at all... :-)

Tory
Fusioneer
Posts: 68
Joined: Fri Apr 13, 2018 11:29 am
Real name: Tory Hooton
Been thanked: 6 times

Re: acceptable looking high frame rate footage

#8

Post by Tory » Wed Aug 07, 2019 8:03 am

Sure... I was hoping for something a bit better, at least if one were going to go so far as to always capture at 120 and then derive the 24. The problem I see is that while blending is close and there is a similar "volume" of blur, it is stepped, so there ends up being ghosting. It is more apparent on faces, but you can still see it here.

I didn't see any artifacts like this in their examples which is intriguing to me :)

Retime Blur.JPG

SirEdric
Fusionator
Posts: 1869
Joined: Tue Aug 05, 2014 10:04 am
Answers: 2
Real name: Eric Westphal
Been thanked: 112 times

Re: acceptable looking high frame rate footage

#9

Post by SirEdric » Wed Aug 07, 2019 8:56 am

True.
In this case, a higher SampleSpread, like 0.4 (remove the expression), reduces the stepping considerably.

Chad
Fusionator
Posts: 1408
Joined: Fri Aug 08, 2014 1:11 pm
Been thanked: 14 times

Re: acceptable looking high frame rate footage

#10

Post by Chad » Thu Aug 08, 2019 7:23 am

Tory wrote:
Tue Aug 06, 2019 3:36 pm

I have been intrigued by this for a while... a few years ago I saw an FXPHD interview with Trumbull where he was running down his argument for 120 frames as the acquisition standard as you can derive the other standards from that, but as yet I haven's seen how to do that :)
There are some patent issues going on that prevent people from talking openly about it. And there are, in general, some open questions about what is "best" anyway. James Cameron thinks optical flow is the way to go, Ang Lee thinks weighted summing is the way to go, etc.

Ignoring optical flow as too obvious, the weighted summing is very interesting, especially when you consider very large windows.

A long, long time ago I had a WP blog at a former employer, and there I posted some experiments in temporal smoothing and sharpening. Like, what happens if you do a Sobel or Difference of Gaussians in the temporal domain only?

Today, in Fusion, you could take an LD, add 24 TimeStretchers to get 25 different frames concurrently, weight each one with a LUT Curve Control (or just use the built-in resize kernels by scaling a tiny 1D image to a slightly less tiny 1D image and a bunch of Probes), and sum the results together.

If you're going from 120fps to 24fps (and this is where the patent stuff comes in), you could think of the 2 frames before and after the center frame as being the time interval for an ideal digitally acquired 24fps frame. But instead of a square "top hat" weighting, you could do a very sharp Sinc sampling and get smoother motion blur while maintaining some sharpness. Or you could make it a fatter curve and make it really creamy. The fun part is that the far outside samples let you use negative weighting, so there's actually some "ringing" to the motion, which makes it appear less motion blurred.

What's fun is that if you have the tooling set up in Fusion correctly, it's realtime (3CuS or a plugin/fuse helps on this, but it's not 100% needed). So you can play your scene from RAM and adjust the curve shape while it's playing out and see the results live. It's not intuitive what settings will be best for any shot or sequence or whole movie, even. Doug proposes that you might not even want the same sampling in the same frame!
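To make the weighted-summing idea concrete outside of Fusion, here is a hedged Python/NumPy sketch (my own illustration of the technique described above, not anyone's actual tooling; names like `weighted_temporal_sum` are made up, and linear-light frames are assumed):

```python
import numpy as np

def weighted_temporal_sum(frames, weights, step=5):
    """Resample a high-frame-rate clip: for every step-th source frame
    (120fps -> 24fps when step=5), take a weighted sum of the frames in
    a window centered on it. Negative tap values produce the 'ringing'
    mentioned above. Output is normalized by the sum of the weights."""
    frames = np.asarray(frames, dtype=float)
    w = np.asarray(weights, dtype=float)
    half = len(w) // 2                         # window half-width
    out = []
    for c in range(half, len(frames) - half, step):
        taps = frames[c - half:c + half + 1]   # window around the center frame
        out.append(np.tensordot(w, taps, axes=1) / w.sum())
    return np.array(out)

# A square "top hat" over the 5-frame interval vs. a softer, wider window:
top_hat = np.ones(5)
raised_cos = 0.5 - 0.5 * np.cos(2 * np.pi * (np.arange(7) + 1) / 8)
```

Swapping in a different `weights` kernel while the clip loops is the live tuning described above: stretch the kernel wider for creamier blur, or push the outer taps negative for a less motion-blurred feel.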

In case you're extra curious about how this has been used in actual Hollywood productions: http://www.reald.com/#/truemotion. I won't get into their exact process, save that I think you can do the same thing using better software, but that's not an uncommon theme around here. :D

Tory
Fusioneer
Posts: 68
Joined: Fri Apr 13, 2018 11:29 am
Real name: Tory Hooton
Been thanked: 6 times

Re: acceptable looking high frame rate footage

#11

Post by Tory » Thu Aug 08, 2019 10:32 am

Thanks Chad,
That provides some great directions for me to research further :) I would love to see a macro or fuse for this; I am a bit lost with the LUT curve control and 1D images ;) but I will be looking into it now.

I am trying to conceptualize what is going on, though. Is it kind of like how Red describes their Soft Shutter? We would use the extra frames to sample values and weight the exposure? While Red does that with a modulating electronic shutter, this method would sample afterwards by probing the "extra" frames?

https://www.red.com/red-101/cinema-temporal-aliasing

Summing together is a simple merge? But the secret sauce is in the weighted LUT that controls how much influence each frame has?

Sorry, it couldn't be a Merge; the values would have to be added together in something like a Custom Tool?

Thanks again this is very interesting!

Chad
Fusionator
Posts: 1408
Joined: Fri Aug 08, 2014 1:11 pm
Been thanked: 14 times

Re: acceptable looking high frame rate footage

#12

Post by Chad » Fri Aug 09, 2019 7:46 am

Unlike what Red does, this is a post-process, so it works with any camera and you don't have to set anything special up at the time of the shoot. It also uses a larger sample window.

https://tessive.com/synthetic-shutters

Before RealD bought them out, Tessive sold a hardware product and then a software product. The former was a variable ND filter that worked like what you see with Red, but it was usable with more cameras, as it sat in the matte box.

Yes, the summing is just addition after you multiply by the weighting factor. It's very fast if you do it in one tool, but you could do it slowly with 24 Merge2D.

Tessive's kernels are symmetric, so you could emulate them by just making one half and mirroring it. That's where the LUT curve control makes sense. I know it's not in wide release yet, but that's the UI approach I used for ReflectionBlur3D's variable blur weighting. That's weighting over the reflection lobe, but the GUI would be the same concept: you'd only define half and it would automatically mirror. Of course, you don't HAVE to; that's how you get cartoon motion blur, or the motion-trail stuff they used to do with photo illustrations.
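The define-one-half-and-mirror idea is simple to sketch (a hypothetical helper of my own, assuming the half-kernel is stored center tap first):

```python
import numpy as np

def mirror_half_kernel(half):
    """Build a symmetric kernel from its right half (center tap first),
    like a UI where you draw one side and it auto-mirrors the other."""
    half = np.asarray(half, dtype=float)
    return np.concatenate([half[:0:-1], half])  # reversed tail + original half
```

For example, `mirror_half_kernel([1.0, 0.85, 0.25])` yields the symmetric 5-tap kernel `[0.25, 0.85, 1.0, 0.85, 0.25]`; skipping the mirroring is how you would get asymmetric, cartoon-style motion trails.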

Tory
Fusioneer
Posts: 68
Joined: Fri Apr 13, 2018 11:29 am
Real name: Tory Hooton
Been thanked: 6 times

Re: acceptable looking high frame rate footage

#13

Post by Tory » Tue Aug 13, 2019 8:50 am

WOW, I just had an epiphany :) I was having a hard time understanding how you could just add the frames together to get the motion blur... but I was stuck thinking in 180° shutters. If the footage was shot with a 360° shutter, then adding frames is like having the shutter open for that whole span ;)

So that means that to get to "standard" 24 with a 180° shutter, IF you shot 120 with a 360° shutter, you just need a setup that adds three frames together and skips 2? It is only if you want to take this further creatively that you would use curves, right?
Chad wrote:
Thu Aug 08, 2019 7:23 am
But instead of a square "top hat" weighting, you could do a very sharp Sinc sampling and get smoother motion blur while maintaining some sharpness. Or you could make it a fatter curve and make it really creamy.
I am really starting to get the concept... I really wish I understood the specific tools better :ugeek:
Chad wrote:
Thu Aug 08, 2019 7:23 am
Today, in Fusion, you could take an LD, add 24 TimeStretchers to get 25 different frames concurrently, weighting each one with a LUT Curve Control
So weighting each one with a LUT Curve Control... you are talking about your 1D LUT plugin?
https://indicated.com/blackmagic-fusion ... al-lut-3d/
But then that is setting up for the creative weighting... if you were just "converting" to standard 24, you could just add, right?
Chad wrote:
Thu Aug 08, 2019 7:23 am
(or just use the built in resize kernels by scaling a tiny 1D image to a slightly less tiny 1D image and a bunch of Probes), and sum the results together.
I am not following this... but I feel like if I did I would understand the universe a bit better ;)
Chad wrote:
Thu Aug 08, 2019 7:23 am
What's fun is that if you have the tooling set up in Fusion correctly, it's realtime (3CuS or a plugin/fuse helps on this, but it's not 100% needed).
So the 3CuS plugin is able to help by copying the LUT curve to the other side? Or is that what you are suggesting using it for...
Chad wrote:
Fri Aug 09, 2019 7:46 am
but that's the UI approach I used for ReflectionBlur3D's variable blur weighting.
Thank you so much for sharing your knowledge!

P.S. Does anyone know what is happening under the hood in Resolve's retime controls? And how is that different from, or similar to, interpreting the footage as another frame rate or dropping it into a timeline with a different frame rate?

Tory
Fusioneer
Posts: 68
Joined: Fri Apr 13, 2018 11:29 am
Real name: Tory Hooton
Been thanked: 6 times

Re: acceptable looking high frame rate footage

#14

Post by Tory » Tue Aug 13, 2019 12:31 pm

I built a small comp testing my simple understanding of this... I have three TimeStretchers with expressions on the source time, so I get time-1, time, and time+1. Then I merge them together with Burn In enabled to get a true Add operation. Then I have a ColorCorrector with gain set to 1/3. Finally, there's a TimeSpeed with Speed set to 5 to bring playback to 24.
Code: (WSLsnippet-2019-08-13--13.55.15.setting)
{
    Tools = ordered() {
        Previous = TimeStretcher {
            ViewInfo = OperatorInfo { Pos = { 385, 49.5 } },
            NameSet = true,
            Inputs = {
                SourceTime = Input {
                    Expression = "time-1",
                    Value = 1549
                },
                Input = Input {
                    Source = "Output",
                    SourceOp = "PipeRouter1"
                }
            }
        },
        Frame = TimeStretcher {
            NameSet = true,
            CtrlWZoom = false,
            ViewInfo = OperatorInfo { Pos = { 385, 105.656478881836 } },
            Inputs = {
                SourceTime = Input {
                    Expression = "time",
                    Value = 1550
                },
                Input = Input {
                    Source = "Output",
                    SourceOp = "PipeRouter1"
                }
            }
        },
        BurnIn1 = Merge {
            ViewInfo = OperatorInfo { Pos = { 550, 105.656478881836 } },
            NameSet = true,
            Inputs = {
                PerformDepthMerge = Input { Value = 0 },
                Background = Input {
                    Source = "Output",
                    SourceOp = "Previous"
                },
                Foreground = Input {
                    Source = "Output",
                    SourceOp = "Frame"
                },
                BurnIn = Input { Value = 1 }
            }
        },
        PipeRouter1 = PipeRouter {
            ViewInfo = PipeRouterInfo { Pos = { 275, 105.656478881836 } }
        },
        Next = TimeStretcher {
            ViewInfo = OperatorInfo { Pos = { 385, 162.256820678711 } },
            NameSet = true,
            Inputs = {
                SourceTime = Input {
                    Expression = "time+1",
                    Value = 1551
                },
                Input = Input {
                    Source = "Output",
                    SourceOp = "PipeRouter1"
                }
            }
        },
        BurnIn2 = Merge {
            ViewInfo = OperatorInfo { Pos = { 550, 162.256820678711 } },
            NameSet = true,
            Inputs = {
                PerformDepthMerge = Input { Value = 0 },
                Background = Input {
                    Source = "Output",
                    SourceOp = "BurnIn1"
                },
                Foreground = Input {
                    Source = "Output",
                    SourceOp = "Next"
                },
                BurnIn = Input { Value = 1 }
            }
        },
        EXPCompensation = ColorCorrector {
            ViewInfo = OperatorInfo { Pos = { 715, 162.256820678711 } },
            NameSet = true,
            Inputs = {
                MasterRGBGain = Input { Value = 0.333333333333333 },
                HistogramIgnoreTransparent = Input { Value = 1 },
                Input = Input {
                    Source = "Output",
                    SourceOp = "BurnIn2"
                },
                ColorRanges = Input {
                    Value = ColorCurves {
                        Curves = {
                            {
                                Points = {
                                    { 0, 1 },
                                    { 0.4, 0.2 },
                                    { 0.6, 0 },
                                    { 1, 0 }
                                }
                            },
                            {
                                Points = {
                                    { 0, 0 },
                                    { 0.4, 0 },
                                    { 0.6, 0.2 },
                                    { 1, 1 }
                                }
                            }
                        }
                    }
                }
            }
        },
        TimeSpeed1 = TimeSpeed {
            ViewInfo = OperatorInfo { Pos = { 880, 162.256820678711 } },
            Inputs = {
                InterpolateBetweenFrames = Input { Value = 0 },
                Input = Input {
                    Source = "Output",
                    SourceOp = "EXPCompensation"
                },
                SampleSpread = Input { Disabled = true },
                Speed = Input { Value = 5 }
            }
        }
    }
}
Is this correct? Am I missing something at this level of complexity? Using the sample footage from the RealD and Tessive websites, it seems like this works to get a simple 24 at 180° from a 120 360° source. But I am not sure I am technically dropping the right frames; I kind of doubt that it matters too much?

To expand out a bit, I am assuming that you still center around the frame and add TimeStretchers with time-2 and time+2? Then the exposure compensation needs to be 1/5, and you end up with 24 at a 360° shutter?
Code: (WSLsnippet-2019-08-13--14.23.57.setting)
{
    Tools = ordered() {
        Previous_1 = TimeStretcher {
            ViewInfo = OperatorInfo { Pos = { 385, -16.5 } },
            NameSet = true,
            Inputs = {
                SourceTime = Input {
                    Expression = "time-2",
                    Value = 1458
                },
                Input = Input {
                    Source = "Output",
                    SourceOp = "PipeRouter1"
                }
            }
        },
        Previous = TimeStretcher {
            ViewInfo = OperatorInfo { Pos = { 385, 49.5 } },
            NameSet = true,
            Inputs = {
                SourceTime = Input {
                    Expression = "time-1",
                    Value = 1459
                },
                Input = Input {
                    Source = "Output",
                    SourceOp = "PipeRouter1"
                }
            }
        },
        Merge1 = Merge {
            ViewInfo = OperatorInfo { Pos = { 550, 49.5 } },
            Inputs = {
                PerformDepthMerge = Input { Value = 0 },
                Background = Input {
                    Source = "Output",
                    SourceOp = "Previous_1"
                },
                Foreground = Input {
                    Source = "Output",
                    SourceOp = "Previous"
                },
                BurnIn = Input { Value = 1 }
            }
        },
        BurnIn1 = Merge {
            ViewInfo = OperatorInfo { Pos = { 550, 105.656478881836 } },
            NameSet = true,
            Inputs = {
                PerformDepthMerge = Input { Value = 0 },
                Background = Input {
                    Source = "Output",
                    SourceOp = "Merge1"
                },
                Foreground = Input {
                    Source = "Output",
                    SourceOp = "Frame"
                },
                BurnIn = Input { Value = 1 }
            }
        },
        Next_1 = TimeStretcher {
            ViewInfo = OperatorInfo { Pos = { 385, 214.5 } },
            NameSet = true,
            Inputs = {
                SourceTime = Input {
                    Expression = "time+2",
                    Value = 1462
                },
                Input = Input {
                    Source = "Output",
                    SourceOp = "PipeRouter1"
                }
            }
        },
        Next = TimeStretcher {
            ViewInfo = OperatorInfo { Pos = { 385, 162.256820678711 } },
            NameSet = true,
            Inputs = {
                SourceTime = Input {
                    Expression = "time+1",
                    Value = 1461
                },
                Input = Input {
                    Source = "Output",
                    SourceOp = "PipeRouter1"
                }
            }
        },
        Merge2 = Merge {
            ViewInfo = OperatorInfo { Pos = { 550, 214.5 } },
            Inputs = {
                PerformDepthMerge = Input { Value = 0 },
                Background = Input {
                    Source = "Output",
                    SourceOp = "Next_1"
                },
                Foreground = Input {
                    Source = "Output",
                    SourceOp = "Next"
                },
                BurnIn = Input { Value = 1 }
            }
        },
        BurnIn2 = Merge {
            ViewInfo = OperatorInfo { Pos = { 550, 162.256820678711 } },
            NameSet = true,
            Inputs = {
                PerformDepthMerge = Input { Value = 0 },
                Background = Input {
                    Source = "Output",
                    SourceOp = "BurnIn1"
                },
                Foreground = Input {
                    Source = "Output",
                    SourceOp = "Merge2"
                },
                BurnIn = Input { Value = 1 }
            }
        },
        EXPCompensation = ColorCorrector {
            ViewInfo = OperatorInfo { Pos = { 715, 162.256820678711 } },
            NameSet = true,
            Inputs = {
                MasterRGBGain = Input { Value = 0.2 },
                HistogramIgnoreTransparent = Input { Value = 1 },
                Input = Input {
                    Source = "Output",
                    SourceOp = "BurnIn2"
                },
                ColorRanges = Input {
                    Value = ColorCurves {
                        Curves = {
                            {
                                Points = {
                                    { 0, 1 },
                                    { 0.4, 0.2 },
                                    { 0.6, 0 },
                                    { 1, 0 }
                                }
                            },
                            {
                                Points = {
                                    { 0, 0 },
                                    { 0.4, 0 },
                                    { 0.6, 0.2 },
                                    { 1, 1 }
                                }
                            }
                        }
                    }
                }
            }
        },
        TimeSpeed1 = TimeSpeed {
            ViewInfo = OperatorInfo { Pos = { 880, 162.256820678711 } },
            Inputs = {
                InterpolateBetweenFrames = Input { Value = 0 },
                Input = Input {
                    Source = "Output",
                    SourceOp = "EXPCompensation"
                },
                SampleSpread = Input { Disabled = true },
                Speed = Input { Value = 5 }
            }
        },
        PipeRouter1 = PipeRouter {
            ViewInfo = PipeRouterInfo { Pos = { 275, 105.656478881836 } }
        },
        Frame = TimeStretcher {
            ViewInfo = OperatorInfo { Pos = { 385, 105.656478881836 } },
            NameSet = true,
            Inputs = {
                SourceTime = Input {
                    Expression = "time",
                    Value = 1461
                },
                Input = Input {
                    Source = "Output",
                    SourceOp = "PipeRouter1"
                }
            }
        }
    }
}
So... you could keep going and add more motion blur than you could capture with a shutter...

I still don't get how LUT shaping could technically work. Whatever "weighting" each frame had would need to be accounted for at the exposure compensation stage, right?

Chad
Fusionator
Posts: 1408
Joined: Fri Aug 08, 2014 1:11 pm
Been thanked: 14 times

Re: acceptable looking high frame rate footage

#15

Post by Chad » Tue Aug 13, 2019 2:10 pm

You could put BCs after the TSt's with the gain set to the weight. So the outside ones might be, say, .25, the inner ones might be .85, and the center one might be 1 (so no BC). Then your exposure compensation in the CC (it could have been a BC) would be 1/(5 - (0.75 + 0.15 + 0.0 + 0.15 + 0.75)), i.e. 1/3.2.
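Spelled out as arithmetic (my reading of the numbers above): the five tap weights sum to 3.2, so the compensating gain works out to 0.3125.

```python
weights = [0.25, 0.85, 1.0, 0.85, 0.25]  # outer, inner, center, inner, outer
compensation = 1.0 / sum(weights)        # 1/3.2 = 0.3125
# Equivalent form: 1 over (5 minus the sum of the per-tap gain reductions)
assert abs(compensation - 1.0 / (5 - (0.75 + 0.15 + 0.0 + 0.15 + 0.75))) < 1e-12
```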