FAQ/GL Renderer slow in int16/float32
The way the GL Renderer interprets the depth selection (on the radioactive tab) changed between Fusion 5 and Fusion 6.
Fusion 5: The GL Renderer always renders to int8 on the graphics card, and the results are then placed into an image of the appropriate color depth. This means you only ever effectively had int8 precision, and HDR values in your image were clipped. The GL Renderer was hardlocked to int8 because support for int16 and float16/float32 on graphics cards was poor at the time Fusion 5 launched. Video card support for int16/float32 has been steadily improving, and in Fusion 6 this hardlock has been removed.
Fusion 6: The GL Renderer now renders to int8/int16/float16/float32 buffers on the graphics card, based upon the depth you have selected in the radioactive tab, which means you get the full color depth in your output image. Some current (2009) graphics cards still have poor support for rendering to int16, and their drivers will fall back to software emulation, causing the render process to crawl like a turtle in molasses. Some older cards have poor float32 support and will crawl as well. If you are experiencing this problem, try changing the color depth on the Renderer3D tool to float16 (or float32).
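To make the difference between these depth choices concrete, here is a minimal sketch of how an offscreen color buffer might be requested from the driver at each depth. The internal-format mapping (GL_RGBA8 / GL_RGBA16 / GL_RGBA16F / GL_RGBA32F) is an illustration of the general technique, not necessarily the exact formats Fusion requests, and the sketch assumes GLFW and GLEW are available for context creation and extension loading.

 /* Sketch: allocate an offscreen color buffer at several depths, roughly
  * mirroring what a GL renderer does when asked for int8/int16/float16/float32.
  * The format mapping is an assumption for illustration purposes. */
 #include <stdio.h>
 #include <GL/glew.h>
 #include <GLFW/glfw3.h>
 
 int main(void)
 {
     if (!glfwInit()) return 1;
     glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);          /* hidden window, context only */
     GLFWwindow *win = glfwCreateWindow(64, 64, "probe", NULL, NULL);
     if (!win) { glfwTerminate(); return 1; }
     glfwMakeContextCurrent(win);
     if (glewInit() != GLEW_OK) { glfwTerminate(); return 1; }
 
     /* One entry per depth choice: int8, int16, float16, float32. */
     struct { const char *name; GLenum internal; GLenum type; } depths[] = {
         { "int8",    GL_RGBA8,   GL_UNSIGNED_BYTE  },
         { "int16",   GL_RGBA16,  GL_UNSIGNED_SHORT },
         { "float16", GL_RGBA16F, GL_HALF_FLOAT     },
         { "float32", GL_RGBA32F, GL_FLOAT          },
     };
 
     for (int i = 0; i < 4; ++i) {
         GLuint tex = 0, fbo = 0;
 
         /* Request a 1920x1080 texture at the chosen precision. */
         glGenTextures(1, &tex);
         glBindTexture(GL_TEXTURE_2D, tex);
         glTexImage2D(GL_TEXTURE_2D, 0, depths[i].internal, 1920, 1080, 0,
                      GL_RGBA, depths[i].type, NULL);
         glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
 
         /* Attach it to a framebuffer object so it can be rendered into. */
         glGenFramebuffers(1, &fbo);
         glBindFramebuffer(GL_FRAMEBUFFER, fbo);
         glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                GL_TEXTURE_2D, tex, 0);
 
         GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
         printf("%-8s framebuffer %s\n", depths[i].name,
                status == GL_FRAMEBUFFER_COMPLETE ? "complete" : "NOT complete");
 
         glDeleteFramebuffers(1, &fbo);
         glDeleteTextures(1, &tex);
     }
 
     glfwTerminate();
     return 0;
 }

Whether rendering into each of these buffers is hardware accelerated or falls back to software emulation is entirely up to the driver, which is why the same comp can render at very different speeds on different cards.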
Note: Some users have noticed that old cards seem to render int16 quickly while their newer cards crawl. It is up to the video card driver to decide what to do when Fusion requests an int16 render buffer. Some drivers fall back to software emulation (causing very slow but correct renders), while other drivers silently hand Fusion an int8 buffer (which renders quickly but with reduced precision). Even with the same driver version, you may find that a recent card falls back to software emulation of int16 while an older card drops down to int8.
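You can ask OpenGL what precision the driver actually provided, which helps tell a silent int8 substitution apart from a genuine int16 buffer. The sketch below assumes a current GL 3.0+ context with the framebuffer from the previous sketch still bound; the function name is hypothetical.

 /* Sketch: query the bound framebuffer's color attachment. A driver that
  * silently substitutes int8 will report 8-bit channels here even though
  * int16 was requested. */
 #include <stdio.h>
 #include <GL/glew.h>
 
 void report_actual_precision(void)
 {
     GLint red_bits = 0, component_type = 0;
     glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
         GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE, &red_bits);
     glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
         GL_FRAMEBUFFER_ATTACHMENT_COMPONENT_TYPE, &component_type);
 
     const char *type =
         component_type == GL_FLOAT               ? "float" :
         component_type == GL_UNSIGNED_NORMALIZED ? "unsigned normalized (integer)" :
                                                    "other";
 
     /* e.g. "16 bits per channel, float" for a real float16 buffer, or
      * "8 bits per channel, unsigned normalized" after a silent int8 fallback. */
     printf("driver provided: %d bits per channel, %s\n", red_bits, type);
 }

Note that this only reveals a precision downgrade; a driver that honors the int16 request but renders it in software emulation will still report 16 bits, and the only symptom is the very slow render.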