Bad Stereo aka Brain Shear

What follows is James Cameron's viewpoint on good stereo, as notated by Jon Landau. Many thanks to Chuck Comisky from Lightstorm Entertainment for bringing us this information.

Brain Shear: "the brain's inability to reconcile the images received by the left and right eyes into a coherent stereo image, which causes it to send corrective messages to the eye muscles, which try to compensate but can't fix the problems baked into the image on the screen, creating an uncomfortable feedback loop and physical fatigue of eye muscles, which causes the eye muscles to scream at the brain to fuck off, at which point the brain decides to fuse the image the hard way, internally, which may take several seconds or not be possible at all --- all of which leads to headache and sometimes nausea."

People will not pay extra for this.

To prevent brain shear, you should follow the New Rules of Stereo, also known as…

10 RULES FOR GOOD STEREO

1) THERE IS NO SCREEN. Whenever somebody starts talking about stuff coming "off the screen", ignore them. They are charlatans. The brain does not think there's a screen there at all. It is fooled into thinking there is a window there -- a window looking through into an alternate reality. In fact, the brain is barely aware of the boundaries of that window, or of how far away that window is, which is why objects which break the frame edges may be shot at distances closer than the actual screen plane -- which classical stereography texts will tell you won't work. Not only does it work, it is ESSENTIAL to doing good narrative 3D that this old rule be broken as frequently as possible. The exception to the new rule is when doing an "eye-poker" gag. If you're bringing something very close to the audience's noses as a featured visual flourish, that object (or the nearer part of it) should not break frame.

2) Stereo is very subjective. No two people process it exactly the same. Dr. Jim of course has the reference eyes, also known as the Calibration Eyes. But it's important to get a group consensus. We need to please the majority of eyes out there amongst the Great Unwashed.

3) Analyzing stereospace on freeze frames can be misleading. You can work this way, but the final judgment needs to be done with the shots flowing, ideally in the actual cut. Generally they look worse stopped than moving, because the eye gets depth cues from motion as well as parallax. However, excessive strobing caused by the 24P display rate may actually worsen the comfort factor in some shots.

4) Convergence CANNOT fix stereo-space problems. This is critical to remember. Correct convergence does two things and ONLY two things: it allows the eye to fuse very quickly (ideally instantaneously) when cutting from one shot to another. And it can be used to reduce ghosting caused by bleed-through of the glasses on high-contrast subjects in the background depth planes. The eye will fuse a given object in frame in direct proportion to how closely converged it is -- more converged, faster fusion. You can only converge to one image plane at a time -- make sure it is the place the audience (or the majority of the audience) is looking. If it's Tom Cruise smiling, you know with 99% certainty where they're looking. If it's a wide shot with a lot of characters on different depth-planes doing interesting things, your prediction rate goes down.

5) Convergence is almost always set on the subject of greatest interest, and follows the operating paradigm for focus -- the eyes of the actor talking. If focus is racked during the shot to another subject, then convergence should rack. An exception to the rule of following focus exactly is a shot with a strongly spread foreground object which is NOT the center of interest (such as in an OTS), in which case a convergence-split may be used (easing the convergence forward slightly, to soften the effect). This should be combined with control of interocular to yield a pleasing result. Convergence splits are limited by high contrast edges at the plane of interest, which may cause ghosting in passive viewing systems.
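As a rough illustration of racking convergence with focus, here is a minimal sketch (hypothetical function names, not any rig's or pipeline's real API) that eases the convergence distance from one subject to another over a range of frames. Interpolating in inverse-distance space is used because screen parallax is proportional to inverse distance, so the perceived shift stays roughly even.

```python
def smoothstep(t):
    """Standard ease-in / ease-out curve for t in [0, 1]."""
    return t * t * (3.0 - 2.0 * t)

def racked_convergence_m(start_m, end_m, frame, rack_start, rack_end):
    """Convergence distance (metres) at a given frame while racking from
    start_m to end_m between frames rack_start and rack_end. The ease-in /
    ease-out keeps the audience's eyes from being yanked; interpolating the
    reciprocal of distance keeps the parallax change roughly linear."""
    if frame <= rack_start:
        return start_m
    if frame >= rack_end:
        return end_m
    t = smoothstep((frame - rack_start) / float(rack_end - rack_start))
    return 1.0 / ((1.0 - t) / start_m + t / end_m)

# Hypothetical usage: rack convergence from the near actor at 1.8 m to the
# far actor at 4.5 m across frames 100-124, matching a focus pull.
for f in (100, 106, 112, 118, 124):
    print(f, round(racked_convergence_m(1.8, 4.5, f, 100, 124), 2))
```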

6) Interocular distance varies in direct proportion to subject distance from the lens: The closer the subject, the smaller the interocular. The farther, the larger. A shot of the Grand Canyon from half a mile away may have a 5' interocular. A shot of a bug from a few inches away may have a 1/4" interocular. Interocular tolerance is subjective, but there is a constant value of background split which cannot be exceeded.
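As a back-of-the-envelope illustration of why interocular scales with subject distance, here is a sketch using the standard parallel-rig / horizontal-image-translation parallax approximation; all names and numbers are hypothetical, not production values. The far-background term is the "constant value of background split" that caps the interocular.

```python
def screen_parallax_mm(interaxial_mm, focal_mm, sensor_width_mm,
                       screen_width_mm, converge_m, subject_m):
    """Approximate on-screen parallax (mm) of a subject at subject_m metres
    for a parallel rig converged (via horizontal image shift) at converge_m.
    Positive = behind the screen plane, negative = in front of it."""
    shift_fraction = (focal_mm * interaxial_mm / sensor_width_mm) * (
        1.0 / (converge_m * 1000.0) - 1.0 / (subject_m * 1000.0))
    return screen_width_mm * shift_fraction

def max_interaxial_mm(focal_mm, sensor_width_mm, screen_width_mm,
                      converge_m, background_split_limit_mm=65.0):
    """Largest interaxial that keeps the split of a background at infinity
    under the limit (65 mm, roughly adult eye spacing, is a common ceiling)."""
    split_per_mm_of_ia = (screen_width_mm * focal_mm /
                          (sensor_width_mm * converge_m * 1000.0))
    return background_split_limit_mm / split_per_mm_of_ia

# The closer the convergence/subject, the smaller the allowable interaxial
# (hypothetical 35 mm lens, 24.9 mm sensor, 10 m wide screen):
for converge in (0.5, 4.0, 50.0, 800.0):   # metres
    print(converge, round(max_interaxial_mm(35.0, 24.9, 10000.0, converge), 1))
```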

7) Interocular and convergence should both vary dynamically throughout moving shots.

8) In a composite, the foreground and background may want to have different interoculars. For example, in an OTS, the stereospace between the two foreground characters may be compressed, and the stereospace in the background not. Conversely, in a problematic greenscreen comp where the interocular was baked in too wide, the background may be brought closer to some extent by shifting one eye horizontally relative to the other. These fixes only work in shots with an empty mid-ground between the foreground elements and the nearest objects in the background. This technique can be used or abused.
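A minimal sketch of the horizontal-shift fix described above, assuming NumPy image planes; the element names and the pixel amount are hypothetical. Note that the shift moves every depth plane of that element by the same amount (it changes where the element sits relative to the screen, not its internal depth), which is why it only brings the background closer "to some extent" and why an empty mid-ground is needed to hide the discontinuity.

```python
import numpy as np

def shift_eye(plate, shift_px):
    """Translate one eye of an element horizontally by shift_px pixels.
    np.roll is used for brevity; a real comp would crop or pad the edge
    instead of letting pixels wrap around."""
    return np.roll(plate, shift_px, axis=1)

# Hypothetical usage: the background element's interocular was baked in too
# wide, so pull the whole background element closer to the screen plane by
# shifting its right-eye plate 12 px relative to the left eye's framing.
bg_left  = np.zeros((1080, 1920, 3), dtype=np.float32)   # placeholder plates
bg_right = np.zeros((1080, 1920, 3), dtype=np.float32)
bg_right_fixed = shift_eye(bg_right, 12)
```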

9) When stereo looks bad to the eye (visual cortex), it is important to eliminate the possible problems sequentially:

- Synch (one eye running a frame or more ahead of or behind the other) -- the number one killer of young eyeballs.

- Reverse-Stereo -- this will look equally egregious. Some shots may actually appear to almost work as stereo, but foreground objects will look "cut out", as if you are looking through a window. Turning the glasses upside down is the test. If it improves, it's reverse stereo.
NOTE: when a shot is FLOPPED editorially, the L and R eyes must be reversed, or you'll get reverse stereo.

- Zoom Mismatch (technically it's focal-length mismatch) -- characterized by a radial interference pattern when L-R images are viewed overlaid. This can be a vexing source of brain shear.

- Vertical Alignment. The eye can tolerate a lot of horizontal alignment mismatch (this is equivalent to incorrect convergence) but very little vertical misalignment; a rough automated check is sketched after this list.

- Color or Density Mismatch. The brain is more sensitive to density mismatch than color, but both should be matched.
NOTE: with linear polarization, there will always be a slight magenta/cyan shift between the eyes. This should NOT be corrected in the color timing of the master, because some systems use circular polarization, which doesn't have this shift.

- Render Errors or element drop-outs between eyes -- some actual thing, object, shadow or lighting artifact is missing from one eye.

- Specular Highlights -- because the angle of reflection is different for glossy or mirror surfaces as viewed from left or right eyes, highlights may exist in one eye but not the other.

- Lens Flare, matte box shadows -- these may strike one lens, not the other.

- Image Warping -- this can happen at the edges of frame with certain lenses, and can happen with warped beamsplitters.

- Movement or vibration which is different in L-R. This shows up in some camera systems (not ours). It takes a lot of jiggle between eyes to become apparent.

ONLY when all these possible sources of brain shear have been eliminated should interocular be re-examined.
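Most of the checks above are mechanical enough to script. As one example, here is a rough sketch of the vertical-alignment check called out earlier (assuming grayscale NumPy plates; the search range is hypothetical): it tries small vertical shifts of one eye and reports the one that best matches the other. Horizontal disparity is left alone, since that is the stereo itself.

```python
import numpy as np

def estimate_vertical_offset(left_gray, right_gray, max_offset_px=20):
    """Brute-force estimate of vertical misalignment between the eyes:
    try small vertical shifts of the right eye and keep the one that
    minimises the mean absolute difference against the left eye.
    A non-zero result is a candidate source of brain shear."""
    rows = left_gray.shape[0]
    valid = slice(max_offset_px, rows - max_offset_px)   # ignore wrapped rows
    best_offset, best_err = 0, np.inf
    for dy in range(-max_offset_px, max_offset_px + 1):
        shifted = np.roll(right_gray, dy, axis=0)
        err = float(np.mean(np.abs(left_gray[valid] - shifted[valid])))
        if err < best_err:
            best_err, best_offset = err, dy
    return best_offset
```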

10) Some shots just can't be fixed. If they are photographic shots with the interocular baked in, they must be re-done or they must be left in the film as non-stereo shots (L-L). If they are CG shots, the interocular can be reduced to a very low value, to give a sense of some stereospace, even though it is inconsistent with the rest of the sequence -- in the dramatic flow it will work.