What is ‘Six Degrees of Freedom’ 360° video?

Six Degrees of Freedom – or 6DoF – is a way of recording scenes so that, on playback, the viewer can change their view using six kinds (‘degrees’) of movement. Today’s common spherical video recording uses multiple sensors attached to a spherical rig to capture everything that can be seen from a single point. This means that when the video is played, the viewer can…

  • turn to the left or right
  • look up or down
  • twist their head to rotate their view

…and look around inside a sphere of video.

If information has been recorded from two points close together, we perceive depth – a feeling of 3D known to professionals as ‘stereoscopic video.’ This sense of depth holds as long as we don’t twist our heads too much or look up or down too far, because ‘stereo 360°’ only captures depth information on the horizontal plane.

6DoF camera systems record enough information that three more degrees of movement become possible. Viewers can now move their heads

  • up and down
  • left and right
  • back and forward

…a short distance.

As information about the environment can be calculated from multiple positions near the camera rig, the stereoscopic effect of perceiving depth will also apply when viewers look up and down, as well as when they rotate their view.

Here is an animated gif taken from a video of a session about six degrees of freedom systems given at the Facebook developer conference in April 2017:

Six degrees of freedom recording systems must capture enough information that the view from all possible eye positions within six degrees of movement can be simulated on playback.

A great deal of computing power is used to analyse the information coming from adjacent sensors to estimate the distance of each pixel captured in the environment. This process is known as ‘Spherical Epipolar Depth Estimation.’ The sensors and their lenses are arranged so that each object in the environment around the camera is captured by multiple sensors. Knowing the position in 3D space of the sensors and the specification of their lenses means that the distance of a specific object from the camera can be estimated.
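The core geometry behind this estimation can be sketched with simple two-sensor triangulation. This is a drastic simplification of full spherical epipolar depth estimation, and the baseline, focal length and disparity values below are purely illustrative:

```python
# Illustrative two-sensor triangulation, a simplification of the full
# spherical epipolar process. A point seen by two adjacent sensors shifts
# ('disparity') between their images; knowing the sensor spacing and lens
# focal length lets us estimate its distance: depth = baseline * f / disparity.

def estimate_depth(baseline_m, focal_length_px, disparity_px):
    """Estimate the distance of a point seen by two adjacent sensors.

    baseline_m: distance between the two sensor centres, in metres (assumed)
    focal_length_px: lens focal length expressed in pixels (assumed)
    disparity_px: horizontal shift of the point between the two images
    """
    if disparity_px <= 0:
        # No measurable shift: the point is too far away to estimate.
        return float("inf")
    return (baseline_m * focal_length_px) / disparity_px

# A point shifted 8 px between sensors 6 cm apart, 1000 px focal length:
print(estimate_depth(0.06, 1000, 8))  # 7.5 (metres)
```

Note how the estimate degrades with distance: the disparity shrinks towards zero, which is why far-away objects become indistinguishable and why wider sensor spacing or higher resolution extends the usable range.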

6DoF: simulations based on everything you can see from a single point… plus depth

Post-processing the 6DoF camera data results in a single spherical video that includes a depth map. A depth map is a greyscale image that stores an estimated distance for every pixel in a frame of video. Black represents ‘as close as can be determined’ and white represents ‘too far away for us to determine relative distances’ – usually tens of metres away (this distance can be increased by positioning the sensors further apart or by increasing their resolution).

Once there is a sphere with a depth map, the playback system can simulate movement along the X, Y and Z axes by shifting nearby pixels more than distant ones as the viewer moves their head. Stereoscopic depth can be simulated by sending slightly different images to each eye, based on how far away each pixel is.
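This parallax effect can be sketched in one dimension. A real renderer reprojects pixels in 3D; this toy function only illustrates why near pixels shift more than far ones, and the focal length is an assumed value:

```python
# Toy 1-D parallax sketch: shift each pixel horizontally in proportion to
# the viewer's head movement and in inverse proportion to its depth.
# A real playback system reprojects in 3D; values here are illustrative.

def parallax_shift_px(head_move_m, depth_m, focal_length_px=1000):
    """Horizontal pixel shift for a sideways head movement.

    head_move_m: how far the viewer's head moved along the X axis, in metres
    depth_m: estimated distance of the pixel (read from the depth map)
    focal_length_px: assumed virtual-camera focal length, in pixels
    """
    return head_move_m * focal_length_px / depth_m

# A 5 mm head movement:
print(parallax_shift_px(0.005, 1.0))   # 5.0 px for an object 1 m away
print(parallax_shift_px(0.005, 20.0))  # 0.25 px for an object 20 m away
```

The same function, fed the small offset between a viewer’s two eyes instead of a head movement, gives the per-eye shift used to simulate stereoscopic depth.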

Moving millimetres, not metres

The first three degrees of freedom – rotate – let us look anywhere from a fixed point: 360° to the left or right and 180° up and down. The next three allow us to move our heads a little: a few millimetres along the X, Y and Z axes. They do not yet let us move our bodies around an environment. The small distances the three ‘move’ degrees of freedom allow make a big difference to the feeling of immersion, because playback can now respond to the small subconscious movements we make in day-to-day life when assessing where we are and what is around us.

25th November 2017
