Make Your 360 Video a Reality

You have probably heard about virtual reality and 360 video. Maybe that was from us at Fenton, on Google, at CES or through office chatter. With so much discussion about the technology, things can get overwhelming very quickly. How do you actually go about planning and shooting a 360 video? What are the key considerations? How is the footage stitched together? Mike Donaghey, co-founder of Scratch Empire, tackled those very questions in an extended interview with Fenton.

What are some of the key differences between producing a 360 video shoot and a traditional video shoot?

As with a traditional video production, success hinges on diligent preproduction, in terms of both technical preparation and story development, and on clear communication and managed expectations about what the final viewing experience will be. Key considerations when planning a 360 video experience include:

Capture:

  • # of setups/shots
  • camera motion (is it necessary?)
  • sound capture (use of binaural audio?)
  • lighting the scene
  • viewer’s perspective
  • introducing characters and events

Post Production:

  • data offload/management
  • achieving camera synch for each setup/shot
  • stitching
  • color correction
  • sound mix
  • exporting for particular viewing environments

What is the physical “footprint” of a 360 video rig on a set? Does it need to be “fenced off”?

If the camera is stationary, the only visible footprint will likely be the base of the mount used to secure the camera, typically a monopod or tripod. If the camera is in motion, for instance mounted on a drone overhead or held by a person on a boom pole, post-production work will likely be necessary to mask or remove the mount.

It is important to understand that essentially everything is in frame when shooting in 360. Another variable to consider is the proximity of the camera to the subject and other objects in the scene (walls, rails, etc.). Depending on the rig and number of cameras used, the distance will vary, but a safe bet is to assume anything within a 5 ft. diameter of the camera is too close and will produce stitching errors.

[Embedded video: STUDIO_TOUR_6CAM]

What are some of the factors a 360 video shooting crew must consider, and why?

For the purpose of this discussion, I will assume most readers/shooters will be working with a multi-camera GoPro system such as the Freedom360 or 360Heros setup.

Preparation:

To ensure a successful capture when using 6 to 14 cameras, it is extremely important that all SD cards are of the same make, model and write speed, and that all cameras are of the same make and model with identical settings (ISO, white balance, sharpness, color profile, frame rate and resolution, e.g. 960p, 1440p or 2.7K 4:3). Label all cameras and corresponding SD cards, and clearly mark which camera or camera pair will be the front-facing camera angle, also called front view. Shooting at higher frame rates (60 fps or higher) will provide more temporal flexibility when synching all camera angles in post. Pairing all cameras to a single Wi-Fi remote to start/stop recording will help minimize time spent establishing a synch point later in post. Additionally, it is helpful to capture both an audio and a visual synch at the beginning of each recording. This can be achieved using a dog training clicker or loud clap for the audio synch and simply rotating the camera rig back and forth for the visual synch. Be aware of the inherent limitations of GoPro cameras, including short battery life, poor low-light performance and file size limits.
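
To make the “identical settings” check concrete, here is a minimal Python sketch (not part of any rig manufacturer’s toolchain) that uses ffprobe from FFmpeg to confirm every camera’s clips share the same resolution and frame rate before you tear down the set. The clips/cam01 … cam06 folder layout and .MP4 extension are assumptions for illustration only.

```python
# check_rig_settings.py -- minimal sketch; assumes ffprobe (FFmpeg) is installed
# and that each camera's clips live in their own folder, e.g. clips/cam01 ... clips/cam06.
import json
import subprocess
from pathlib import Path

def probe(clip: Path) -> dict:
    """Return resolution and frame rate for the first video stream of a clip."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height,r_frame_rate",
         "-of", "json", str(clip)],
        capture_output=True, text=True, check=True,
    ).stdout
    stream = json.loads(out)["streams"][0]
    return {"resolution": f'{stream["width"]}x{stream["height"]}',
            "frame_rate": stream["r_frame_rate"]}

def check_rig(root: str = "clips") -> None:
    """Flag any clip whose settings differ from the first clip found."""
    clips = sorted(Path(root).rglob("*.MP4"))
    if not clips:
        return
    reference = probe(clips[0])
    for clip in clips[1:]:
        settings = probe(clip)
        if settings != reference:
            print(f"MISMATCH {clip}: {settings} (expected {reference})")

if __name__ == "__main__":
    check_rig()
```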

Capture:

Intention is the name of the game when creating a 360 film experience. Traditional cinematic devices such as framing, panning and tilting are now in the hands of the viewer. The rules of engagement for how we as filmmakers shape the cinematic experience and direct a viewer’s attention in a scene and along a narrative arc must be reconsidered with respect to the viewing environment (for instance, a Head Mounted Display (HMD) vs. a browser-based 360 video player vs. an immersive projection display). Perspective, blocking, pacing, simulated motion and transitions between scenes are important variables to consider, specifically with regard to content viewed in an HMD.

Binaural audio provides an interesting opportunity to cue and direct a viewer’s attention. If binaural audio will be used in your final viewing environment, then specific microphones and additional audio mixing are required.

Remember…everything is in frame. Keep this in mind as you position your camera rig and consider lighting your scene. Inform talent and crew of the safe zone (approx. 5-foot diameter around the camera rig). Whenever possible, it is best to position the front of the rig so that your subject does not fall along a stitch line. Additional considerations are similar to those of traditional shooting: power, framing, blocking, lighting, camera motion, sound capture and data management.

Post:

Post production can be thought of in three steps:

  1. Ingesting and organizing your data.
  2. Creating and correcting your stitch.
  3. Finalizing your video (color correction, sound mix, export and compression for specific viewing environments).

Ingesting and organizing media from all cameras can be automated and expedited using software such as 360CamMan by 360Heros or ManyCams by PurplePill VR. Once all camera data has been organized and labeled, the next step is to create a panorama, or equirectangular projection. This is achieved by stitching: the process of blending and correcting all camera perspectives to produce a seamless panoramic image.
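
If you prefer to script the offload yourself rather than use a dedicated tool, the minimal Python sketch below illustrates the idea behind that first ingest step: copy each card’s clips into a per-camera folder with the camera label baked into the filename. The /Volumes/CAM01–CAM06 mount points and project path are hypothetical, and this is a stand-in for, not a reproduction of, what 360CamMan or ManyCams actually do.

```python
# ingest_cards.py -- minimal sketch of the ingest/organize step.
# Assumes each labeled SD card is mounted at /Volumes/CAM01 ... /Volumes/CAM06
# (mount points and project layout are illustrative, not a real tool's behavior).
import shutil
from pathlib import Path

CARDS = [f"/Volumes/CAM{i:02d}" for i in range(1, 7)]   # one labeled card per camera
PROJECT = Path("~/Projects/studio_tour/ingest").expanduser()

def ingest() -> None:
    for card in CARDS:
        label = Path(card).name.lower()                  # e.g. "cam01"
        dest = PROJECT / label
        dest.mkdir(parents=True, exist_ok=True)
        # GoPro clips typically live under DCIM/<folder>/ on the card
        for clip in Path(card).rglob("*.MP4"):
            # prefix the camera label so every file stays traceable after offload
            target = dest / f"{label}_{clip.name}"
            shutil.copy2(clip, target)
            print(f"copied {clip} -> {target}")

if __name__ == "__main__":
    ingest()
```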

Autopano Video by Kolor and VideoStitch are two popular software options for stitching multi-camera GoPro projects. These programs are well suited to stitching and correcting both 2D and stereoscopic 3D video projects. Note that considerable graphics processing power is required to stitch and render 360 video content, especially on high-resolution stereoscopic 3D projects. For a deeper dive into the post process, check out the free Udemy course by Nick Kraakman of PurplePill VR and the VideoStitch tutorial by Freedom360.
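
As a rough sketch of the final export step, the Python snippet below shells out to FFmpeg to encode a stitched equirectangular master for web or HMD playback. The 3840x1920 target, H.264/AAC settings and file names are illustrative assumptions, not a universal delivery spec, and spherical metadata still needs to be injected afterward (for example with Google’s Spatial Media Metadata Injector) before platforms such as YouTube will treat the file as 360 video.

```python
# export_deliverable.py -- minimal sketch of the export/compression step.
# Assumes FFmpeg is installed and "stitched_master.mov" is the equirectangular
# output of your stitching software; settings below are illustrative targets.
import subprocess

def export_for_web(master: str = "stitched_master.mov",
                   output: str = "studio_tour_4k_web.mp4") -> None:
    subprocess.run(
        ["ffmpeg", "-i", master,
         "-vf", "scale=3840:1920",                            # 2:1 equirectangular frame
         "-c:v", "libx264", "-preset", "slow", "-crf", "18",  # quality-first H.264
         "-pix_fmt", "yuv420p",                               # broad player compatibility
         "-c:a", "aac", "-b:a", "320k",
         output],
        check=True,
    )
    # 360/spherical metadata must still be injected after encoding so players
    # recognize the file as an equirectangular 360 video.

if __name__ == "__main__":
    export_for_web()
```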

How long is the typical post process per minute on a 360 video?

It really varies per project based on several factors, including project parameters (2D vs. 3D), resolution, whether there is motion in the shot, the complexity of the shot being stitched, the amount of correction required (simple base stitching vs. detailed correction of stitching errors and complex compositing work), the type of audio mix desired and the GPU processing specs of the computer(s).

Working in 3D (vs. 2D) will generally double your post timeline and cost. As a rule of thumb, you should always anticipate some detailed image correction and budget generously for post production. Like everything else in this business, it’s fast, cheap and good…choose two.

__

Mike Donaghey is a video producer, artist and co-founder of Scratch Empire. Follow him on Instagram and Twitter: @pazdelamente

Scratch Empire is a creative collective and digital design studio based in NYC and Dublin, Ireland. Positioned at the intersections of art and technology, we curate dynamic teams of storytellers, artists, technologists, and creators to meet the needs of our clients. Our approach is rooted in collaboration and our process is informed by the unique needs that each project presents. Our primary focus is commercial video production and the application of video content, such as projection mapping and immersive, interactive media experiences.