Soon: More audio timelines that can automatically be modified to match changes in video timelines

Wednesday, 06 December 2017

In many video editing workflows, assistants have the thankless task of making special versions of timelines that generate files for others in post production. A special timeline for VFX people. A special timeline for colour. A special timeline for exporting for broadcast. A special timeline for audio. Transferring timelines to other departments is called ‘doing turnovers.’

Final Cut Pro X is the professional video editing application that automates the most turnovers. It seems that Apple want to remove the need for special timelines to be created - special timelines that can go out of sync if the main picture edit changes. Final Cut video and audio roles mean that turnovers for broadcast no longer require special timelines.

The Vordio application aims to make the manual audio reconform process go away. At the moment, problems arise when video timelines change once the audio team start work on their version of the timeline. Sound editors, designers and mixers can do a great deal of work on a film and then be told that there have been changes to the picture edit.

What’s new? What’s moved? What has been deleted?

Vordio offers audio autoreconform. That is, if (or rather when) the picture timeline changes, Vordio looks at the NLE-made changes and produces a change list that can be applied to the audio timeline in the DAW. It currently does this with Final Cut Pro X and Adobe Premiere timelines. If the sound team have already made changes in Reaper (a popular alternative to Pro Tools) and need to know what changes have since been made to the video edit, Vordio can update the audio timeline to reflect the new video edit. This includes labelling new clips, labelling clips that have moved and showing which clips have been deleted.
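At its heart, a reconform is a diff between two versions of the same timeline. As a rough sketch of the idea - this is illustrative Python, not Vordio's actual algorithm - a change list can be produced by comparing where each clip sits in the old and new edits:

```python
# Illustrative sketch of producing a change list between two versions of
# a picture edit - not Vordio's actual algorithm. A clip is identified
# here by an id; each timeline maps clip id to its start time in seconds.

def change_list(old_timeline, new_timeline):
    """Compare two versions of a timeline and report what is new,
    what has moved and what has been deleted."""
    changes = []
    for clip_id, start in new_timeline.items():
        if clip_id not in old_timeline:
            changes.append(("added", clip_id, start))
        elif old_timeline[clip_id] != start:
            changes.append(("moved", clip_id, old_timeline[clip_id], start))
    for clip_id in old_timeline:
        if clip_id not in new_timeline:
            changes.append(("deleted", clip_id))
    return changes
```

A real reconform also has to cope with trims, ripples and retimes, which is why reliably matching clips across edits is the hard part of the problem.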

It looks like Vordio will soon work with other DAWs by using the Hammerspoon UI scripting toolkit.

StudioOne is a useful DAW that has a free version.

I expect timeline autoreconform to come to all timelines. To get a preview of what it could be like, check out Vordio.

Film from a single point, then wander around inside a cloud of pixels in 3D

Monday, 04 December 2017

People wearing 360° spherical video headsets get a feeling of presence when the small subconscious movements they make are reflected in what they see. This is the first aim of Six Degrees of Freedom video (6DoF). The scene changes as the viewer turns around three axes and moves along three axes. 6DoF video is stored as a sphere of pixels and a channel of information that defines how far each of those pixels is from the camera.

Josh Gladstone has been experimenting with creating point clouds of pixels. The fourth video in his series about working with a sphere of pixels plus depth shows him wandering around a 3D environment that was captured by filming from a single point.

The scenes he uses in his series were filmed on a GoPro Odyssey camera. The footage recorded by its 16 sensors was then processed by the Google Jump online service to produce a sphere of pixels plus a depth map.

The pixels that are closest to the camera correspond to the brightest pixels in the depth map.

360° spherical video point clouds are made up of a sphere of pixels whose distance from the centre point has been modified based on a depth map.
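In code, the idea is simple: each equirectangular pixel maps to a longitude and latitude on the sphere, and the depth map supplies a radius. A minimal Python sketch (the coordinate conventions here are my assumption, not taken from Josh's Unity scripts):

```python
# Turning a sphere of pixels plus a depth map into a point cloud.
# The coordinate conventions are an assumption for illustration.
import math

def pixel_to_point(u, v, depth, width, height):
    """Map an equirectangular pixel (column u, row v) with a depth in
    metres to an XYZ position relative to the camera position."""
    lon = (u / width) * 2 * math.pi - math.pi    # -pi..pi around the vertical axis
    lat = math.pi / 2 - (v / height) * math.pi   # +pi/2 at the top row, -pi/2 at the bottom
    x = depth * math.cos(lat) * math.sin(lon)
    y = depth * math.sin(lat)                    # y is up
    z = depth * math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

Run over every pixel in a frame, this produces the cloud of points that can then be rendered from any nearby viewpoint.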

Josh has written scripts in Unity - a 3D game development environment - that allow real-time rendering of these point clouds. Real time is important because users will expect VR headsets to be able to render in real time as they turn their heads and move around inside virtual spaces.

You can move around inside this cloud of pixels filmed from a single point:

In the latest video in his series Josh Gladstone simulates how a VR headset can be used to move around inside point clouds generated from information captured by 360° spherical video camera rigs. He also shows how combining multiple point clouds based on video taken from multiple positions could be the basis of recording full 3D environments:

What starts as an experiment in a 3D game engine is destined to be in post production applications like Apple’s Motion 5 and Adobe After Effects, and maybe eventually in NLEs like Final Cut Pro X.

I’m looking forward to playing around inside point clouds.

28 videos, 53 million views (so far) - advice for your video essay YouTube channel

Sunday, 03 December 2017

Every Frame a Painting is a YouTube channel made up of video essays about visual storytelling. It has 1.3 million subscribers and millions of views. The creators, Taylor Ramos and Tony Zhou, have decided to close it. Luckily for us they have written an essay on what they learned - including tips for others considering making videos in this form.

All the videos were made with Final Cut Pro X:

Every Frame a Painting was edited entirely in Final Cut Pro X for one reason: keywords.

The first time I watch something, I watch it with a notebook. The second time I watch it, I use FCPX and keyword anything that interests me.

Keywords group everything in a really simple, visual way. This is how I figured out to cut from West Side Story to Transformers. From Godzilla to I, Robot. From Jackie Chan to Marvel films. On my screen, all of these clips are side-by-side because they share the same keyword.

Organization is not just some anal-retentive habit; it is literally the best way to make connections that would not happen otherwise.

Even if you don't make scholarly videos on the nature of visual storytelling, there is a lot to be learnt from their article and the 28 video essays in their channel.

iPhone-mounted camera will capture 3D environments that can be fully explored in VR

Friday, 01 December 2017

Photogrammetry is the method of capturing a space in 3D using a series of still photos. It usually requires a great deal of complex computing power. A forthcoming software update for the $199 Giroptic iO (a 360° spherical video camera you mount onto your iPhone or Android phone) will give users the ability to capture full VR models of the spaces they move through.

Mic Ty of 360 Rumors writes:

the photographer simply took 30 photos, then uploaded them to cloud servers for processing. The software generates the 3D model, and can even automatically remove the photographer from the VR model, even though the 360 photos had the photographer in them.

Once the model is generated it can be included in full VR systems that can be explored in VR headsets. This will work especially well in devices such as the HTC Vive, which can detect where you are in 3D space and move the 3D model in VR to match. Remember though that many VR experiences are about interactivity, and in order to add that to a 3D environment, users will have to use a VR authoring system.

3D environments in post production applications

For those making 360° spherical videos, it is likely that they will want their post tools to be able to handle the kind of 3D models generated by systems like these. Storytellers range from animators (users of applications like Blackmagic Fusion) to editors and directors (users of Final Cut Pro X and Adobe Premiere). Developers should bear in mind that the way 3D environments are integrated into post applications should vary based on the nature of the storyteller.

However, it looks like there'll be a new skill to develop for 360° spherical photographers: where to take pictures in a space to capture the full environment in 3D.

Go over to 360 Rumors to see a video of the system in action.


Amazon launches Rekognition Video content tagging for third-party applications

Thursday, 30 November 2017

Amazon have announced a content recognition service that developers can use to add features to their video applications, Streaming Media reports:

Rekognition Video is able to track people across videos, detect activities, and identify faces and objects. Celebrity identification is built in. It identifies faces even if they're only partially in view, provides automatic tagging for locations and objects (such as beach, sun, or child), and tracks multiple people at once. The service goes beyond basic object identification, using context to provide richer information. The service is available today.

The videos need to be hosted in or streamed via Amazon S3 storage.

Apple are unlikely to incorporate Amazon Rekognition Video in their video applications and services. Luckily the Final Cut Pro X and Adobe Premiere ecosystems allow third-party developers to create tools that use this service. Post tool makers can then concentrate on integrating their workflow with their NLE while Amazon invest in improving the machine learning they can apply to video.
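For the curious, here is a hedged sketch of how a third-party tool might call the service through boto3, the AWS SDK for Python. The `start_label_detection` call is asynchronous; the bucket and file names are placeholders:

```python
# Hedged sketch of using Rekognition Video via boto3, the AWS SDK for
# Python. Bucket and file names are placeholders; as the article notes,
# the video must already be in S3.

def label_detection_params(bucket, key, min_confidence=70):
    """Build the request for rekognition.start_label_detection()."""
    return {
        "Video": {"S3Object": {"Bucket": bucket, "Name": key}},
        "MinConfidence": min_confidence,
    }

def start_job(rekognition_client, bucket, key):
    """Start an asynchronous label-detection job. The returned JobId is
    later passed to get_label_detection() to fetch the tags."""
    response = rekognition_client.start_label_detection(
        **label_detection_params(bucket, key))
    return response["JobId"]
```

An NLE helper app would poll for the job's results and translate the returned labels and timestamps into keywords or markers.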

4K: Only the beginning for UK’s Hangman Studios’ Final Cut Pro X productions

Thursday, 30 November 2017

Some think that Final Cut Pro X has problems working with 8K footage. Hangman Studios have been making concert films with Final Cut Pro X at 4K and above since 2015. There’s a new case study by Ronny Courtens of Lumaforge at fcp.co:

Two years ago I made a conscious decision to get rid of all of my HD cameras. We decided that everything from now on had to be 4K and up.

…our boutique post production services in London are newly designed and built for 8K workflows and high end finishing. Drawing upon 17 years of broadcast post experience we've designed a newer, more simplified and efficient workflow for the new age of broadcast, digital and cinema. We’re completely Mac based running a mix of older MacPro 12-cores (mid 2010) with the newer MacPro (2013) models.

I imagine there’ll be space in their West London studios for at least one new iMac Pro. When Apple gave a sneak preview of Final Cut Pro 10.4 and Motion 5.4 as part of the FCPX Creative Summit at the end of October, they showed it easily running an 8K timeline on a prerelease iMac Pro.

Apple have said that Final Cut Pro X 10.4 will be able to support 8K HEVC/H.265 footage on macOS High Sierra. This kind of media is produced by 360° spherical video systems such as the Insta360 Pro. When 10.4 comes out in December, editors will be able to do even more at high resolutions.

What is ‘Six Degrees of Freedom’ 360° video?

Sunday, 26 November 2017

Six Degrees of Freedom – or 6DoF – is a system of recording scenes that, when played back, allows the viewer to change their view using six kinds (‘degrees’) of movement. Today's common spherical video recording uses multiple sensors attached to a spherical rig to record everything that can be seen from a single point. This means when the video is played, the viewer can…

  • turn to the left or right
  • look up or down
  • twist their head to rotate their view

…as they look around inside a sphere of video.

If information has been recorded from two points close together, we perceive depth - a feeling of 3D known to professionals as ‘stereoscopic video.’ This feeling of depth applies as long as we don't twist our heads too much or look up or down too far - because ‘stereo 360°’ only captures information on the horizontal plane. 

6DoF camera systems record enough information so that three more degrees of movement are allowed. Viewers can now move their heads

  • up and down
  • left and right
  • back and forward

…a short distance.

As information about the environment can be calculated for multiple positions near the camera rig, the stereoscopic effect of perceiving depth will also apply when viewers look up and down as well as when they rotate their view.

Here is an animated gif taken from a video of a session about six degrees of freedom systems given at the Facebook developer conference in April 2017:

Six degrees of freedom recording systems must capture enough information that the view from all possible eye positions within six degrees of movement can be simulated on playback. 

A great deal of computing power is used to analyse the information coming from adjacent sensors to estimate the distance of each pixel captured in the environment. This process is known as ‘Spherical Epipolar Depth Estimation.’ The sensors and their lenses are arranged so that each object in the environment around the camera is captured by multiple sensors. Knowing the position in 3D space of the sensors and the specification of their lenses means that the distance of a specific object from the camera can be estimated.
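The flat-stereo-pair version of this calculation gives a feel for the maths: for two sensors a baseline B apart with focal length f (in pixels), an object whose image shifts d pixels between the two views is at distance Z = f·B/d. Spherical rigs use an epipolar analogue of this, but the principle - a larger shift means a closer object - is the same. An illustrative sketch:

```python
# The flat-stereo-pair analogue of spherical depth estimation - an
# illustrative formula, not the rig's actual solver.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic two-view triangulation: an object whose image shifts
    disparity_px pixels between two sensors baseline_m apart is at
    distance Z = f * B / d."""
    return focal_px * baseline_m / disparity_px
```

For example, with a focal length of 1000 pixels and sensors 10 cm apart, a 10-pixel shift puts the object 10 metres away - and halving the shift doubles the distance, which is why depth estimates get coarser further from the camera.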

6DoF: simulations based on everything you can see from a single point… plus depth

Post-processing the 6DoF camera data results in a single spherical video that includes a depth map. A depth map is a greyscale image that stores an estimated distance for every pixel in a frame of video. Black represents ‘as close as can be determined’ and white represents ‘things too far away for us to determine where they are relative to each other’ - usually tens of metres away (this distance can be increased by positioning the sensors further apart or by increasing their resolution).

Once there is a sphere with a depth map, the playback system can simulate X, Y and Z axis movement by moving pixels further away more slowly than pixels that are closer as the viewer moves their head. Stereoscopic depth can be simulated by sending slightly different images to each eye based on how far away each pixel is.
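To put a number on the stereoscopic part: for a typical interpupillary distance of around 64 mm, the angular difference between the two eyes' views of a pixel shrinks as its depth grows. A hedged sketch of that geometry:

```python
# How much the two eyes' views of a pixel differ, as a function of its
# depth - a hedged sketch of the geometry, assuming a typical 64 mm
# interpupillary distance.
import math

def angular_disparity_deg(depth_m, ipd_m=0.064):
    """Angular difference between the left and right eye's view of a
    point depth_m metres away. Distant pixels converge towards zero."""
    return math.degrees(2 * math.atan((ipd_m / 2) / depth_m))
```

A pixel one metre away needs its left- and right-eye copies offset by nearly 4 degrees; a pixel ten metres away by about a third of a degree - which is why ‘too far to tell’ distances can safely share a single value in the depth map.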

Moving millimetres, not metres

The first three degrees of environment video freedom - rotate - allow us to look anywhere from a fixed point: 360° to the left or right and 180° up and down. The next three allow us to move our heads a little: a few millimetres along the X, Y and Z axes. They do not yet let us move our bodies around an environment. The small distances that the three ‘move’ degrees of freedom allow make a big difference to the feeling of immersion, because playback can now respond to the small subconscious movements we make in day-to-day real life when assessing where we are and what is around us.

Free plugin finds and tracks faces in footage

Saturday, 25 November 2017

For legal reasons it is sometimes necessary to hide the identity of people in footage. ‘Secret Identity’ is a free plugin for Final Cut Pro X, Adobe Premiere, Adobe After Effects and Motion that works out where all the faces are in a clip. It can also automatically track their positions as they move. You can then choose which people's identity you wish to hide. The plugin can then obscure their whole face, their eyes or their mouth. It can also obscure everything but people's faces.

Here's a demo video showing how it works:

Secret Identity from Dashwood Cinema Solutions is available for free if you install the free FxFactory post production app store. It is only available for macOS.

Move over 800MB/s USB 3.1 externals, here come Thunderbolt 3 drives

Tuesday, 21 November 2017

Competition seems to be improving the state of external drives. Most workflows are more than served by the kind of bandwidth available through the USB 3.1 protocol, but there are always jobs that need more. Bare Feats have done a new test comparing the fastest bus-powered SSD from last year with this year’s Thunderbolt 3 drives and enclosures from Sonnet, Netstor, AKiTiO and LaCie.

See how fast they can read and write data over on the Bare Feats site.

VR: Six 4K ProRes streams to the same drive?

Although read speeds are getting very high, write speeds are becoming more important for some productions. As well as needing to quickly make backups of gigabytes of camera media, some VR cameras can have external devices attached. The Insta360 Pro currently has a USB connection for an external SSD. It records media from six sensors at the same time to HEVC/H.265. Soon producers will want to record high-quality ProRes from six (or more) sensors at a time, and Thunderbolt 3 might be the answer.

120 Animation Transitions for Final Cut Pro X - Special Black Friday offer - $39 for one week

Tuesday, 21 November 2017

From today Tuesday 21st November, there is a special offer on my Alex4D Animation Transitions pack - which applies for one week only, until the end of 'Cyber Monday' - November 27th.

Very rarely, the FxFactory professional tools app store offers sales on all the plugins it distributes. From today, they are offering 20% off everything - including my first product.

There is no special ‘Black Friday’ or Thanksgiving offer code to apply at checkout. For one week, everything is automatically 20% cheaper.

Alex4D Animation Transitions is a pack of 120 different ways of animating content on and off the screen. Instead of having to apply a series of complex keyframes to multiple clip parameters, just drop one of these transitions on for instant animation. The advantage over keyframes is that you can quickly adjust the start time, finish time and duration of the animation by dragging the transition or changing its duration.

Here’s a new video showing how it works:


  • Spin, scale and fade clips onto the screen
  • Move clips from any location: drag on-screen control to choose
  • Change animation speed and timing without using keyframes by dragging transitions in the timeline
  • Animate overlaid logos
  • Animate titles
  • Animate connected stills and videos
  • Animate between full-screen clips in the main storyline
  • Animate between clips in secondary storylines
  • Animate off the screen using the same settings, or opposite settings to keep clips moving, spinning and scaling in the same direction as they animated on
  • Scale and spin around any point on the screen: drag on-screen control to choose
  • Divide clips into two and control the timing and animation of each part separately
  • Crop animations  
  • Works in all resolutions from 480p up to 5K and higher
  • Works at any frame rate
  • Works in any aspect ratio: landscape 20:1, 16:9, 4:3, square and portrait 3:4, 9:16, 1:20
  • 32 page PDF manual (10.6MB)

Transitions range from subtle and straightforward presets for editors who want quick results to complex and fully-customisable presets for designers who want instant advanced motion graphics in the Final Cut Pro X timeline.

25-minute video tutorial in French by YakYakYak.fr

Translation of this page into Spanish by Final Cut Argentina.

Buy now for $39 


Buy by credit card via FxFactory

Download free trial

A fully-functional watermarked trial version of Alex4D Animation Transitions is available through the FxFactory post-production app store. The trial version includes all 120 transitions and a 32-page PDF manual.



Free trial via FxFactory


If you don’t have FxFactory, click the ‘Download FxFactory’ button.

A little more help on installing FxFactory.

Restart Final Cut Pro X to see a new ‘Alex4D Animation’ category in the Transitions Browser.

Removing the watermarks

Trial version transitions include a watermark. To remove the watermark, select one of the applied transitions in the inspector and click the Buy button in Final Cut Pro, or in the FxFactory application, click the price button next to the Animation Transitions icon in the Alex4D section of the catalog. If you have entered your credit card and billing information, a dialogue box will appear to confirm your purchase. For more information on activating Alex4D Animation Transitions, visit the FxFactory website.

Generate centre-cutout guides for ARRI shoots using free online tool

Monday, 20 November 2017

The highest resolution most feature films and high-end TV shows need to be delivered in is 4K - 4096 by 2304. That doesn't mean there aren't benefits to shooting at higher resolutions. 

The advantage of using cameras such as the ARRI 65 is that 6K allows for reframing in post. The camera operator can shoot with a very loose frame knowing that editors can choose which part of the 6K frame to include in the 4K master. VFX work can also benefit from pixels outside the visible frame.

In order to make sure a 6K camera is being operated so that the 4K area of interest is framed correctly, it is useful to have a frame guide in the camera. ARRI have a free tool that generates these frame guides so that they can be shown on set:

You can choose which ARRI camera is planned for your shoot and which centre-cutout guides you want to show. In this case the 6560x3100 ARRI 65 has guides for 5K and 4K framing (based on a 2.39:1 aspect ratio).
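The guide geometry itself is straightforward to compute. This illustrative sketch (not ARRI's tool) centres a guide of a given width and aspect ratio inside a sensor:

```python
# Illustrative sketch of frameline geometry - not ARRI's Frameline
# Composer. Centre a guide of a given width and aspect ratio inside
# the sensor's active area.

def centre_cutout(sensor_w, sensor_h, guide_w, aspect):
    """Return (x, y, w, h) of a centred guide rectangle in sensor pixels."""
    guide_h = round(guide_w / aspect)
    x = (sensor_w - guide_w) // 2
    y = (sensor_h - guide_h) // 2
    return (x, y, guide_w, guide_h)
```

For the 6560x3100 ARRI 65 sensor, a 4096-wide 2.39:1 guide works out as a 4096x1714 rectangle offset (1232, 693) from the top-left corner.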

These guides are useful in post, so the tool can also generate transparent PNGs that can be used in the production and post production workflow.

Try out the ARRI Frameline Composer on the ARRI website.

When Final Cut Pro X importing is not enough: A guide to rsync - free media copying tool

Sunday, 19 November 2017

There is a point in a post production workflow when using only your NLE’s importing function is not enough - when insurance companies want to know how you are verifying data transfers and where your redundant backups will be stored. Instead of investing in a dedicated application for media management, Seth Goldin suggests a free, open source alternative:

As far as I can tell, rsync remains superior to pretty much every other professional application for media ingest, like Imagine Products ShotPut Pro, Red Giant Offload, DaVinci Resolve’s Clone Tool, Pomfort Silverstack, or CopyToN. Each of these applications are great in their own rights, and they deliver what they promise, but they can be slow, expensive, and CPU-intensive. In contrast, rsync is fast, completely free of charge, and computationally lightweight.

It looks like the tradeoff is much more power in return for learning a command-line interface. Seth has written a post on Medium that explains rsync's advantages, how to install it and how to use it, entitled ‘A gentle introduction to rsync, a free, powerful tool for media ingest.’ He includes how to use rsync to copy nine camera cards onto three hard drives so that the process uses the minimum amount of CPU power while making the most of the maximum speed of each of the hard drives.
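As a flavour of what driving rsync from a script looks like - an illustrative sketch, not Seth's exact recipe, with placeholder volume paths - the key flags are `-a` to preserve file attributes and `-c` to compare files by whole-file checksum rather than size and date, so a second pass verifies what is already on the destination:

```python
# Illustrative sketch of driving rsync from a script - not Seth Goldin's
# exact recipe; the volume paths are placeholders.
# -a preserves file attributes, -h prints human-readable sizes, and -c
# compares files by whole-file checksum instead of size and date.
import subprocess

def rsync_cmd(source, dest, checksum=True):
    """Build the rsync command line for one card-to-drive copy."""
    cmd = ["rsync", "-ah", "--progress"]
    if checksum:
        cmd.append("-c")
    return cmd + [source, dest]

def offload(cards, drives):
    """Copy every card to every drive, one drive at a time, so each
    destination writes sequentially at its own maximum speed."""
    for drive in drives:
        for card in cards:
            subprocess.run(rsync_cmd(card, drive), check=True)
```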

Although you may not need to learn it today, it could be the right solution for a friend now, or for you soon.

9:16, 1:1, 1:2, 4:5… (Social) Media aspect ratios primer

Sunday, 19 November 2017

Many experienced film makers decry vertical and square video. The fact is, millions of people watch stories that way on their personal devices. Facebook is now not just ‘social media’ - it is ‘media.’ 20 years ago editors started to deal with aspect ratios other than 4:3. Here are the specifications from Facebook on the various aspect ratios their platforms work with:

View in new window or see PDF on Facebook site.

If you aren’t working in non-16:9 now, you will be soon, or will at least need to prepare your work for others who will.

1:1 and 9:16 video are likely to become more popular, so learn to be effective in these aspect ratios!
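Working out the biggest centre crop of a 16:9 master for each delivery ratio is simple arithmetic. An illustrative sketch (the frame sizes are examples):

```python
# Simple arithmetic for repurposing a master: the largest centre crop
# with a given delivery aspect ratio. Frame sizes are examples.

def centre_crop(src_w, src_h, aspect_w, aspect_h):
    """Return (w, h) of the biggest centred crop with the target ratio."""
    if src_w * aspect_h > src_h * aspect_w:   # source is wider than target
        return (src_h * aspect_w // aspect_h, src_h)
    return (src_w, src_w * aspect_h // aspect_w)
```

From a 3840x2160 UHD master, a 1:1 crop is 2160x2160, a 9:16 vertical crop is 1215x2160 and a 4:5 crop is 1728x2160 - which is why it pays to frame important action near the centre.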

Updated to add: Chris Roberts wrote an article earlier in 2017 on how to make 1:1 videos using Final Cut Pro X.

Apple’s VR production patent by Tim Dashwood

Thursday, 12 October 2017

Within weeks of third-party Final Cut Pro X developer Tim Dashwood joining the ProApps team, Apple applied for a patent that changes the way computers connect to VR and AR head-mounted devices: ‘Method and System for 360 Degree Head-Mounted Display Monitoring Between Software Program Modules Using Video or Image Texture Sharing’ (PDF version).

It turns out that Tim is doing more for Apple than being part of adding VR video editing features to applications. His work is part of the way macOS works in all sorts of applications.

Direct to Display = Less OS overhead

Up until now, head-mounted devices like the Oculus Rift and HTC Vive have connected as specialised displays. As far as macOS or Windows is concerned, an attached device is just another monitor - albeit with an odd aspect ratio and frame rate.

The new method is for VR/AR tools to connect to Apple devices in such a way that there is no longer a 'simulate a monitor' overhead. Apple is aiming for a 1/90th of a second refresh rate for VR and AR experiences. Even if you are viewing a VR video that is playing at 60 frames a second, for smooth movement it is best if what the viewer sees updates 90 times a second, so that if they turn quickly, the content keeps up with them.

If macOS, iOS and tvOS spend less time simulating a monitor display, more of the 1/90th of a second between refreshes can be spent on rendering content. Also, less powerful GPUs will be able to render advanced VR content and AR overlays - because there's less OS delay in getting it in front of users' eyes.
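The arithmetic behind the 90 Hz target is simple but unforgiving:

```python
# The refresh budget in numbers: at 90 Hz each view must be rendered in
# around 11 ms, and each 60 fps video frame spans 1.5 display refreshes.
refresh_hz = 90
content_fps = 60

budget_ms = 1000 / refresh_hz                   # ~11.1 ms per refresh
refreshes_per_frame = refresh_hz / content_fps  # 1.5
```

Any milliseconds the OS spends pretending to be a monitor come straight out of that 11 ms rendering budget.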

The idea is for VR/AR applications to modify image data in a form that the OS automatically feeds to devices without simulating a monitor: 

…methods and systems for transmitting monoscopic or stereoscopic 180 degree or 360 degree still or video images from a host editing or visual effects software program as equirectangular projection, or other spherical projection, to the input of a simultaneously running software program on the same device that can continuously acquire the orientation and position data from a wired or wirelessly connected head-mounted display's orientation sensors, and simultaneously render a representative monoscopic or stereoscopic view of that orientation to the head mounted display, in real time.

For more on how HMD software must predict user actions in order to keep up with their movement, watch the 2017 Apple WWDC ‘VR with Metal 2’ session video. One guest speaker was Nat Brown of Valve Software, who talked about SteamVR on macOS High Sierra:

Our biggest request to Apple, a year ago, was for this Direct to Display feature. Because it's critical to ensure that the VR compositor has the fastest time predictable path to the headset display panels. We also, really needed super accurate low variance VBL, vertical blank, events. So, that we could set the cadence of the VR frame presentation timing, and we could predict those poses accurately.

VR production

Although the patent is about how all kinds of applications work with VR and 3D VR, it also mentions a mode where the production application UI appears in the device overlaid on the content being produced:

FIG. 5 illustrates the user interface of a video or image editing or graphics manipulation software program 501 with an equirectangularly projected spherical image displayed in the canvas 502 and a compositing or editing timeline 503. The image output of the video or image editing or graphics manipulation software program can be output via a video output processing software plugin module 504 and passed to a GPU image buffer shared memory and then passed efficiently to the image receiver 507 of the head-mounted display processing program 506. The 3D image processing routine 508 of the head-mounted display processing program will texture the inside of a virtual sphere or cube with a 3D viewpoint at the center of said sphere or cube. The virtual view for each of the left and right eyes will be accordingly cropped, duplicated (if necessary), distorted and oriented based on the lens/display specifications and received orientation data 509 of the wired or wirelessly connected head-mounted display's 510 orientation sensor data. Once the prepared image is rendered by the 3D image processing routine, the image can then be passed to the connected head-mounted display 511 for immediate presentation to the wearer within the head-mounted display.

Additionally, since wearing a head-mounted display will obscure the wearer's view of the UI of the video or image editing or graphics manipulation software program, it is also possible to capture the computer display's user interface as an image using a screen image capture software program module 512 and pass it to an image receiver/processor 513 for cropping and scaling before being composited on the left and right eye renders from the 3D image processing routine 508, 514, 515 and then the composited image can be passed to the connected head-mounted display for immediate presentation to the wearer within the head-mounted display.

Further, a redundant view can be displayed in a window 516 on the computer's display so others can see what the wearer of the head-mounted display is seeing, or if a head-mounted display is not available…

Tim has been demonstrating many interesting 3D and VR production tool ideas over the years. It is good to see his inventions now have the support of Apple. I'm looking forward to the other ideas he brings to the world through Apple.

Adobe Premiere used on big new 10-part Netflix TV series

Wednesday, 13 September 2017

It was a tough ask for Adobe Premiere to tackle the needs of David Fincher's ‘Gone Girl’ feature film in 2014. In recent months, it has been used on a bigger project: ‘Mindhunter’ - a 10-hour David Fincher exec-produced high-end TV series soon to be available on Netflix.

Instead of a single team working on a two hour film, TV series have multiple director-cinematographer-editor teams working in parallel. In this case the pilot was directed by David Fincher. The way TV works in the US is that the pilot director gets an executive producer credit for the whole series because the decisions they make define the feel of the show from then on. Fincher brought along some of the team who worked on Gone Girl. While they worked on the pilot post production, other teams shot and edited later episodes in the series.

The fact that the production company and the studio were happy for the workflow to be based around Premiere Pro CC is a major step up for Adobe in Hollywood.

The high-end market Adobe is going for is too small to support profitable software development. Even if they sold a subscription to all professional editors in the USA, that would not be enough to pay for the costs of maintaining Adobe Premiere. Its use in high-end TV and features is a marketing message that Adobe must think contributes to people choosing to subscribe to the Adobe Creative Cloud - even if renters will never edit a Hollywood film or TV show.

What about Final Cut Pro X?

Directors Glenn Ficarra and John Requa are happy to use Final Cut Pro X in studio features. They haven't been able to use Final Cut in the TV shows they have directed. Glenn and John directed the pilot and three other episodes of ‘This is Us’ - a big success for NBC in the US last year. Although directors have much less power in TV than in features, pilot directors do have some power to set standards for the rest of the series. I don’t know why Final Cut wasn’t used on ‘This is Us.’ It could be a lack of enough collaboration features or a lack of enough Final Cut-experienced crew. It may take a while before both of these reasons no longer apply.

Although the 10.3 update for Final Cut Pro X was nearly all about features requested by people who work on high-end production, it seems the majority of the ProApps team's time is spent on features for the majority of Final Cut users.

Is the use of Final Cut Pro X in a smattering of Hollywood productions enough to support Apple’s marketing message? Will Apple invest more in Final Cut’s use in Hollywood? 

When it comes to the opinions of Hollywood insiders, it seems that Premiere is currently the only viable alternative to Avid Media Composer. Although the ProApps team is very likely to want Final Cut to be the choice people make at all levels of production, will they be able to get the investment they need from the rest of Apple to make that happen? We’ll see in the coming months and years.

IMF: Output any version you need from a single master

Tuesday, 12 September 2017

Interoperable Master Format is a system that allows you to specify all the versions of a feature film using a set of rules. Instead of rendering out every combination of language, aspect ratio, certification and distributor standard, you define how their rules apply to your movie. When a specific version is called for, it can then be rendered out automatically based on the media and the specific timeline included in an IMP (Interoperable Master Package).

Even if you don't work in high-end features, it is worth learning about this because it is coming to TV and online delivery in 2018. For now, IMF is for high-end tools, services and suppliers, but the nature of video production means that it will eventually become the standard most NLEs support - maybe even directly, with few external tools.

This video – presented by Bruce Devlin (@mrMXF on Twitter) – is an introduction to IMF, and the first in a series, should you want to learn more:

(My YouTube playlist of videos on Interoperable Master Format in order)

The nature of Final Cut Pro X makes it potentially the best NLE to work with IMF. Apple could add the timeline features required to generate IMPs. Compressor could generate specific versions of a film or TV show based on an IMP.

If Apple considers this the kind of feature best left to third parties, I hope they add the required hooks to Final Cut so that Frame.io (for example) could add IMF management to its Final Cut Pro X service.

Apple Goes to Hollywood: For more than just TV production

Friday, 01 September 2017

Apple have had offices in Los Angeles for many years. The number of Apple employees in the area rose significantly when the company bought Beats Music in 2014. Now it looks like there’ll be more to the LA operation than music.

The Financial Times reports [paywall link] that Apple are looking for more space in Culver City, Los Angeles County. The FT says that Apple is thinking of leasing space at The Culver Studios. Culver City isn’t exactly close to Hollywood, but from a production perspective, it counts as Hollywood: both Gone With the Wind and Citizen Kane were filmed at The Culver Studios.

The FT headline ‘Apple eyes iconic studio as base for Hollywood production push’ implies that they want space to make high-end TV and feature films - including bidding to produce a TV show for Netflix. Interesting that they suggest that Apple plan to make TV for others - instead of commissioning others to make TV for them. That would mean Apple investing in the hardware and infrastructure to make high-end TV directly.

Office space for…

However, the body of the article says that Apple is primarily looking for office space. It seems that the large amount of office space that Beats lease won’t be enough. It could be that Apple Music administration needs more people (The Culver Studios is only a 15-minute walk from Beats). On the other hand, what else could Apple be doing in LA?

They certainly need to hire enough new staff to be involved in their $1bn push into TV. They could be based in Los Angeles County.

Part of the Mac team seems to be based in Culver City. A recent vacancy listed on the Apple jobs site was for an expert to set up a post production workflow lab in Culver City. That is likely to be primarily about making sure the next iteration of the Mac Pro fits the future needs of Hollywood TV and film production:

Help shape the future of the Mac in the creative market. The Macintosh team is seeking talented technical leadership in a System Architecture team. This is an individual contributor role. The ideal candidate has core competencies in one or more professional artist content creation areas, with specific expertise in video, photo, audio, and 3D animation.

The pro workflow expert will be responsible for thoroughly comprehending all phases of professional content creation, working closely with 3rd party apps developers and some key customers, thoroughly documenting, and working with architects to instrument systems for performance analysis.

It seems that some of Apple’s ProApps team is based in Culver City too. Recent job openings for a Video Applications Graphics Engineering Intern and a Senior macOS/iOS Software Engineer for Video Applications are based there.

Also, if I was going to develop a VR and AR content business, it might be a good idea to create custom-designed studio resources for VR and AR content production. Los Angeles would be a good location to experiment with the future of VR and AR.

Adobe discontinues Speedgrade, which will live on as an Adobe Premiere panel - More integration to come?

Wednesday, 23 August 2017

Will all Adobe video applications end up as panels in Adobe Premiere? Adobe doesn't see the need to make an application dedicated to the colour grading process any more. Adobe have announced that they are discontinuing their Speedgrade colour grading application:

Producing a separate application for color grading was born out of necessity some 35 years ago – it was never a desirable split from a creative perspective.

I don’t think audio post people would say the same about picture editing.

…the paradigm of consolidating toolsets for a specific task into a single panel has led to further innovation. The Essential Sound Panel and the new Essential Graphics panel are designed with the same goal in mind: streamlining professional and powerful workflows made for editors.

Maybe this is a sign that Blackmagic’s Resolve 12 and 14 updates are putting pressure on Adobe. Which other Adobe video applications do you think will end up as panels in Premiere?

Who will define the immersive video experience file format? MPEG, Apple, Adobe or Facebook?

Tuesday, 22 August 2017

We have file formats and codecs to store 2D video as seen from a single point. Soon we will need ways of recording light information in a 3D space, so that immersed viewers will be able to move around inside it and choose what to look at, and where to look from.

In 1994 Apple tried to kick off VR on the Mac using an extension to their QuickTime video framework: QuickTime VR. As with the Newton personal digital assistant, it was the right idea at the wrong time.

Today different companies are hoping to earn money from creating VR and AR experience standards, markets and distribution systems. The Moving Picture Experts Group think it is time to encourage the development of a standard - so as to prevent multiple VR and AR ‘walled gardens’ (where individual companies hope to capture users in limited ecosystems).

This summer Apple announced that their 4K+ codec of choice is HEVC. That can encode video at very high resolutions. Apple also plan to incorporate depth information capture, encoding, editing and playback into iOS and macOS. 

Structured light encoding

Knowing the depth of the environment corresponding to every pixel in a flat 2D video frame is very useful. With VR video, that flat 2D video can represent all the pixels from the point of view of a single point. Soon we will want more. Structured light recording is more advanced. It captures the light in a given 3D volume. Currently light field sensors do this by capturing the light information arriving at multiple points on a 2D plane (instead of the single point we use today in camera lenses). The larger the 2D plane, the larger the distance viewers will be able to move their heads when immersed in the experience to see from different points of view. 
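To see why per-pixel depth is so useful: given a depth value for every pixel and an (assumed) pinhole camera model, each pixel of a flat frame can be placed back into 3D space, which is the starting point for letting a viewer shift their point of view. The sketch below is illustrative only; the camera parameters are invented.

```python
# Minimal sketch: back-project one pixel of a 2D frame into 3D camera space
# using its per-pixel depth value (assumed pinhole camera model).
import numpy as np

def back_project(u, v, depth, fx, fy, cx, cy):
    """Return the 3D point (in camera space) seen at pixel (u, v).

    fx, fy are focal lengths in pixels; (cx, cy) is the principal point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# A tiny 2x2 'RGB-D' frame: colour plus one depth value (metres) per pixel.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
depth_map = np.array([[1.0, 2.0],
                      [1.5, 3.0]])

# Place the pixel at column u=1, row v=0 into 3D.
point = back_project(u=1, v=0, depth=depth_map[0, 1], fx=500, fy=500, cx=1, cy=1)
```

Run over every pixel, this turns a flat frame plus its depth map into a 3D point cloud that can be re-rendered from nearby viewpoints.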

However the light information is captured, we will need file formats and codecs to encode, store and decode structured light information.

Streaming Media has written about MPEG-I, a standard that is being developed:

The proposed ISO/IEC 23090 (or MPEG-I) standard targets future immersive applications. It's a five-stage plan which includes an application format for omnidirectional media (OMAF) "to address the urgent need of the industry for a standard in this area"; and a common media application format (CMAF), the goal of which is to define a single format for the transport and storage of segmented media including audio/video formats, subtitles, and encryption. This is derived from the ISO Base Media File Format (ISOBMFF).

While a draft OMAF is expected by end of 2017 and will build on HEVC and DASH, the aim by 2022 is to build a successor codec to HEVC, one capable of lossy compression of volumetric data.

"Light Field scene representation is the ultimate target," according to Gilles Teniou, Senior Standardisation Manager - Content & TV services at mobile operator Orange. "If data from a Light Field is known, then views from all possible positions can be reconstructed, even with the same depth of focus by combining individual light rays. Multiview, freeview point, 360° are subsampled versions of the Light Field representation. Due to the amount of data, a technological breakthrough – a new codec - is expected."

This breakthrough assumes that capture devices will have advanced by 2022 – the date by which MPEG aims to enable lateral and frontal translations with its new codec. MPEG has called for video test material, including plenoptic cameras and camera arrays, in order to build a database for the work.

Already too late?

I wonder if taking until 2022 for MPEG to finish work on MPEG-I will be too late. In 2016 there was debate about the best way of encoding ambisonic audio for VR video. The debate wasn't settled by MPEG or SMPTE. Google’s YouTube and Facebook agreed on the format they would support. That became the de facto standard.

Apple have advertised a job vacancy for a CoreMedia VR File Format Engineer with ‘Direct experience with implementing and/or designing media file formats.’

Facebook have already talked about 6 degrees of freedom video at their 2017 developer conference. They showed alpha versions of VR video plugins from Mettle running in Premiere Pro CC for 6DoF experiences. Adobe have since acquired Mettle.

Facebook won’t want to wait until 2022 to serve immersive experiences where users will be able to move left, right, up, down, back and forth while video plays back.

Now the race is on to define the first immersive video file format.