Latest Posts

Blog – Adobe Premiere used on big new 10-part Netflix TV series
Wednesday, September 13 2017

It was a tough ask for Adobe Premiere to tackle the needs of David Fincher’s ‘Gone Girl’ feature film in 2014. In recent months, it has been used on a bigger project: ‘Mindhunter’ – a 10-hour, David Fincher exec-produced high-end TV series soon to be available on Netflix.

Instead of a single team working on a two-hour film, TV series have multiple director-cinematographer-editor teams working in parallel. In this case the pilot was directed by David Fincher. The way TV works in the US is that the pilot director gets an executive producer credit for the whole series, because the decisions they make define the feel of the show from then on. Fincher brought along some of the team who worked on ‘Gone Girl.’ While they worked on post production for the pilot, other teams shot and edited later episodes in the series.

The fact that the production company and the studio were happy for the workflow to be based around Premiere Pro CC is a major step up for Adobe in Hollywood.

The high-end market Adobe is going for is too small on its own to support profitable software development. Even if they sold a subscription to every professional editor in the USA, that would not be enough to cover the costs of maintaining Adobe Premiere. Its use in high-end TV and features is a marketing message that Adobe must think contributes to people choosing to subscribe to the Adobe Creative Cloud – even though most renters will never edit a Hollywood film or TV show.

What about Final Cut Pro X?

Directors Glenn Ficarra and John Requa are happy to use Final Cut Pro X on studio features, but they haven’t been able to use Final Cut on the TV shows they have directed. Glenn and John directed the pilot and three other episodes of ‘This Is Us’ – a big success for NBC in the US last year. Although directors have much less power in TV than in features, pilot directors do have some power to set standards for the rest of the series. I don’t know why Final Cut wasn’t used on ‘This Is Us.’ It could be that it lacks enough collaboration features, or that there isn’t enough Final Cut-experienced crew. It may take a while before both of these reasons no longer apply.

Although the 10.3 update for Final Cut Pro X was nearly all about features requested by people who work on high-end production, it seems the majority of the ProApps team’s time is spent on features for the majority of Final Cut users.

Is the use of Final Cut Pro X in a smattering of Hollywood productions enough to support Apple’s marketing message? Will Apple invest more in Final Cut’s use in Hollywood?

When it comes to the opinions of Hollywood insiders, it seems that Premiere is currently the only viable alternative to Avid Media Composer. Although the ProApps team is very likely to want Final Cut to be the choice people make at all levels of production, will they be able to get the investment they need from the rest of Apple to make that happen? We’ll see in the coming months and years.

Blog – Apple Goes to Hollywood: For more than just TV production
Friday, September 1 2017

Apple have had offices in Los Angeles for many years. The number of Apple employees in the area rose significantly when the company bought Beats Music in 2014. Now it looks like there’ll be more to the LA operation than music.

The Financial Times reports [paywall link] that Apple are looking for more space in Culver City, Los Angeles County. The FT say that Apple are thinking of leasing space at The Culver Studios. Culver City isn’t exactly close to Hollywood, but from a production perspective it counts as Hollywood: both ‘Gone With the Wind’ and ‘Citizen Kane’ were filmed at The Culver Studios.

The FT headline ‘Apple eyes iconic studio as base for Hollywood production push’ implies that they want space to make high-end TV and feature films – including bidding to produce a TV show for Netflix. It is interesting that they suggest Apple plan to make TV for others – instead of commissioning others to make TV for them. That would mean Apple investing in the hardware and infrastructure to make high-end TV directly.

Office space for…

However, the body of the article says that Apple is primarily looking for office space. It seems that the large amount of office space that Beats lease won’t be enough. It could be that Apple Music administration needs more people (The Culver Studios is only a 15-minute walk from Beats). On the other hand, what else could Apple be doing in LA?

They certainly need to hire enough new staff for their $1bn push into TV. Those people could be based in Los Angeles County.

Part of the Mac team seems to be based in Culver City. A recent vacancy listed on the Apple jobs site was for an expert to set up a post production workflow lab in Culver City. That is likely to be primarily about making sure the next iteration of the Mac Pro fits the future needs of Hollywood TV and film production:

Help shape the future of the Mac in the creative market. The Macintosh team is seeking talented technical leadership in a System Architecture team. This is an individual contributor role. The ideal candidate has core competencies in one or more professional artists content creation areas with specific expertise in video, and photo, audio, and 3D animation.

The pro workflow expert will be responsible for thoroughly comprehending all phases of professional content creation, working closely with 3rd party apps developers and some key customers, thoroughly documenting, and working with architects to instrument systems for performance analysis.

It seems that some of Apple’s ProApps team is based in Culver City too. Recent job openings for a Video Applications Graphics Engineering Intern and a Senior macOS/iOS Software Engineer for Video Applications are based there.

Also, if I were going to develop a VR and AR content business, it might be a good idea to create custom-designed studio resources for VR and AR content production. Los Angeles would be a good location to experiment with the future of VR and AR.

Blog – Adobe discontinues SpeedGrade, will live on as an Adobe Premiere panel: More integration to come?
Wednesday, August 23 2017

Will all Adobe video applications end up as panels in Adobe Premiere? Adobe no longer see the need for an application dedicated to the colour grading process, and have announced that they are discontinuing their SpeedGrade colour grading application:

Producing a separate application for color grading was born out of necessity some 35 years ago – it was never a desirable split from a creative perspective.

I don’t think audio post people would say the same about picture editing.

…the paradigm of consolidating toolsets for a specific task into a single panel has led to further innovation. The Essential Sound Panel and the new Essential Graphics panel are designed with the same goal in mind: streamlining professional and powerful workflows made for editors.

Maybe this is a sign that Blackmagic’s Resolve 12 and 14 updates are putting pressure on Adobe. Which other Adobe video applications do you think will end up as panels in Premiere?

Blog – Who will define the immersive video experience file format? MPEG, Apple, Adobe or Facebook?
Tuesday, August 22 2017

We have file formats and codecs to store 2D video as seen from a single point. Soon we will need ways of recording light information in a 3D space, so that immersed viewers will be able to move around inside it and choose what to look at, and where to look from.

In 1994 Apple tried to kick off VR on the Mac using an extension to their QuickTime video framework: QuickTime VR. As with the Newton personal digital assistant, it was the right idea, wrong time.

Today different companies are hoping to earn money from creating VR and AR experience standards, markets and distribution systems. The Moving Picture Experts Group (MPEG) think it is time to encourage the development of a standard – so as to prevent multiple VR and AR ‘walled gardens’ (where individual companies hope to capture users in limited ecosystems).

This summer Apple announced that their codec of choice for 4K and beyond is HEVC, which can encode video at very high resolutions. Apple also plan to incorporate depth information capture, encoding, editing and playback into iOS and macOS.

Structured light encoding

Knowing the depth of the environment corresponding to every pixel in a flat 2D video frame is very useful. With VR video, that flat 2D frame can represent all the pixels visible from a single point of view. Soon we will want more. Structured light recording is more advanced: it captures the light in a given 3D volume. Currently light field sensors do this by capturing the light information arriving at multiple points on a 2D plane (instead of the single point we use today in camera lenses). The larger the 2D plane, the further viewers will be able to move their heads when immersed in the experience to see from different points of view.

However the light information is captured, we will need file formats and codecs to encode, store and decode structured light information.
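To get a feel for why that is hard, here is a rough back-of-the-envelope sketch in Python comparing the uncompressed data rate of a single 4K viewpoint with a light field sampled on a grid of viewpoints. All of the numbers (resolution, 16 × 16 viewpoint grid, bit depth, frame rate) are assumptions chosen purely for illustration, not figures from any camera or standard.

```python
# Back-of-the-envelope comparison of uncompressed data rates: a conventional
# single-viewpoint video stream versus a light field sampled on a grid of
# viewpoints. Every parameter here is an assumption for illustration only.

def gigabytes_per_second(width, height, views, bits_per_pixel=24, fps=30):
    """Uncompressed data rate in GB/s for `views` simultaneous viewpoints."""
    bytes_per_frame = width * height * (bits_per_pixel / 8)
    return bytes_per_frame * views * fps / 1e9

# One 4K viewpoint, i.e. what HEVC is asked to compress today.
single_view = gigabytes_per_second(3840, 2160, views=1)

# A modest 16 x 16 grid of viewpoints across the capture plane.
light_field = gigabytes_per_second(3840, 2160, views=16 * 16)

print(f"Single 4K viewpoint: {single_view:.2f} GB/s uncompressed")
print(f"16 x 16 light field: {light_field:.2f} GB/s uncompressed")
print(f"Ratio: {light_field / single_view:.0f}x more raw data")
```

Even under these modest assumptions the raw data grows in proportion to the number of viewpoints, which is why the MPEG contributors quoted below expect a new codec rather than a simple extension of HEVC.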

Streaming Media has written about MPEG-I, a standard that is being developed:

The proposed ISO/IEC 23090 (or MPEG-I) standard targets future immersive applications. It’s a five-stage plan which includes an application format for omnidirectional media (OMAF) “to address the urgent need of the industry for a standard in this area”; and a common media application format (CMAF), the goal of which is to define a single format for the transport and storage of segmented media including audio/video formats, subtitles, and encryption. This is derived from the ISO Base Media File Format (ISOBMFF).

While a draft OMAF is expected by end of 2017 and will build on HEVC and DASH, the aim by 2022 is to build a successor codec to HEVC, one capable of lossy compression of volumetric data.

“Light Field scene representation is the ultimate target,” according to Gilles Teniou, Senior Standardisation Manager – Content & TV services at mobile operator Orange. “If data from a Light Field is known, then views from all possible positions can be reconstructed, even with the same depth of focus by combining individual light rays. Multiview, free viewpoint, 360° are subsampled versions of the Light Field representation. Due to the amount of data, a technological breakthrough – a new codec – is expected.”

This breakthrough assumes that capture devices will have advanced by 2022 – the date by which MPEG aims to enable lateral and frontal translations with its new codec. MPEG has called for video test material, including plenoptic cameras and camera arrays, in order to build a database for the work.

Already too late?

I wonder if taking until 2022 for MPEG to finish work on MPEG-I will be too late. In 2016 there was debate about the best way of encoding ambisonic audio for VR video. The debate wasn’t settled by MPEG or SMPTE: Google’s YouTube and Facebook agreed on the format they would both support, and that became the de facto standard.

Apple have advertised a job vacancy for a CoreMedia VR File Format Engineer with ‘Direct experience with implementing and/or designing media file formats.’

Facebook have already talked about 6 degrees of freedom (6DoF) video at their 2017 developer conference. They showed alpha versions of Mettle’s VR video plugins for 6DoF experiences running in Premiere Pro CC. Adobe have since acquired Mettle.

Facebook won’t want to wait until 2022 to serve immersive experiences where users will be able to move left, right, up, down, back and forth while video plays back.

Blog – Adobe Premiere and Final Cut Pro creator Randy Ubillos honoured with 2017 SMPTE award
Tuesday, August 22 2017

SMPTE (The Society of Motion Picture & Television Engineers) have announced that former Adobe, Macromedia and Apple employee Randy Ubillos will be receiving the Workflow Systems Medal at the SMPTE 2017 Awards later this year.

The Workflow Systems Medal, sponsored by Leon Silverman, recognizes outstanding contributions related to the development and integration of workflows, such as integrated processes, end-to-end systems or industry ecosystem innovations that enhance creativity, collaboration, and efficiency, or novel approaches to the production, postproduction, or distribution process.

The award will be presented to Randy Ubillos in recognition of his role in establishing the foundation of accessible and affordable digital nonlinear editing software that fundamentally shaped the industry landscape and changed the way visual stories are created and told. Ubillos’ revolutionary work with creating and designing lower-cost editing software such as Final Cut Pro® and Adobe® Premiere® shifted the film and television industry toward a more inclusive future, giving storytellers of diverse backgrounds and experience levels the ability to tell their stories and rise as filmmakers, technicians, engineers, and key players in every facet of media and entertainment.

His work significantly enhanced and transformed the world of postproduction, popularizing and commoditizing file-based workflows while removing significant barriers to the creative editing process for millions of users worldwide.

I interviewed Randy at the first FCPX Creative Summit in 2015. Topics covered included where Adobe Premiere 1.0 came from, the story of Final Cut Pro at Macromedia and working with Steve Jobs:

Ubillos: iMovie’s codename was RoughCut, it was conceived originally as a front end to Final Cut – for creating a rough edit for Final Cut. I worked with a graphic designer to make it look good. When I did a demo of it to Steve [Jobs] in about three minutes he said “That’s the next iMovie.” So I asked when it was supposed to ship, and he said “Eight months.”

[…]

The very last conversation I had with Steve Jobs was right after the launch of Final Cut Pro X. I was getting ready to get on a plane to go to London to record the second set of movie trailers – we’d hired the London Symphony Orchestra [to perform the music that was going to be bundled with the next version of iMovie] – and Steve caught me at home: “What the heck is going on with this Final Cut X thing?” I said “We knew this was coming, we knew that people were going to freak out when we changed everything out from under them. We could have done this better. We should have. Final Cut 7 should be back on the market. We should have an FAQ that lists what this is all about.” He said “Yeah, let’s get out and fund this thing, let’s make sure we get on top of this thing, move quickly with releases…” and he finished by asking: “Do you believe in this?” I said “Yes.” He said “then I do too.”

Congratulations to Randy. Although he is probably making the most of his retirement, I hope his contributions to the history of video literacy are not over.

Blog – 2017 iPhone 3D face scanner for ID unlock fast enough for video depth capture
Monday, August 21 2017

In reports coming from Asia it is rumoured that one of the iPhones Apple plans to announce later this year will have a face scanner that will allow users to unlock their phones without using the Touch ID fingerprint scanner:

The Korea Herald yesterday:

The new facial recognition scanner with 3-D sensors can deeply sense a user’s face in the millionths of a second. Also, 3-D sensors are said to be adopted for the front and rear of the device to realize AR applications, which integrate 3-D virtual images with user’s environment in real time.

This is interesting news for those who want to work with footage that includes depth information. The kind of camera required to quickly distinguish between different faces probably needs to sample the depth in a very short space of time to counteract phone and face movement.

Apple’s ARKit and Metal 2 are based around a 90fps refresh rate, so the sampling rate of both cameras is more than enough for VR and AR experiences.
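As a quick sanity check on that claim, here is a minimal Python sketch comparing the frame budget at 90fps with a depth scan taking ‘millionths of a second’, as the Korea Herald report puts it. The one-microsecond scan time below is an assumed order of magnitude, not a published specification.

```python
# How a microsecond-scale depth scan compares with the time available per
# frame at a 90fps refresh rate. The scan duration is an assumed order of
# magnitude based on the "millionths of a second" wording in the report.

frame_budget_ms = 1000 / 90   # roughly 11.1 ms available per frame at 90fps
depth_scan_ms = 0.001         # assume a scan on the order of one microsecond

print(f"Frame budget at 90fps: {frame_budget_ms:.1f} ms")
print(f"Assumed depth scan:    {depth_scan_ms} ms")
print(f"Scans that would fit in one frame: {frame_budget_ms / depth_scan_ms:,.0f}")
```

Even if the real scan were a thousand times slower than assumed here, it would still fit comfortably inside a single frame.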

Another tidbit is that the new Apple phone is said to be able to recognise a person’s face even when the phone is lying on a table. That says to me that the volume the phone’s depth sensor can capture will be much wider than the field of view of the light sensor in the phone’s camera. The ‘angle of view’ required for a sensor in a phone lying flat on a table to read a nearby face would have to be at least 120º.
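Here is a minimal geometry sketch in Python behind that figure. The face position (roughly 35cm above the table and 60cm to one side) is an assumed pose chosen for illustration; different assumed positions give different answers, but a face well off to one side quickly pushes the requirement towards very wide angles.

```python
import math

# Field of view a depth sensor would need to see a face while the phone lies
# flat on a table, sensor pointing straight up. The face position below is an
# assumed scenario for illustration, not a measured one.

def required_fov_degrees(face_height_m, horizontal_offset_m):
    """Full field of view needed to include a face `face_height_m` above the
    table and `horizontal_offset_m` to one side of the sensor."""
    off_axis_angle = math.degrees(math.atan2(horizontal_offset_m, face_height_m))
    return 2 * off_axis_angle  # a symmetric lens must cover twice the off-axis angle

# A user leaning over a desk: face about 35cm above the table, 60cm to one side.
print(f"Required field of view: {required_fov_degrees(0.35, 0.60):.0f} degrees")  # ~120
```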

Now we need applications that can help people use this depth information in creative ways. At the very least, it will mean there will be no need to use green and blue screens to separate people from backgrounds. All objects further than a specific distance from the camera can be defined as transparent for production purposes.
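As a rough illustration of that idea, here is a minimal depth-keying sketch in Python with NumPy. The cutoff distance, the softness ramp and the toy depth map are all assumptions; in a real pipeline the matte would be driven by per-pixel depth from the phone’s sensor.

```python
import numpy as np

# Minimal sketch of depth-based keying: everything beyond a chosen distance is
# treated as transparent, much as a chroma key treats a chosen colour. The
# depth map here is synthetic; a real one would come from the phone's sensor.

def depth_key(depth_m: np.ndarray, cutoff_m: float, softness_m: float = 0.1) -> np.ndarray:
    """Return an alpha matte: 1.0 for foreground nearer than `cutoff_m`,
    0.0 for background, with a `softness_m` ramp to avoid hard edges."""
    alpha = (cutoff_m + softness_m - depth_m) / softness_m
    return np.clip(alpha, 0.0, 1.0)

# Toy 4 x 4 depth map in metres: a subject at ~1.2m in front of a wall at ~3m.
depth = np.array([[3.0, 3.0, 3.0, 3.0],
                  [3.0, 1.2, 1.2, 3.0],
                  [3.0, 1.2, 1.2, 3.0],
                  [3.0, 3.0, 3.0, 3.0]])

print(depth_key(depth, cutoff_m=2.0))  # 1.0 on the subject, 0.0 on the far wall
```

Real footage would need edge refinement and temporal smoothing, but the principle is the same as pulling a chroma key – with distance standing in for colour.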
