Latest Posts

Blog: Who will define the immersive video experience file format? MPEG, Apple, Adobe or Facebook? (Tuesday, August 22 2017)

We have file formats and codecs to store 2D video as seen from a single point. Soon we will need ways of recording light information in a 3D space, so that immersed viewers can move around inside it and choose what to look at, and where to look from.

In 1994 Apple tried to kick off VR on the Mac with an extension to their QuickTime video framework: QuickTime VR. As with the Newton personal digital assistant, it was the right idea at the wrong time.

Today different companies are hoping to earn money from creating VR and AR experience standards, markets and distribution systems. The Moving Picture Experts Group (MPEG) think it is time to encourage the development of a standard – so as to prevent multiple VR and AR ‘walled gardens’ (where individual companies hope to capture users in limited ecosystems).

This summer Apple announced that their codec of choice for 4K and beyond is HEVC, which can encode video at very high resolutions. Apple also plan to incorporate depth information capture, encoding, editing and playback into iOS and macOS.

Structured light encoding

Knowing the depth of the environment corresponding to every pixel in a flat 2D video frame is very useful. With VR video, that flat frame can represent all the light arriving at a single point. Soon we will want more. Structured light recording is more advanced: it captures the light in a given 3D volume. Currently light field sensors do this by capturing the light information arriving at multiple points on a 2D plane (instead of the single point we use today in camera lenses). The larger that 2D plane, the further viewers will be able to move their heads when immersed in the experience to see from different points of view.
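
As an illustration, here is a minimal Swift sketch of the classic two-plane (‘light slab’) light field parameterisation that this kind of capture implies. Every name and number in it is illustrative, not taken from any shipping or proposed format.

```swift
import Foundation

// A ray in the two-plane ("light slab") parameterisation:
// (u, v) is where the ray crosses the capture plane, (s, t) where it
// crosses a parallel focal plane. A light field stores radiance for
// every sampled (u, v, s, t) combination.
struct LightFieldSample {
    let u: Double, v: Double   // position on the capture plane (metres)
    let s: Double, t: Double   // position on the focal plane (metres)
    let radiance: Double       // light recorded along this ray
}

// Hypothetical in-memory light field: a regular grid of virtual
// "sub-cameras" spread across the capture plane.
struct LightField {
    let planeWidth: Double          // extent of the capture plane (metres)
    let camerasPerSide: Int         // grid resolution across the plane
    let samples: [LightFieldSample]

    // The wider the capture plane, the further an immersed viewer can
    // move their head and still find recorded rays to look through.
    var maxHeadTranslation: Double { planeWidth / 2 }
}

let field = LightField(planeWidth: 0.3, camerasPerSide: 17, samples: [])
print("Viewer can move ±\(field.maxHeadTranslation) m from the central viewpoint")
```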

However the light information is captured, we will need file formats and codecs to encode, store and decode structured light information.

Streaming Media has written about MPEG-I, a standard that is being developed:

The proposed ISO/IEC 23090 (or MPEG-I) standard targets future immersive applications. It’s a five-stage plan which includes an application format for omnidirectional media (OMAF) “to address the urgent need of the industry for a standard in this area”; and a common media application format (CMAF), the goal of which is to define a single format for the transport and storage of segmented media including audio/video formats, subtitles, and encryption. This is derived from the ISO Base Media File Format (ISOBMFF).

While a draft OMAF is expected by the end of 2017 and will build on HEVC and DASH, the aim by 2022 is to build a successor codec to HEVC, one capable of lossy compression of volumetric data.

“Light Field scene representation is the ultimate target,” according to Gilles Teniou, Senior Standardisation Manager – Content & TV services at mobile operator Orange. “If data from a Light Field is known, then views from all possible positions can be reconstructed, even with the same depth of focus by combining individual light rays. Multiview, freeview point, 360° are subsampled versions of the Light Field representation. Due to the amount of data, a technological breakthrough – a new codec – is expected.”

This breakthrough assumes that capture devices will have advanced by 2022 – the date by which MPEG aims to enable lateral and frontal translations with its new codec. MPEG has called for video test material, including footage from plenoptic cameras and camera arrays, in order to build a database for the work.

Already too late?

I wonder whether taking until 2022 for MPEG to finish work on MPEG-I will be too late. In 2016 there was debate about the best way of encoding ambisonic audio for VR video. The debate wasn’t settled by MPEG or SMPTE: Google’s YouTube and Facebook agreed on the format they would support, and that became the de facto standard.

Apple have advertised a job vacancy for a CoreMedia VR File Format Engineer requiring ‘Direct experience with implementing and/or designing media file formats.’

Facebook have already talked about 6 degrees of freedom video at their 2017 developer conference. They showed alpha versions of VR video plugins from Mettle running in Premiere Pro CC for 6DoF experiences. Adobe have since acquired Mettle.

Facebook won’t want to wait until 2022 to serve immersive experiences in which users can move left, right, up, down, back and forth while video plays back.

Blog: Adobe Premiere and Final Cut Pro creator Randy Ubillos honoured with 2017 SMPTE award (Tuesday, August 22 2017)

SMPTE (The Society of Motion Picture & Television Engineers) have announced that former Adobe, Macromedia and Apple employee Randy Ubillos will be receiving the Workflow Systems Medal at the SMPTE 2017 Awards later this year.

The Workflow Systems Medal, sponsored by Leon Silverman, recognizes outstanding contributions related to the development and integration of workflows, such as integrated processes, end-to-end systems or industry ecosystem innovations that enhance creativity, collaboration, and efficiency, or novel approaches to the production, postproduction, or distribution process.

The award will be presented to Randy Ubillos in recognition of his role in establishing the foundation of accessible and affordable digital nonlinear editing software that fundamentally shaped the industry landscape and changed the way visual stories are created and told. Ubillos’ revolutionary work with creating and designing lower-cost editing software such as Final Cut Pro® and Adobe® Premiere® shifted the film and television industry toward a more inclusive future, giving storytellers of diverse backgrounds and experience levels the ability to tell their stories and rise as filmmakers, technicians, engineers, and key players in every facet of media and entertainment.

His work significantly enhanced and transformed the world of postproduction, popularizing and commoditizing file-based workflows while removing significant barriers to the creative editing process for millions of users worldwide.

I interviewed Randy at the first FCPX Creative Summit in 2015. Topics covered included where Adobe Premiere 1.0 came from, the story of Final Cut Pro at Macromedia and working with Steve Jobs:

Ubillos: iMovie’s codename was RoughCut, it was conceived originally as a front end to Final Cut – for creating a rough edit for Final Cut. I worked with a graphic designer to make it look good. When I did a demo of it to Steve [Jobs] in about three minutes he said “That’s the next iMovie.” So I asked when it was supposed to ship, and he said “Eight months.”

[…]

The very last conversation I had with Steve Jobs was right after the launch of Final Cut Pro X. I was getting ready to get on a plane to go to London to record the second set of movie trailers – we’d hired the London Symphony Orchestra [to perform the music that was going to be bundled with the next version of iMovie] – and Steve caught me at home: “What the heck is going on with this Final Cut X thing?” I said “We knew this was coming, we knew that people were going to freak out when we changed everything out from under them. We could have done this better. We should have. Final Cut 7 should be back on the market. We should have an FAQ that lists what this is all about.” He said “Yeah, let’s get out and fund this thing, let’s make sure we get on top of this thing, move quickly with releases…” and he finished by asking: “Do you believe in this?” I said “Yes.” He said “then I do too.”

Congratulations to Randy. Although he is probably making the most of his retirement, I hope his contributions to the history of video literacy are not over.

Blog: 2017 iPhone 3D face scanner for ID unlock fast enough for video depth capture (Monday, August 21 2017)

Reports coming from Asia suggest that one of the iPhones Apple plans to announce later this year will have a face scanner that will let users unlock their phones without using a Touch ID fingerprint scanner:

The Korea Herald yesterday:

The new facial recognition scanner with 3-D sensors can deeply sense a user’s face in the millionths of a second. Also, 3-D sensors are said to be adopted for the front and rear of the device to realize AR applications, which integrate 3-D virtual images with user’s environment in real time.

This is interesting news for those who want to work with footage that includes depth information. The kind of camera required to quickly distinguish between different faces probably needs to sample the depth in a very short space of time to counteract phone and face movement.

Apple’s ARKit and Metal 2 are built around refresh rates of up to 90fps, so depth sampling that fast – from both the front and rear sensors – is more than enough for VR and AR experiences.

Another tidbit is that the new Apple phone is said to be able to recognise a person’s face even when the phone is lying on a table. That says to me that the volume the phone’s depth sensor will be able to capture will be much wider than what the camera’s light sensor sees. The ‘angle of view’ required for a sensor in a phone lying on a table to read a nearby face would have to be at least 120º.
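
To see roughly where a figure like 120º comes from, here is a back-of-envelope sketch in Swift. It assumes the sensor points straight up from the table and the face is offset to one side; the distances are illustrative guesses, not Apple specifications.

```swift
import Foundation

// Rough field of view a table-lying phone's depth sensor would need in
// order to see a face offset to one side. The sensor points straight up,
// so the required (full) field of view is twice the angle between the
// vertical and the ray to the face.
func requiredFieldOfView(horizontalOffsetMetres: Double, faceHeightMetres: Double) -> Double {
    2 * atan(horizontalOffsetMetres / faceHeightMetres) * 180 / .pi
}

// Face roughly 35 cm above the table, leaning about 55 cm to one side.
print(requiredFieldOfView(horizontalOffsetMetres: 0.55, faceHeightMetres: 0.35))
// ≈ 115º – in the same region as the 120º guess above
```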

Now we need applications that can help people use this depth information in creative ways. At the very least, it will mean there will be no need to use green and blue screens to separate people from backgrounds. All objects further than a specific distance from the camera can be defined as transparent for production purposes.
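
As a toy illustration of that idea, here is a minimal Swift sketch that turns a per-pixel depth map into an alpha matte by treating everything beyond a chosen distance as transparent. It assumes depth arrives as a plain array of metres per pixel; a real pipeline would read the device’s depth data and composite on the GPU.

```swift
import Foundation

// Build an alpha matte from a per-pixel depth map: anything further than
// `thresholdMetres` from the camera is treated as transparent, so no green
// or blue screen is needed to separate the subject from the background.
func depthMatte(depth: [Float], thresholdMetres: Float) -> [Float] {
    depth.map { $0 <= thresholdMetres ? 1.0 : 0.0 }   // 1 = opaque, 0 = transparent
}

// Toy 2×3 depth map in metres: a person about 1 m away against a wall 4 m away.
let depth: [Float] = [0.9, 1.1, 4.0,
                      1.0, 3.9, 4.1]
print(depthMatte(depth: depth, thresholdMetres: 2.0))
// [1.0, 1.0, 0.0, 1.0, 0.0, 0.0]
```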

Blog: Apple spending $1bn on TV production – how much on Final Cut Pro X post? (Sunday, August 20 2017)

Last week the Wall Street Journal reported that Apple will spend $1 billion on making its own TV content over the next year:

Combined with the company’s marketing clout and global reach, the step immediately makes Apple a considerable competitor in a crowded market, where both new and traditional media players are vying for original shows. Apple’s budget is about half of what Time Warner Inc.’s HBO spent on content last year, and on par with estimates of what Amazon.com Inc. spent in 2013, one year after it announced its move into original programming.

Apple could acquire and produce as many as 10 television shows, according to the people familiar with the plan, helping fulfill Apple Senior Vice President Eddy Cue’s vision of offering high-quality video, similar to shows such as HBO’s “Game of Thrones,” on its streaming-music service or possibly a new, video-focused service.

Given that post production costs on feature films and high-end TV usually amount to 1-3% of total budgets, that means around $10-30 million will be spent on picture editing, sound editing, compositing and mastering.

How much of that $10-30 million will be spent on Final Cut Pro X-based workflows?

How prescriptive will Apple be?

Judging from recent successes in TV production, the trick that HBO, Netflix and Amazon have mastered is to be less hands-on with the creative people involved. Their policy is to invest in people with proven track records and not to manage them too closely.

That means Apple are unlikely to be insisting that each TV show is as precisely designed and produced as an iPhone. They will not insist on Apple products and services being at the core of story ideas, or even being placed clearly on screen. This kind of thing would be instantly counter-productive.

Although what goes into the writing and on screen is unlikely to be influenced by Apple, that restriction might not apply to the aspects of production that viewers won’t be able to judge. That means Apple could require that preproduction, production and post-production use a specific amount of Apple products, services and software.

Even Apple isn’t forced to use Apple

Today Apple doesn’t force all suppliers and staff to use only Apple products and services. Marketing vacancies at apple.jobs.com include requirements that applicants know how to use Adobe products for which there are Apple equivalents. Some Apple TV commercials are not edited in Final Cut Pro X, some motion graphics are not created in Motion 5, and audio post production is not limited to Logic Pro X.

Apple sensibly want to be able to work with the best people and suppliers – and not be limited to those who only use Apple products. On the other hand, Apple’s hardware, software and services teams proceed on the basis that what they make are the best tools for making high-end TV and feature films.

Train the talented in the tools you want them to use

There are two things Apple can do here: firstly, they need to improve their products to make them more suited for high-end production. Secondly, they could invest in the education aspects of the post ecosystem. Production companies who are required to use Final Cut Pro X to edit the next House of Cards or Stranger Things are likely to say: ‘There aren’t enough editors, assistant editors, apprentices, post production supervisors and VFX producers who know Final Cut Pro X and its ecosystem.’

Oversupply is a requirement

The catch is that although there are some people in Los Angeles and New York who could be employed in these roles, they don’t have enough experience in high-end TV. The sad thing about TV and film production is that you need a whole ecosystem of people and suppliers who know a specific post system, so you have the security of knowing you can fire individuals and drop companies when you want. Production company management techniques expect that kind of control over costs. That’s why working in the VFX industry is so tough – those paying the bills know that there is enough oversupply to keep them in a very strong negotiating position.

Specific demands from commissioners such as Amazon are already changing post workflows. Amazon specify that the shows they fund must be made using a 4K workflow. In practice almost no-one will benefit from more pixels being streamed to their TVs at home, but Amazon consider 4K an important marketing distinction – and production companies will change their workflow in order to be in line for some Amazon money.

Apple are unlikely to require a Final Cut Pro X workflow for the TV shows and films on their Apple TV streaming service. They could encourage its use by allowing production companies double the usual post budget if they use Final Cut, but dangling that carrot won’t make adoption possible any time soon. $1bn of TV production makes around 10 big (‘Fargo’/‘The Walking Dead’-sized) shows per year. If the post production of five or six of those shows is done in Los Angeles, then given the lack of a Final Cut Pro X ecosystem of people and suppliers, I would guess only two of them could be made simultaneously using a Final Cut-based workflow.

A few million on a big plan

As Apple is about to spend millions of dollars in Los Angeles with creative people, perhaps it is time for Apple to prime the Final Cut Pro X L.A. post production ecosystem: Train the trainers, plan the courses, do the marketing to post people, train experienced post people and generate case studies. Create the oversupply that makes producers feel like they have enough control.

There is time. TV show development takes months. While new Apple commissioning people make their plans and start working with talented people and production companies, other Apple people can set about preparing a much bigger ecosystem to support production and post production using Apple hardware, software and services.

‘Negative cost’ Final Cut Pro X training

What could Apple do with $1m in Los Angeles? Pay experienced post production people to attend Final Cut Pro X training: editors, assistant editors, VFX supervisors, VFX staffers, producers, writers, directors, reporters. Pay them their normal daily rates to be trained in what people in their roles need to know about Apple’s Pro Apps.

Who needs to be convinced?

The argument isn’t about ‘tracks vs. the magnetic timeline’ – it’s about money. All the talk of convincing post people to use Final Cut Pro X is a nice, kind way of doing things. The people who need convincing are those with the money. Post didn’t move to Avid 20 years ago because it was better than film. The money people were convinced by the economics of computer-based editing, and ordered the post people to make the change.

Sorting out the supply of people and third-party services is the start of this. The next stage is gathering the evidence of how much money will be saved. Once that happens, improvements in the magnetic timeline or the Final Cut Pro X version of bin locking will be irrelevant. Once it can be shown that a switch to Final Cut Pro X makes post half the cost and twice as flexible compared with any other method, that’s when the switch will happen.

Time for Apple to start planning and make a big change in high-end post production.

Blog: The end of public advertising (Friday, August 11 2017)

How much would you pay for all advertising to be removed from your view as you go about your daily life? All it needs is the ability to interpolate what the advertising covers up, and to replace every advert with that interpolation.
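
As a toy sketch of that interpolation idea, here is a naive Swift hole-filling routine. It assumes we already know which pixels the advert occupies (the mask) and repeatedly fills them from their known neighbours; real systems – including the computer-vision work described below – use far more sophisticated, temporally consistent video inpainting.

```swift
import Foundation

// Naive "interpolate what the advert covers up": repeatedly fill each
// masked pixel with the average of its already-known neighbours, working
// inwards from the edge of the hole. `image` is a grayscale frame,
// `mask` is true where the advert sits.
func fillMasked(image: [[Float]], mask: [[Bool]], passes: Int = 50) -> [[Float]] {
    var img = image
    var known = mask.map { $0.map { !$0 } }          // true where the pixel is trusted
    let height = img.count, width = img[0].count
    for _ in 0..<passes {
        for y in 0..<height {
            for x in 0..<width where !known[y][x] {
                var sum: Float = 0
                var count = 0
                for (dy, dx) in [(-1, 0), (1, 0), (0, -1), (0, 1)] {
                    let ny = y + dy, nx = x + dx
                    if ny >= 0, ny < height, nx >= 0, nx < width, known[ny][nx] {
                        sum += img[ny][nx]
                        count += 1
                    }
                }
                if count > 0 {
                    img[y][x] = sum / Float(count)
                    known[y][x] = true
                }
            }
        }
    }
    return img
}

// A 1×4 scanline where the middle two pixels are covered by an advert.
let filled = fillMasked(image: [[0.2, 0.0, 0.0, 0.8]],
                        mask: [[false, true, true, false]])
print(filled)   // [[0.2, 0.2, 0.5, 0.8]]
```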

This TechCrunch article buries the lede:

Facebook buys computer vision startup focused on adding objects to video

Adding objects to video isn’t as hard as removing them. Facebook has bought a company whose technology can do both:

Facebook has acquired a German company called Fayteq that builds software add-ons for video editing that can remove and add whole objects from captured video using computer vision

My emphasis.

Facebook also have a company that makes glasses you can wear to run the software.

I would guess that Apple and others will also develop such software to run on their AR devices. Once a majority of people can no longer see shared public advertising, how long until it stops being put up?

Blog: Apple Patent: Personalised Programming for Streaming Media Breaks (Tuesday, August 8 2017)

Apple have been awarded a patent for ‘content pods’:

A content delivery system determines what personal content is available on the user device through connecting to available information sources. The delivery system then assembles the content pod from these elements in addition to invitational content from content providers. In some embodiments, a bumper message is included in the content pod to provide a context for the elements that are being assembled in combination with each other. Once the content pod is generated, it is sent to the user device to be played during content breaks within the online streaming playback.

The patent doesn’t specify whether the pod is made for breaks in video streaming (Apple TV) or audio (Apple Music). Either way, it describes automatically generated audio and video content to pepper the ‘stream’ (or Facebook/Twitter/Instagram feed). Apple already creates animated video ‘Memories’ based on photos in iOS and macOS.
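
As a purely hypothetical sketch of what the patent language describes – personal elements mixed with ‘invitational’ (sponsored) content, optionally fronted by a bumper, assembled to fill a content break – here is a small Swift model. All names and numbers are illustrative, not drawn from the patent.

```swift
import Foundation

// Hypothetical model of the patent's "content pod": personal items
// gathered from the user's own sources, mixed with invitational
// (sponsored) content, optionally fronted by a bumper that gives the
// bundle context, then sent to the device to play during a content break.
struct MediaItem {
    let title: String
    let duration: TimeInterval
}

struct ContentPod {
    let bumper: MediaItem?
    let items: [MediaItem]
    var duration: TimeInterval {
        items.reduce(bumper?.duration ?? 0) { $0 + $1.duration }
    }
}

func assemblePod(personal: [MediaItem], invitational: [MediaItem],
                 breakLength: TimeInterval, bumper: MediaItem?) -> ContentPod {
    var chosen: [MediaItem] = []
    var remaining = breakLength - (bumper?.duration ?? 0)
    // Interleave personal and sponsored elements until the break is filled.
    for item in zip(personal, invitational).flatMap({ [$0.0, $0.1] }) where item.duration <= remaining {
        chosen.append(item)
        remaining -= item.duration
    }
    return ContentPod(bumper: bumper, items: chosen)
}

let pod = assemblePod(
    personal: [MediaItem(title: "Holiday memory", duration: 20)],
    invitational: [MediaItem(title: "Sponsor spot", duration: 15)],
    breakLength: 60,
    bumper: MediaItem(title: "Made for you", duration: 5))
print(pod.items.map(\.title), pod.duration)   // ["Holiday memory", "Sponsor spot"] 40.0
```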

‘Pod’?

It is interesting that Apple refers to these bundles of content as ‘pods.’ It seems that when they applied for this patent, they saw the value of the podcast brand. Given how hard it has been to widen understanding of podcasts beyond their niche, perhaps Apple were considering shifting the meaning of ‘pod’ towards ‘integrated, customised programming bundle’.

On the advent of Apple’s ‘iTunes Radio’ in 2013, I had some thoughts on what automatically generated personalised media feeds might be like:

Adding the visual to a media feed would make a playlist item an act of a TV show or feature film, a short film, a YouTube video or a family video. It would include content from broadcast TV (news and sport and drama premieres), purchased TV, feature films and content from streamed subscription services. If you wanted to jump into a TV series or Soap after the first episodes, recap content would be playlisted in advance of the show you want to start with.

Almost 10 years ago Apple got a patent for inserting advertising into a feed. Just because Apple has a patent doesn’t mean they will produce a product or service that relies on it.
