Latest Posts

Happy 7th birthday Final Cut Pro X – better is not enough
Thursday, June 21 2018

Final Cut Pro X 10.0 was launched 7 years ago today. Why hasn’t it taken over the world of TV and film editing?

Final Cut is better than the rest. That isn’t enough.

Despite the efforts of Apple’s Video Applications team, the ‘top’ 0.25% of editors don’t trust Apple as a whole: the wider Apple that makes Mac hardware that seems more and more out of date, the Apple that still can’t share its plans in a useful way.

The biggest problem: They don’t trust the Apple that doesn’t nurture a deep post ecosystem.

To switch from ‘the way we’ve always done it’ to a new way requires that the new way is over 50% better. ‘A little better’ or ‘cheaper’ isn’t worth the pain of switching. In practice, ‘much cheaper’ is a bad sign in post: charging far less than the going rate reads as a sign that you can’t be as professional as the status quo.

Apple would like high-end users to invest in its hardware and software, yet Apple doesn’t seem to care about those who have invested in businesses that support the high end. Those people still don’t trust Apple because of the way Final Cut Pro 7 was discontinued 7 years ago. Improving features in the application itself is not enough to win back that trust.

Is Final Cut serious?

To take Final Cut seriously, those making TV shows and feature films require ‘Final Cut Pro X versions’ of every stage of the traditional Avid workflow. The workflow is a throwback – a digital version of the 20th-century ways – but the fact that there are businesses at each stage making money providing these services makes it feel safer than a modern alternative.

A few minutes’ search online will turn up perfectly good Final Cut ways of making TV shows and feature films. There are high-end solutions for every stage in the process. Sadly, the high end wants more than that. They also want competition between these solutions – competitors to the Lumaforge Jellyfish, for example. Another example: they want multiple competing dailies companies who fight for their Final Cut Pro X workflow business.

It is also about people. Post-production supervisors want a variety of teams and individuals to choose from. Once a team has been put together, heads of department want the reassurance of being able to replace any member of the team with others who are almost as good. Knowing Avid means that you can be relied upon – and, when necessary, easily replaced. Today there aren’t enough people with Final Cut Pro X experience in the cities associated with TV and film production to hire and fire.

Worth the effort?

I am not convinced that the tiny, vocal minority in feature films and TV are worth supporting. The previous generation of post suppliers sees a big benefit in marketing messages like ‘buy our product – it is used by award-winning editors.’ Apple seems to think that messages like this don’t convince those who are choosing their first paid editing application.

If it were your money, would you put millions of dollars into persuading a few thousand people to use your application in order to appeal to the millions of other people?

I expect the wider Apple appreciates the Video Applications team’s contributions to mainstream success through Clips for iOS and iMovie for iOS and macOS. The continuing profitability of Final Cut Pro X ensures its survival – alongside Apple’s commitment to not trusting third parties to make applications that make the most of high-end hardware.

What can the Video Applications team do? If they want to appeal to the vocal (but probably unimportant) high end, what is the investment case they need to put before the wider Apple?

The trick: Not for me

The last gap in the Final Cut Pro X feature set is collaboration: where multiple people can work on the same media and timelines at the same time. Post professionals don’t want a new take – they would be happy with a version of 90s-style Avid bin-locking. The Final Cut team don’t seem able to implement features this way. They are still building version 1 of a 21st-century editing application: they are not in the business of adding shiny new code and hardware drivers to 20th-century metaphors – the DaVinci Resolve and Adobe Premiere approach.
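For those who haven’t come across it, bin-locking is a modest mechanism: the first editor to open a shared bin gets write access, and everyone else gets read-only access until the lock is released. Here is a minimal sketch of the idea using lock files on shared storage – an illustration of the concept only, not how Avid or Final Cut implement anything:

```swift
import Darwin
import Foundation

// Sketch of 90s-style bin-locking over shared storage (illustrative only).
// The first editor to create the lock file 'owns' the bin; others should
// treat it as read-only until the lock file disappears.
func lockBin(at url: URL) -> Bool {
    let lockPath = url.path + ".lock"
    // O_EXCL makes creation atomic: exactly one process can win the lock.
    let fd = open(lockPath, O_CREAT | O_EXCL | O_WRONLY, 0o644)
    guard fd >= 0 else { return false } // another editor already holds the bin
    close(fd)
    return true
}

func unlockBin(at url: URL) {
    try? FileManager.default.removeItem(atPath: url.path + ".lock")
}
```

Crude as it is, this is roughly the level of collaboration the high end says it would be happy with.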

The answer is to implement collaboration into Final Cut Pro X that supports a much larger proportion of the market. If it can work for real people, it will also work for the vocal minority. Businesses supporting high-end post will then adapt these features to use for their market. Individuals in post will add Final Cut to the portfolio of applications they learn to use in their workflows.

It is likely that over 95% of videos made in the world are made by a single person. Apple should implement collaboration features that help those people do more. Instead of using XML and Finder-level integration with third-party tools, tools in the Final Cut interface should support individuals helping individuals.

Instead of saying ‘help me’ – I’m saying ‘help millions – including me at the high end.’

Apple should support a mid-market video consultancy ecosystem – following their FileMaker Pro model. The pitch would be: ‘If you are unhappy with what you do today, you could set yourself up as a freelance video consultant. You will be able to support yourself by proposing video production solutions to small businesses and organisations in your locality. You will be able to make money on Mac hardware, on software, developing workflows, providing support and evolving workflows over months and years. They will want to pay your monthly fee because of the services you will be able to supply.’

For consultants to be able to do this, they need to be able to make tools that extend Final Cut – tools that don’t require doctors, dentists, builders, teachers, lawyers, assistants or secretaries to understand terms like ‘XML’ and ‘transfer library.’ It is unlikely that Apple will want third parties to touch the Final Cut UI. They are very far from Adobe-style free third-party access to windows in Final Cut itself.

Most small- and medium-sized businesses use databases to organise the relationship with their customers and suppliers. There is a huge market for freelancers and small companies to design, implement, support and improve these custom databases. A significant proportion of small businesses would benefit from being able to tell stories using video.

I would imagine that the wider Apple would be more interested in helping millions of people change their future using video than investing in the special needs of the high end.

No feature requests please, we are Apple

Apple don’t like to be told what to do. They like stories. The story of the unsatisfactory present – followed by a story from a bright future. Apple want to then choose how to get to that future.

The present: there are millions of freelancers and small businesses all over the world who shy away from telling their stories using video. They associate video production with the high costs and lack of control that come with hiring professional video production companies of all sizes.

The bright future: tens of millions of small and medium-size businesses supported by a new class of freelancer – who can provide services that empower individuals and organisations to tell stories using video.

…or Final Cut Pro 8.

Read more
Apple WWDC 2018 hardware hope: An ‘NPU’ family
Friday, June 1 2018

In recent years Apple have used the keynote presentation of their annual Worldwide Developers Conference as a showcase for new hardware launches. This year, Intel’s delays in producing significant CPU updates make it less likely we will see new MacBooks, Macs or Mac Pros.

As this event is for those making software and hardware for iOS, tvOS, watchOS and macOS devices, I hope Apple launches a new hardware plan.

VPU?

Yesterday Arm announced three new chip families for smaller devices: a new CPU, a new GPU and a new VPU (video processor).

AnandTech reports that Arm’s new VPU includes hardware support for VP9 10-bit, H.264 10-bit and HEVC 10-bit – with the ability to play 8K 60fps video.

In concert with their display processor, the new video processor is currently able to handle HDR10 and HLG formatted HDR video. Meanwhile support for HDR10+ – which is HDR10 with support for dynamic metadata – is set to arrive in the future.

This shows what Apple could be doing with the A-series chips they use in iOS devices (and future VR/AR devices).

NPU?

The rate at which Apple have improved their A-series CPUs is the envy of phone makers. In September 2017, Wired reported that the new A11 processor in the iPhone 8 and X includes what Apple calls its ‘Neural Engine’:

The engine has circuits tuned to accelerate certain kinds of artificial-intelligence software, called artificial neural networks, that are good at processing images and speech.

Apple said the neural engine would power the algorithms that recognize your face to unlock the phone and transfer your facial expressions onto animated emoji. It also said the new silicon could enable unspecified “other features.”

What if Apple created a very small single-core ‘neural processing unit’ – the N1 NPU?

If N1s are included in all Apple product updates in coming months, it would make it simpler for developers to add modern features to their products and services – including developers within Apple.

Using the cell concept of multiple processing units, the number of N1s used in an Apple device would depend on its power budget, memory and profit margin:

  • Apple remote: 1
  • Apple Pencil: 1
  • Apple Watch: 1-2
  • Apple TV: 3-4
  • iPhone: 3-4
  • Apple VR/AR device: 3-4
  • iPhone Plus: 4-6
  • Apple TV 4K: 4-6
  • iPad: 4-6
  • iPhone X: 6-8
  • MacBook: 6-8
  • iPad Pro: 6-8
  • Mac mini: 6-16
  • iMac: 12-32
  • MacBook Pro: 16-24
  • Mac Pro: 32-128

New connector for neural processing farms?

Once we have devices with the processing power to deal with the increased demands of modern uses, Apple could facilitate connecting them together for combined power.

Although Apple would prefer its devices to have no physical hardware connections, a connector might be worth it for devices that would gain from sharing processing power. For example, although the new Mac Pro might connect with some devices using multiple Thunderbolt 3 buses, it would be even better if it had a connector that could interface with other Mac Pros – or a stack of new Mac minis that combine together like Lego.

Tune into Apple’s livestream from WWDC18 on Monday to see what is coming.

Read more
Final Cut Pro: Apple’s macOS and Mac demo application?
Wednesday, May 30 2018

From 1999 to 2009, Final Cut versions 1 to 7 were used by Apple to show off what their latest technologies could do – especially the QuickTime API. Next week is WWDC 2018 – Apple’s annual developer conference.

For the first time in many years Final Cut Pro X requires the current version of macOS: High Sierra 10.13. I hope this means the Video Applications team are able to show developers what can be done with macOS frameworks. The more that applications use macOS-only features, the more Macs Apple will sell.

Not much of the WWDC 2018 schedule has been revealed yet, but one session shows what Apple could do:

Object Tracking in Vision

Vision is a high-level framework that provides an easy to use API for handling many computer vision tasks. We’ll dive deep into a particularly powerful feature of Vision—tracking objects in video streams. Learn best practices for using Vision in your app. Gain a greater understanding of how request handlers function in terms of lifecycle, performance, and memory utilization.

This kind of tracking is much more useful than tracking pixels or planes. Tools that understand (using models built with machine learning) what the objects in a video clip are can be much more powerful. Instead of tracking a number of regions in a clip and working out how they are moving in 3D space relative to each other, object tracking understands how to recognise and track specific objects such as faces, people, vehicles, signs and buildings. Once tracked, the appearance of these things can be modified by the application. This will work as the objects move and turn in the shot, and even when they are temporarily obscured by other objects.
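The tracking part of Vision has been available since macOS 10.13, so you can experiment before the session. A minimal sketch of tracking a single object from frame to frame – the initial bounding box is a hypothetical user selection, and real code would need error handling and confidence checks:

```swift
import CoreGraphics
import CoreVideo
import Vision

// Hypothetical region around the object in the first frame (normalised,
// origin at the bottom-left), e.g. drawn by the user around a vehicle.
let initialBoundingBox = CGRect(x: 0.4, y: 0.4, width: 0.2, height: 0.2)

let sequenceHandler = VNSequenceRequestHandler()
var lastObservation = VNDetectedObjectObservation(boundingBox: initialBoundingBox)

// Call once per decoded frame; returns the object's new bounding box.
func track(frame pixelBuffer: CVPixelBuffer) throws -> CGRect {
    let request = VNTrackObjectRequest(detectedObjectObservation: lastObservation)
    request.trackingLevel = .accurate // favour accuracy over speed

    try sequenceHandler.perform([request], on: pixelBuffer)

    if let observation = request.results?.first as? VNDetectedObjectObservation {
        lastObservation = observation // feed this result into the next frame
    }
    return lastObservation.boundingBox
}
```

Presumably the session will cover the lifecycle and memory details that a sketch like this glosses over.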

Next week I’ll tune in to see forthcoming features of macOS (and iOS) that are relevant to Apple’s video applications.

In a few days the rest of the agenda will be announced. Next week people all over the world will be able to watch live streams of the sessions as they happen. A few hours after each session, videos will be available to watch on the Apple website.


Read more
Apple Compressor: Make the most of unused cores and nearby Macs
Friday, May 18 2018

If you have lots of video to transcode and tight deadlines, sometimes even Apple Compressor isn’t fast enough for the job. If you have a Mac with multiple cores and lots of RAM or a network of Macs going to spare, you can use this power to speed up video conversions and transcodes.

Using more cores on your Mac

If you have many CPU cores and enough RAM, you can have multiple copies (‘instances’) of Compressor running on your iMac or Mac Pro at the same time. Each copy works on different frames of the source video.

The number of Compressor instances you can set up on a Mac depends on the number of cores and the amount of RAM installed. You need at least 8 cores and 4GB of RAM to run even one additional instance of Compressor on your Mac.

Maximum number of additional instances of Compressor that can run on a Mac:

RAM:       2GB  4GB  6GB  8GB  12GB  16GB  32GB  64GB
4 cores     0    0    0    0    0     0     0     0
8 cores     0    1    1    1    1     1     1     1
12 cores    0    1    2    2    2     2     2     2
16 cores    0    1    2    3    3     3     3     3
24 cores    0    1    2    3    5     5     5     5

This means that your Mac needs to have a minimum of 8 cores and 4GB of RAM to have two instances of Compressor running at the same time. MacBook Pros (as of Spring 2018) have a maximum of 4 cores – described as ‘quad-core’ CPUs.
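For what it’s worth, the table appears to follow a simple rule: one additional instance per 4 cores beyond the first 4, capped at one per 2GB of RAM beyond the first 2GB. A sketch of that inference – mine, not Apple’s documented behaviour:

```swift
// Inferred from Apple's table above – not an official formula.
// Additional instances = min(cores/4 - 1, ramGB/2 - 1), never below zero.
func maxAdditionalInstances(cores: Int, ramGB: Int) -> Int {
    return max(0, min(cores / 4 - 1, ramGB / 2 - 1))
}

// Spot-checks against the table:
print(maxAdditionalInstances(cores: 8, ramGB: 4))   // 1
print(maxAdditionalInstances(cores: 16, ramGB: 8))  // 3
print(maxAdditionalInstances(cores: 24, ramGB: 12)) // 5
```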

From Apple’s support document: Compressor: Create additional instances of Compressor:

To enable instances of Compressor

  1. Choose Compressor > Preferences (or press Command-Comma).
  2. Click Advanced.
  3. Select the “Enable additional Compressor instances” checkbox, then choose a number of instances from the pop-up menu.

Important: If you don’t have enough cores or memory, the “Enable additional Compressor instances” checkbox in the Advanced preferences pane is dimmed.

Using additional Macs on your network

Once you install Compressor on your other Macs, you can use those Macs to help with video transcoding tasks.

To create a group of computers to transcode your videos:

  1. Set the preferences in Compressor on each Mac in the network to “Allow other computers to process batches on my computer” (in the ‘My Computer’ tab of the preferences dialog).
  2. On the Mac you want to use to control the video transcoding, use the ‘Shared Computers’ section of Compressor preferences to make a group of shared computers.
  3. Add a new untitled group (using the ‘+’ button).
  4. Name it by double-clicking it, replacing ‘Untitled’ and pressing Return.
  5. In the list of available computers (on the right), select the checkbox next to each computer that you want to add to the group.

Once this group is set up, use Compressor to set up a transcode as normal. Before clicking the Start button, click the “Process on” pop-up menu and choose the group of computers that you want to use to process your batch.

There are more details in Apple’s support document: Compressor: Transcode batches with multiple computers.

Read more
Audio for 360° video and VR experiences – Apple job postings
Monday, May 14 2018

More from Apple’s job site – this time, signs that they are looking to develop features for their applications, OSes and hardware to support spatial audio. Spatial audio allows creators to define soundscapes in terms of the position of sound sources relative to listeners. This means that if I hear someone start talking to my left and I turn towards them, the sound should then seem to come from what I’m looking at – from the front. This is useful for 360° spherical video, fully interactive VR experiences and future OS user interfaces.
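Apple’s shipping APIs already include a basic form of this: AVAudioEnvironmentNode in AVFoundation renders mono sources positioned in 3D space around a movable listener. A minimal sketch – the positions, sample rate and the listener’s ‘turn’ are illustrative values only:

```swift
import AVFoundation

// Place a mono source to the listener's left, then 'turn' the listener
// towards it – the source should then render from the front.
let engine = AVAudioEngine()
let environment = AVAudioEnvironmentNode()
let player = AVAudioPlayerNode()

engine.attach(environment)
engine.attach(player)

// Spatialisation needs a mono source feeding the environment node.
let monoFormat = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)
engine.connect(player, to: environment, format: monoFormat)
engine.connect(environment, to: engine.mainMixerNode, format: nil)

player.position = AVAudio3DPoint(x: -2, y: 0, z: 0) // two metres to the left

// Listener starts facing straight ahead...
environment.listenerAngularOrientation = AVAudio3DAngularOrientation(yaw: 0, pitch: 0, roll: 0)
// ...then turns 90° towards the source (positive yaw turns left).
environment.listenerAngularOrientation = AVAudio3DAngularOrientation(yaw: 90, pitch: 0, roll: 0)

// To hear it: schedule a mono file or buffer on `player`,
// then call `try engine.start()` and `player.play()`.
```

The postings below suggest Apple wants to go well beyond this – binaural headphone rendering, HOA and the rest.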

At the moment there are four relevant vacancies:

Apple Hardware Engineering is looking for an Audio Experience & Prototyping Engineer:

Apple’s Technology Development Group is looking for an Audio Experience and Prototyping Engineer to help prototype and define new audio UI/UX paradigms. This engineer will work closely with the acoustic design, audio software, product design, experience prototyping, and other teams to guide the future of Apple’s audio technology and experience. The ideal candidate will have a background in spatial audio experience design (binaural headphone rendering, HOA, VBAP), along with writing audio supporting software and plugins.

Experience in the following strongly preferred:

  • Sound design for games or art installations
  • Writing apps using AVAudioEngine
  • Swift / Objective-C / C++
  • Running DAW software such as Logic, ProTools, REAPER, etc.

Closer to post production, Apple’s Interactive Media Group Core Audio team is looking for a Spatial Audio Software Engineer to work in Silicon Valley:

IMG’s Core Audio team provides audio foundation for various high profile features like Siri, phone calls, Face Time, media capture, playback, and API’s for third party developers to enrich our platforms. The team is looking for talented engineers who are passionate about building audio software products for millions of customers and care about overall user experience. You will be pushing the boundaries of spatial audio experience for future technologies.

  • Key Advantage : Experience with audio engines that are part of Digital Audio Workstations or Game audio systems
  • Advantage : Experience with Spatial audio formats (Atmos, HOA etc) is desirable.

I gather that the Logic Pro digital audio workstation team are based in Germany. Apple are also looking for a Spatial Audio Software Engineer to work in Berlin.

For iOS and macOS, Apple are also looking for a Core Audio Software Engineer in Zurich:

The team is looking for talented engineers who are passionate about building audio software products for millions of customers and care about overall user experience. You will be pushing the boundaries of spatial audio experience for future technologies.

If you think this kind of activity is too little too late, there was at least one vacancy for a Spatial Audio Software Engineer back in July 2017.

Although Apple explore many technical directions for products that never see the light of day, I expect that spatial audio has a good future at Apple.

Read more
Apple Video job postings 2018… Cloud, IP production, 3D/VR in 2019?
Saturday, April 28 2018

A good way of seeing what Apple plans to work on is to check their jobs site. A July 2017 job posting for a pro workflow expert to set up a studio ended with Apple giving a journalist a tour of the lab in April 2018.

Here is a round-up of recent Apple Pro Apps-related job postings. They hint at what might be appearing in Apple’s video applications in 2019.

Many start with this description of the Apple Video Applications group:

The Video Applications group develops leading media creation apps including Memories, Final Cut Pro X, iMovie, Motion, and Clips. The team is looking for a talented software engineer to help design and develop future features for these applications.

This is an exciting opportunity to apply your experience in video application development to innovative media creation products that reach millions of users.

Senior Engineer, Cloud Applications

Job number 113527707, posted March 2, 2018:

The ideal candidate will have in-depth experience leveraging both database and client/server technologies. As such, you should be fluent with cloud application development utilizing CloudKit or other PAAS (“Platform as a Service”) platforms.

The main NLE makers have come to cloud-enabling their tools relatively late compared to other creative fields. Apple currently allow multiple people to edit the same iWork document at the same time, but sharing multiple gigabytes of video data is much harder than keeping a Pages or Numbers document in sync across the internet. Avid have recently announced Amazon-powered cloud video editing services coming this year. It looks like Apple isn’t shying away from at least exploring cloud-based editing in 2018.
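CloudKit is the Apple PaaS named in the posting. A minimal sketch of the kind of record syncing it provides – the record type and field names here are invented for illustration, not anything from Apple’s applications:

```swift
import CloudKit

// Hypothetical example: sync a lightweight description of a project
// to the user's private iCloud database.
let database = CKContainer.default().privateCloudDatabase

let record = CKRecord(recordType: "TimelineSnapshot")
record["projectName"] = "Interview rough cut" as CKRecordValue
record["modifiedAt"] = Date() as CKRecordValue

database.save(record) { savedRecord, error in
    if let error = error {
        print("CloudKit save failed: \(error)")
    } else {
        print("Synced \(savedRecord?.recordID.recordName ?? "record")")
    }
}
```

Metadata like this is the easy part; the hard engineering is moving and reconciling the gigabytes of media the records point to.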

Cloud features aren’t just for macOS video applications: There was an October 2017 posting for a MacOS/iOS Engineer – Video Applications (Cloud) – Job number 113167115.

Senior Software Engineer, Live Video

Job number 113524253, posted February 27, 2018:

The ideal candidate will have in-depth experience leveraging video editing, compositing, compression, and broadcasting technologies.

The key phrase here is ‘Live Video’ – this could be Apple making sure their tools will be able to work in IP-enabled post workflows. Broadcasters are now connecting their hardware via Ethernet instead of the older SDI technology. Engineering this sort of thing is about keeping everything in sync while sharing streams of video across 10-Gigabit Ethernet.

I wrote about BBC R&D exploring IP production in June 2017. Recently they’ve been seeing how IP production could use cloud services: “Beyond Streams and Files – Storing Frames in the Cloud”.

Sr. Machine Learning Engineer – Video Apps

Job number 113524253, posted April 12, 2018:

Apple is seeking a Machine Learning (ML) technologist to help set technology strategy for our Video Applications Engineering team. Our team develops Apple’s well-known video applications, including Final Cut Pro, iMovie, Memories part of the Photos app, and the exciting new Clips mobile app.

We utilize both ML and Computer Vision (CV) technologies in our applications, and are doing so at an increasing pace.

We are looking for an experienced ML engineer/scientist who has played a significant role in multiple ML implementations — ideally both in academia and in industry — to solve a variety of problems.

You will advise and consult on multiple projects within our organization, to identify where ML can best be employed, and in areas of media utilization not limited to images and video.

We expect that you will have significant software development and integration knowledge, in order to be both an advisor to, and significant developer on, multiple projects.

This follows on from a vacancy last July for a video applications software engineer ‘with machine learning experience.’

It looks like the Video Applications team are stepping up their investments in machine learning – expecting to use it in multiple projects: maybe different features in the different applications they work on.

One example would be improving tracking of objects in video. Instead of tracking individual pixels to hide or change a sign on the side of a moving vehicle, machine learning would recognise the changing positions of the vehicle and the sign, and be able to interpret the graphics and text on the sign itself.
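Vision already ships one building block for this: VNDetectTextRectanglesRequest finds regions of text in a frame, without reading them. A minimal sketch, assuming you have a CVPixelBuffer for the frame – recognising what the text says would need an OCR model on top:

```swift
import CoreGraphics
import CoreVideo
import Vision

// Find text regions (e.g. on a sign) in a single video frame.
func detectTextRegions(in pixelBuffer: CVPixelBuffer) throws -> [CGRect] {
    let request = VNDetectTextRectanglesRequest()
    request.reportCharacterBoxes = false // word/line boxes are enough here

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])

    let observations = request.results as? [VNTextObservation] ?? []
    return observations.map { $0.boundingBox } // normalised coordinates
}
```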

macOS High Sierra 10.13 introduced machine learning features in Autumn 2017. Usually Pro Apps users would need to wait at least a year to get features available in the newest version of macOS – because editors didn’t want to update their systems until the OS felt reliable enough for post production. Interestingly, with the Final Cut Pro 10.4.1 update, the Video Applications team have forced the issue – the current version of Final Cut (plus Motion) won’t run on macOS Sierra 10.12. At least that means new Final Cut features can start relying on new macOS features introduced last year. I wrote about Apple WWDC sessions on media in June 2017.

Senior UI Engineer, Video Applications (3D/VR)

Job number 113524287, posted February 23, 2018:

Your responsibilities will include the development and improvement of innovative and intuitive 3D and VR user interface elements. You will collaborate closely with human interface designers, and other engineers on designing and implementing the best possible user experience. The preferred candidate should have an interest and relevant experience in developing VR user interfaces.

Additional Requirements

  • Experience with OpenGL, OpenGL ES or Metal
  • Experience developing AR/VR software (SteamVR / OpenVR)
  • macOS and/or iOS development experience

Notice here that this is not a user interface engineer who will create UI for a 3D application. Apple plan to at least investigate developing 3D user interfaces that will work in VR. Although this engineer is being sought by the video applications team, who knows where else in Apple 3D interfaces for VR might end up being used.

See also VR Jobs at Apple – July 2017.


Read more