Going to IBC 2018 in Amsterdam next month? IBC is the main European event where the TV industry get together to learn about high-end broadcasting technology.
As usual Apple won’t have a stand. The good news is that Apple will be giving presentations about their professional video editing application. The Final Cut Pro X focus point on the show floor is the Atomos stand – D.25 in Hall 11.
Atomos will demonstrate the Final Cut Pro X ProRes RAW workflow from capture to HDR monitoring.
They will have a Final Cut Pro X advanced broadcast workflows theatre: a series of presentations by international professionals about how well the recent updates to Final Cut Pro X support high-end workflows. This is being delivered by London-based Soho Editors.
Apple Product Marketing are presenting every day from Friday September 14th to Tuesday September 18th.
Some presentation titles:
You might also bump into Ronny Courtens and Sam Mestman from LumaForge. LumaForge make advanced workgroup servers for post production that connect to Macs using 10 gigabit and 40 gigabit Ethernet cables. Ronny writes:
Sam and I will frequently be on the Atomos booth. Special LumaForge demos will be in our off-site demo room on an invitation-only basis. If you would like to see the JellyFish at work with our brand-new management software, drop me an e-mail.
Lumberjack System is a method for making the most of information logged while shooting once you are in the Final Cut Pro X edit. If you would like to schedule a one-on-one with the experts at Lumberjack System, they have a booking page on their site.
I’ll update this post when I hear about more Final Cut related activities at IBC 2018.
If you haven’t registered for IBC 2018 yet, be quick – the free exhibition pass deadline is almost up.
Final Cut Pro X 10.0 was launched 7 years ago today. Why hasn’t it taken over the world of TV and film editing?
Final Cut is better than the rest. That isn’t enough.
Despite the efforts of Apple’s Video Applications team, the ‘top’ 0.25% of editors don’t trust Apple as a whole: the wider Apple that makes Mac hardware that seems more and more out of date; the Apple that still can’t share its plans in a useful way.
The biggest problem: They don’t trust the Apple that doesn’t nurture a deep post ecosystem.
To switch from ‘the way we’ve always done it’ to a new way requires that the new way is over 50% better. ‘A little better’ or ‘cheaper’ isn’t worth the pain of switching. In practice ‘much cheaper’ is a bad sign in post. Charging too much less is a sign that you can’t be as professional as the status quo.
Apple would like high-end users to invest in their hardware and software, yet Apple doesn’t seem to care about others who have invested in businesses that support the high end. They still don’t trust Apple because of the way Final Cut Pro 7 was discontinued 7 years ago. Improving features in the application itself is not enough to win back trust.
To take Final Cut seriously, those making TV shows and feature films require ‘Final Cut Pro X versions’ of every stage of the traditional Avid workflow. The workflow is a throwback – a digital version of the 20th-century ways – but the fact that there are businesses at each stage making money providing these services makes it feel safer than a modern alternative.
A few minutes’ search online will turn up perfectly good Final Cut ways of making TV shows and feature films. There are high-end solutions for every stage in the process. Sadly, the high end wants more than that. They also want competition between these solutions – competitors to the LumaForge Jellyfish, for example. Another example: they want multiple competing dailies companies who fight for their Final Cut Pro X workflow business.
It is also about people. Post-production supervisors want a variety of teams and individuals to choose from. Once a team has been put together, heads of department want the reassurance of being able to replace any member of the team with others who are almost as good. Knowing Avid means that you can be relied upon – and, when needed, easily replaced. Today there aren’t enough people with Final Cut Pro X experience in the cities associated with TV and film production to hire and fire.
I am not convinced that the vocal tiny minority in feature films and TV are worth supporting. The previous generation of post suppliers sees a big benefit in marketing messages like ‘buy our product – it is used by award-winning editors.’ Apple seems to think that messages like this don’t convince those who are choosing their first paid editing application.
If it was your money, would you put millions of dollars into persuading a few thousand high-end users to adopt your application, in the hope that their endorsement would appeal to the millions of other people?
I expect the wider Apple appreciates the Video Applications team’s contributions to mainstream success through Clips for iOS and iMovie for iOS and macOS. The continuing profitability of Final Cut Pro X ensures its survival – alongside Apple’s commitment to not trusting third parties to make applications that make the most of high-end hardware.
What can the Video Applications team do? If they want to appeal to the vocal (but probably unimportant) high end, what is the investment case they need to put before the wider Apple?
The last gap in the Final Cut Pro X feature set is collaboration: where multiple people can work on the same media and timelines at the same time. Post professionals don’t want a new take – they would be happy with a version of 90s-style Avid bin-locking. The Final Cut team don’t seem able to implement features this way. They are still building version 1 of a 21st century editing application: they are not in the business of adding shiny new code and hardware drivers to 20th century metaphors – like DaVinci Resolve and Adobe Premiere.
The answer is to implement collaboration into Final Cut Pro X that supports a much larger proportion of the market. If it can work for real people, it will also work for the vocal minority. Businesses supporting high-end post will then adapt these features to use for their market. Individuals in post will add Final Cut to the portfolio of applications they learn to use in their workflows.
It is likely that over 95% of videos made in the world are made by a single person. Apple should implement collaboration features that help those people do more. Instead of using XML and Finder-level integration with third-party tools, tools in the Final Cut interface should support individuals helping individuals.
Instead of saying ‘help me’ – I’m saying ‘help millions – including me at the high end.’
Apple should support a mid-market video consultancy ecosystem – following their FileMaker Pro model. The pitch would be: ‘If you are unhappy with what you do today, you could set yourself up as a freelance video consultant. You will be able to support yourself by proposing video production solutions to small businesses and organisations in your locality. You will be able to make money on Mac hardware, on software, developing workflows, providing support and evolving workflows over months and years. They will want to pay your monthly fee because of the services you will be able to supply.’
For consultants to be able to do this, they need tools that extend Final Cut – tools that don’t require doctors, dentists, builders, teachers, lawyers, assistants or secretaries to understand terms like ‘XML’ and ‘transfer library.’ It is unlikely that Apple will want third parties to touch the Final Cut UI. They are very far from Adobe-style free third-party access to windows in Final Cut itself.
Most small- and medium-sized businesses use databases to organise the relationship with their customers and suppliers. There is a huge market for freelancers and small companies to design, implement, support and improve these custom databases. A significant proportion of small businesses would benefit from being able to tell stories using video.
I would imagine that the wider Apple would be more interested in helping millions of people change their future using video than investing in the special needs of the high end.
Apple don’t like to be told what to do. They like stories. The story of the unsatisfactory present – followed by a story from a bright future. Apple want to then choose how to get to that future.
The present: there are millions of freelancers and small businesses all over the world who shy away from telling their stories using video. They associate video production with high costs and lack of control when using professional video production companies of all sizes.
The bright future: tens of millions of small and medium-size businesses supported by a new class of freelancer – who can provide services that empower individuals and organisations to tell stories using video.
…or Final Cut Pro 8.
In recent years Apple have used the keynote presentation of their annual Worldwide Developers Conference as a showcase for new hardware launches. This year Intel’s delays in producing significant CPU updates make it less likely we will see new MacBooks, Macs or Mac Pros this time.
As this event is for those making software and hardware for iOS, tvOS, watchOS and macOS devices, I hope Apple launches a new hardware plan.
Yesterday Arm announced three new chip families for smaller devices: a new CPU, a new GPU and a new VPU (video processor).
AnandTech reports that Arm’s new VPU includes hardware support for VP9 10-bit, H.264 10-bit and HEVC 10-bit – with the ability to play 8K 60fps video.
In concert with their display processor, the new video processor is currently able to handle HDR10 and HLG formatted HDR video. Meanwhile support for HDR10+ – which is HDR10 with support for dynamic metadata – is set to arrive in the future.
This shows what Apple could be doing with the A-series chips they use in iOS devices (and future VR/AR devices).
The rate at which Apple have improved their A-series CPUs is the envy of phone makers. In September 2017, Wired reported that the new A11 processor in the iPhone 8 and X includes what Apple calls its ‘Neural Engine’:
The engine has circuits tuned to accelerate certain kinds of artificial-intelligence software, called artificial neural networks, that are good at processing images and speech.
Apple said the neural engine would power the algorithms that recognize your face to unlock the phone and transfer your facial expressions onto animated emoji. It also said the new silicon could enable unspecified “other features.”
What if Apple created a very small single-core ‘neural processing unit’ – the N1 NPU?
If N1s are included in all Apple product updates in coming months, it would make it simpler for developers to add modern features to their products and services – including developers within Apple.
Using the cell concept of multiple processing units, the number of N1s used in an Apple device would depend on its power budget, memory and profit margin:
Once we have devices with the processing power to deal with the increased demands of modern uses, Apple could facilitate connecting them together for combined power.
Although Apple would prefer its devices to have no physical hardware connections, a connector might be worth it for devices that would gain from sharing processing power with others. For example, although the new Mac Pro might connect with some devices using multiple Thunderbolt 3 buses, it would be even better if it had a connector that interfaced with other Mac Pros – or a stack of new Mac minis that combine together like Lego.
Tune in to Apple’s livestream from WWDC18 on Monday to see what is coming.
From 1999 to 2009, Final Cut versions 1 to 7 were used by Apple to show off what their latest technologies could do – especially the QuickTime API. Next week is WWDC 2018 – Apple’s annual developer conference.
For the first time in many years Final Cut Pro X requires the current version of macOS: High Sierra 10.13. I hope this means the Video Applications team are able to show developers what can be done with macOS frameworks. The more that applications use macOS-only features, the more Macs Apple will sell.
Not much of the WWDC 2018 schedule has been revealed yet, but one session shows what Apple could do:
Vision is a high-level framework that provides an easy to use API for handling many computer vision tasks. We’ll dive deep into a particularly powerful feature of Vision—tracking objects in video streams. Learn best practices for using Vision in your app. Gain a greater understanding of how request handlers function in terms of lifecycle, performance, and memory utilization.
This kind of tracking is much more useful than tracking pixels or planes. Tools that understand (using machine learning-built models) what the objects in a video clip are can be much more powerful. Instead of tracking a number of regions in a clip and working out how they are moving in 3D space relative to each other, object tracking understands how to recognise and track specific objects such as faces, people, vehicles, signs and buildings. Once tracked, the appearance of these things can be modified by the application. This will work when the objects move and turn in the shot, and even when they are temporarily obscured by other objects.
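As a rough illustration of the idea only – this is not Apple’s Vision API, which does the real work on video frames with ML-built models – here is a toy Python sketch that links per-frame object detections into tracks by matching label and nearest centroid:

```python
# Toy "tracking by detection association" sketch (illustrative only).
# Each frame is a list of (label, x, y) detections; a track links what
# appears to be the same object across frames. Real object trackers use
# learned appearance models rather than this nearest-centroid heuristic.

def track(frames, max_dist=50.0):
    """frames: list of frames, each a list of (label, x, y) detections.
    Returns dict: track_id -> list of (frame_index, label, x, y)."""
    tracks = {}      # track_id -> list of points
    last_pos = {}    # track_id -> (label, x, y) most recently seen
    next_id = 0
    for i, detections in enumerate(frames):
        claimed = set()          # tracks already matched in this frame
        for label, x, y in detections:
            # Find the closest existing unclaimed track with the same label.
            best, best_d = None, max_dist
            for tid, (tlabel, tx, ty) in last_pos.items():
                if tid in claimed or tlabel != label:
                    continue
                d = ((x - tx) ** 2 + (y - ty) ** 2) ** 0.5
                if d < best_d:
                    best, best_d = tid, d
            if best is None:     # no plausible match: start a new track
                best = next_id
                next_id += 1
                tracks[best] = []
            claimed.add(best)
            tracks[best].append((i, label, x, y))
            last_pos[best] = (label, x, y)
    return tracks
```

Because a track’s last known position is kept even when its object yields no detection in a frame, an object that is briefly obscured can be re-associated with its old track when it reappears – a crude stand-in for the occlusion handling described above.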
Next week I’ll tune in to see forthcoming features of macOS (and iOS) that are relevant to Apple’s video applications.
In a few days the rest of the agenda will be announced. Next week people all over the world will be able to watch live streams of the sessions as they happen. A few hours after each session, videos will be available to watch on the Apple website.
If you have lots of video to transcode and tight deadlines, sometimes even Apple Compressor isn’t fast enough for the job. If you have a Mac with multiple cores and lots of RAM or a network of Macs going to spare, you can use this power to speed up video conversions and transcodes.
If you have many CPU cores and enough RAM, you can have multiple copies (‘instances’) of Compressor run on your iMac or Mac Pro at the same time. Each copy works on different frames of the source video.
The number of Compressor instances you can set up on a Mac depends on the number of cores and the amount of RAM installed. You need to have at least 8 cores and 4GB of RAM to have at least one additional instance of Compressor run on your Mac.
Maximum number of additional instances of Compressor that can run on a Mac:
This means that your Mac needs a minimum of 8 cores and 4GB of RAM to run two instances of Compressor at the same time. MacBook Pros (as of spring 2018) have a maximum of 4 cores – described as ‘quad-core’ CPUs – so they cannot run additional instances.
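Apple publishes the exact cores-and-RAM table in its support document. As an illustration only, the minimum quoted above (8 cores and 4GB of RAM before any additional instance is allowed) can be sketched as a function – the scaling rule beyond that minimum is a hypothetical stand-in, not Apple’s actual table:

```python
# Illustrative sketch only: Compressor's real limits come from Apple's
# published table. This encodes just the stated minimum (8 cores and
# 4GB of RAM for one additional instance); the scaling beyond that
# (4 cores and 2GB of RAM per extra instance) is a hypothetical guess.
def max_additional_instances(cores, ram_gb):
    if cores < 8 or ram_gb < 4:
        return 0               # the preferences checkbox would be dimmed
    by_cores = cores // 4 - 1  # hypothetical: 4 cores per extra instance
    by_ram = ram_gb // 2 - 1   # hypothetical: 2GB RAM per extra instance
    return min(by_cores, by_ram)
```

So a quad-core MacBook Pro gets no additional instances regardless of RAM, while an 8-core, 4GB machine gets exactly one – matching the minimum described above.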
From Apple’s support document: Compressor: Create additional instances of Compressor:
To enable instances of Compressor
- Choose Compressor > Preferences (or press Command-Comma).
- Click Advanced.
- Select the “Enable additional Compressor instances” checkbox, then choose a number of instances from the pop-up menu.
Important: If you don’t have enough cores or memory, the “Enable additional Compressor instances” checkbox in the Advanced preferences pane is dimmed.
Once you install Compressor on your other Macs, you can use those Macs to help with video transcoding tasks.
To create a group of computers to transcode your videos:
Once this group is set up, use Compressor to set up a transcode as normal. Before clicking the Start button, click the “Process on” pop-up menu and choose the group of computers that you want to use to process your batch.
There are more details in Apple’s support document: Compressor: Transcode batches with multiple computers.
More from Apple’s job site. This time, signs that they are looking to develop features for their applications, OSes and hardware to support spatial audio. Spatial audio allows creators to define soundscapes in terms of the position of sound sources relative to listeners. This means that if I hear someone start talking to my left and I turn towards them, the sound should then seem to come from what I’m looking at – from the front. Useful for 360° spherical audio, fully interactive VR experiences, plus future OS user interfaces.
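The head-turning example comes down to simple bookkeeping that a spatial renderer performs before any binaural filtering: subtract the listener’s head yaw from the source’s fixed world direction. A minimal sketch – the convention of degrees measured counter-clockwise, with 0° straight ahead and positive angles to the left, is my assumption:

```python
def relative_azimuth(source_deg, head_yaw_deg):
    """Azimuth of a fixed sound source relative to the listener's head,
    in degrees: 0 = straight ahead, positive = to the listener's left
    (assumed counter-clockwise convention). Wrapped to (-180, 180]."""
    return (source_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
```

A speaker at 90° (to my left) while I face 0° is rendered at 90°; once I turn my head 90° towards them, the same source is rendered at 0° – straight ahead, as the text describes.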
At the moment there are four relevant vacancies:
Apple Hardware Engineering is looking for an Audio Experience & Prototyping Engineer:
Apple’s Technology Development Group is looking for an Audio Experience and Prototyping Engineer to help prototype and define new audio UI/UX paradigms. This engineer will work closely with the acoustic design, audio software, product design, experience prototyping, and other teams to guide the future of Apple’s audio technology and experience. The ideal candidate will have a background in spatial audio experience design (binaural headphone rendering, HOA, VBAP), along with writing audio supporting software and plugins.
Experience in the following strongly preferred:
- Sound design for games or art installations
- Writing apps using AVAudioEngine
- Swift / Objective-C / C++
- Running DAW software such as Logic, ProTools, REAPER, etc.
Closer to post production, Apple’s Interactive Media Group Core Audio team is looking for a Spatial Audio Software Engineer to work in Silicon Valley:
IMG’s Core Audio team provides audio foundation for various high profile features like Siri, phone calls, Face Time, media capture, playback, and API’s for third party developers to enrich our platforms. The team is looking for talented engineers who are passionate about building audio software products for millions of customers and care about overall user experience. You will be pushing the boundaries of spatial audio experience for future technologies.
- Key Advantage: Experience with audio engines that are part of Digital Audio Workstations or Game audio systems
- Advantage: Experience with spatial audio formats (Atmos, HOA etc) is desirable.
I gather that the Logic Pro digital audio workstation team are based in Germany. Apple are also looking for a Spatial Audio Software Engineer to work in Berlin.
For iOS and macOS, Apple are also looking for a Core Audio Software Engineer in Zurich:
The team is looking for talented engineers who are passionate about building audio software products for millions of customers and care about overall user experience. You will be pushing the boundaries of spatial audio experience for future technologies.
If you think this kind of activity is too little too late, there was at least one vacancy for a Spatial Audio Software Engineer back in July 2017.
Although Apple explore many technical directions for products that never see the light of day, I expect that spatial audio has a good future at Apple.