If you have lots of video to transcode and tight deadlines, sometimes even Apple Compressor isn’t fast enough for the job. If you have a Mac with multiple cores and lots of RAM, or a network of Macs going spare, you can use this power to speed up video conversions and transcodes.
If you have many CPU cores and enough RAM, you can have multiple copies (‘instances’) of Compressor running on your iMac or Mac Pro at the same time. Each copy works on different frames of the source video.
The number of Compressor instances you can set up on a Mac depends on the number of cores and the amount of RAM installed. You need at least 8 cores and 4GB of RAM to have at least one additional instance of Compressor running on your Mac.
Maximum number of additional instances of Compressor that can run on a Mac:
This means that your Mac needs to have a minimum of 8 cores and 4GB of RAM to have two instances of Compressor running at the same time. MacBook Pros (as of Spring 2018) have a maximum of 4 cores – described as ‘quad-core’ CPUs.
From Apple’s support document “Compressor: Create additional instances of Compressor”:
To enable instances of Compressor
- Choose Compressor > Preferences (or press Command-Comma).
- Click Advanced.
- Select the “Enable additional Compressor instances” checkbox, then choose a number of instances from the pop-up menu.
Important: If you don’t have enough cores or memory, the “Enable additional Compressor instances” checkbox in the Advanced preferences pane is dimmed.
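As a rough sketch, the gating rule described above can be expressed as a simple check. The 8-core/4GB threshold is the one quoted in the text; on macOS the real values could be read with `sysctl -n hw.ncpu` and `sysctl -n hw.memsize`:

```python
def can_enable_additional_instance(cores: int, ram_gb: float) -> bool:
    """Sketch of Compressor's stated minimum for enabling any additional
    instance: at least 8 cores and 4GB of RAM. Below this threshold the
    'Enable additional Compressor instances' checkbox is dimmed."""
    return cores >= 8 and ram_gb >= 4

# A Spring 2018 quad-core MacBook Pro doesn't qualify, however much RAM it has:
print(can_enable_additional_instance(4, 16))   # False
print(can_enable_additional_instance(8, 16))   # True
```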
Once you install Compressor on your other Macs, you can use those Macs to help with video transcoding tasks.
Apple’s support document also explains how to create a group of computers to transcode your videos.
Once this group is set up, use Compressor to set up a transcode as normal. Before clicking the Start button, click the “Process on” pop-up menu and choose the group of computers that you want to use to process your batch.
There are more details in Apple’s support document: Compressor: Transcode batches with multiple computers.
More from Apple’s jobs site. This time there are signs that they are looking to develop features for their applications, OSes and hardware to support spatial audio. Spatial audio allows creators to define soundscapes in terms of the relative positions of sound sources to listeners. This means that if I hear someone start talking to my left and I turn towards them, the sound should then seem to come from what I’m looking at: from the front. Useful for 360° spherical audio, fully-interactive VR experiences, plus future OS user interfaces.
At the moment there are four relevant vacancies:
Apple Hardware Engineering is looking for an Audio Experience & Prototyping Engineer:
Apple’s Technology Development Group is looking for an Audio Experience and Prototyping Engineer to help prototype and define new audio UI/UX paradigms. This engineer will work closely with the acoustic design, audio software, product design, experience prototyping, and other teams to guide the future of Apple’s audio technology and experience. The ideal candidate will have a background in spatial audio experience design (binaural headphone rendering, HOA, VBAP), along with writing audio supporting software and plugins.
Experience in the following strongly preferred:
- Sound design for games or art installations
- Writing apps using AVAudioEngine
- Swift / Objective-C / C++
- Running DAW software such as Logic, ProTools, REAPER, etc.
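The VBAP (vector base amplitude panning) technique mentioned in the listing can be sketched in a few lines: the source direction is expressed as a weighted sum of two loudspeaker direction vectors, and the weights become the channel gains. This is an illustrative 2-D version, not Apple’s implementation:

```python
import math

def vbap_2d(source_deg, spk1_deg=-45.0, spk2_deg=45.0):
    """2-D VBAP gains for a loudspeaker pair: solve for the weights
    that express the source unit vector as a combination of the two
    speaker unit vectors, then normalise for constant power."""
    def unit(deg):
        r = math.radians(deg)
        return (math.cos(r), math.sin(r))

    (l1x, l1y), (l2x, l2y) = unit(spk1_deg), unit(spk2_deg)
    px, py = unit(source_deg)
    det = l1x * l2y - l2x * l1y          # invert the 2x2 speaker matrix
    g1 = (px * l2y - py * l2x) / det
    g2 = (py * l1x - px * l1y) / det
    norm = math.hypot(g1, g2)            # constant-power normalisation
    return g1 / norm, g2 / norm

# A source straight ahead (0°) between speakers at ±45° gets equal gains;
# a source exactly at the right speaker plays only from that speaker.
print(vbap_2d(0.0))
print(vbap_2d(45.0))
```

Real spatial renderers extend this to 3-D speaker triplets, binaural headphone output and head tracking, but the amplitude-panning core is the same idea.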
Closer to post production, Apple’s Interactive Media Group Core Audio team is looking for a Spatial Audio Software Engineer to work in Silicon Valley:
IMG’s Core Audio team provides the audio foundation for various high profile features like Siri, phone calls, FaceTime, media capture, playback, and APIs for third party developers to enrich our platforms. The team is looking for talented engineers who are passionate about building audio software products for millions of customers and care about overall user experience. You will be pushing the boundaries of spatial audio experience for future technologies.
- Key advantage: Experience with audio engines that are part of Digital Audio Workstations or game audio systems
- Advantage: Experience with spatial audio formats (Atmos, HOA etc.) is desirable.
I gather that the Logic Pro digital audio workstation team are based in Germany. Apple are also looking for a Spatial Audio Software Engineer to work in Berlin.
For iOS and macOS, Apple are also looking for a Core Audio Software Engineer in Zurich:
The team is looking for talented engineers who are passionate about building audio software products for millions of customers and care about overall user experience. You will be pushing the boundaries of spatial audio experience for future technologies.
If you think this kind of activity is too little too late, there was at least one vacancy for a Spatial Audio Software Engineer back in July 2017.
Although Apple explore many technical directions for products that never see the light of day, I expect that spatial audio has a good future at Apple.
A good way of seeing what Apple plans to work on is to check out their jobs site. A July 2017 job posting for a pro workflow expert to set up a studio led to Apple giving a journalist a tour of the lab in April 2018.
Here is a round-up of recent Apple Pro Apps-related job posts. They hint at what might be appearing in Apple’s video applications in 2019.
Many start with this description of the Apple Video Applications group:
The Video Applications group develops leading media creation apps including Memories, Final Cut Pro X, iMovie, Motion, and Clips. The team is looking for a talented software engineer to help design and develop future features for these applications.
This is an exciting opportunity to apply your experience in video application development to innovative media creation products that reach millions of users.
Job number 113527707, posted March 2, 2018:
The ideal candidate will have in-depth experience leveraging both database and client/server technologies. As such, you should be fluent with cloud application development utilizing CloudKit or other PAAS (“Platform as a Service”) platforms.
The main NLE makers have come to cloud-enabling their tools relatively late compared to other creative fields. Apple currently allow multiple people to edit the same iWork document at the same time, but sharing multiple gigabytes of video data is much harder than keeping a Pages or Numbers document in sync across the internet. Avid have recently announced Amazon-powered video editing in the cloud, with services coming this year. It looks like Apple isn’t shying away from at least exploring cloud-based editing in 2018.
Cloud features aren’t just for macOS video applications: There was an October 2017 posting for a MacOS/iOS Engineer – Video Applications (Cloud) – Job number 113167115.
Job number 113524253, posted February 27, 2018:
The ideal candidate will have in-depth experience leveraging video editing, compositing, compression, and broadcasting technologies.
The key phrase here is ‘Live Video’ – this could be Apple making sure their tools will be able to work in IP-enabled post workflows. Broadcasters are now connecting their hardware via Ethernet instead of the older SDI technology. Engineering this sort of thing is about keeping everything in sync – sharing streams of video across 10-Gigabit Ethernet.
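Back-of-envelope arithmetic shows why 10-Gigabit Ethernet is the relevant number. The figures below are illustrative (1080p60 with 10-bit 4:2:2 sampling, so 20 bits per pixel, ignoring blanking and protocol overhead):

```python
def uncompressed_bitrate_gbps(width, height, fps, bits_per_pixel):
    """Raw video payload bandwidth in gigabits per second
    (active pixels only: no blanking or network overhead)."""
    return width * height * fps * bits_per_pixel / 1e9

# 1080p60 at 10-bit 4:2:2 (20 bits per pixel) is roughly 2.5 Gb/s,
# so a 10GbE link carries only a small handful of such streams.
rate = uncompressed_bitrate_gbps(1920, 1080, 60, 20)
print(round(rate, 2))  # 2.49
```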
I wrote about BBC R&D exploring IP production in June 2017. Recently they’ve been seeing how IP production could use cloud services: “Beyond Streams and Files – Storing Frames in the Cloud”.
Job number 113524253, posted April 12, 2018:
Apple is seeking a Machine Learning (ML) technologist to help set technology strategy for our Video Applications Engineering team. Our team develops Apple’s well-known video applications, including Final Cut Pro, iMovie, Memories (part of the Photos app), and the exciting new Clips mobile app.
We utilize both ML and Computer Vision (CV) technologies in our applications, and are doing so at an increasing pace.
We are looking for an experienced ML engineer/scientist who has played a significant role in multiple ML implementations — ideally both in academia and in industry — to solve a variety of problems.
You will advise and consult on multiple projects within our organization, to identify where ML can best be employed, and in areas of media utilization not limited to images and video.
We expect that you will have significant software development and integration knowledge, in order to be both an advisor to, and significant developer on, multiple projects.
This follows on from a vacancy last July for a video applications software engineer ‘with machine learning experience.’
It looks like the Video Applications team are stepping up their investments in machine learning – expecting to use it in multiple projects: maybe different features in the different applications they work on.
One example would be improving tracking of objects in video. Instead of tracking individual pixels to hide or change a sign on the side of a moving vehicle, machine learning would recognise the changing positions of the vehicle and the sign, and be able to interpret the graphics and text in the sign itself.
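To make the distinction concrete, here is a toy sketch of the ML-style approach: per-frame detections (the bounding boxes an ML detector might emit) are associated across frames by nearest centroid, rather than following individual pixels. The box values and the simple association rule are invented for illustration; real trackers add motion models and appearance features:

```python
import math

def track_centroids(frames):
    """Toy tracker: `frames` is a list of per-frame detection boxes
    (x, y, w, h). The track is seeded with the first detection and
    extended each frame with the nearest detection centroid.
    Returns the followed centroid per frame."""
    track = []
    for boxes in frames:
        centroids = [(x + w / 2, y + h / 2) for x, y, w, h in boxes]
        if not track:
            track.append(centroids[0])  # seed with the first detection
        else:
            px, py = track[-1]
            track.append(min(centroids,
                             key=lambda c: math.hypot(c[0] - px, c[1] - py)))
    return track

# A vehicle sign moving right, with an unrelated detection elsewhere:
frames = [[(0, 0, 10, 10)],
          [(2, 0, 10, 10), (50, 50, 10, 10)],
          [(4, 0, 10, 10), (50, 50, 10, 10)]]
print(track_centroids(frames))  # [(5.0, 5.0), (7.0, 5.0), (9.0, 5.0)]
```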
macOS High Sierra 10.13 introduced machine learning features in Autumn 2017. Usually Pro Apps users would need to wait at least a year to get features available in the newest version of macOS – because editors don’t want to update their systems until the OS feels reliable enough for post production. Interestingly, with the Final Cut Pro 10.4.1 update, the Video Applications team have forced the issue – the current version of Final Cut (plus Motion) won’t run on macOS Sierra 10.12. At least that means new Final Cut features can start relying on new macOS features introduced last year. I wrote about Apple WWDC sessions on media in June 2017.
Job number 113524287, posted February 23, 2018:
Your responsibilities will include the development and improvement of innovative and intuitive 3D and VR user interface elements. You will collaborate closely with human interface designers, and other engineers on designing and implementing the best possible user experience. The preferred candidate should have an interest and relevant experience in developing VR user interfaces.
- Experience with OpenGL, OpenGL ES or Metal
- Experience developing AR/VR software (SteamVR / OpenVR)
- macOS and/or iOS development experience
Notice here that this is not a user interface engineer who will create UI for a 3D application. Apple plan to at least investigate developing 3D user interfaces that will work in VR. Although this engineer is being sought by the video applications team, who knows where else in Apple 3D interface design for VR might end up being used.
See also VR Jobs at Apple – July 2017.
The specification page for Final Cut Pro, Motion and Compressor states that the minimum requirement has changed from macOS Sierra 10.12.6 to macOS High Sierra 10.13.2 or later. In order to get today’s free updates for Final Cut Pro, Motion and Compressor, your Mac must be running 10.13.2 or newer. You won’t see these updates in the Mac App Store if you are using an older version of the OS.
ProRes RAW bit depth depends upon what the camera sends out. So for Varicam it would be 14 bit, for Sony FS it would be 12 bit
Mitch Gross, Panasonic Cinema product manager:
Both the EVA1 and VariCam LT RAW outputs will be supported by the Atomos recorders for ProRes RAW capture. 4K60p/2K240p at launch on Monday, EVA1 5.7K30p in May.
I’m certain “RAW” will now permanently change to mean a Bayer pattern
The point is I don’t see ProRes RAW helping with any of this, and I find almost all clients are editing in Premiere or Avid […] ProRes RAW is unlikely to work on a 2012 Mac Pro.
I do expect ProRes Raw to enable some productions to move from ProRes/Rec709 to a raw workflow and HDR. […] It will matter to a lot of my productions though. R3D nearly breaks a lot of their post workflows. ProRes is easy, but a little too constraining. It will shift the industry, especially the low end and mid range, in ways we should all be excited about.
While I agree that ProRes RAW is a pretty terrific opportunity to “bring RAW to the masses”, let’s all make sure not to get too carried away. ProRes RAW may be (Apple) processor friendly, but don’t forget that the files are still something like three to four times the size of something like AVC-Ultra or All-I codecs. And they’re approaching 10 times the size of a high quality LongGOP.
I think we need to think a bit differently to how we do now. We tend to assume raw must be graded, must have a load of post work done, when really that’s not necessarily the case.
We should not be working in 709 any more, the tail ends of the gamma curve just compress usable highlight and shadow detail, it’s a delivery gamma, not a workflow one.
Also some of us need all the full range linear in post.
So if Apple had slammed down a ProRes Linear intermediate codec, with VBR and maybe a couple of quality settings and found a way to read that data in ‘simple’ mode for decimating the output for speed then I for one would be all over that. Basically EXR and ACES for the masses, with Piz or DW compression.
I just don’t get what ProRes RAW will bring professionals
ProRes RAW will work because it is Apple. With a single step it is supported in lord-knows-how-many-thousands of systems and a host of cameras. These cameras were like ships without a home port, wandering the seas with no effective and manageable RAW workflow. Uncompressed CinemaDNGs? The data load is ridiculous and the workflow a bit mercurial from one camera to another and one post system to another.
ProRes RAW makes it easy. It levels the playing field. All those cameras go into it and will work just fine in FCPX. Finding and applying the correct LUT is easy. Everything just works. That’s the beauty of it.
There are many great points being made, so if you want a deep dive, follow along with the evolving discussion at the Cinematography Mailing List.
REDSHARK have captured an exclusive video with Jeromy Young, the CEO of Atomos, to get their take on ProRes RAW. Thanks to Charles Wren for the link.
Here are a few excerpts from the 17 minute interview:
He said that Atomos supporting ProRes RAW is the culmination of years of work. Atomos aim to supply 80% of the market – ARRI have the top-end cinema workflow sorted. They see it as their task to take the best of that workflow and make it work for everyone else.
Although Atomos could capture 4K60 from the RAW output of the Sony FS5…
…we couldn’t do it justice when we went to ProRes. It was the right solution for 10 years ago.
We approached Apple and asked would you guys be interested in giving us a standard to go to.
CinemaDNG is about individual frames…
With ProRes RAW we’re dealing with a whole video package that has metadata in it that the application can read that you can apply and transform each pixel into video to see in whatever way you want.
Jeromy believes that individual camera makers will produce plugins that run in NLEs that will make the most of the ProRes RAW that was recorded by their cameras.
The ProRes RAW software for the Shogun Inferno and the Sumo 19 will be a free update. Because ProRes RAW file sizes are much smaller than CinemaDNG, Atomos devices can remain with SATA storage even when recording 4K60.
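Rough numbers make the SATA claim plausible. Assuming a 4K 12-bit Bayer sensor at 60fps (illustrative figures, not Atomos specifications), uncompressed raw exceeds SATA-III’s practical ceiling of roughly 550MB/s, while even mild raw compression comes in well under it:

```python
def raw_data_rate_mb_s(width, height, bit_depth, fps, compression=1.0):
    """Data rate of single-sensor (Bayer) raw video in MB/s.
    One raw sample per photosite, so bits per frame = width * height * bit_depth."""
    return width * height * bit_depth * fps / compression / 8 / 1e6

# Illustrative: 4K60 12-bit raw, uncompressed vs a notional 3:1 compression,
# against SATA-III's practical ~550 MB/s ceiling.
print(round(raw_data_rate_mb_s(4096, 2160, 12, 60), 2))       # 796.26
print(round(raw_data_rate_mb_s(4096, 2160, 12, 60, 3.0), 2))  # 265.42
```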
He also discusses whether NLEs other than Final Cut Pro X will support ProRes RAW and how Atomos’ market aligns with Final Cut.