A good way of seeing what Apple plans to work on is to check out their jobs site. A July 2017 job posting for a pro workflow expert to set up a studio led to Apple giving a journalist a tour of the lab in April 2018.
Here is a round-up of recent Apple Pro Apps-related job posts. They hint at what might appear in Apple’s video applications in 2019.
Many start with this description of the Apple Video Applications group:
The Video Applications group develops leading media creation apps including Memories, Final Cut Pro X, iMovie, Motion, and Clips. The team is looking for a talented software engineer to help design and develop future features for these applications.
This is an exciting opportunity to apply your experience in video application development to innovative media creation products that reach millions of users.
Job number 113527707, posted March 2, 2018:
The ideal candidate will have in-depth experience leveraging both database and client/server technologies. As such, you should be fluent with cloud application development utilizing CloudKit or other PAAS (“Platform as a Service”) platforms.
The main NLE makers have been relatively late to cloud-enable their tools compared to other creative fields. Apple currently allow multiple people to edit the same iWork document at the same time, but sharing multiple gigabytes of video data is much harder than keeping a Pages or Numbers document in sync across the internet. Avid have recently announced Amazon-powered cloud video editing services coming this year. It looks like Apple isn’t shying away from at least exploring cloud-based editing in 2018.
Cloud features aren’t just for macOS video applications: there was an October 2017 posting for a macOS/iOS Engineer – Video Applications (Cloud) – Job number 113167115.
Job number 113524253, posted February 27, 2018:
The ideal candidate will have in-depth experience leveraging video editing, compositing, compression, and broadcasting technologies.
The key phrase here is ‘Live Video’ – this could be Apple making sure their tools will be able to work in IP-enabled post workflows. Broadcasters are now connecting their hardware via Ethernet instead of the older SDI technology. Engineering this sort of thing is about keeping everything in sync – sharing streams of video across 10-Gigabit Ethernet.
I wrote about BBC R&D exploring IP production in June 2017. Recently they’ve been seeing how IP production could use cloud services: “Beyond Streams and Files – Storing Frames in the Cloud”.
Job number 113524253, posted April 12, 2018:
Apple is seeking a Machine Learning (ML) technologist to help set technology strategy for our Video Applications Engineering team. Our team develops Apple’s well-known video applications, including Final Cut Pro, iMovie, Memories part of the Photos app, and the exciting new Clips mobile app.
We utilize both ML and Computer Vision (CV) technologies in our applications, and are doing so at an increasing pace.
We are looking for an experienced ML engineer/scientist who has played a significant role in multiple ML implementations — ideally both in academia and in industry — to solve a variety of problems.
You will advise and consult on multiple projects within our organization, to identify where ML can best be employed, and in areas of media utilization not limited to images and video.
We expect that you will have significant software development and integration knowledge, in order to be both an advisor to, and significant developer on, multiple projects.
This follows on from a vacancy last July for a video applications software engineer ‘with machine learning experience.’
It looks like the Video Applications team are stepping up their investments in machine learning – expecting to use it in multiple projects: maybe different features in the different applications they work on.
One example would be improving the tracking of objects in video. Instead of tracking individual pixels to hide or change a sign on the side of a moving vehicle, machine learning would recognise the changing positions of the vehicle and the sign, and be able to interpret the graphics and text on the sign itself.
macOS High Sierra 10.13 introduced machine learning features in Autumn 2017. Usually Pro Apps users would need to wait at least a year for features available in the newest version of macOS – because editors didn’t want to update their systems until the OS felt reliable enough for post production. Interestingly, with the Final Cut Pro 10.4.1 update, the Video Applications team have forced the issue – the current version of Final Cut (plus Motion) won’t run on macOS Sierra 10.12. At least that means new Final Cut features can start relying on macOS features introduced last year. I wrote about Apple WWDC sessions on media in June 2017.
Job number 113524287, posted February 23, 2018:
Your responsibilities will include the development and improvement of innovative and intuitive 3D and VR user interface elements. You will collaborate closely with human interface designers, and other engineers on designing and implementing the best possible user experience. The preferred candidate should have an interest and relevant experience in developing VR user interfaces.
- Experience with OpenGL, OpenGL ES or Metal
- Experience developing AR/VR software (SteamVR / OpenVR)
- macOS and/or iOS development experience
Notice that this is not a user interface engineer who will create UI for a 3D application. Apple plan to at least investigate developing 3D user interfaces that work in VR. Although this engineer is being sought by the video applications team, who knows where else in Apple 3D interface design for VR might be used.
See also VR Jobs at Apple – July 2017.
The specification page for Final Cut Pro, Motion and Compressor states that the minimum requirements have changed from macOS Sierra 10.12.6 to macOS High Sierra 10.13.2 or later. In order to get today’s free updates for Final Cut Pro, Motion and Compressor, your Mac must be running 10.13.2 or newer. You won’t see these updates in the Mac App Store if you are using an older version of the OS.
It is rare for Final Cut Pro to need such a recent version of macOS. Since 2011, the Pro Apps team have never required an OS version that was less than 16 months old.
This means that Final Cut will have access to parts of macOS introduced at last year’s Apple Worldwide Developers Conference – the most likely feature being eGPU compatibility, as introduced in the most recent update to High Sierra. Although parts of Final Cut Pro 10.4 and earlier could be sped up by attaching an eGPU, some core parts couldn’t.
If you haven’t updated Final Cut Pro on your computer before, there is a support page from Apple that gives useful tips.
ProRes RAW bit depth depends upon what the camera sends out. So for Varicam it would be 14-bit, for Sony FS it would be 12-bit.
Mitch Gross, Panasonic Cinema product manager:
Both the EVA1 and VariCam LT RAW outputs will be supported by the Atomos recorders for ProRes RAW capture. 4K60p/2K240p at launch on Monday, EVA1 5.7K30p in May.
I’m certain “RAW” will now permanently change to mean a Bayer pattern
The point is I don’t see ProRes RAW helping with any of this, and I find almost all clients are editing in Premiere or Avid […] ProRes RAW is unlikely to work on a 2012 Mac Pro.
I do expect ProRes Raw to enable some productions to move from ProRes/Rec709 to a raw workflow and HDR. […] It will matter to a lot of my productions though. R3D nearly breaks a lot of their post workflows. ProRes is easy, but a little too constraining. It will shift the industry, especially the low end and mid range, in ways we should all be excited about.
While I agree that ProRes RAW is a pretty terrific opportunity to “bring RAW to the masses”, let’s all make sure not to get too carried away. ProRes RAW may be (Apple) processor friendly, but don’t forget that the files are still something like three to four times the size of something like AVC-Ultra or All-I codecs. And they’re approaching 10 times the size of a high-quality LongGOP.
I think we need to think a bit differently to how we do now. We tend to assume raw must be graded, must have a load of post work done, when really that’s not necessarily the case.
We should not be working in 709 any more; the tail ends of the gamma curve just compress usable highlight and shadow detail. It’s a delivery gamma, not a workflow one.
Also some of us need all the full range linear in post.
So if Apple had slammed down a ProRes Linear intermediate codec, with VBR and maybe a couple of quality settings, and found a way to read that data in ‘simple’ mode for decimating the output for speed, then I for one would be all over that. Basically EXR and ACES for the masses, with Piz or DW compression.
I just don’t get what ProRes RAW will bring professionals
ProRes RAW will work because it is Apple. With a single step it is supported in lord-knows-how-many-thousands of systems and a host of cameras. These cameras were like ships without a home port, wandering the seas with no effective and manageable RAW workflow. Uncompressed CinemaDNGs? The data load is ridiculous and the workflow a bit mercurial from one camera to another and one post system to another.
ProRes RAW makes it easy. It levels the playing field. All those cameras go into it and will work just fine in FCPX. Finding and applying the correct LUT is easy. Everything just works. That’s the beauty of it.
There are many great points being made, so if you want a deep dive, follow along with the evolving discussion at the Cinematography Mailing List.
REDSHARK have captured an exclusive video with Jeromy Young, the CEO of Atomos, to get their take on ProRes RAW. Thanks to Charles Wren for the link.
Here are a few excerpts from the 17 minute interview:
He said that Atomos supporting ProRes RAW is the culmination of years of work. Atomos aim to supply 80% of the market – ARRI have the top-end cinema workflow sorted. They see it as their task to take the best of that workflow and make it available to everyone else.
Although Atomos could capture 4K60 from the RAW output of the Sony FS5…
…we couldn’t do it justice when we went to ProRes. It was the right solution for 10 years ago.
We approached Apple and asked would you guys be interested in giving us a standard to go to.
CinemaDNG is about individual frames…
With ProRes RAW we’re dealing with a whole video package that has metadata in it that the application can read that you can apply and transform each pixel into video to see in whatever way you want.
Jeromy believes that individual camera makers will produce plugins that run in NLEs that will make the most of the ProRes RAW that was recorded by their cameras.
The ProRes RAW software for the Shogun Inferno and the Sumo 19 will be a free update. Because ProRes RAW file sizes are much smaller than CinemaDNG, Atomos devices can remain with SATA storage even when recording 4K60.
He also discusses whether NLEs other than Final Cut Pro X will support ProRes RAW and how Atomos’ market aligns with Final Cut.
Apple have announced the next version of Final Cut Pro X will have two features for high-end workflows. The free update will include ProRes RAW for better footage acquisition and flexible closed captioning for media distribution. It will be available from Monday April 9th from the Mac App Store.
Updated with more information on exporting using roles and Compressor 4.4.1
ProRes RAW provides the real-time performance and storage convenience of ProRes 422 HQ and ProRes 4444 with the postproduction flexibility of camera RAW. The new proposition from Apple is effectively: “Add any camera you have into a RED-like RAW workflow with an Atomos recorder and Apple professional video applications.” This can be done now because Macs are fast enough to work with multiple layers of camera source media in real time – instead of extracting the information from the source when mastering in a grading application.
Whereas the current family of ProRes codecs is designed for all stages of video production, Apple ProRes RAW and Apple ProRes RAW HQ are designed for acquisition. When ProRes RAW is used in Final Cut Pro 10.4.1, the output for distribution is ProRes 422 HQ or ProRes 4444 (although ProRes RAW would be a good codec for archiving ‘original camera negative’).
A camera sensor is a grid of photosites that can each only record a single red, green or blue brightness value. Footage for postproduction is made of a series of images where each pixel in the grid is made up of brightness values for red, green and blue. At some point in the workflow, the RGB values for each pixel need to be interpolated from the brightness values of adjacent red, green and blue photosites.
In this case, the RGB value of a single pixel in the video frame is based on the red brightness recorded at its location, plus green and blue values interpolated from the brightness values recorded at adjacent photosites.
ProRes RAW encodes the information captured by individual camera photosites without extrapolating RGB information for every position in the sensor array. At the point of being used in a timeline, Final Cut Pro creates a grid of RGB values by interpolating the brightness values recorded at individual photosites.
The ProRes RAW advantage is that there is more processing power in a Mac running Final Cut Pro than there is in a camera recording images on location. More processing power means the algorithm that is doing the interpolation can be more advanced. It can also be modified if needed. Cameras must bake in their pixel interpolation into the footage they record.
In practical terms, ProRes RAW gives REDCODE RAW quality at ProRes data rates. Where a Mac running Final Cut Pro 10.4.1 can play 1 stream of REDCODE RAW 5:1 or 3 streams of Canon Cinema RAW Light, it will be able to play back 7 streams of Apple ProRes RAW HQ or 8 streams of Apple ProRes RAW. Final Cut Pro is also able to render and export ProRes RAW HQ 5-6 times faster than REDCODE RAW 3:1.
In practice you would use ProRes RAW where you used to use ProRes 422 HQ and ProRes RAW HQ where you used ProRes 4444. Because of how each RAW frame can vary, the data rates vary much more with ProRes RAW than they do with standard ProRes.
For more information on storage requirements and data rates for ProRes RAW, read the new Apple White Paper.
Initially there will be two ways to record Apple ProRes RAW: using the Sumo 19 or Shogun Inferno on-camera recorders from Atomos, or the 5K Super 35 Zenmuse X7 camera mounted on a DJI Inspire 2 drone.
Atomos’ ProRes RAW page.
It is interesting that this new ProRes family initially only works with Apple video applications: Final Cut Pro 10.4.1, Motion 5.4.1 and Compressor 4.4.1. Could this be the start of Apple favouring their own post applications over other macOS tools?
The other big new feature of Final Cut Pro 10.4.1 and Compressor 4.4.1 is the ability to import, create, edit and export closed caption text. Closed captions are the text that optionally appears at playback – be it in the Netflix application running on a set-top box, on broadcast TV, at special subtitled screenings in cinemas or in the YouTube iOS app.
Of course captioning should be done once picture and sound have been locked, but Apple have implemented this feature so that it copes well with the continuous changes made towards the end of postproduction.
The flexibility of Final Cut Pro X video roles means that captions in multiple formats and in multiple languages can be edited and exported from the same timeline.
Individual captions can be associated with video or audio clips in the primary storyline. This means that when these clips are edited and re-ordered, the captions move with their associated clip.
The big news is that captions can also be connected to audio and video clips. That means an individual caption can be connected to the specific piece of audio it is transcribing. So although you should finish captioning once there is a picture and sound lock, you can start the captioning process earlier. Timeline changes made to clips in the primary storyline and to connected clips will be reflected in their associated captions.
Final Cut Pro 10.4.1 works with closed captions in one of two formats: CEA-608 and ITT.
CEA-608 is the long-standing closed caption format used in US broadcast TV and on DVDs worldwide. ‘iTunes Timed Text’ captions are used in iTunes video bundles for movies and TV shows that can be bought or streamed from Apple. They are also used by Amazon Prime Video and YouTube.
Captions can be imported as files generated by external services or applications (using the File > Import > Captions command). .scc and .itt formats are recognised for now.
Captions can be extracted from video files with encoded captions. Add the clip to the timeline and use the Edit > Captions > Extract Captions command.
Captions in compound clips or in multicam angles can be extracted and added to their parent timeline (Edit > Captions > Extract Captions).
Add a caption to the active language subrole at the playhead location using the Add Caption command (Option-C, or Control-Option-C if the caption editor is open – this means you can add a caption while editing another caption).
An individual caption is shown in a language subrole lane of the caption lanes of the timeline. You choose which captions are visible in the viewer by activating the caption video subrole in the timeline index.
To open a selected caption in the caption editor, double-click it or choose the Edit Caption command (Shift-Control-C).
Captions can be edited in a floating caption window (to use timeline navigation shortcuts such as J, K, L, I and O without entering them into the caption editor, also hold down the Control key – Control-J, Control-K etc.):
Captions are automatically checked, and errors are flagged in the timeline index (you can choose to show only errors) or in the timeline. In this example, captions overlap, which most caption formats do not allow:
This problem can be fixed with the Edit > Captions > Resolve Overlaps command.
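Apple hasn’t documented how Resolve Overlaps works internally, but the idea is easy to picture: when one caption’s time range runs into the next, trim the earlier caption so the two no longer overlap. A hypothetical sketch in Python (the tuple layout is my own, not Final Cut’s data model):

```python
def resolve_overlaps(captions):
    """Trim overlapping caption time ranges.

    `captions` is a list of (start, end, text) tuples sorted by start
    time. Each caption's end is clamped to the next caption's start, so
    no two captions are ever on screen at once - a guess at what a
    'Resolve Overlaps' command might do, not Apple's implementation.
    """
    resolved = []
    for i, (start, end, text) in enumerate(captions):
        if i + 1 < len(captions):
            next_start = captions[i + 1][0]
            end = min(end, next_start)  # trim away any overlap
        resolved.append((start, end, text))
    return resolved

# The second caption starts at 4.0s while the first runs to 5.0s;
# the first caption is trimmed to end at 4.0s.
print(resolve_overlaps([(0.0, 5.0, "Hello."), (4.0, 8.0, "Goodbye.")]))
```

Formats such as CEA-608 transmit captions as a single serial stream, which is why overlapping ranges are invalid in the first place.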
For more on fixing problems with captions that would mean they would not be valid when played back, there is an Apple support document on Final Cut Pro X Caption Validation.
Once you have timed the captions for one language, to start work on another language, you can duplicate them as a new language. Select the captions you want to work with, then choose the Edit > Captions > Duplicate Captions to New Language command.
Each caption format has various formatting options. If you are happy with the style of a caption, you can use the Caption Inspector to Save Style as Default and move to another caption to Apply Default Style.
CEA-608 captions can have more than one field on screen at once. You can use the Inspector to add and format up to three extra fields per caption:
If you have a long caption, you can split it into individual captions using the Edit > Captions > Split Captions command (Control-Option-Command-C).
Conversely, you can combine consecutive captions into one longer caption using the Edit > Captions > Join Captions command.
By default, captions are connected to the primary storyline. To connect a caption to a connected clip that overlaps the caption in the timeline:
Captions are not supported when sharing to Facebook. If you have captions in your project, they will not appear when you share the project to Facebook.
If you want to export just the captions from a timeline, use the File > Export Captions command.
To make preparing productions for distribution or collaboration easier, Final Cut Pro 10.4.1 has a new Roles tab in the Share dialog box:
To make preparing to export easier, Final Cut will respect which roles and subroles are on or off in the timeline when sharing.
In the Roles tab you can
When you share a Master File as Separate Files, in the Roles tab you can
It looks like you can’t yet add a file containing both video and audio and then choose which video and audio roles it should include. These separate files are either video or audio.
The next version of Compressor has gained some features too:
Those who need to add captions to finished videos can use Apple’s video distribution preparation application instead of a full NLE.
Built-in settings and destinations support captions: “Apple Devices (in both the H.264 and HEVC codecs), ProRes, Publish to YouTube, Create DVD, and other settings that use the QuickTime Movie, MPEG-2, and MPEG-4 formats.” Note that captions are not supported when sharing to Facebook.
Standard Compressor jobs can only import a single .scc (CEA-608) or .itt (iTunes Timed Text) file. If an imported video file already contains embedded CEA-608 closed captions, Compressor adds the caption data to the job.
You can edit each caption’s text, appearance, position, animation style and timing. You can also add new captions at the time of your choice.
If you have multiple captions selected in the captions palette, you can adjust their start or end times by frames, seconds or minutes at the same time.
YouTube and Vimeo support CEA-608 captions that Compressor encodes into videos. If you use iTT subtitles, Compressor will generate a separate .itt file and will automatically upload it if you use the YouTube or Vimeo presets.
Compressor has long been able to add metadata from QuickTime movie files to jobs. Version 4.4.1 can also add metadata stored in XML property list files in the following metadata categories:
Using a standard set of property lists when exporting batches means that other tools that can read this metadata can make decisions based on property values (such as specific keywords).
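Property lists are a standard Apple XML format, so downstream tools can read this job metadata with ordinary plist parsers. A small Python sketch using the standard library’s `plistlib` – the keys and values below are illustrative examples, not Apple’s metadata schema:

```python
import plistlib

# A hypothetical metadata property list of the kind Compressor 4.4.1
# can attach to a job. The keys here are made up for illustration.
plist_bytes = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>keywords</key>
    <array>
        <string>interview</string>
        <string>episode-12</string>
    </array>
    <key>description</key>
    <string>Location interview, graded, captions embedded</string>
</dict>
</plist>"""

metadata = plistlib.loads(plist_bytes)

# A downstream tool could branch on specific keyword values,
# as the article suggests.
if "interview" in metadata.get("keywords", []):
    print("route to interview archive")
```

Because every export in a batch can carry the same property list keys, a watch-folder script like this could sort or route files consistently across a whole delivery.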
Although Apple don’t often let themselves be guided by external trade events, this is a rare update that seems to be prompted by NAB happening in Las Vegas next weekend. I’m not sure how many naysayers will be swayed by the inclusion of closed captions. ProRes RAW however shows that Apple is serious about trying to attract more high-end workflows to the Mac, and Final Cut Pro X specifically: “Don’t worry about good cameras with bad codecs, we have the acquisition format you need for HDR workflows. Available now in Apple pro video applications only.”
Since May 2017, Apple has been running its ‘Everyone Can Code’ educational programme. It provides video-based and interactive book-based coursework for teachers and trainers to help people learn how to make applications in Apple’s Swift programming language. Schools and universities operate Apple-supported courses in app development.
Although it is great that more people can learn software development this way, I think that knowing how to tell stories is a skill that a wider range of people need in their day-to-day lives.
People need to tell stories more often than they need to solve problems with app development.
At an education event in Chicago today, Apple announced that a new programme is coming: ‘Everyone Can Create.’ It does for music, film, photography and drawing what Everyone Can Code did for programming. The difference is that they are showing how using tools to create music, videos and pictures can be useful to learn a variety of subjects.
The moviemaking examples for students use Clips for iOS running on an iPad:
Moviemakers don’t just shoot video clips, they put them together in a way that tells a story, documents an event, persuades, or even instructs. While photographers capture a single moment or emotion in a photo, moviemakers combine multiple images, both videos and photos, to tell a complete story.
In this activity, you’ll learn some basic techniques using the Clips app to build a visual story and start thinking like a moviemaker.
The preview of the lesson guide for teachers includes how to prepare to make an interview video:
Students choose an interview topic, compose an interview script, then record an interview with a peer, family member, or other guest expert. Have students follow these guiding steps:
- Identify your interview topic and build a short list of things you know and don’t know about it.
- Find a friend, family member, or community member who has experience with the topic and is willing to be interviewed.
- Compose a script that includes a brief introduction and at least three insightful questions you’ll ask during the interview.
- Choose a quiet and well-lit location to record your interview.
- Record an introduction to yourself, your interviewee, and the main topic.
- Switch to the rear camera to record your interviewee’s responses. Trim clips to keep the interview concise.
- Add posters to introduce or highlight big ideas. Text on posters is most effective when it’s short and sweet.
- Arrange clips so the finished video resembles a conversation between you and your interviewee.
- Share your video with friends, family, and community members.
I’m glad Apple is spending more time supporting video literacy. Those who learn to educate themselves by telling stories through film will soon learn how to tell other stories through film – both to entertain and to change their worlds.