Latest Posts

Apple WWDC 2018 hardware hope: A ‘NPU’ family – Friday, June 1 2018

In recent years Apple have used the keynote presentation of their annual Worldwide Developers Conference as a showcase for new hardware launches. This year, Intel’s delays in producing significant CPU updates make it less likely that we will see new MacBooks, Macs or Mac Pros.

As this event is for those making software and hardware for iOS, tvOS, watchOS and macOS devices, I hope Apple announces a new hardware plan.

VPU?

Yesterday Arm announced three new chip families for smaller devices: a new CPU, a new GPU and a new VPU (video processor).

AnandTech reports that Arm’s new VPU includes hardware support for VP9 10-bit, H.264 10-bit and HEVC 10-bit – with the ability to play 8K 60fps video.

In concert with their display processor, the new video processor is currently able to handle HDR10 and HLG formatted HDR video. Meanwhile, support for HDR10+ – which is HDR10 with dynamic metadata – is set to arrive in the future.

This shows what Apple could be doing with the A-series chips they use in iOS devices (and future VR/AR devices).

NPU?

The rate at which Apple have improved their A-series CPUs is the envy of phone makers. In September 2017, Wired reported that the new A11 processor in the iPhone 8 and X includes what Apple calls its ‘Neural Engine’:

The engine has circuits tuned to accelerate certain kinds of artificial-intelligence software, called artificial neural networks, that are good at processing images and speech.

Apple said the neural engine would power the algorithms that recognize your face to unlock the phone and transfer your facial expressions onto animated emoji. It also said the new silicon could enable unspecified “other features.”

What if Apple created a very small single-core ‘neural processing unit’ – the N1 NPU?

If N1s were included in all Apple product updates in the coming months, it would make it simpler for developers – including developers within Apple – to add modern features to their products and services.

Using the cell concept – multiple small processing units working together – the number of N1s used in an Apple device would depend on its power budget, memory and profit margin:

  • Apple remote: 1
  • Apple Pencil: 1
  • Apple Watch: 1-2
  • Apple TV: 3-4
  • iPhone: 3-4
  • Apple VR/AR device: 3-4
  • iPhone Plus: 4-6
  • Apple TV 4K: 4-6
  • iPad: 4-6
  • iPhone X: 6-8
  • MacBook: 6-8
  • iPad Pro: 6-8
  • Mac mini: 6-16
  • iMac: 12-32
  • MacBook Pro: 16-24
  • Mac Pro: 32-128

New connector for neural processing farms?

Once we have devices with the processing power to deal with the increased demands of modern uses, Apple could facilitate connecting them together for combined power.

Although Apple would prefer its devices to have no physical hardware connections, for those that would gain from sharing processing power with other devices, it might be worth it. For example, although the new Mac Pro might connect with some devices using multiple Thunderbolt 3 buses, it would be even better if it had a connector that could interface with other Mac Pros – or a stack of new Mac minis that combine like Lego.

Tune into Apple’s livestream from WWDC18 on Monday to see what is coming.

Final Cut Pro: Apple’s macOS and Mac demo application? – Wednesday, May 30 2018

From 1999 to 2009, Final Cut versions 1 to 7 were used by Apple to show off what their latest technologies could do – especially the QuickTime API. Next week is WWDC 2018 – Apple’s annual developer conference.

For the first time in many years Final Cut Pro X requires the current version of macOS: High Sierra 10.13. I hope this means the Video Applications team are able to show developers what can be done with macOS frameworks. The more that applications use macOS-only features, the more Macs Apple will sell.

Not much of the WWDC 2018 schedule has been revealed yet, but one session shows what Apple could do:

Object Tracking in Vision

Vision is a high-level framework that provides an easy to use API for handling many computer vision tasks. We’ll dive deep into a particularly powerful feature of Vision—tracking objects in video streams. Learn best practices for using Vision in your app. Gain a greater understanding of how request handlers function in terms of lifecycle, performance, and memory utilization.

This kind of tracking is much more useful than tracking pixels or planes. Tools that understand (using machine learning-built models) what the objects in a video clip are can be much more powerful. Instead of tracking a number of regions in a clip and working out how they are moving in 3D space relative to each other, object tracking understands how to recognise and track specific objects such as faces, people, vehicles, signs and buildings. Once tracked, the appearance of these things can be modified by the application. This will work when the objects seem to move and turn in the shot, and even when they are obscured by other objects.
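Here’s a minimal sketch, in Swift, of what Vision’s object tracking API looks like today. The initial bounding box value and the per-frame `track` function are placeholder scaffolding of mine – in a real app the box would come from a detection step or a user selection:

```swift
import Vision

// Seed the tracker with a normalised bounding box around the object in
// the first frame (coordinates are 0–1, origin at the bottom-left).
let initialBoundingBox = CGRect(x: 0.3, y: 0.3, width: 0.2, height: 0.2)
var lastObservation = VNDetectedObjectObservation(boundingBox: initialBoundingBox)

// One sequence handler is reused across frames so Vision can keep state.
let sequenceHandler = VNSequenceRequestHandler()

func track(_ pixelBuffer: CVPixelBuffer) {
    let request = VNTrackObjectRequest(detectedObjectObservation: lastObservation)
    request.trackingLevel = .accurate
    do {
        try sequenceHandler.perform([request], on: pixelBuffer)
        if let result = request.results?.first as? VNDetectedObjectObservation {
            lastObservation = result // feed this into the next frame's request
            print("Object is now at \(result.boundingBox)")
        }
    } catch {
        print("Tracking failed: \(error)")
    }
}
```

Calling `track(_:)` on each decoded frame keeps the bounding box locked to the object as it moves through the shot.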

Next week I’ll tune in to see forthcoming features of macOS (and iOS) that are relevant to Apple’s video applications.

In a few days the rest of the agenda will be announced. Next week people all over the world will be able to watch live streams of the sessions as they happen. A few hours after each session, videos will be available to watch on the Apple website.


Apple Compressor: Make the most of unused cores and nearby Macs – Friday, May 18 2018

If you have lots of video to transcode and tight deadlines, sometimes even Apple Compressor isn’t fast enough for the job. If you have a Mac with multiple cores and lots of RAM, or a network of Macs going spare, you can use this power to speed up video conversions and transcodes.

Using more cores on your Mac

If you have many CPU cores and enough RAM, you can have multiple copies (‘instances’) of Compressor running on your iMac or Mac Pro at the same time. Each copy works on different frames of the source video.

The number of Compressor instances you can set up on a Mac depends on the number of cores and the amount of RAM installed. You need to have at least 8 cores and 4GB of RAM to have at least one additional instance of Compressor run on your Mac.

Maximum number of additional instances of Compressor that can run on a Mac:

RAM (GB):     2    4    6    8    12   16   32   64
4 cores:      0    0    0    0    0    0    0    0
8 cores:      0    1    1    1    1    1    1    1
12 cores:     0    1    2    2    2    2    2    2
16 cores:     0    1    2    3    3    3    3    3
24 cores:     0    1    2    3    5    5    5    5
In other words, your Mac needs at least 8 cores and 4GB of RAM before a second instance of Compressor can run at the same time. MacBook Pros (as of Spring 2018) have a maximum of 4 cores – described as ‘quad-core’ CPUs – so they cannot run additional instances.
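Reading across the table, the limits appear to follow a simple pattern – roughly one additional instance per 4 cores and per 2GB of RAM beyond a base allocation. That’s my inference from the numbers above, not a rule Apple documents:

```swift
// A sketch of the apparent pattern in Apple's table (my inference):
// each additional Compressor instance needs about 4 cores and 2GB of
// RAM beyond what the first instance and the system use.
func maxAdditionalInstances(cores: Int, ramGB: Int) -> Int {
    let limitByCores = cores / 4 - 1
    let limitByRAM = ramGB / 2 - 1
    return max(0, min(limitByCores, limitByRAM))
}

maxAdditionalInstances(cores: 24, ramGB: 12) // 5 – matches the table
maxAdditionalInstances(cores: 4, ramGB: 64)  // 0 – quad-core Macs get none
```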

From Apple’s support document: Compressor: Create additional instances of Compressor:

To enable instances of Compressor

  1. Choose Compressor > Preferences (or press Command-Comma).
  2. Click Advanced.
  3. Select the “Enable additional Compressor instances” checkbox, then choose a number of instances from the pop-up menu.

Important: If you don’t have enough cores or memory, the “Enable additional Compressor instances” checkbox in the Advanced preferences pane is dimmed.

Using additional Macs on your network

Once you install Compressor on your other Macs, you can use those Macs to help with video transcoding tasks.

To create a group of computers to transcode your videos:

  1. Set the preferences in Compressor on each Mac in the network to “Allow other computers to process batches on my computer” (in the ‘My Computer’ tab of the preferences dialog).
  2. On the Mac you want to use to control the video transcoding, use the ‘Shared Computers’ section of Compressor preferences to make a group of shared computers.
  3. Add a new untitled group (using the ‘+’ button).
  4. Name it by double-clicking it, replacing ‘Untitled’ and pressing Return.
  5. In the list of available computers (on the right), select the checkbox next to each computer that you want to add to the group.

Once this group is set up, use Compressor to set up a transcode as normal. Before clicking the Start button, click the “Process on” pop-up menu and choose the group of computers that you want to use to process your batch.

There are more details in Apple’s support document: Compressor: Transcode batches with multiple computers.

Audio for 360° video and VR experiences – Apple job postings – Monday, May 14 2018

More from Apple’s job site. This time there are signs that they are looking to develop features for their applications, OSes and hardware to support spatial audio. Spatial audio allows creators to define soundscapes in terms of the position of sound sources relative to listeners. This means that if I hear someone start talking to my left and I turn towards them, the sound should then seem to come from what I’m looking at – from the front. Useful for 360° spherical video, fully-interactive VR experiences, plus future OS user interfaces.
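Apple’s platforms already expose the basics of this through AVAudioEngine’s environment node. Here is a minimal sketch that places a mono voice two metres to the listener’s left and renders it binaurally for headphones – the ‘voice.caf’ asset name is a placeholder of mine:

```swift
import AVFoundation

let engine = AVAudioEngine()
let environment = AVAudioEnvironmentNode()
let player = AVAudioPlayerNode()

engine.attach(environment)
engine.attach(player)
engine.connect(environment, to: engine.mainMixerNode, format: nil)

// Sources must be mono to be spatialised by the environment node.
let monoFormat = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 1)
engine.connect(player, to: environment, format: monoFormat)

// Binaural rendering for headphones; put the voice 2m to the left.
player.renderingAlgorithm = .HRTF
player.position = AVAudio3DPoint(x: -2, y: 0, z: 0)
environment.listenerPosition = AVAudio3DPoint(x: 0, y: 0, z: 0)

// Turning the listener 90° to the left brings the voice to the front.
environment.listenerAngularOrientation = AVAudio3DAngularOrientation(yaw: 90, pitch: 0, roll: 0)

do {
    let url = Bundle.main.url(forResource: "voice", withExtension: "caf")! // placeholder asset
    let file = try AVAudioFile(forReading: url)
    try engine.start()
    player.scheduleFile(file, at: nil)
    player.play()
} catch {
    print("Audio setup failed: \(error)")
}
```

Updating the listener orientation from head-tracking data is what turns this from positional playback into the kind of experience the postings below describe.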

At the moment there are four relevant vacancies:

Apple Hardware Engineering is looking for an Audio Experience & Prototyping Engineer:

Apple’s Technology Development Group is looking for an Audio Experience and Prototyping Engineer to help prototype and define new audio UI/UX paradigms. This engineer will work closely with the acoustic design, audio software, product design, experience prototyping, and other teams to guide the future of Apple’s audio technology and experience. The ideal candidate will have a background in spatial audio experience design (binaural headphone rendering, HOA, VBAP), along with writing audio supporting software and plugins.

Experience in the following strongly preferred:

  • Sound design for games or art installations
  • Writing apps using AVAudioEngine
  • Swift / Objective-C / C++
  • Running DAW software such as Logic, ProTools, REAPER, etc.

Closer to post production, Apple’s Interactive Media Group Core Audio team is looking for a Spatial Audio Software Engineer to work in Silicon Valley:

IMG’s Core Audio team provides audio foundation for various high profile features like Siri, phone calls, Face Time, media capture, playback, and API’s for third party developers to enrich our platforms. The team is looking for talented engineers who are passionate about building audio software products for millions of customers and care about overall user experience. You will be pushing the boundaries of spatial audio experience for future technologies.

  • Key Advantage : Experience with audio engines that are part of Digital Audio Workstations or Game audio systems
  • Advantage : Experience with Spatial audio formats (Atmos, HOA etc) is desirable.

I gather that the Logic Pro digital audio workstation team are based in Germany. Apple are also looking for a Spatial Audio Software Engineer to work in Berlin.

For iOS and macOS, Apple are also looking for a Core Audio Software Engineer in Zurich:

The team is looking for talented engineers who are passionate about building audio software products for millions of customers and care about overall user experience. You will be pushing the boundaries of spatial audio experience for future technologies.

If you think this kind of activity is too little too late, there was at least one vacancy for a Spatial Audio Software Engineer back in July 2017.

Although Apple explore many technical directions for products that never see the light of day, I expect that spatial audio has a good future at Apple.

Apple Video job postings 2018… Cloud, IP production, 3D/VR in 2019? – Saturday, April 28 2018

A good way of seeing what Apple plans to work on is to check their jobs site. A July 2017 job posting for a pro workflow expert to set up a studio ended up with Apple giving a journalist a tour of the lab in April 2018.

Here is a round-up of recent Apple Pro Apps-related job postings. They hint at what might appear in Apple’s video applications in 2019.

Many start with this description of the Apple Video Applications group:

The Video Applications group develops leading media creation apps including Memories, Final Cut Pro X, iMovie, Motion, and Clips. The team is looking for a talented software engineer to help design and develop future features for these applications.

This is an exciting opportunity to apply your experience in video application development to innovative media creation products that reach millions of users.

Senior Engineer, Cloud Applications

Job number 113527707, posted March 2, 2018:

The ideal candidate will have in-depth experience leveraging both database and client/server technologies. As such, you should be fluent with cloud application development utilizing CloudKit or other PAAS (“Platform as a Service”) platforms.

The main NLE makers have come to cloud-enabling their tools relatively late compared to other creative fields. Apple currently allow multiple people to edit the same iWork document at the same time, but sharing multiple gigabytes of video data is much harder than keeping a Pages or Numbers document in sync across the internet. Avid have recently announced Amazon-powered cloud video editing services coming this year. It looks like Apple isn’t shying away from at least exploring cloud-based editing in 2018.
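For what it’s worth, the CloudKit side of this is the straightforward part. A minimal sketch of saving a record to a user’s private database – the ‘EditDecision’ record type and its fields are hypothetical names of mine, just to illustrate the kind of plumbing the posting refers to:

```swift
import CloudKit

let database = CKContainer.default().privateCloudDatabase

// A hypothetical record describing one edit decision in a shared project.
let record = CKRecord(recordType: "EditDecision")
record["clipName"] = "Interview_A" as CKRecordValue
record["inPoint"] = 120 as CKRecordValue
record["outPoint"] = 360 as CKRecordValue

database.save(record) { savedRecord, error in
    if let error = error {
        print("Sync failed: \(error)")
    } else {
        print("Synced \(savedRecord?.recordID.recordName ?? "record")")
    }
}
```

The hard part isn’t syncing metadata like this – it’s moving the multiple gigabytes of media that the metadata refers to.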

Cloud features aren’t just for macOS video applications: there was an October 2017 posting for a MacOS/iOS Engineer – Video Applications (Cloud) – Job number 113167115.

Senior Software Engineer, Live Video

Job number 113524253, posted February 27, 2018:

The ideal candidate will have in-depth experience leveraging video editing, compositing, compression, and broadcasting technologies.

The key phrase here is ‘Live Video’ – this could be Apple making sure their tools will be able to work in IP-enabled post workflows. Broadcasters are now connecting their hardware via Ethernet instead of the older SDI technology. Engineering this sort of thing is about keeping everything in sync while sharing streams of video across 10-Gigabit Ethernet.

I wrote about BBC R&D exploring IP production in June 2017. Recently they’ve been seeing how IP production could use cloud services: “Beyond Streams and Files – Storing Frames in the Cloud”.

Sr. Machine Learning Engineer – Video Apps

Job number 113524253, posted April 12, 2018:

Apple is seeking a Machine Learning (ML) technologist to help set technology strategy for our Video Applications Engineering team. Our team develops Apple’s well-known video applications, including Final Cut Pro, iMovie, Memories part of the Photos app, and the exciting new Clips mobile app.

We utilize both ML and Computer Vision (CV) technologies in our applications, and are doing so at an increasing pace.

We are looking for an experienced ML engineer/scientist who has played a significant role in multiple ML implementations — ideally both in academia and in industry — to solve a variety of problems.

You will advise and consult on multiple projects within our organization, to identify where ML can best be employed, and in areas of media utilization not limited to images and video.

We expect that you will have significant software development and integration knowledge, in order to be both an advisor to, and significant developer on, multiple projects.

This follows on from a vacancy last July for a video applications software engineer ‘with machine learning experience.’

It looks like the Video Applications team are stepping up their investments in machine learning – expecting to use it in multiple projects: maybe different features in the different applications they work on.

One example would be improving tracking of objects in video. Instead of tracking individual pixels to hide or change a sign on the side of a moving vehicle, machine learning would recognise the changing positions of the vehicle and the sign, and be able to interpret the graphics and text on the sign itself.
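As a sketch of how that might look with today’s frameworks, Vision can run a Core ML model over each frame. The `VehicleSignClassifier` model class below is hypothetical – any image-classification .mlmodel added to an Xcode project generates a class like it:

```swift
import Vision
import CoreML

// Build a Vision request around a (hypothetical) Core ML classifier.
func makeClassificationRequest() throws -> VNCoreMLRequest {
    let model = try VNCoreMLModel(for: VehicleSignClassifier().model)
    return VNCoreMLRequest(model: model) { request, _ in
        guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
        print("Recognised \(top.identifier) (confidence \(top.confidence))")
    }
}

// Run the request against each decoded video frame.
func classify(_ pixelBuffer: CVPixelBuffer, with request: VNCoreMLRequest) {
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```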

macOS High Sierra 10.13 introduced machine learning features in Autumn 2017. Usually Pro Apps users would need to wait at least a year for features available in the newest version of macOS, because editors didn’t want to update their systems until the OS felt reliable enough for post production. Interestingly, with the Final Cut Pro 10.4.1 update, the Video Applications team have forced the issue – the current version of Final Cut (plus Motion) won’t run on macOS Sierra 10.12. At least that means new Final Cut features can start relying on new macOS features introduced last year. I wrote about Apple WWDC sessions on media in June 2017.

Senior UI Engineer, Video Applications (3D/VR)

Job number 113524287, posted February 23, 2018:

Your responsibilities will include the development and improvement of innovative and intuitive 3D and VR user interface elements. You will collaborate closely with human interface designers, and other engineers on designing and implementing the best possible user experience. The preferred candidate should have an interest and relevant experience in developing VR user interfaces.

Additional Requirements

  • Experience with OpenGL, OpenGL ES or Metal
  • Experience developing AR/VR software (SteamVR / OpenVR)
  • macOS and/or iOS development experience

Notice that this is not a user interface engineer who will create UI for a 3D application. Apple plan to at least investigate developing 3D user interfaces that will work in VR. Although this engineer is being sought by the Video Applications team, who knows where else in Apple 3D interfaces designed for VR might be used.

See also VR Jobs at Apple – July 2017.

Today’s Final Cut Pro 10.4.1, Motion 5.4.1 and Compressor 4.4.1 updates require macOS High Sierra 10.13.2 – Monday, April 9 2018

Last Thursday Apple announced that Final Cut Pro 10.4.1 would be available today.

The specification page for Final Cut Pro, Motion and Compressor states that the minimum requirement has changed from macOS Sierra 10.12.6 to macOS High Sierra 10.13.2 or later. To get today’s free updates for Final Cut Pro, Motion and Compressor, your Mac must be running 10.13.2 or newer. You won’t see these updates in the Mac App Store if you are using an older version of the OS.

It is rare for Final Cut Pro to require such a new version of macOS. Since 2011, the ProApps team have only required OS versions that were at least 16 months old.

This means that Final Cut will have access to parts of macOS introduced at last year’s Apple Worldwide Developers Conference – the most likely feature to be added is eGPU compatibility, introduced in the most recent update to High Sierra. Although parts of Final Cut Pro 10.4 and earlier could be sped up by attaching an eGPU, some core parts weren’t.

If you haven’t updated Final Cut Pro on your computer before, there is a support page from Apple with useful tips.
