Apple WWDC17 post-production and VR sessions

Here are the sessions worth tuning into this week for those interested in what Apple plans for post-production and VR. You can watch these streams live or review the video, slides and transcripts in the weeks and months to come.

Interesting sessions include those covering:

  • Vision API to detect faces, compute facial landmarks, track objects, and more. It can recognise and track faces, elements of faces, rectangles, barcodes, QR codes and other common shapes. The Vision API can also be combined with machine learning models to recognise new objects. For example, a machine learning model that recognises car number plates (license plates), or even whole cars, can be fed into the Vision API so that those objects can be recognised and tracked in stills and footage.
  • Depth: In iOS 11, iPhone 7 Plus camera depth data is now available to iOS apps – both for stills and as a continuous low-resolution stream to accompany video. This means iOS video filters will be able to replace the backgrounds of stills and videos, or apply filters to objects in the middle distance without affecting the background or foreground.
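
The Vision face-detection capability described above can be sketched roughly like this (the image path is a placeholder, and error handling is elided):

```swift
import Vision
import CoreImage

// Hypothetical still image – any CGImage or CIImage source works.
let image = CIImage(contentsOf: URL(fileURLWithPath: "/path/to/still.jpg"))!

// Ask Vision for face rectangles in the image.
let request = VNDetectFaceRectanglesRequest { request, error in
    guard let faces = request.results as? [VNFaceObservation] else { return }
    for face in faces {
        // boundingBox is in normalised coordinates (0–1, origin bottom-left).
        print("Face at \(face.boundingBox)")
    }
}

let handler = VNImageRequestHandler(ciImage: image, options: [:])
try? handler.perform([request])
```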



Apple covered all the announcements of interest to the media and the general public, including a focus on high-end post production, VR on the Mac and AR on iOS.

Platforms State of the Union

Apple went into more detail on updates to macOS, iOS, tvOS and watchOS. Video and a PDF of the presentation are now available.


What’s New in Audio

1:50 PM (PDT)

Apple platforms provide a comprehensive set of audio frameworks that are essential to creating powerful audio solutions and rich app experiences. Come learn about enhancements to AVAudioEngine, support for high-order ambisonics, and new capabilities for background audio recording on watchOS. See how to take advantage of these new audio technologies and APIs in this session.
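
A minimal AVAudioEngine graph – a player node feeding the main mixer – looks roughly like this (the audio file path is a placeholder):

```swift
import AVFoundation

// Build the engine and attach a player node.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: nil)

try engine.start()

// Schedule a file for playback (path is a placeholder).
let file = try AVAudioFile(forReading: URL(fileURLWithPath: "/path/to/audio.caf"))
player.scheduleFile(file, at: nil, completionHandler: nil)
player.play()
```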

Introducing Metal 2

1:50 PM (PDT)

Metal 2 provides near-direct access to the graphics processor (GPU), enabling your apps and games to realize their full graphics and compute potential. Dive into the breakthrough features of Metal 2 that empower the GPU to take control over key aspects of the rendering pipeline. Check out how Metal 2 enables essential tasks to be specified on-the-fly by the GPU, opening up new efficiencies for advanced rendering.
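
Whatever the rendering technique, every Metal 2 app starts from the same setup: acquire the GPU, create a command queue, and encode work into command buffers. A bare-bones sketch:

```swift
import Metal

// Acquire the default GPU and a command queue – the starting point
// for any Metal rendering or compute work.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else {
    fatalError("Metal is not supported on this device")
}

let commandBuffer = queue.makeCommandBuffer()
// Encode render or compute passes here, then submit to the GPU:
commandBuffer?.commit()
```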

Introducing HEIF and HEVC

4:10 PM (PDT)

High Efficiency Image File Format (HEIF) and High Efficiency Video Coding (HEVC) are powerful new standards-based technologies for storing and delivering images and audiovisual media. Get introduced to these next generation space-saving codecs and their associated container formats. Learn how to work with them across Apple platforms and how you can take advantage of them in your own apps.
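
On the video side, transcoding an existing asset to HEVC can be sketched with AVAssetExportSession and the new HEVC preset (paths are placeholders; the preset requires iOS 11 / macOS High Sierra):

```swift
import AVFoundation

// Transcode a source movie to HEVC in an MP4 container.
let asset = AVAsset(url: URL(fileURLWithPath: "/path/to/source.mov"))
if let export = AVAssetExportSession(asset: asset,
                                     presetName: AVAssetExportPresetHEVCHighestQuality) {
    export.outputURL = URL(fileURLWithPath: "/path/to/output.mp4")
    export.outputFileType = .mp4
    export.exportAsynchronously {
        print("Export finished with status \(export.status.rawValue)")
    }
}
```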

Advances in HTTP Live Streaming

5:10 PM (PDT)

HTTP Live Streaming allows you to stream live and on-demand content to global audiences. Learn about great new features and enhancements to HTTP Live Streaming. Highlights include support for HEVC, playlist metavariables, IMSC1 subtitles, and synchronized playback of multiple streams. Discover how to simplify your FairPlay key handling with the new AVContentKeySession API, and take advantage of enhancements to offline HLS playback.
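
From the playback side, consuming an HLS stream stays as simple as handing AVPlayer a playlist URL (the URL below is a placeholder) – the framework handles variant selection, and with iOS 11 that can include HEVC variants:

```swift
import AVFoundation

// Play an HLS stream; AVPlayer negotiates the appropriate variant.
let url = URL(string: "https://example.com/stream/master.m3u8")!
let player = AVPlayer(url: url)
player.play()
```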

Introducing ARKit: Augmented Reality for iOS

5:10 PM (PDT)

ARKit provides a cutting-edge platform for developing augmented reality (AR) apps for iPhone and iPad. Get introduced to the ARKit framework and learn about harnessing its powerful capabilities for positional tracking and scene understanding. Tap into its seamless integration with SceneKit and SpriteKit, and understand how to take direct control over rendering with Metal 2.
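
A minimal ARKit setup – world tracking with plane detection feeding a SceneKit-backed view – can be sketched like this (using the class names as they shipped in the iOS 11 SDK):

```swift
import ARKit

// An ARSCNView renders SceneKit content on top of the camera feed.
let sceneView = ARSCNView(frame: .zero)

// World tracking gives six-degrees-of-freedom positional tracking.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = .horizontal

sceneView.session.run(configuration)
```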


VR with Metal 2

10:00 AM (PDT)

Metal 2 provides powerful and specialized support for Virtual Reality (VR) rendering and external GPUs. Get details about adopting these emerging technologies within your Metal 2-based apps and games on macOS High Sierra. Walk through integrating Metal 2 with the SteamVR SDK and learn about efficiently rendering to a VR headset. Understand how external GPUs take macOS graphics to a whole new level and see how to prepare your apps to take advantage of their full potential.

SceneKit: What’s New

11:00 AM (PDT)

SceneKit is a fast and fully featured high-level 3D graphics framework that enables your apps and games to create immersive scenes and effects. See the latest advances in camera control and effects for simulating real camera optics including bokeh and motion blur. Learn about surface subdivision and tessellation to create smooth-looking surfaces right on the GPU starting from a coarser mesh. Check out new integration with ARKit and workflow improvements enabled by the Xcode Scene Editor.
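
The new camera-optics simulation mentioned above is exposed as properties on SCNCamera; a brief sketch of enabling bokeh-style depth of field and motion blur:

```swift
import SceneKit

// New in iOS 11 / High Sierra: SCNCamera can simulate real optics.
let camera = SCNCamera()
camera.wantsDepthOfField = true
camera.focusDistance = 2.0    // distance (metres) to the in-focus plane
camera.fStop = 1.8            // wider aperture = stronger bokeh
camera.motionBlurIntensity = 0.5

let cameraNode = SCNNode()
cameraNode.camera = camera
// Add cameraNode to a scene's root node to use it.
```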

What’s New in Photos APIs

1:50 PM (PDT)

Learn all about the newest APIs in Photos on iOS and macOS, providing better integration and new possibilities for your app. We’ll discuss simplifications to accessing the Photos library through UIImagePickerController, explore additions to PhotoKit to support new media types, and share all the details of the new Photos Project Extensions which enable you to bring photo services to Photos for Mac.
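
As a reminder of the PhotoKit basics the session builds on, requesting access and fetching recent image assets looks roughly like this:

```swift
import Photos

// Ask for library access, then fetch the most recent image assets.
PHPhotoLibrary.requestAuthorization { status in
    guard status == .authorized else { return }
    let options = PHFetchOptions()
    options.sortDescriptors = [NSSortDescriptor(key: "creationDate",
                                                ascending: false)]
    let assets = PHAsset.fetchAssets(with: .image, options: options)
    print("Found \(assets.count) images")
}
```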

Vision Framework: Building on Core ML

3:10 PM (PDT)

Vision is a new, powerful, and easy-to-use framework that provides solutions to computer vision challenges through a consistent interface. Understand how to use the Vision API to detect faces, compute facial landmarks, track objects, and more. Learn how to take things even further by providing custom machine learning models for Vision tasks using CoreML.
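
Wiring a custom Core ML model into Vision can be sketched as below – `CarClassifier` is a hypothetical model class generated by Xcode from a compiled .mlmodel, and the image path is a placeholder:

```swift
import Vision
import CoreML

// "CarClassifier" is a hypothetical Core ML model bundled with the app.
let coreMLModel = try CarClassifier().model
let visionModel = try VNCoreMLModel(for: coreMLModel)

// Vision handles scaling/cropping the image to the model's input size.
let request = VNCoreMLRequest(model: visionModel) { request, _ in
    if let results = request.results as? [VNClassificationObservation],
       let top = results.first {
        print("\(top.identifier): \(top.confidence)")
    }
}

let handler = VNImageRequestHandler(url: URL(fileURLWithPath: "/path/to/photo.jpg"))
try handler.perform([request])
```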

Capturing Depth in iPhone Photography

5:10 PM (PDT)

Portrait mode on iPhone 7 Plus showcases the power of depth in photography. In iOS 11, the depth data that drives this feature is now available to your apps. Learn how to use depth to open up new possibilities for creative imaging. Gain a broader understanding of high-level depth concepts and learn how to capture both streaming and still image depth data from the camera.
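
Capturing the streaming depth data described above can be sketched with the new AVCaptureDepthDataOutput (a dual-camera device such as iPhone 7 Plus is required; delegate handling is elided):

```swift
import AVFoundation

// Capture session delivering a streaming depth map alongside video.
let session = AVCaptureSession()
if let device = AVCaptureDevice.default(.builtInDualCamera,
                                        for: .video, position: .back),
   let input = try? AVCaptureDeviceInput(device: device) {
    session.addInput(input)

    let depthOutput = AVCaptureDepthDataOutput()
    depthOutput.isFilteringEnabled = true  // smooth holes in the depth map
    if session.canAddOutput(depthOutput) {
        session.addOutput(depthOutput)
    }
    session.startRunning()
}
```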


SceneKit in Swift Playgrounds

9:00 AM (PDT)

Discover tips and tricks gleaned by the Swift Playgrounds Content team for working more effectively with SceneKit on a visually rich app. Learn how to integrate animation, optimize rendering performance, design for accessibility, add visual polish, and understand strategies for creating an effective workflow with 3D assets.

Image Editing with Depth

11:00 AM (PDT)

When using Portrait mode, depth data is now embedded in photos captured on iPhone 7 Plus. In this second session on depth, see which key APIs allow you to leverage this data in your app. Learn how to process images that include depth and preserve the data when manipulating the image. Get inspired to add creative new effects to your app and enable your users to do amazing things with their photos.
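
Loading a Portrait-mode photo together with its embedded depth map can be sketched via Core Image's auxiliary-image option (the file path is a placeholder, and the exact option key may differ slightly by SDK version):

```swift
import CoreImage

// Load a Portrait-mode photo and, separately, its embedded depth map.
let url = URL(fileURLWithPath: "/path/to/portrait.jpg")
let photo = CIImage(contentsOf: url)
let depthMap = CIImage(contentsOf: url,
                       options: [kCIImageAuxiliaryDepth: true])
// depthMap can now drive a mask for depth-aware filtering of photo.
```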

Advances in Core Image: Filters, Metal, Vision, and More

1:50 PM (PDT)

Get all the details on how to access the latest capabilities of Core Image. Learn about new ways to efficiently render images and create custom CIKernels in the Metal Shading Language. Find out about all of the new CIFilters that include support for applying image processing to depth data and handling barcodes. See how the Vision framework can be leveraged within Core Image to do amazing things.
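
The basic Core Image pattern the session extends – apply a built-in CIFilter and render through a GPU-backed CIContext – looks roughly like this (the image path is a placeholder):

```swift
import CoreImage

// A CIContext renders on the GPU (via Metal) where available.
let context = CIContext()
let input = CIImage(contentsOf: URL(fileURLWithPath: "/path/to/photo.jpg"))!

let filter = CIFilter(name: "CIPhotoEffectNoir")!
filter.setValue(input, forKey: kCIInputImageKey)

if let output = filter.outputImage,
   let rendered = context.createCGImage(output, from: output.extent) {
    print("Rendered \(rendered.width)x\(rendered.height)")
}
```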


Working with HEIF and HEVC

11:00 AM (PDT)

High Efficiency Image File Format (HEIF) and High Efficiency Video Coding (HEVC) are powerful new standards-based technologies for storing and delivering images and video. Gain insights about how to take advantage of these next generation formats and dive deeper into the APIs that allow you to fully harness them in your apps.
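
On the still-image side, writing a HEIF (HEIC) file can be sketched with ImageIO and the new HEIC file type (a hypothetical helper; HEIC writing requires iOS 11 / High Sierra and, on some devices, hardware HEVC encoding):

```swift
import AVFoundation
import ImageIO

// Write a CGImage into a HEIF (HEIC) container at the given URL.
func writeHEIC(_ image: CGImage, to url: URL) {
    guard let destination = CGImageDestinationCreateWithURL(
        url as CFURL, AVFileType.heic.rawValue as CFString, 1, nil) else { return }
    CGImageDestinationAddImage(destination, image, nil)
    CGImageDestinationFinalize(destination)
}
```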
