Apple’s VR production patent by Tim Dashwood

Within weeks of third-party Final Cut Pro X developer Tim Dashwood joining the ProApps team, Apple applied for a patent that changes the way computers connect to VR and AR head-mounted devices: ‘Method and System for 360 Degree Head-Mounted Display Monitoring Between Software Program Modules Using Video or Image Texture Sharing’ (PDF version).

It turns out that Tim is doing more for Apple than adding VR video editing features to its professional applications. His work is becoming part of the way macOS handles VR and AR in all sorts of applications.

Direct to Display = Less OS overhead

Up until now, head-mounted devices like the Oculus Rift and HTC Vive have connected as specialised displays. As far as macOS or Windows is concerned, an attached device is just another monitor, albeit one with an odd aspect ratio and frame rate.

The new method is for VR/AR tools to connect to Apple devices in such a way that there is no longer a ‘simulate a monitor’ overhead. Apple is aiming for a refresh rate of 1/90th of a second for VR and AR experiences. Even if you are viewing a VR video that plays at 60 frames a second, for smooth movement it is best if what the viewer sees updates 90 times a second, so that if they turn quickly, the content keeps up with them.

If macOS, iOS and tvOS spend less time simulating a monitor display, more of the 1/90th of a second between refreshes can be spent on rendering content. It also means less powerful GPUs will be able to render advanced VR content and AR overlays, because there is less OS delay in getting frames in front of users’ eyes.
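To put that budget in numbers: at 90 Hz the whole pipeline has roughly 11 milliseconds per frame, so every millisecond the OS does not spend pretending to be a monitor is a millisecond the application can spend rendering. A rough sketch of the arithmetic, with made-up overhead figures purely for illustration:

```swift
import Foundation

// Rough frame-budget arithmetic for a 90 Hz head-mounted display.
// The overhead values are illustrative placeholders, not Apple's figures.
let refreshRate = 90.0                          // target refresh rate in Hz
let frameBudgetMs = 1000.0 / refreshRate        // ≈ 11.1 ms available per frame

let simulatedMonitorOverheadMs = 3.0            // hypothetical cost of the 'fake monitor' path
let directToDisplayOverheadMs = 1.0             // hypothetical cost of a direct path

let renderTimeOldPath = frameBudgetMs - simulatedMonitorOverheadMs
let renderTimeNewPath = frameBudgetMs - directToDisplayOverheadMs

print(String(format: "Frame budget at 90 Hz: %.1f ms", frameBudgetMs))
print(String(format: "Left for rendering via a simulated monitor: %.1f ms", renderTimeOldPath))
print(String(format: "Left for rendering direct to display: %.1f ms", renderTimeNewPath))
```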

The idea is for VR/AR applications to produce image data in a form that the OS can feed directly to devices without simulating a monitor:

…methods and systems for transmitting monoscopic or stereoscopic 180 degree or 360 degree still or video images from a host editing or visual effects software program as equirectangular projection, or other spherical projection, to the input of a simultaneously running software program on the same device that can continuously acquire the orientation and position data from a wired or wirelessly connected head-mounted display’s orientation sensors, and simultaneously render a representative monoscopic or stereoscopic view of that orientation to the head mounted display, in real time.
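The patent does not name specific macOS APIs, but one plausible way for a host application and a separately running head-mounted display program to share frames on the same Mac is an IOSurface-backed Metal texture, which both processes can reference without copying pixels. Here is a minimal host-side sketch under that assumption; the 4096×2048 resolution and ‘BGRA’ pixel format are illustrative:

```swift
import Foundation
import IOSurface
import Metal

// Create a shareable IOSurface sized for a monoscopic equirectangular frame.
// The size and pixel format below are assumptions for illustration only.
let properties: [String: Any] = [
    kIOSurfaceWidth as String: 4096,
    kIOSurfaceHeight as String: 2048,
    kIOSurfaceBytesPerElement as String: 4,
    kIOSurfacePixelFormat as String: UInt32(0x42475241) // 'BGRA' four-character code
]

guard let surface = IOSurfaceCreate(properties as CFDictionary),
      let device = MTLCreateSystemDefaultDevice() else {
    fatalError("Could not create the IOSurface or a Metal device")
}

// Wrap the IOSurface in a Metal texture that the host program renders into.
let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                          width: 4096,
                                                          height: 2048,
                                                          mipmapped: false)
descriptor.usage = [.renderTarget, .shaderRead]
let sharedTexture = device.makeTexture(descriptor: descriptor, iosurface: surface, plane: 0)

// The HMD program would look up the same IOSurface (for example by an ID sent
// over XPC), wrap it in its own MTLTexture, and read the very same pixels
// without any copy between the two programs.
print("Shared texture:", sharedTexture as Any, "IOSurface ID:", IOSurfaceGetID(surface))
```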

For more on how HMD software must predict user actions in order to keep up with their movement, watch the 2017 Apple WWDC ‘VR with Metal 2’ session video. One guest speaker was Nat Brown of Valve Software, who talked about SteamVR on macOS High Sierra:

Our biggest request to Apple, a year ago, was for this Direct to Display feature, because it’s critical to ensure that the VR compositor has the fastest, time-predictable path to the headset display panels. We also really needed super-accurate, low-variance VBL (vertical blank) events, so that we could set the cadence of the VR frame presentation timing and we could predict those poses accurately.
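Nat Brown’s point about predictable presentation timing is what makes pose prediction work: if the compositor knows exactly when a frame will reach the panels, it can render for where the wearer’s head will be at that moment rather than where it was when rendering started. A toy illustration of that extrapolation, with made-up numbers and a single axis of rotation:

```swift
import Foundation

// Toy head-pose prediction: extrapolate yaw to the moment the frame will
// actually be shown, using the latest angular velocity sample.
// All figures are illustrative, not SteamVR's or Apple's values.
let currentYawDegrees = 30.0          // head orientation when the pose was sampled
let yawVelocityDegPerSec = 120.0      // how fast the wearer is turning
let motionToPhotonSeconds = 0.022     // predicted time until the frame reaches the panels

let predictedYaw = currentYawDegrees + yawVelocityDegPerSec * motionToPhotonSeconds
print(String(format: "Render for %.1f°, not %.1f°", predictedYaw, currentYawDegrees))
// Rendering for the predicted pose rather than the sampled one is what keeps
// the world pinned in place when the wearer turns quickly, and it only works
// if the time between pose sampling and scan-out is predictable.
```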

VR production

Although the patent is about how all kinds of applications work with VR and 3D VR, it also mentions a mode where the production application UI appears in the device overlaid on the content being produced:

FIG. 5 illustrates the user interface of a video or image editing or graphics manipulation software program 501 with an equirectangularly projected spherical image displayed in the canvas 502 and a compositing or editing timeline 503. The image output of the video or image editing or graphics manipulation software program can be output via a video output processing software plugin module 504 and passed to a GPU image buffer shared memory and then passed efficiently to the image receiver 507 of the head-mounted display processing program 506. The 3D image processing routine 508 of the head-mounted display processing program will texture the inside of a virtual sphere or cube with a 3D viewpoint at the center of said sphere or cube. The virtual view for each of the left and right eyes will be accordingly cropped, duplicated (if necessary), distorted and oriented based on the lens/display specifications and received orientation data 509 of the wired or wirelessly connected head-mounted display’s 510 orientation sensor data. Once the prepared image is rendered by the 3D image processing routine, the image can then be passed to the connected head-mounted display 511 for immediate presentation to the wearer within the head-mounted display.

Additionally, since wearing a head-mounted display will obscure the wearer’s view of the UI of the video or image editing or graphics manipulation software program, it is also possible to capture the computer display’s user interface as an image using a screen image capture software program module 512 and pass it to an image receiver/processor 513 for cropping and scaling before being composited on the left and right eye renders from the 3D image processing routine 508, 514, 515 and then the composited image can be passed to the connected head-mounted display for immediate presentation to the wearer within the head-mounted display.

Further, a redundant view can be displayed in a window 516 on the computer’s display so others can see what the wearer of the head-mounted display is seeing, or if a head-mounted display is not available.
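The core of the 3D image processing routine 508 is the mapping between a direction on the viewing sphere and a position in the equirectangular frame: longitude becomes the horizontal texture coordinate and latitude the vertical one. A minimal sketch of that lookup, leaving out the per-eye cropping and lens distortion the patent describes:

```swift
import Foundation
import simd

// Map a 3D view direction (a unit vector from the centre of the viewing sphere)
// to texture coordinates in an equirectangular image, both in the range 0...1.
// Longitude drives u and latitude drives v; lens distortion is omitted.
func equirectangularUV(for direction: SIMD3<Double>) -> SIMD2<Double> {
    let d = simd_normalize(direction)
    let longitude = atan2(d.x, -d.z)               // -π ... π around the vertical axis
    let latitude = asin(d.y)                       // -π/2 ... π/2 up from the horizon
    let u = longitude / (2.0 * Double.pi) + 0.5
    let v = 0.5 - latitude / Double.pi
    return SIMD2(u, v)
}

// Looking straight ahead samples the middle of the frame...
print(equirectangularUV(for: SIMD3(0, 0, -1)))     // ≈ (0.5, 0.5)
// ...and looking straight up samples the top edge.
print(equirectangularUV(for: SIMD3(0, 1, 0)))      // v ≈ 0.0
```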
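The patent describes module 512 only as a ‘screen image capture software program module’ without naming an API. One readily available way to grab the editing application’s UI on macOS is CGDisplayCreateImage, sketched below as an assumption; the cropping, scaling and compositing onto the eye renders are summarised in comments:

```swift
import CoreGraphics

// Capture the current contents of the main display as a CGImage.
// This stands in for the patent's 'screen image capture' module 512; a real
// implementation might capture only the editing application's window.
let displayID = CGMainDisplayID()

if let uiImage = CGDisplayCreateImage(displayID) {
    print("Captured UI image: \(uiImage.width) x \(uiImage.height) pixels")
    // The HMD program would then crop and scale this image (513) and draw it
    // as an overlay on the left- and right-eye renders (514, 515) before
    // presenting the composited frame to the head-mounted display.
}
```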

Tim has been demonstrating many interesting 3D and VR production tool ideas over the years. Good to see his inventions now have the support of Apple. I’m looking forward to the other ideas he brings to the world through Apple.
