Apple’s VR production patent by Tim Dashwood

Thursday, 12 October 2017

Within weeks of third-party Final Cut Pro X developer Tim Dashwood joining the ProApps team, Apple applied for a patent that changes the way computers connect to VR and AR head-mounted devices: ‘Method and System for 360 Degree Head-Mounted Display Monitoring Between Software Program Modules Using Video or Image Texture Sharing’ (PDF version).

It turns out that Tim is doing more at Apple than adding VR video editing features to applications. His work shapes the way macOS supports VR in all sorts of applications.

Direct to Display = Less OS overhead

Up until now, head-mounted devices like the Oculus Rift and HTC Vive have connected as specialised displays. As far as macOS or Windows is concerned, an attached device is just another monitor - albeit one with an odd aspect ratio and frame rate.

The new method lets VR/AR tools connect to Apple devices in such a way that there is no longer a 'simulate a monitor' overhead. Apple is aiming for a 1/90th of a second refresh rate for VR and AR experiences. Even if you are viewing a VR video that plays at 60 frames a second, for smooth movement it is best if what the viewer sees updates 90 times a second, so that if they turn quickly, the content keeps up with them.

If macOS, iOS and tvOS spend less time simulating a monitor display, more of the 1/90th of a second between refreshes - roughly 11 milliseconds - can be spent on rendering content. Less powerful GPUs will also be able to render advanced VR content and AR overlays, because there's less OS delay in getting it in front of users' eyes.

The idea is for VR/AR applications to modify image data in a form that the OS automatically feeds to devices without simulating a monitor: 

…methods and systems for transmitting monoscopic or stereoscopic 180 degree or 360 degree still or video images from a host editing or visual effects software program as equirectangular projection, or other spherical projection, to the input of a simultaneously running software program on the same device that can continuously acquire the orientation and position data from a wired or wirelessly connected head-mounted display's orientation sensors, and simultaneously render a representative monoscopic or stereoscopic view of that orientation to the head mounted display, in real time.

For more on how HMD software must predict user actions in order to keep up with their movement, watch the 2017 Apple WWDC ‘VR with Metal 2’ session video. One guest speaker was Nat Brown of Valve Software, who talked about SteamVR on macOS High Sierra:

Our biggest request to Apple, a year ago, was for this Direct to Display feature. Because it's critical to ensure that the VR compositor has the fastest time predictable path to the headset display panels. We also, really needed super accurate low variance VBL, vertical blank, events. So, that we could set the cadence of the VR frame presentation timing, and we could predict those poses accurately.

VR production

Although the patent is about how all kinds of applications work with VR and 3D VR, it also mentions a mode where the production application UI appears in the device overlaid on the content being produced:

FIG. 5 illustrates the user interface of a video or image editing or graphics manipulation software program 501 with an equirectangularly projected spherical image displayed in the canvas 502 and a compositing or editing timeline 503. The image output of the video or image editing or graphics manipulation software program can be output via a video output processing software plugin module 504 and passed to a GPU image buffer shared memory and then passed efficiently to the image receiver 507 of the head-mounted display processing program 506. The 3D image processing routine 508 of the head-mounted display processing program will texture the inside of a virtual sphere or cube with a 3D viewpoint at the center of said sphere or cube. The virtual view for each of the left and right eyes will be accordingly cropped, duplicated (if necessary), distorted and oriented based on the lens/display specifications and received orientation data 509 of the wired or wirelessly connected head-mounted display's 510 orientation sensor data. Once the prepared image is rendered by the 3D image processing routine, the image can then be passed to the connected head-mounted display 511 for immediate presentation to the wearer within the head-mounted display.

Additionally, since wearing a head-mounted display will obscure the wearer's view of the UI of the video or image editing or graphics manipulation software program, it is also possible to capture the computer display's user interface as an image using a screen image capture software program module 512 and pass it to an image receiver/processor 513 for cropping and scaling before being composited on the left and right eye renders from the 3D image processing routine 508, 514, 515 and then the composited image can be passed to the connected head-mounted display for immediate presentation to the wearer within the head-mounted display.

Further, a redundant view can be displayed in a window 516 on the computer's display so others can see what the wearer of the head-mounted display is seeing, or if a head-mounted display is not available.
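On macOS, this kind of zero-copy image sharing between two simultaneously running programs is typically built on IOSurface-backed GPU textures. The sketch below is not taken from the patent; it is a minimal illustration, with hypothetical frame dimensions, of how an editing application might publish an equirectangular frame that a separate HMD compositor process could look up and texture onto a sphere:

    import IOSurface
    import Metal
    import CoreVideo

    // Hypothetical dimensions for a 4096x2048 equirectangular frame.
    let properties: [String: Any] = [
        kIOSurfaceWidth as String: 4096,
        kIOSurfaceHeight as String: 2048,
        kIOSurfaceBytesPerElement as String: 4,
        kIOSurfacePixelFormat as String: kCVPixelFormatType_32BGRA
    ]

    guard let surface = IOSurfaceCreate(properties as CFDictionary),
          let device = MTLCreateSystemDefaultDevice() else {
        fatalError("Could not create the shared surface or a Metal device")
    }

    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .bgra8Unorm, width: 4096, height: 2048, mipmapped: false)
    descriptor.usage = [.shaderRead, .shaderWrite]

    // The editing app renders its equirectangular output into this texture…
    let sharedTexture = device.makeTexture(descriptor: descriptor, iosurface: surface, plane: 0)

    // …and hands a mach port for the surface to the HMD process (over XPC, for example),
    // which can wrap the same pixels in its own texture via IOSurfaceLookupFromMachPort().
    let machPort = IOSurfaceCreateMachPort(surface)
    print("Texture created: \(sharedTexture != nil) - send mach port \(machPort) to the HMD renderer")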

Tim has been demonstrating many interesting 3D and VR production tool ideas over the years. Good to see his inventions now have the support of Apple. I'm looking forward to the other ideas he brings to the world through Apple.

Adobe Premiere used on big new 10-part Netflix TV series

Wednesday, 13 September 2017

It was a tough ask for Adobe Premiere to tackle the needs of David Fincher's 'Gone Girl' feature film in 2014. In recent months, it has been used on a bigger project: ‘Mindhunter’ - a 10-hour, David Fincher exec-produced, high-end TV series soon to be available on Netflix.

Instead of a single team working on a two hour film, TV series have multiple director-cinematographer-editor teams working in parallel. In this case the pilot was directed by David Fincher. The way TV works in the US is that the pilot director gets an executive producer credit for the whole series because the decisions they make define the feel of the show from then on. Fincher brought along some of the team who worked on Gone Girl. While they worked on the pilot post production, other teams shot and edited later episodes in the series.

The fact that the production company and the studio were happy for the workflow to be based around Premiere Pro CC is a major step up for Adobe in Hollywood.

The high-end market Adobe is going for is too small to support profitable software development. Even if they sold a subscription to all professional editors in the USA, that would not be enough to pay for the costs of maintaining Adobe Premiere. Its use in high-end TV and features is a marketing message that Adobe must think contributes to people choosing to subscribe to the Adobe Creative Cloud - even if those renters will never edit a Hollywood film or TV show.

What about Final Cut Pro X?

Directors Glenn Ficarra and John Requa are happy to use Final Cut Pro X on studio features, but they haven't been able to use Final Cut on the TV shows they have directed. Glenn and John directed the pilot and three other episodes of ‘This is Us’ - a big success for NBC in the US last year. Although directors have much less power in TV than in features, pilot directors do have some power to set standards for the rest of the series. I don’t know why Final Cut wasn’t used on ‘This is Us.’ It could be a lack of collaboration features or a shortage of Final Cut-experienced crew. It may take a while before both of these obstacles go away.

Although the 10.3 update for Final Cut Pro X was nearly all about features requested by people who work on high-end production, it seems the majority of the ProApps team time is spent on features for the majority of Final Cut users. 

Is the use of Final Cut Pro X in a smattering of Hollywood productions enough to support Apple’s marketing message? Will Apple invest more in Final Cut’s use in Hollywood? 

When it comes to the opinions of Hollywood insiders, it seems that Premiere is currently the only viable alternative to Avid Media Composer. Although the ProApps team is very likely to want Final Cut to be the choice people make at all levels of production, will they be able to get the investment they need from the rest of Apple to make that happen? We’ll see in the coming months and years.

IMF: Output any version you need from a single master

Tuesday, 12 September 2017

Interoperable Master Format is a system that allows you to specify all the versions of a feature film using a set of rules. Instead of rendering out every combination of language, aspect ratio, certification and distributor standard, you define how their rules apply to your movie. When a specific version is called for, it can then be rendered out automatically based on the media and the specific timeline included in an IMP (Interoperable Master Package).

Even if you don't work in high-end features, it is worth learning about this because it is coming to TV and online delivery in 2018. For now IMF is for high-end tools, services and suppliers, but the nature of video production means that it will become the standard most NLEs support - maybe even directly, with few external tools.

This video – presented by Bruce Devlin (@mrMXF on Twitter) – is an introduction to IMF, and the first in a series, should you want to learn more:

(My YouTube playlist of videos on Interoperable Master Format, in order)

The nature of Final Cut Pro X makes it potentially the best NLE to work with IMF. Apple could add features to the timeline required to generate IMPs. Compressor could generate specific versions of a film or TV show based on an IMP.

If Apple considers this the kind of feature best left to third-parties, I hope they add the required hooks to Final Cut so Frame.io (for example) could add IMF management to their Final Cut Pro X service.

Apple Goes to Hollywood: For more than just TV production

Friday, 01 September 2017

Apple have had offices in Los Angeles for many years. The number of Apple employees in the area rose significantly when the company bought Beats Music in 2014. Now it looks like there’ll be more to the LA operation than music.

The Financial Times reports [paywall link] that Apple are looking for more space in Culver City, Los Angeles County. The FT say that Apple is thinking of leasing space at The Culver Studios. Culver City isn’t exactly close to Hollywood, but from a production perspective, it counts as Hollywood: both Gone With the Wind and Citizen Kane were filmed at The Culver Studios.

The FT headline ‘Apple eyes iconic studio as base for Hollywood production push’ implies that they want space to make high-end TV and feature films - including bidding to produce a TV show for Netflix. Interesting that they suggest that Apple plan to make TV for others - instead of commissioning others to make TV for them. That would mean Apple investing in the hardware and infrastructure to make high-end TV directly.

Office space for…

However, the body of the article says that Apple is primarily looking for office space. It seems that the large amount of office space that Beats leases won't be enough. It could be that Apple Music administration needs more people (The Culver Studios is only a 15-minute walk from Beats). On the other hand, what else could Apple be doing in LA?

They certainly need to hire enough new staff to be involved in their $1bn push into TV. They could be based in Los Angeles County.

Part of the Mac team seems to be based in Culver City. A recent vacancy listed on the Apple jobs site was for an expert to set up a post production workflow lab in Culver City. That is likely to be primarily about making sure the next iteration of the Mac Pro fits the future needs of Hollywood TV and film production:

Help shape the future of the Mac in the creative market. The Macintosh team is seeking talented technical leadership in a System Architecture team. This is an individual contributor role. The ideal candidate has core competencies in one or more professional artists content creation areas with specific expertise in video, and photo, audio, and 3D animation.

The pro workflow expert will be responsible for thoroughly comprehending all phases of professional content creation, working closely with 3rd party apps developers and some key customers, thoroughly documenting, and working with architects to instrument systems for performance analysis.

It seems that some of Apple’s ProApps team is based in Culver City too. Recent job openings for a Video Applications Graphics Engineering Intern and a Senior macOS/iOS Software Engineer for Video Applications are based there.

Also, if I was going to develop a VR and AR content business, it might be a good idea to create custom-designed studio resources for VR and AR content production. Los Angeles would be a good location to experiment with the future of VR and AR.

Adobe discontinues Speedgrade, will live on as Adobe Premiere panel - More integration to come?

Wednesday, 23 August 2017

Will all Adobe video applications end up as panels in Adobe Premiere? Adobe doesn't see the need to make an application dedicated to the colour grading process any more. Adobe have announced that they are discontinuing their Speedgrade colour grading application:

Producing a separate application for color grading was born out of necessity some 35 years ago – it was never a desirable split from a creative perspective.

I don’t think audio post people would say the same about picture editing.

…the paradigm of consolidating toolsets for a specific task into a single panel has led to further innovation. The Essential Sound Panel and the new Essential Graphics panel are designed with the same goal in mind: streamlining professional and powerful workflows made for editors.

Maybe this is a sign that Blackmagic’s Resolve 12 and 14 updates are putting pressure on Adobe. Which other Adobe video applications do you think will end up as panels in Premiere?

Who will define the immersive video experience file format? MPEG, Apple, Adobe or Facebook?

Tuesday, 22 August 2017

We have file formats and codecs to store 2D video as seen from a single point. Soon we will need ways of recording light information in a 3D space, so immersed viewers will be able to move around inside it and choose what to look at, and where to look from.

In 1994 Apple tried to kick off VR on the Mac using an extension to their QuickTime video framework: QuickTime VR. As with the Newton personal digital assistant, it was the right idea at the wrong time.

Today different companies are hoping to earn money from creating VR and AR experience standards, markets and distribution systems. The Moving Picture Experts Group think it is time to encourage the development of a standard - so as to prevent multiple VR and AR ‘walled gardens’ (where individual companies hope to capture users in limited ecosystems).

This summer Apple announced that their codec of choice for 4K and beyond is HEVC, which can encode video at very high resolutions. Apple also plan to incorporate depth information capture, encoding, editing and playback into iOS and macOS.
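As a hint of what this means for developers, here is a minimal sketch of an HEVC export using the new AVFoundation preset. It assumes macOS High Sierra or iOS 11 on hardware that supports the codec, and the file paths are hypothetical:

    import AVFoundation

    // Hypothetical source and destination URLs.
    let asset = AVAsset(url: URL(fileURLWithPath: "/path/to/source.mov"))
    let outputURL = URL(fileURLWithPath: "/path/to/output-hevc.mov")

    // The HEVC presets only exist on OS versions and devices that can encode the codec.
    if let export = AVAssetExportSession(asset: asset,
                                         presetName: AVAssetExportPresetHEVCHighestQuality) {
        export.outputURL = outputURL
        export.outputFileType = .mov
        export.exportAsynchronously {
            print("HEVC export finished with status \(export.status.rawValue)")
        }
    }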

Structured light encoding

Knowing the depth of the environment corresponding to every pixel in a flat 2D video frame is very useful. With VR video, that flat 2D frame can represent all the pixels visible from a single point. Soon we will want more. Structured light recording is more advanced: it captures the light in a given 3D volume. Currently light field sensors do this by capturing the light information arriving at multiple points on a 2D plane (instead of the single point we use today in camera lenses). The larger the 2D plane, the further viewers will be able to move their heads when immersed in the experience to see from different points of view.

However the light information is captured, we will need file formats and codecs to encode, store and decode structured light information.

Streaming Media has written about MPEG-I, a standard that is being developed:

The proposed ISO/IEC 23090 (or MPEG-I) standard targets future immersive applications. It's a five-stage plan which includes an application format for omnidirectional media (OMAF) "to address the urgent need of the industry for a standard in this area"; and a common media application format (CMAF), the goal of which is to define a single format for the transport and storage of segmented media including audio/video formats, subtitles, and encryption. This is derived from the ISO Base Media File Format (ISOBMFF).

While a draft OMAF is expected by end of 2017 and will build on HEVC and DASH, the aim by 2022 is to build a successor codec to HEVC, one capable of lossy compression of volumetric data.

"Light Field scene representation is the ultimate target," according to Gilles Teniou, Senior Standardisation Manager - Content & TV services at mobile operator Orange. "If data from a Light Field is known, then views from all possible positions can be reconstructed, even with the same depth of focus by combining individual light rays. Multiview, freeview point, 360° are subsampled versions of the Light Field representation. Due to the amount of data, a technological breakthrough – a new codec - is expected."

This breakthrough assumes that capture devices will have advanced by 2022 – the date by which MPEG aims to enable lateral and frontal translations with its new codec. MPEG has called for video test material, including plenoptic cameras and camera arrays, in order to build a database for the work.

Already too late?

I wonder if taking until 2022 for MPEG to finish work on MPEG-I will be too late. In 2016 there was debate about the best way of encoding ambisonic audio for VR video. The debate wasn't settled by MPEG or SMPTE: Google’s YouTube and Facebook agreed on the format they would support, and that became the de facto standard.

Apple have advertised a job vacancy for a CoreMedia VR File Format Engineer with ‘Direct experience with implementing and/or designing media file formats.’

Facebook have already talked about 6 degrees of freedom video at their 2017 developer conference. They showed alpha versions of VR video plugins from Mettle running in Premiere Pro CC for 6DoF experiences. Adobe have since acquired Mettle.

Facebook won’t want to wait until 2022 to serve immersive experiences where users will be able to move left, right, up, down, back and forth while video plays back.

Now the race is on to define the first immersive video file format.

Adobe Premiere and Final Cut Pro creator Randy Ubillos honoured with 2017 SMPTE award

Tuesday, 22 August 2017

SMPTE (The Society of Motion Picture & Television Engineers) have announced that former Adobe, Macromedia and Apple employee Randy Ubillos will be receiving the Workflow Systems Medal at the SMPTE 2017 Awards later this year.

The Workflow Systems Medal, sponsored by Leon Silverman, recognizes outstanding contributions related to the development and integration of workflows, such as integrated processes, end-to-end systems or industry ecosystem innovations that enhance creativity, collaboration, and efficiency, or novel approaches to the production, postproduction, or distribution process.

The award will be presented to Randy Ubillos in recognition of his role in establishing the foundation of accessible and affordable digital nonlinear editing software that fundamentally shaped the industry landscape and changed the way visual stories are created and told. Ubillos’ revolutionary work with creating and designing lower-cost editing software such as Final Cut Pro® and Adobe® Premiere® shifted the film and television industry toward a more inclusive future, giving storytellers of diverse backgrounds and experience levels the ability to tell their stories and rise as filmmakers, technicians, engineers, and key players in every facet of media and entertainment.

His work significantly enhanced and transformed the world of postproduction, popularizing and commoditizing file-based workflows while removing significant barriers to the creative editing process for millions of users worldwide.

I interviewed Randy at the first FCPX Creative Summit in 2015. Topics covered included where Adobe Premiere 1.0 came from, the story of Final Cut Pro at Macromedia and working with Steve Jobs:

Ubillos: iMovie’s codename was RoughCut, it was conceived originally as a front end to Final Cut - for creating a rough edit for Final Cut. I worked with a graphic designer to make it look good. When I did a demo of it to Steve [Jobs] in about three minutes he said “That’s the next iMovie.” So I asked when it was supposed to ship, and he said “Eight months.”

[…]

The very last conversation I had with Steve Jobs was right after the launch of Final Cut Pro X. I was getting ready to get on a plane to go to London to record the second set of movie trailers - we’d hired the London Symphony Orchestra [to perform the music that was going to be bundled with the next version of iMovie] - and Steve caught me at home: “What the heck is going on with this Final Cut X thing?” I said “We knew this was coming, we knew that people were going to freak out when we changed everything out from under them. We could have done this better. We should have. Final Cut 7 should be back on the market. We should have an FAQ that lists what this is all about.” He said “Yeah, let’s get out and fund this thing, let’s make sure we get on top of this thing, move quickly with releases…” and he finished by asking: “Do you believe in this?” I said “Yes.” He said “then I do too.”

Congratulations to Randy. Although he is probably making the most of his retirement, I hope his contributions to the history of video literacy are not over.

2017 iPhone 3D face scanner for ID unlock fast enough for video depth capture

Monday, 21 August 2017

In reports coming from Asia it is rumoured that one of the iPhones Apple plans to announce later this year will have a face scanner that will allow users to unlock their phones without using a TouchID fingerprint scanner:

The Korea Herald yesterday:

The new facial recognition scanner with 3-D sensors can deeply sense a user’s face in the millionths of a second. Also, 3-D sensors are said to be adopted for the front and rear of the device to realize AR applications, which integrate 3-D virtual images with user’s environment in real time.

This is interesting news for those who want to work with footage that includes depth information. The kind of camera required to quickly distinguish between different faces probably needs to sample the depth in a very short space of time to counteract phone and face movement.

Apple’s ARKit and Metal 2 are built around a 90fps refresh rate, so a depth sensor that samples this quickly would be more than fast enough for VR and AR experiences.

Another titbit is that the new Apple phone is said to be able to recognise a person’s face even when the phone is lying on a table. That says to me that the volume the phone’s depth sensor will be able to capture is much wider than that covered by the light sensor in the phone’s camera. The ‘angle of view’ required for a sensor in a phone lying on a table to read a nearby face would have to be at least 120º.

Now we need applications that can help people use this depth information in creative ways. At the very least, it will mean there will be no need to use green and blue screens to separate people from backgrounds. All objects further than a specific distance from the camera can be defined as transparent for production purposes.

Apple spending $1bn on TV production - how much on Final Cut Pro X post?

Sunday, 20 August 2017

Last week the Wall Street Journal reported that Apple will spend $1 billion on making its own TV content over the next year:

Combined with the company’s marketing clout and global reach, the step immediately makes Apple a considerable competitor in a crowded market, where both new and traditional media players are vying for original shows. Apple’s budget is about half of what Time Warner Inc.’s HBO spent on content last year, and on par with estimates of what Amazon.com Inc. spent in 2013, one year after it announced its move into original programming.

Apple could acquire and produce as many as 10 television shows, according to the people familiar with the plan, helping fulfill Apple Senior Vice President Eddy Cue’s vision of offering high-quality video, similar to shows such as HBO’s “Game of Thrones,” on its streaming-music service or possibly a new, video-focused service.

Given that post production costs on feature films and high-end TV usually amount to 1-3% of total budgets, that means around $10-30 million will be spent on picture editing, sound editing, compositing and mastering.

How much of that $10-30 million will be spent on Final Cut Pro X-based workflows?

How prescriptive will Apple be?

Judging from recent successes in TV production, the trick that HBO, Netflix and Amazon have mastered is to be less hands-on with the creative people involved. Their policy is to invest in people with proven track records and not to manage them too closely.

That means Apple are unlikely to be insisting that each TV show is as precisely designed and produced as an iPhone. They will not insist on Apple products and services being at the core of story ideas, or even being placed clearly on screen. This kind of thing would be instantly counter-productive.

Although what goes into the writing and on screen is unlikely to be influenced by Apple, that restriction might not apply to the aspects of production that viewers won't be able to judge. That means Apple could require that preproduction, production and post-production use a specific amount of Apple products, services and software.

Even Apple isn’t forced to use Apple

Today Apple doesn't force all suppliers and staff to only use Apple products and services. Marketing vacancies at apple.jobs.com include requirements that people know how to use Adobe products for which there are Apple equivalents. Some Apple TV commercials are not edited using Final Cut Pro X, some motion graphics are not created in Motion 5, audio post production is not limited to Logic Pro X.

Apple sensibly want to be able to work with the best people and suppliers - and not be limited to those that only use Apple products. On the other hand, Apple’s hardware, software and services teams proceed on the basis that what they make would be the best tools to make high-end TV and feature films. 

Train the talented in the tools you want them to use

There are two things Apple can do here: firstly, they need to improve their products to make them more suited for high-end production. Secondly, they could invest in the education aspects of the post ecosystem. Production companies who are required to use Final Cut Pro X to edit the next House of Cards or Stranger Things are likely to say: ‘There aren't enough editors, assistant editors, apprentices, post production supervisors and VFX producers who know Final Cut Pro X and its ecosystem.’

Oversupply is a requirement

The catch is that although there are some people in Los Angeles and New York who could be employed in these roles, they don’t have enough experience in high-end TV. The sad thing about TV and film production is that you need a whole ecosystem of people and suppliers who know a specific post system: so you can have the security of knowing you can fire individuals and drop companies when you want. Production company management techniques expect that kind of control over costs. That's why working in the VFX industry is so tough - those paying the bills know that there is enough oversupply to keep them in a very good negotiating position.

Specific demands of commissioners such as Amazon are already changing post workflows. They specify that shows they fund must be made using a 4K workflow. In practice almost no-one will benefit from more pixels being streamed to their TVs at home, but Amazon consider 4K an important marketing distinction, so production companies change their workflows in order to be in line for some Amazon money.

Apple are unlikely to require a Final Cut Pro X workflow. They could encourage its use by allowing production companies to get double the usual money for post budgets if they use Final Cut. Even dangling that carrot won’t make adoption possible any time soon. $1bn of TV production makes around 10 big (‘Fargo’/‘The Walking Dead’-sized) shows per year. Even if the post production of five or six of those shows were done in Los Angeles, the lack of a Final Cut Pro X people and supplier ecosystem means I guess only two of them could be made simultaneously using a Final Cut-based workflow.

A few million on a big plan

As Apple is about to spend millions of dollars in Los Angeles with creative people, perhaps it is time for Apple to prime the Final Cut Pro X L.A. post production ecosystem: Train the trainers, plan the courses, do the marketing to post people, train experienced post people and generate case studies. Create the oversupply that makes producers feel like they have enough control.

There is time. TV show development takes months. While new Apple commissioning people make their plans and start working with talented people and production companies, other Apple people can set about preparing a much bigger ecosystem to support production and post production using Apple hardware, software and services.

‘Negative cost’ Final Cut Pro X training

What could Apple do with $1m in Los Angeles? Pay experienced post production people to attend Final Cut Pro X training. For editors, assistant editors, VFX supervisors, VFX staffers, producers, writers, directors, reporters. Pay them their normal daily rates to be trained in what people their roles need to know about Apple’s Pro Apps.

Who needs to be convinced?

The argument isn't about ‘tracks vs. the magnetic timeline’ - it’s about money. All the talk of convincing post people to use Final Cut Pro X is a nice, kind way of doing things. The people who need convincing are those with the money. Post didn't move to Avid 20 years ago because it was better than film. The money people were convinced by the economics of computer-based editing, and ordered the post people to make the change.

Sorting the supply of people and third-party services is the start of this. The next stage is gathering the evidence of how much money will be saved. Once that happens, improvements in the magnetic timeline or the Final Cut Pro X version of bin locking will be irrelevant. Once it can be shown that a switch to Final Cut Pro X makes post twice as cheap and twice as flexible as any other method, that's when the switch will happen.

Time for Apple to start planning and make a big change in high-end post production.

The end of public advertising

Friday, 11 August 2017

How much would you pay for all advertising to be removed from your view as you go about your daily life? All it needs is the ability to interpolate what the advertising covers up, and to replace all advertising with that.

This TechCrunch article buries the lede:

Facebook buys computer vision startup focused on adding objects to video

Adding objects to video isn't as hard as removing objects. Facebook has bought a company that has that technology:

Facebook has acquired a German company called Fayteq that builds software add-ons for video editing that can remove and add whole objects from captured video using computer vision

My emphasis.

Facebook also have a company that makes glasses you can wear to run the software.

I would guess that Apple and others will also develop such software to run on their AR devices. Once a majority of people won't be able to see shared public advertising, how long until it is no longer put up?

Apple Motion Tip: Setting Precise Widget Snapshot Values

Tuesday, 08 August 2017

In 2011, Apple updated Motion to version 5. Its biggest new feature was the ability to make plugins for Final Cut Pro X. Part of that feature is the ability to control multiple parameters in Motion with a single slider. You can set the start and end values for the slider. Each Motion parameter value you 'rig' to the slider 'widget' can be controlled by the slider.

In this case, a Slider widget has been set up to control the width of a line and the font size of some text in Motion. The range of the slider has been set to a minimum of 6 and a maximum of 100:

You can also make it so when the slider is set to a specific value, the rigged parameters can be set to values of your choice. These specific slider values are known as 'snapshots.' In the case above, I would like to make it so a value of 50 for the slider makes the width of the line 10 points and the size of the text 50 points. The problem is that you add snapshots to slider widgets by double-clicking below the slider. You can then drag the snapshot to the value you want it to be, but it is hard to set the value precisely. In this case, the closest I could get the snapshot to be was 50.03.

Before version 5.3, setting a precise value for a snapshot required doing some calculations, then opening the Motion file in a text editor and editing the source directly (setting the snapshot value to 0.531914894, i.e. 50/(100-6)). I wrote about this back in 2011.
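For reference, the arithmetic behind that old workaround - exactly as described in the 2011 post - expressed as a throwaway Swift calculation:

    // Normalised snapshot value used in the old text-editor workaround:
    // the desired slider value divided by the widget's range (maximum minus minimum).
    let desiredSliderValue = 50.0
    let rangeMinimum = 6.0
    let rangeMaximum = 100.0
    let snapshotValue = desiredSliderValue / (rangeMaximum - rangeMinimum)
    print(snapshotValue)   // 0.531914893…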

Now all you need do is double-click the snapshot pin. A field appears that you can edit:

Sadly there is a small fault in that the value doesn't take the start value of the slider into account. In this case it doesn't show ‘50.03’ but ‘44.03.’ If you edit this value to be ‘44’…

…the snapshot value is set to ‘50’.

Much easier than before.

Apple Patent: Personalised Programming for Streaming Media Breaks

Tuesday, 08 August 2017

Apple have been awarded a patent for 'content pods':

A content delivery system determines what personal content is available on the user device through connecting to available information sources. The delivery system then assembles the content pod from these elements in addition to invitational content from content providers. In some embodiments, a bumper message is included in the content pod to provide a context for the elements that are being assembled in combination with each other. Once the content pod is generated, it is sent to the user device to be played during content breaks within the online streaming playback.

The patent doesn't specify whether this pod is made for breaks in video streaming - Apple TV - or audio - Apple Music. This means automatically generated audio and video content to pepper the ‘stream’ (or Facebook/Twitter/Instagram feed). Apple already creates animated video ‘Memories’ based on photos on iOS and macOS. 

'Pod'?

Interesting that Apple refers to these bundles of content as 'pods.' It seems that when they applied for this patent, they saw the value of the podcast brand. As people have had problems widening understanding of podcasts beyond their niche, perhaps Apple were considering shifting the meaning of 'pod' towards an integrated, customised programming bundle.

On the advent of Apple’s ‘iTunes Radio’ in 2013, I had some thoughts on what else might be included in automatically generated personalised media feeds:

Adding the visual to a media feed would make a playlist item an act of a TV show or feature film, a short film, a YouTube video or a family video. It would include content from broadcast TV (news and sport and drama premieres), purchased TV, feature films and content from streamed subscription services. If you wanted to jump into a TV series or Soap after the first episodes, recap content would be playlisted in advance of the show you want to start with.

Almost 10 years ago Apple got a patent for inserting advertising into a feed. Just because Apple has a patent, it doesn't mean they will produce a product or service that relies on the patent.

New RED HYDROGEN ONE video shows non-working prototype

Thursday, 03 August 2017

A video showing more of the HYDROGEN ONE phone due 'next year' from RED. No preview of the display or much explanation of RED's new viewing format (aka video network). A look at the physical design and a hint of the modularity. Interesting.

Not sure why RED needs to make this a phone. How long would it take anyone to match a phone to Samsung or Apple quality - hardware and software integration-wise?

It would work just as well as a media slab that you take with you everywhere.

Apple and RED getting closer?

Unless, due to a deal with Apple (who are now the exclusive distributor of the $15,000 RED RAVEN camera), the HYDROGEN is secretly an iPhone Pro extension device! That would be interesting.

Coming soon: Apple post production workflow lab in Culver City, Los Angeles

Monday, 31 July 2017

It's good news and bad news for users of Apple’s high-end Macs. The good news: they are going to set up a ‘Pro Workflow’ lab. The bad news: they didn't do this years ago!

Apple watchers suspect that up until earlier this year, the plan was that the introduction of the iMac Pro would mark the end of the 2013 Mac Pro - or any kind of standalone high-end Mac. The new plan is to make a Mac Pro replacement, which was described as coming eventually, but “will not ship this year.”

The news about a new Apple Pro Workflow Lab based in Culver City, Los Angeles County, comes from a new Apple job description for a ‘Pro Workflow Expert’ position.

They are looking for someone with experience combining multiple high-end post production applications, hardware and computers together to produce professional content:

  • A minimum of 7+ years of experience developing professional content using video and photo editing and/or 3D animation on Mac, PC and/or Linux systems.
  • Deep knowledge of one or more key professional content creation tools such as Final Cut Pro X, Adobe Creative Cloud tool suite, Flame, Maya, Mari, Pro Tools, Logic Pro, and other industry leading tools. An understanding of the entire workflow from generating and importing content, editing, effects, playback and distribution is required.
  • Basic knowledge of required 3rd party hardware such as cameras, control surfaces, I/O, display, storage, etc. for various workflows
  • Knowledge of relevant plug-ins typically used for various aspects of pro workflows

The Macintosh System Architecture team wants someone to

Set up a lab with necessary equipment for relevant workflows for instrumentation and analysis, including desktops, notebooks and iPads.

Work with key developers to thoroughly comprehend all aspects of various pro workflows, applications, plug-ins, add-on hardware, and infrastructures.

Ensure relevant workflows are understood and thoroughly documented and work with technical marketing to ensure definitions correspond to customer usage cases.

Identify performance and functional issues with each workflow and work with the architecture team on detailed micro-architecture analysis.

Good news for those who worry that Apple is now only ‘the iPhone company’ and will never take small niche markets like high-end production seriously.

6 years late?

It is a pity that this lab wasn’t set up during the development of the Mac Pro in the years before its 2013 launch. At least the new lab will include ‘desktops, notebooks and iPads’ - that implies not just Mac Pros, iMacs and MacBook Pros but PCs and mobile PCs.

Have the relevant experience and want to work with cool Apple folk in Culver City? Real high-end post production skills? Apply for the job today!

 

Animation tools not ready for satire prime time

Thursday, 27 July 2017

If you had an almost unlimited budget, could you produce a rich feed of animated satirical videos with a one-day turnaround?

For now the answer seems to be no.

When HBO announced a deal with American satirist Jon Stewart in 2015, one of the shows mentioned was an animated comedy show. The Hollywood Reporter has reported that in May this year, HBO said that they would not be going ahead with the idea:

Stewart was set to work with cloud-graphics company OTOY to develop new technology that would allow him to produce timely shortform digital content. “Appearing on television 22 minutes a night clearly broke me," Stewart said at the time. "I’m pretty sure I can produce a few minutes of content every now and again."

The idea had been for the material to be refreshed on HBO's digital platforms, including HBO Now and HBO Go, multiple times throughout the day. But sources say it was the one-day turnaround that became a sticking point for the project. From a technological standpoint, it became clear to those involved that it would be next to impossible to create and distribute the sophisticated animation within the short window. The project had already been delayed due to its technological complexity. At one point, the digital shorts were expected to debut ahead of the presidential election, so as to provide commentary on the campaigns — but when challenges arose, HBO reportedly told Stewart he could have as much time as he needed to get it right.

HBO executive Casey Bloys explained that there was another reason they couldn't get the turnaround time down to one day:

"Once Jon realized that he could get close on the animation, what he realized also was in terms of the quality control and in terms of the writing, when you’re putting something out a couple of times a day, the quality control still has to be here," Bloys said. "It just got to a point where it was like, is this worth his time? Is this worth our time? We kind of thought, ‘You know what? It was a good try, but ultimately not worth it.’”

So making sure the writing remains high quality was a problem, but the technology also isn’t ready. I wonder what tools and UI will be able to hit this kind of target.

 

 

BBC evaluating iOS ARkit for mobile apps

Tuesday, 25 July 2017

The BBC is looking for an AR agency to collaborate in developing a mobile application to augment the launch of a big new TV series in Spring 2018:

We are looking to create an Augmented Reality (AR) experience in association with the Civilisations television broadcast (Spring 2018). We’d like a mobile application that allows the audience to bring objects from the programme into their homes - we’d like to do this using through-camera, marker less tracking to let the audience inspect and experience the objects in the comfort of their own living room. In addition to the objects themselves we’d like to provide the audience with exclusive audio and text content around these objects.

  • We’d like to use this as an opportunity to evaluate the state-of-play of solutions like Apple’s ARKit, Vuforia and OpenCV.
  • A system whereby approved content can be categorised or curated based on episodic content, overarching themes, geographic location or personal interest.
  • A system whereby institutions can integrate and publish new content to the app.

We want both the App and the content framework to be built with extensibility in mind - we want the ability to add different types of content in the future, or use the framework to power a geolocated app.

The TV show is on a similar scale to programmes such as "Life on Earth" and "Planet Earth." It aims to cover the history of art, architecture and philosophy:

BBC Civilisations is a re-imagining of the landmark history series Civilisation. Where the original series focussed on the influence of Western art, this new series will expand to include civilisations from Asia to the Americas, Africa as well as Europe. The series will be complemented by other programming across the BBC’s platforms and will include innovative digital content which will be made working in collaboration with the UK’s museum sector.

Imagine what kind of objects and environments could be overlaid into people’s lives using AR. The link to museums would allow AR content to be unlocked on visits to specific locations in the UK.

ARKit is a new framework coming in iOS 11 later this year. It allows recent iPhones and iPads to overlay 3D content on the live camera view so that rendered objects and environments align with real spaces.
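As a rough sketch of what the brief implies - markerless, through-camera tracking that anchors virtual objects in the viewer's room - the core ARKit setup is only a few lines. This assumes iOS 11 with SceneKit rendering and a hypothetical view controller; it is not based on any BBC code:

    import UIKit
    import ARKit

    class MuseumObjectViewController: UIViewController {
        let sceneView = ARSCNView()

        override func viewDidLoad() {
            super.viewDidLoad()
            sceneView.frame = view.bounds
            view.addSubview(sceneView)
            // 3D content (scanned museum objects, say) would be added to sceneView.scene here.
        }

        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)
            // World tracking needs no printed markers; ARKit finds surfaces itself.
            let configuration = ARWorldTrackingConfiguration()
            configuration.planeDetection = .horizontal   // find floors and tables to place objects on
            sceneView.session.run(configuration)
        }

        override func viewWillDisappear(_ animated: Bool) {
            super.viewWillDisappear(animated)
            sceneView.session.pause()
        }
    }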

If you are part of an agency that might be able to deliver this kind of application and associated content management system, apply now - the deadline is tomorrow evening.

New Apple job = Machine learning coming to Final Cut Pro X?

Monday, 24 July 2017

Computer vision and machine learning coming to Apple ProApps? On Friday Apple added a new opening to their jobs website:

Video Applications Senior Software Engineer

Combining the latest Mac OS with Apple quality UI design, the editing team builds the innovative next generation timeline experience in Final Cut Pro X.

The job requirements have some interesting clauses:

Work on the architecture that provides the canvas for telling a story in video and incorporate cutting edge technology to build future versions of the product.

What cutting edge technology could they be thinking of here?

Experience developing computer vision and/or audio processing algorithms

Do you have experience applying machine learning solutions with video and audio data in a product?

Seems like object and pattern recognition will be useful, perhaps for automatic keywording and point, plane and object tracking. This is to be expected as smartphones can do real-time face and object tracking in social media apps today.

At 2017 WWDC in June Apple announced that they will add object tracking to iOS and macOS later this year (link to video of tracking demo). Here's an excerpt from the video of the session:

Another new technology, brand-new in the Vision framework this year is object tracking. You can use this to track a face if you've detected a face. You can use that face rectangle as an initial condition to the tracking and then the Vision framework will track that square throughout the rest of your video. Will also track rectangles and you can also define the initial condition yourself. So that's what I mean by general templates, if you decide to for example, put a square around this wakeboarder as I have, you can then go ahead and track that.
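In code, the flow the session describes is: detect once, then feed each subsequent frame plus the previous observation to a tracking request. Here is a minimal sketch, assuming you already have the first frame as a CGImage and later frames as pixel buffers:

    import Vision
    import CoreVideo

    let sequenceHandler = VNSequenceRequestHandler()

    // Detect a face in the first frame to use as the tracker's initial condition.
    func detectFace(in firstFrame: CGImage) throws -> VNDetectedObjectObservation? {
        let detect = VNDetectFaceRectanglesRequest()
        try VNImageRequestHandler(cgImage: firstFrame, options: [:]).perform([detect])
        return detect.results?.first as? VNDetectedObjectObservation
    }

    // Track that rectangle through a later frame. Call once per frame,
    // passing the observation returned for the previous frame.
    func track(_ previous: VNDetectedObjectObservation,
               through frame: CVPixelBuffer) throws -> VNDetectedObjectObservation? {
        let request = VNTrackObjectRequest(detectedObjectObservation: previous)
        request.trackingLevel = .accurate
        try sequenceHandler.perform([request], on: frame)
        return request.results?.first as? VNDetectedObjectObservation
    }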

They also talked about applying machine learning models to content recognition:

Perhaps for example, you want to create a wedding application where you're able to detect this part of the wedding is the reception, this part of the wedding is where the bride is walking down the aisle. If you want to train your own model and you have the data to train your own model you can do that.
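A hedged sketch of that custom-model idea, using Vision's Core ML integration. The wedding-scene classifier below is hypothetical; you would train and compile your own model with your own data:

    import Vision
    import CoreML
    import CoreVideo

    // Run a custom-trained classifier over one video frame and return the top label,
    // e.g. "reception" or "processional". The MLModel comes from your own training data
    // (a hypothetical WeddingSceneClassifier.mlmodel compiled into the app).
    func classifyScene(in frame: CVPixelBuffer, with model: MLModel) throws -> String? {
        let visionModel = try VNCoreMLModel(for: model)
        let request = VNCoreMLRequest(model: visionModel)
        try VNImageRequestHandler(cvPixelBuffer: frame, options: [:]).perform([request])
        return (request.results?.first as? VNClassificationObservation)?.identifier
    }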

Machine learning and audio?

Interesting that they are planning to recognise aspects of audio as well - keywording is the straightforward part. What could machine learning automatically determine about captured audio? This could be the beginning of automatic audio mixing that produces rudimentary placeholders before audio professionals take over.

Recently there have been academic experiments investigating automatic picture editing (for example the Stanford/Adobe research I wrote about in May). I wonder when similar experiments will investigate sound editing and mixing?

Not just Final Cut Pro X

Although people are expecting machine learning to be applied to video and audio in Final Cut Pro X, remember that iMovie is essentially the same application with the consumer-friendly UI turned on. What works in Final Cut Pro X can also be introduced to a wider market in iMovie for macOS, iMovie for iOS and Clips for iOS.

Logic Pro X 10.3.2 update sees Final Cut Pro X interchange improvements

Wednesday, 19 July 2017

It looks like the Logic Pro team are spending time making it work better with Final Cut Pro X. Logic Pro X was updated to version 10.3.2 yesterday. In the extensive list of new features and bug fixes, here are the points related to Final Cut:

  • When importing Final Cut Pro XML projects containing multichannel audio files, Logic Pro now reliably maintains channel assignments.
  • Large Final Cut Pro XML files now load more quickly.
  • Final Cut Pro X XML files now reliably import with the correct Sub-Role Names.
  • Logic Pro now creates a Summing Stack for each Parent Role when importing FCPX XML files.

I don't use Final Cut to Logic workflows, so I can’t say how reliable Logic is when interpreting Final Cut XML. It seems that the Logic team are more like the Adobe Premiere team when it comes to implementing features: don't wait until a feature is perfect, get it in, then make it better based on user feedback.

If you have bought Logic Pro X, the update is available from the Mac App Store.

VR jobs at Apple: July 2017

Monday, 17 July 2017

There are a number of positions available at Apple in July 2017 whose job descriptions mention VR.

VR hardware

IMG CoreMedia VR Pipeline Engineer

The Interactive Media Group (IMG) provides the media and graphics foundation across all of Apple's innovative products, including iPhone, AppleTV, Apple Watch, iPad, iPod, Macs as well as professional and consumer applications from Final Cut to iTunes and iWork.

  • Strong coding skills in C with ARM on embedded platforms
  • 2+ years experience developing and debugging large software systems
  • Direct experience with implementing and/or designing VR or 360 video playback systems

The role requires the ability to help design, build and troubleshoot media services for playback and export.

Spatial Audio Software Engineer

  • Key Advantage : Experience with audio software subsystems including DAWs, Game Audio Engines including Unreal, Unity and/or audio middleware for game and AR/VR applications.
  • Experience with Spatial audio formats (Atmos, HOA etc) is desirable.
  • Experience with Metal and general working of GPU systems.
  • Experience with SIMD, writing highly optimized audio algorithms

What would be in a VR file format?

IMG - CoreMedia VR File Format Engineer

  • Proven experience with Audio/Video components of a media software system
  • Direct experience with implementing and/or designing media file formats
  • Experience with VR and 360 video

Interesting that Apple feel the need for a VR file format. I wonder what will make Apple’s VR file format stand out? It will probably be possible to record and encode it on iOS and macOS. I wonder if it will also work on tvOS and watchOS. If it doesn't work on non-Apple hardware, it could be part of an Apple plan for technological lock-in.

VR marketing

Creative Technologist

As a member of Apple’s Interactive Team, the Creative Technologist is responsible for driving innovation that enhances and enlivens the marketing of Apple’s products and services. This role requires collaboration with the design, UX, motion graphics, film/video, 3D, and development groups across Apple’s Marcom group.

  • Developing interactive prototypes in order to conceptualize and develop innovative approaches for Apple marketing initiatives.
  • Experience with Adobe Creative Suite.
  • Experience with Unity/Unreal and AR/VR development is a plus.
  • Motion graphics and 3D software (AfterEffects, Maya) skills.

It's a pity Apple Marketing doesn't require knowledge of Apple’s motion graphics application. 

Route-to-Market Mgr, WW In-Store Channel Digital Experience

The Route-to-Market Manager, WW In-Store Channel Digital Experience, is responsible for driving and executing all digital marketing communications as related to in-store Apple-led branded product presentation and campaigns.

  • Detailed knowledge of digital experience technologies - including but not limited to, on-device engagement tactics, digital content development, app development, beaconing, AR/VR, etc.

Using VR to make Apple products

Also a job requirement shows that Apple are using VR simulations to design power systems:

Senior Electromagnetics Analyst

  • Engineer will also need to do fundamental analyses and run SPICE simulations for VR conversion.

SPICE is a system that takes circuit designs and simulates specific results given specific inputs. 30 years ago it was a command-line UNIX tool. Now Apple engineers appear to be using VR to look around inside their hardware designs - although it is worth noting that in a power-electronics job listing, 'VR conversion' may simply mean voltage regulator conversion rather than virtual reality.

If you choose to apply for any of these jobs, good luck. Tell them Alex Gollner sent you!

Apple Pro Apps and macOS High Sierra compatibility

Friday, 14 July 2017

What versions of Final Cut Pro X are compatible with macOS High Sierra?

During Apple’s 2017 Worldwide Developer Conference, macOS High Sierra was announced. Apple has a public beta test programme, where you can sign up to try early versions of Apple operating systems before they are released. 

macOS High Sierra is supposed to be a version of the Mac operating system that consolidates previous features and improves stability. This gives Apple and third-party developers the chance to catch their breath for a year. They can concentrate on reliability and stable improvement.

The question for Final Cut Pro X, Motion 5, Compressor and Logic Pro X users is whether to update their Macs to High Sierra.

Apple says that users of older versions of these applications who want to use macOS High Sierra will need to update to:

  • Final Cut Pro X 10.3.4 or later
  • Motion 5.3.2 or later
  • Compressor 4.3.2 or later
  • Logic Pro X 10.3.1 or later
  • MainStage 3.3 or later

If you still use Final Cut Pro 7 - or any other applications in the Final Cut Studio suite (including DVD Studio Pro and Soundtrack Pro), or need to use them once in a while to open older projects, don't update all your Macs to macOS High Sierra:

Previous versions of these applications, including all apps in Final Cut Studio and Logic Studio, are not supported in macOS High Sierra.

Interesting that the ProApps team are pushing users forward this way. It will be worth watching whether new application features and bug fixes start requiring newer versions of macOS sooner than in previous transitions.

Final Cut Pro 7 was last updated in September 2010. It is impressive that it still runs on Macs being released in 2017.

If you have more than one Mac, perhaps it is worth keeping one on macOS Sierra for the foreseeable future. When the next major version of Final Cut appears, it is likely it will work on Sierra. If you don't have more than one Mac, prepare a clone of your most reliable macOS Sierra startup drive for future use when you need to revisit old projects.
