Latest Posts

Video preview of iMac Pro from MKBHD – Marques Brownlee
Tuesday, December 12 2017

Just as with the iPhone X, it looks like Apple are giving online influencers early access to new products and permission to share their impressions before release. Marques Brownlee – known as MKBHD on the internet – has posted a video on the forthcoming iMac Pro. His 5.4 million subscribers are now finding out about the new Mac from Apple.

He mentions that this video was edited on the new iMac Pro in the next version of Final Cut Pro X, 10.4.

The model he’s been working with for a week is the Intel Xeon W 3GHz 10-core iMac Pro with 128GB of RAM, a Radeon Pro Vega 64 GPU with 16GB of VRAM and 2TB of storage – the ‘middle’ iMac Pro in the range.

  • The physical dimensions exactly match today’s 2017 5K iMac.
  • RAM is not user-upgradable
  • Two more Thunderbolt 3 ports (for a total of 4)
  • 10 Gigabit Ethernet
  • Geekbench iMac Pro single core: 5,468 (vs. 5,571 for 2017 iMac and 3,636 for 2013 Mac Pro)
  • Geekbench iMac Pro multi-core: 37,417 (vs. 19,667 for 2017 iMac and 26,092 for 2013 Mac Pro)
  • Storage speed: 3,000MB/s read and write
  • The fan rarely spins up and the machine stays cool to the touch, despite high-end workstation components
  • 8- and 10-core editions available first; you’ll have to wait longer if you order an 18-core.
  • “The ideal high-end YouTuber machine”

Looks like applications that take advantage of multiple CPU cores are going to see a big difference on the iMac Pro.
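A quick back-of-the-envelope calculation makes the scale of that difference concrete. This is a minimal Python sketch of my own; the figures are the Geekbench scores quoted above:

```python
# Speedup ratios derived from the Geekbench scores quoted above.
imac_pro = {"single": 5_468, "multi": 37_417}
imac_2017 = {"single": 5_571, "multi": 19_667}
mac_pro_2013 = {"single": 3_636, "multi": 26_092}

print(f"multi-core vs 2017 iMac:    {imac_pro['multi'] / imac_2017['multi']:.1f}x")     # ~1.9x
print(f"multi-core vs 2013 Mac Pro: {imac_pro['multi'] / mac_pro_2013['multi']:.1f}x")  # ~1.4x
print(f"single-core vs 2017 iMac:   {imac_pro['single'] / imac_2017['single']:.2f}x")   # ~0.98x
```

Single-core performance is essentially a wash against the 2017 iMac; it is well-threaded workloads that nearly double.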

Apple have announced that orders for the new iMac Pro will start on Thursday.

Soon: More audio timelines that can automatically be modified to match changes in video timelines
Wednesday, December 6 2017

In many video editing workflows, assistants have the thankless task of making special versions of timelines that generate files for others in post production. A special timeline for VFX people. A special timeline for colour. A special timeline for exporting for broadcast. A special timeline for audio. Transferring timelines to other departments is called ‘doing turnovers.’

Final Cut Pro X is the professional video editing application that automates the most turnovers. It seems that Apple want to remove the need for special timelines to be created, timelines that can go out of sync if the main picture edit changes. Final Cut’s video and audio roles mean that turnovers for broadcast no longer require special timelines.

The Vordio application aims to make the manual audio reconform process go away. At the moment, problems arise when video timelines change once the audio team start work on their version of the timeline. Sound editors, designers and mixers can do a great deal of work on a film and then be told that there have been changes to the picture edit.

What’s new? What’s moved? What has been deleted?

Vordio offers audio autoreconform: if (or rather when) the picture timeline changes, Vordio looks at the NLE-made changes and produces a change list that can be applied to the audio timeline in the DAW. It currently does this with Final Cut Pro X and Adobe Premiere timelines. If the sound team have already made changes in Reaper (a popular alternative to Pro Tools) and need to know what changes have since been made to the video edit, Vordio can make changes to the audio timeline that reflect the new video edit. This includes labelling new clips, labelling clips that have moved and showing which clips have been deleted.
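Vordio’s exact matching logic isn’t public, but the general shape of an autoreconform pass can be sketched. The function below is a hypothetical illustration, not Vordio’s code: it compares two versions of a timeline, with clips identified by a stable source ID, and classifies each clip as added, deleted, moved or unchanged.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str     # stable identifier for the source media
    start: float     # timeline position in seconds
    duration: float  # clip length in seconds

def change_list(old_timeline: list[Clip], new_timeline: list[Clip]) -> dict:
    """Classify clips as added, deleted, moved or unchanged between
    two versions of a picture edit. A hypothetical sketch of an
    autoreconform pass, not Vordio's actual implementation."""
    old = {c.clip_id: c for c in old_timeline}
    new = {c.clip_id: c for c in new_timeline}
    changes = {"added": [], "deleted": [], "moved": [], "unchanged": []}
    for clip_id, clip in new.items():
        if clip_id not in old:
            changes["added"].append(clip)
        elif (clip.start, clip.duration) != (old[clip_id].start, old[clip_id].duration):
            changes["moved"].append(clip)
        else:
            changes["unchanged"].append(clip)
    changes["deleted"] = [c for cid, c in old.items() if cid not in new]
    return changes
```

Each entry in a change list like this can then be applied to the DAW timeline: relabelling new clips, shifting moved ones and flagging deletions, without discarding the audio work already done.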

It looks like Vordio will soon work with other DAWs by using the Hammerspoon UI scripting toolkit.

Studio One is a useful DAW that has a free version.

I expect timeline autoreconform to come to all timelines. To get a preview of what it could be like, check out Vordio.

Film from a single point, then wander around inside a cloud of pixels in 3D
Monday, December 4 2017

People wearing 360° spherical video headsets will get a feeling of presence when the small subconscious movements they make are reflected in what they see. This is the first aim of Six Degrees of Freedom (6DoF) video. The scene changes as the viewer turns in three axes and moves in three axes. 6DoF video is stored as a sphere of pixels plus a channel of information that defines how far each of those pixels is from the camera.

Josh Gladstone has been experimenting with creating point clouds of pixels. The fourth video in his series about working on a sphere of pixels plus depth shows him wandering around a 3D environment that was captured by filming from a single point.

The scenes he uses in his series were filmed on a GoPro Odyssey camera. The footage recorded by its 16 sensors was then processed by the Google Jump online service to produce a sphere of pixels plus a depth map.

The pixels that are closest to the camera have brighter corresponding pixels in the depth map.

360° spherical video point clouds are made up of a sphere of pixels whose distances from the centre point have been modified based on a depth map.
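In code, that construction is straightforward. Here is a minimal numpy sketch of my own, assuming an equirectangular colour frame and a depth map normalised so that brighter means closer:

```python
import numpy as np

def equirect_to_point_cloud(rgb, depth, near=0.5, far=50.0):
    """Project an equirectangular frame plus depth map into a 3D point
    cloud centred on the camera position. Assumes depth is normalised
    to 0..1 with brighter (higher) values meaning closer pixels.
    rgb:   (H, W, 3) colour image
    depth: (H, W)    normalised depth map"""
    h, w = depth.shape
    # Longitude sweeps 0..2*pi across the width, latitude 0..pi down the height.
    lon, lat = np.meshgrid((np.arange(w) / w) * 2 * np.pi,
                           (np.arange(h) / h) * np.pi)
    # Brighter depth pixels are closer, so distance falls as brightness rises.
    radius = far - depth * (far - near)
    # Spherical to Cartesian: each pixel becomes a point along its view ray.
    x = radius * np.sin(lat) * np.cos(lon)
    y = radius * np.cos(lat)
    z = radius * np.sin(lat) * np.sin(lon)
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colours = rgb.reshape(-1, 3)
    return points, colours
```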

Josh has written scripts in Unity – a 3D game development environment – that allow real-time rendering of these point clouds. Real time is important because users will expect VR headsets to be able to render in real time as they turn their heads and move around inside virtual spaces.

You can move around inside this cloud of pixels filmed from a single point.

In the latest video in his series, Josh Gladstone simulates how a VR headset can be used to move around inside point clouds generated from information captured by 360° spherical video camera rigs. He also shows how combining multiple point clouds based on video taken from multiple positions could be the basis of recording full 3D environments.
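Combining clouds from several capture positions is, in principle, a matter of moving each cloud from its own camera-centred frame into a shared world frame before merging. A simplified hypothetical sketch, assuming the camera positions are already known:

```python
import numpy as np

def merge_point_clouds(clouds, camera_positions):
    """Combine point clouds captured from several known camera positions
    into one world-space cloud. Deliberately simplified: a real pipeline
    would also apply per-camera rotation and reconcile points that are
    visible from more than one capture position.
    clouds:           list of (N_i, 3) arrays in camera-local space
    camera_positions: list of (3,) world-space camera offsets"""
    world = [points + np.asarray(position)
             for points, position in zip(clouds, camera_positions)]
    return np.concatenate(world, axis=0)
```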

What starts as an experiment in a 3D game engine is destined to be in post production applications like Apple’s Motion 5 and Adobe After Effects, and maybe eventually in NLEs like Final Cut Pro X.

I’m looking forward to playing around inside point clouds.

28 videos, 53 million views (so far) – advice for your video essay YouTube channel
Sunday, December 3 2017

Every Frame a Painting is a YouTube channel made up of video essays about visual storytelling. It has 1.3 million subscribers and millions of views. The creators Taylor Ramos and Tony Zhou have decided to close it. Luckily for us, they have written an essay on what they learned – including tips for others considering making videos in this form.

All the videos were made with Final Cut Pro X:

Every Frame a Painting was edited entirely in Final Cut Pro X for one reason: keywords.

The first time I watch something, I watch it with a notebook. The second time I watch it, I use FCPX and keyword anything that interests me.

Keywords group everything in a really simple, visual way. This is how I figured out to cut from West Side Story to Transformers. From Godzilla to I, Robot. From Jackie Chan to Marvel films. On my screen, all of these clips are side-by-side because they share the same keyword.

Organization is not just some anal-retentive habit; it is literally the best way to make connections that would not happen otherwise.

Even if you don’t make scholarly videos on the nature of visual storytelling, there is a lot to be learnt from their article and the 28 video essays in their channel.

iPhone-mounted camera will capture 3D environments that can be fully explored in VR
Friday, December 1 2017

Photogrammetry is a method of capturing a space in 3D using a series of still photos. It usually requires a great deal of complex computing power. A forthcoming software update for the $199 Giroptic iO (a 360° spherical video camera you mount onto your iPhone or Android phone) will give users the ability to capture full VR models of the spaces they move through.

Mic Ty of 360 Rumors writes:

the photographer simply took 30 photos, then uploaded them to cloud servers for processing. The software generates the 3D model, and can even automatically remove the photographer from the VR model, even though the 360 photos had the photographer in them.

Once the model is generated, it can be included in full VR systems and explored in VR headsets. This will work especially well on devices such as the HTC Vive, which can detect where you are in 3D space and move the 3D model in VR to match. Remember though that many VR experiences are about interactivity; to add that to a 3D environment, users will have to use a VR authoring system.

3D environments in post production applications

Those making 360° spherical videos are likely to want their post tools to be able to handle the kind of 3D models generated by systems like these. Storytellers range from animators (users of applications like Blackmagic Fusion) to editors and directors (users of Final Cut Pro X and Adobe Premiere). Developers should bear in mind that the way they integrate 3D environments into post applications should vary based on the nature of the storyteller.

However, it looks like there’ll be a new skill to develop for 360° spherical photographers: where to take pictures in a space to capture the full environment in 3D.

Go over to 360 Rumors to see a video of the system in action.

Amazon launches Rekognition Video content tagging for third-party applications
Thursday, November 30 2017

Amazon have announced a content recognition service that developers can use to add features to their video applications, Streaming Media reports:

Rekognition Video is able to track people across videos, detect activities, and identify faces and objects. Celebrity identification is built in. It identifies faces even if they’re only partially in view, provides automatic tagging for locations and objects (such as beach, sun, or child), and tracks multiple people at once. The service goes beyond basic object identification, using context to provide richer information. The service is available today.

The videos need to be hosted in or streamed via Amazon S3 storage.
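For developers, the entry point is an asynchronous job API: you start an analysis against a video in S3, then collect time-stamped results. A minimal sketch using the AWS boto3 SDK for Python (the bucket and file names are placeholders):

```python
import time
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Start an asynchronous label-detection job on a video stored in S3.
# Bucket and key are placeholders - substitute your own.
job = rekognition.start_label_detection(
    Video={"S3Object": {"Bucket": "my-video-bucket",
                        "Name": "footage/interview.mp4"}},
    MinConfidence=80,
)

# Poll until the job finishes. A production system would subscribe to
# the SNS NotificationChannel instead of polling in a loop.
while True:
    result = rekognition.get_label_detection(JobId=job["JobId"],
                                             SortBy="TIMESTAMP")
    if result["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(10)

# Each detection carries a timestamp in milliseconds - exactly the shape
# of data a third-party tool could map onto NLE markers or keywords.
for detection in result.get("Labels", []):
    print(detection["Timestamp"],
          detection["Label"]["Name"],
          detection["Label"]["Confidence"])
```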

Apple are unlikely to incorporate Amazon Rekognition Video in their video applications and services. Luckily, the Final Cut Pro X and Adobe Premiere ecosystems allow third-party developers to create tools that use this service. Makers of post production tools can then concentrate on integrating their workflows with their NLEs while Amazon invest in improving the machine learning they can apply to video.
