FREMONT, CA — Blackmagic Design announced an update to DaVinci Resolve at the recent NAB Show in Las Vegas. Version 18.5 of Resolve represents a major update, adding new AI tools, over 150 feature upgrades, speech-to-text editing, automatic subtitling, AI audio classification, Universal Scene Description file support, and new menus on the Cut page. The Resolve 18.5 public beta is currently available for download from the company’s website (www.blackmagicdesign.com).
Editors can now transcribe audio within clips to search for media based on narrative content, or to quickly generate subtitles for timelines with the automatic speech-to-text feature. DaVinci Neural Engine AI can analyze and automatically sort audio clips based on classification, and on the Fairlight page, audio tracks can now be grouped for faster mix automation and editing. Colorists can use the new Relight FX feature to add virtual lighting to a scene. And VFX artists can collaborate more easily with support for USD files, and work faster with the multi-merge tool.
For those working in a remote workflow, users can now initiate remote monitoring with just a Blackmagic ID and a session code, and can stream to multiple computers, iPads or iPhones at the same time. They can also export their timeline to the Blackmagic Cloud using the new Presentations feature, which is in public beta for customers with project libraries. With Presentations, multiple people can review a timeline, leave comments and share a live chat. Comments appear as markers on the DaVinci Resolve timeline.
On the Color page, DaVinci Resolve color management can now be configured at the timeline level. Any existing custom timelines automatically inherit the project’s color management settings. This allows independent timeline and output color spaces to be set per timeline for projects with mixed media.
The addition of three new menus to the Cut page timeline allows for quicker editing. Editors can use the timeline options, timeline actions and edit actions menus to toggle ripple editing, trim edit points to the playhead, resync audio, and change the appearance of the timeline. Scene cut detection is also now possible directly on the Cut timeline. The Cut page also has a new ripple button, which enables and disables ripple edits. Previously, edits were always rippled; now, with the ripple button disabled, the duration of the edit is preserved, so editors can create gaps in the timeline. The auto subtitle feature on the Cut and Edit pages automatically transcribes speech to text into a subtitle track on the timeline.
Speech-to-text editing has also been added via the transcribe feature, which automatically transcribes video and audio clips. Users can search for specific terms or jump to the section of a clip where a word appears. Content creators can also now upload videos directly to TikTok by signing in to their TikTok account in preferences and using either the render preset on the Deliver page or the quick export dialog. A new option for vertical aspect ratio output makes creating content for social media easier.
Support for the OpenTimelineIO (OTIO) format makes importing and exporting timelines from other NLE applications fast and easy. OTIO supports metadata for clips, timing, tracks, transitions and markers, as well as information about the order and length of cuts and references to external media. Plus, users can now quickly back up and restore their work by enabling per-timeline backups in preferences.
The Fusion component of Resolve now supports Universal Scene Description (USD) files for easier collaboration between VFX artists. USD data, such as geometry, lighting, cameras, materials and animation, can be imported. Fusion’s new USD tools let users manipulate, re-light and render files using Hydra-based renderers, such as Storm. The new multi-merge tool lets artists merge numerous media sources into a single multi-layer stack, making it easier to create composites by merging clips, stills or graphics using layers.
Audio engineers are now able to combine related audio tracks or mixer channels into groups, enabling shared mix automation or editing operations. The DaVinci Neural Engine can now classify audio clips based on their content, making editing choices faster when reviewing unfamiliar material. After analysis, audio clips appear in bins categorized as dialogue, music and effects, with detailed subcategories.