The 2018 SIGGRAPH show in Vancouver last month was declared a success. With more than 16,500 attendees from around the world, the 45th annual conference also featured nearly 160 exhibitors on the show floor, where attendees had a chance to see the latest technologies, products and solutions.
Here’s a look at some of the highlights and big show news.
NVIDIA INTROS REALTIME RAY TRACING
SANTA CLARA, CA — Nvidia (www.nvidia.com) is looking to revolutionize the computer graphics industry. At this year’s SIGGRAPH show in Vancouver, Nvidia founder and CEO Jensen Huang (pictured below) announced the “world’s first ray tracing GPU” to a standing-room-only crowd of more than 1,200 industry professionals, introducing the Nvidia Turing GPU architecture and the first Turing-based Quadro products (top, right) — the Quadro RTX 8000, RTX 6000 and RTX 5000 GPUs.
Turing features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing. Together for the first time, they make realtime ray tracing possible.
“This is the greatest leap in computer graphics since the invention of the CUDA GPU in 2006,” said Huang.
He continued that the two engines, along with powerful compute for simulation and enhanced rasterization, usher in a new generation of hybrid rendering that addresses a “$250 billion visual effects industry.”
“Turing is Nvidia’s most important innovation in computer graphics in more than a decade,” said Huang. “Hybrid rendering will change the industry, opening up amazing possibilities that enhance our lives with more beautiful designs, richer entertainment and more interactive experiences. The arrival of realtime ray tracing is the Holy Grail of our industry.”
The new Quadro RTX line brings hardware-accelerated ray tracing, AI, advanced shading and simulation to creative professionals. Also announced was the Quadro RTX Server, a reference architecture for highly configurable, on-demand rendering and virtual workstation solutions from the datacenter.
Quadro RTX GPUs are designed for demanding visual computing workloads, such as those used in film and video content creation; automotive and architectural design; and scientific visualization. They surpass the previous generation with groundbreaking technologies, including:
New RT Cores: Enable realtime ray tracing of objects and environments with physically-accurate shadows, reflections, refractions and global illumination.
Turing Tensor Cores: Accelerate deep neural network training and inference, which are critical to powering AI-enhanced rendering, products and services.
New Turing Streaming Multiprocessor architecture: Features up to 4,608 CUDA cores and delivers up to 16 trillion floating-point operations per second in parallel with 16 trillion integer operations per second to accelerate complex simulation of real-world physics.
Advanced programmable shading technologies: Improve the performance of complex visual effects and graphics-intensive experiences.
Samsung 16Gb GDDR6 memory: The first implementation of this ultra-fast memory supports more complex designs, massive architectural datasets, 8K movie content and more.
New and enhanced VR technologies: Improve the performance of VR applications.
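To ground what the RT Cores accelerate: at bottom, ray tracing means testing rays against scene geometry, with shadow rays determining whether a light is visible from a surface point. A purely illustrative Python sketch of that core geometric test follows (hardware RT Cores perform it against full triangle meshes via acceleration structures, not spheres):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance to the nearest ray-sphere intersection, or None if missed.
    Ray is origin + t * direction, with direction assumed normalized.
    (Only the near root is considered; rays starting inside a sphere miss.)"""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 0 else None

def in_shadow(point, light_pos, blockers):
    """Cast a shadow ray from a surface point toward the light.
    Any blocker hit closer than the light means the point is shadowed."""
    to_light = tuple(l - p for l, p in zip(light_pos, point))
    dist = math.sqrt(sum(v * v for v in to_light))
    d = tuple(v / dist for v in to_light)
    return any(
        (t := ray_sphere_hit(point, d, c, r)) is not None and t < dist
        for c, r in blockers
    )

# A unit sphere at the origin blocks a light on the far side of it.
print(in_shadow((0.0, 0.0, -3.0), (0.0, 0.0, 3.0), [((0.0, 0.0, 0.0), 1.0)]))  # → True
```

RT Cores run this intersection test in dedicated silicon, which is what makes casting billions of such rays per second feasible in realtime.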
Quadro RTX Server
The Quadro RTX Server addresses on-demand rendering in the datacenter, enabling easy configuration of on-demand render nodes for batch and interactive rendering. It combines Quadro RTX GPUs with new Quadro Infinity software (available in the first quarter of 2019) to deliver a powerful and flexible architecture to meet the demands of creative professionals.
Quadro RTX GPUs will be available starting in the fourth quarter on Nvidia.com. The Quadro RTX 8000 with 48GB memory is priced at $10,000 (ESP); Quadro RTX 6000 with 24GB memory priced at $6,300 (ESP); and Quadro RTX 5000 with 16GB memory priced at $2,300 (ESP).
Prior to introducing Turing and the Quadro RTX GPU line, Huang opened up his address with a look at the computer graphics industry, Nvidia’s own history and some of the key milestones — all building up to the announcement of realtime ray tracing, helping to frame its significance.
The company featured a number of technology demos around its booth throughout the show, highlighting a number of its key markets, including film, gaming, architectural and automotive applications.
“There’s no question in my mind,” stressed Huang, “that this is the single greatest leap in one generation. Computer graphics will never look the same again.”
CHAOS GROUP INTROS REALTIME RAY TRACING TECHNOLOGY
CULVER CITY, CA — At SIGGRAPH 2018, Chaos Group (chaosgroup.com) offered a first look at Project Lavina, a new technology designed for photorealistic realtime ray tracing — what is often considered ‘the Holy Grail of computer graphics.’ By leveraging the dedicated RT Core within Nvidia’s Turing-based Quadro RTX GPUs, Project Lavina fundamentally changes the direction of computer graphics, introducing a new level of visual quality for realtime games, VR and 3D visualization.
Project Lavina, named after the Bulgarian word for “avalanche,” debuted as a SIGGRAPH tech demo, depicting a massive 3D forest and several architectural visualizations running at 24 to 30 frames per second (FPS) in standard HD resolution. Rather than using game engine shortcuts, like rasterized graphics and a reduced level of detail, each scene features live ray tracing for truly interactive photorealism. Lavina is able to handle massive scenes at realtime speeds — over 300 billion triangles in one case — without any loss in detail.
The SIGGRAPH tech demo used 3D scenes exported from V-Ray-enabled applications directly in Lavina. Unlike a traditional game engine, which requires assets to be rebuilt and specially optimized, Lavina dramatically simplifies this process with direct compatibility and translation of V-Ray assets. Upon loading the scene, the user can explore the environment exactly as they would in a game engine, and experience physically-accurate lighting, reflections and global illumination.
This is Chaos Group’s second realtime announcement in the last year, following the beta release of V-Ray for Unreal.
VICON SHOWS LOCATION-BASED VR SOLUTION, ORIGIN
OXFORD, ENGLAND — Vicon (www.vicon.com) showed off Origin, a comprehensive location-based virtual reality (LBVR) system that blends the company’s tracking abilities with tools that make it easy for anyone to set up and operate. Building on nearly 35 years of R&D, Vicon’s new, fully-scalable solution is designed to empower VR creators big and small by offering high quality and reliability.
Origin includes three brand-new pieces of hardware, along with software created specifically for Vicon’s new LBVR system. Following three years of development alongside creators like Dreamscape Immersive, Origin contains everything a commercial enterprise needs to drive a fully-immersive experience. Auto-healing software that repairs calibrations between sessions, paired with highly reliable tracking, ensures the system requires minimal training and maintenance to operate consistently.
At SIGGRAPH, the company offered attendees an unforgettable LBVR adventure (photo, above), in partnership with Dreamscape Immersive.
The Origin suite consists of several components. Viper is a compact, lightweight tracking camera specifically designed to work with active marker technology. Pulsar clusters are wearable tracking devices that emit unique, active infrared LED patterns synchronized to ensure optimal battery life. They are designed to easily attach to a participant’s body, limbs and head-mounted display.
Beacon creates a synchronized wireless network that connects Pulsar clusters (or other devices) to Viper cameras, providing seamless connectivity. Evoke is a highly automated software platform featuring robust tracking and auto-healing between sessions.
With Origin, multiple participants can appear simultaneously as characters in the LBVR environment, each with a fully-animated avatar driven by Evoke’s realtime tracking. Participants wear Pulsar clusters on their bodies, limbs and head-mounted displays, creating a fully-immersive experience in which their every movement is recreated in the virtual world. Participants can also interact with and react to one another based on their movements. Tools and props can carry clusters as well, allowing them to be passed between participants to deepen the immersion.
FOUNDRY LAUNCHES NUKE 11.2, UPDATES MARI & KATANA
LONDON — Foundry (foundry.com) has launched Nuke 11.2, bringing a range of new features and updates to the compositing toolkit. The latest version lets artists work more quickly through upgraded UI features and performance capabilities, alongside a new API for deep compositing that can increase the speed of script processing.
Key features of Nuke 11.2 include a new deep compositing API that delivers 1.5x faster processing. In this release, the Nuke Tab menu and the UI for creating user knobs have been enhanced to improve the user experience for some of the most common tasks: adding nodes and creating Gizmos. The updated Tab menu allows artists to find nodes using partial words, set ‘favorite’ nodes and organize them via a weighting system. These improvements add up to substantial time savings when building scripts with a large number of nodes.
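The partial-word matching with favorites and weighting described above can be sketched in a few lines. This is purely illustrative, not Nuke’s actual ranking code, and the node names, favorites and weights below are hypothetical:

```python
def rank_nodes(query, nodes, favorites=frozenset(), weights=None):
    """Rank node names Tab-menu-style: every word of the query must
    appear somewhere in the name (partial matches allowed), favorites
    float to the top, and a usage weight breaks remaining ties."""
    weights = weights or {}
    matches = [
        n for n in nodes
        if all(part in n.lower() for part in query.lower().split())
    ]
    return sorted(
        matches,
        key=lambda n: (n not in favorites, -weights.get(n, 0), n),
    )

nodes = ["Blur", "EdgeBlur", "GodRays", "MotionBlur", "Grade"]
print(rank_nodes("blur", nodes, favorites={"MotionBlur"}, weights={"Blur": 5}))
# → ['MotionBlur', 'Blur', 'EdgeBlur']
```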
A new UI allows user knobs to be linked between nodes by simply dragging and dropping. Artists can add, rearrange or remove user parameters using the same interface. The Smart Vector toolset is now even faster to use and more effective in shots with occluding objects. Smart Vector and Vector Distort have been optimized for the GPU, allowing users to generate Smart Vectors on the fly and preview the result without needing to pre-render the vectors.
A new mask input allows artists to identify areas of motion to ignore when generating the Smart Vectors and warping the paint or texture. As a result, the Smart Vector toolset can now be used on shots with occluding objects with less laborious manual clean-up, speeding up the use of the toolset in more complex cases.
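The idea of a mask input that excludes regions from vector-based warping can be shown with a toy pull-warp. Nuke’s Smart Vector implementation is proprietary, so this is only a conceptual sketch of masked vector warping:

```python
def warp_with_mask(image, vectors, mask):
    """Pull-warp a 2D image by per-pixel motion vectors, skipping masked pixels.
    image:   H x W list of lists of values
    vectors: H x W list of (dy, dx) offsets per pixel
    mask:    H x W list of bools; True = ignore motion here (e.g. an occluder)
    Each output pixel samples the source at (y + dy, x + dx), clamped to bounds.
    """
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            if mask[y][x]:
                dy, dx = 0, 0                # masked pixels stay in place
            else:
                dy, dx = vectors[y][x]
            sy = min(max(int(round(y + dy)), 0), h - 1)  # clamp to image bounds
            sx = min(max(int(round(x + dx)), 0), w - 1)
            row.append(image[sy][sx])
        out.append(row)
    return out

image = [[0, 1, 2], [3, 4, 5]]
vectors = [[(0, 1)] * 3 for _ in range(2)]            # pull everything from one pixel right
mask = [[True, False, False], [False, False, False]]  # ignore motion at (0, 0)
print(warp_with_mask(image, vectors, mask))           # → [[0, 2, 2], [4, 5, 5]]
```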
Nuke Studio now benefits from an updated project panel UI, providing artists with new visual controls for managing and organizing complex projects. For quick visual reference, artists can assign colors to items in the project bin and the timeline, based on file type and other parameters accessible via the UI and Python API.
Nuke 11.2 is available for purchase on Foundry’s Website and via accredited resellers.
Foundry also introduced updates to Katana, its flagship look development and lighting tool, and Mari, its 3D painting tool. The updates provide a more user-friendly experience and improved response times.
OPTITRACK PREVIEWS LATEST MOCAP TECHNOLOGY
CORVALLIS, OR — OptiTrack (http://optitrack.com) previewed the lightweight Active Puck Mini, its latest advancement in VR player tracking and full body motion capture. While at the show, Post got a demo from company CSO Brian Niles (pictured above). With a reduced form factor and the added tracking benefits of an inertial measurement unit (IMU), OptiTrack’s newly-designed pucks build on its technology already implemented by leading location-based entertainment centers. They can deliver precise player tracking for VR and produce high quality motion capture data without the need for specialized suits and long setup times.
IMUs have also been added to OptiTrack’s standard-sized Active Pucks, and both will be shipping in the fall. OptiTrack Active Pucks provide positional tracking data with errors of less than 0.2mm, even across very large tracking areas, as well as rotational data derived from active LEDs and the IMU. A puck provides tracking for any object to which it is attached, including HMDs, set pieces, weapons and end effectors for full-body tracking. The pucks are powered by a rechargeable battery and can run up to 10 hours on a single charge.
All OptiTrack Active technology can be tracked by both Active and Passive camera systems, so customers with existing Ethernet-based systems can integrate Active Pucks into their workflows immediately.
EPIC GAMES RELEASES UNREAL ENGINE 4.20
CARY, NC — Epic Games launched Unreal Engine 4.20 (unrealengine.com), enabling developers to build realistic characters and immersive environments across games, film and TV, VR/AR/MR and enterprise applications.
Unreal Engine 4.20 combines the latest realtime rendering advancements with improved creative tools. Artists working in visual effects, animation, broadcast and virtual production can take advantage of the latest enhancements for digital humans, VFX, cinematic depth of field and more to create sophisticated images across all forms of media and entertainment.
Key features within Unreal Engine 4.20 include:
Niagara VFX (early access): Unreal Engine’s new programmable VFX editor, Niagara, is now available in early access. This new suite of tools is built from the ground up to give artists unprecedented control over particle simulation, rendering and performance, for more sophisticated visuals. This tool will eventually replace the Unreal Cascade particle editor.
Cinematic Depth of Field: Unreal Engine 4.20 delivers tools for achieving depth of field at true cinematic quality. This new implementation replaces the Circle DOF method. It’s faster, cleaner and provides a cinematic appearance through the use of a procedural bokeh simulation. Cinematic DOF also supports alpha channel, dynamic resolution stability and has multiple settings for scaling up or down on console platforms based on project requirements.
Digital Humans Improvements: In-engine tools now include dual lobe specular/double Beckman specular models, backscatter transmission in lights, boundary bleed color subsurface scattering, iris normal slot for eyes and screen space irradiance to build the most cutting-edge digital humans in games and beyond.
Mixed Reality Capture Support (early access): Users with virtual production workflows now have mixed reality capture support that includes video input, calibration and in-game compositing. Supported webcams and HDMI capture devices enable users to pull real world green-screened video into the engine, and supported tracking devices can match your camera location to the in-game camera for more dynamic shots.
Robust AR Support: Unreal Engine 4.20 ships with native support for ARKit 2, which includes features for creating shared, collaborative AR experiences. Also included is the latest support for Magic Leap One and Google ARCore 1.2.
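The cinematic depth-of-field feature in the list above is grounded in standard thin-lens optics. As a hedged illustration of the underlying model (not Epic’s actual implementation), the blur diameter for a defocused subject is the classic circle of confusion:

```python
def circle_of_confusion(subject_dist, focus_dist, focal_len, f_stop):
    """Thin-lens circle-of-confusion diameter, in the same units as the
    inputs (millimetres here). f_stop is the aperture f-number, so the
    aperture diameter is focal_len / f_stop."""
    aperture = focal_len / f_stop
    return (aperture * focal_len * abs(subject_dist - focus_dist)
            / (subject_dist * (focus_dist - focal_len)))

# A 50mm lens at f/2 focused at 2m: a subject at 4m picks up a visible
# blur circle, while a subject on the focal plane stays perfectly sharp.
print(round(circle_of_confusion(4000, 2000, 50, 2.0), 3))  # → 0.321 (mm)
print(circle_of_confusion(2000, 2000, 50, 2.0))            # → 0.0
```

Renderers turn this diameter into a per-pixel blur kernel; the “procedural bokeh” mentioned above shapes that kernel to mimic a physical aperture.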
AMD INTROS RADEON PRO WX 8200
SANTA CLARA, CA — AMD (www.amd.com) announced a high-performance addition to the Radeon Pro WX workstation graphics lineup with the AMD Radeon Pro WX 8200 graphics card, delivering advanced workstation graphics performance for under $1,000, for realtime visualization, virtual reality (VR) and photorealistic rendering. AMD also unveiled a new alliance with the Vancouver Film School.
The new AMD Radeon Pro WX 8200 graphics card allows professionals to accelerate design and rendering. It is intended for design and manufacturing, media and entertainment, and architecture, engineering and construction (AEC) workloads at all stages of product development.
Based on the advanced “Vega” GPU architecture with the 14nm FinFET process, the Radeon Pro WX 8200 graphics card offers the performance required to drive increasingly large and complex models through the entire design visualization pipeline. With planned certifications for many of today’s most popular applications — including Adobe CC, Dassault Systèmes SolidWorks, Autodesk 3ds Max and Revit, among others — the Radeon Pro WX 8200 graphics card is ideal for workloads such as realtime visualization, physically-based rendering and VR.
The Radeon Pro WX 8200 graphics card is equipped with advanced features and technologies geared toward professionals, including:
High Bandwidth Cache Controller (HBCC): The card’s memory system removes the capacity limitations of traditional GPU memory, letting creators and designers work with much larger, more detailed models and assets in realtime.
Enhanced Pixel Engine: The “Vega” GPU architecture’s enhanced pixel engine lets creators build more complex worlds without worrying about GPU limitations, increasing efficiency by batching related work into the GPU’s local cache for simultaneous processing. New “shade once” technology ensures only pixels visible in the final scene are shaded.
Error Correcting Code (ECC) Memory: Helps guarantee the accuracy of computations by correcting any single- or double-bit error resulting from naturally occurring background radiation.
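The single-bit correction that ECC memory performs is classically done with Hamming-style parity codes: a syndrome computed from overlapping parity bits points directly at the flipped bit. A toy Hamming(7,4) sketch of the idea (real GPU memory uses wider SECDED codes, but the principle is the same):

```python
def hamming74_encode(d):
    """Encode 4 data bits (list of 0/1) into a 7-bit Hamming codeword.
    Parity bits sit at positions 1, 2 and 4; data at 3, 5, 6, 7."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parity checks; the syndrome is the 1-based position
    of a single flipped bit (0 means the word is clean). Flip it back."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s4
    if pos:
        c[pos - 1] ^= 1
    return c

word = hamming74_encode([1, 0, 1, 1])
corrupted = list(word)
corrupted[4] ^= 1                    # flip one bit, as background radiation might
assert hamming74_correct(corrupted) == word
```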
The Radeon Pro WX 8200 graphics card is now available for $999.
AMD also announced a new alliance with The Vancouver Film School (https://vfs.edu/) to open a tech innovation lab and hub for Vancouver’s professional VFX community. Powered by Radeon Pro and Ryzen technologies, the AMD Creators Lab, which features AMD-based workstations, will inspire the creative tech community and advance the field of VFX, game design and VR/AR development.
MAXON ANNOUNCES CINEMA 4D RELEASE 20
FRIEDRICHSDORF, GERMANY — Maxon (www.maxon.net/en-us) unveiled Cinema 4D Release 20 (R20) at SIGGRAPH, which introduces high-end features for VFX and motion graphics artists, including node-based materials, volume modeling, robust CAD import and a dramatic evolution of the MoGraph toolset.
Key highlights in Release 20 include: Node-Based Materials, which provide new possibilities for creating materials, from simple references to complex shaders, in a node-based editor. With more than 150 nodes to choose from, each performing a different function, artists can combine nodes to easily build complex shading effects for greater creative flexibility. For an easy start, users new to a node-based material workflow can still rely on the user interface of Cinema 4D’s standard Material Editor, with the corresponding node material created automatically in the background. Node-based materials can be packaged into assets with user-defined parameters exposed in an interface similar to Cinema 4D’s classic Material Editor.
MoGraph Fields offers new capabilities in this procedural animation toolset, giving users an entirely new way to define the strength of effects by combining falloffs — from simple shapes to shaders or sounds and objects and formulas. Artists can layer Fields with standard mixing modes and remap their effects. They can also group multiple Fields and use them to control effectors, deformers, weights and more.
ProRender Enhancements — ProRender in Cinema 4D R20 extends the GPU-rendering toolset with key features, including sub-surface scattering, motion blur and multi-passes. Also included are an updated ProRender core, support for Apple’s Metal 2 technology, out-of-core textures and other enhancements.
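The pull-through evaluation behind any node-based material editor, such as the one described in the highlights above, can be sketched in a few lines. This toy graph is purely illustrative and does not reflect Cinema 4D’s actual node set or API:

```python
def evaluate(graph, name, cache=None):
    """Evaluate a node by pulling values from its inputs, memoizing
    so shared upstream nodes are computed only once."""
    cache = {} if cache is None else cache
    if name not in cache:
        fn, inputs = graph[name]
        cache[name] = fn(*(evaluate(graph, i, cache) for i in inputs))
    return cache[name]

# Each node is (function, list of input node names); names are hypothetical.
graph = {
    "base_color": (lambda: (0.8, 0.2, 0.2), []),
    "brightness": (lambda: 0.5, []),
    # a "multiply" shading node scales a color by a scalar
    "output": (lambda color, k: tuple(c * k for c in color),
               ["base_color", "brightness"]),
}
print(evaluate(graph, "output"))   # → (0.4, 0.1, 0.1)
```

Wrapping such a subgraph behind a few exposed parameters is essentially what packaging node materials into reusable assets means.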
Cinema 4D Release 20 is available this month.
Also, Maxon announced that its three original founders and managing directors, Uwe Bärtels, Harald Egel and Harald Schneider, will retire after 32 years with the company. Maxon has appointed former Adobe executive David McGavran as CEO of Maxon Computer GmbH.