Major Omniverse updates announced at GTC include increased access to generative AI, simulation, and the industrial metaverse.
In another blog post shared today, NVIDIA revealed the release of a new set of Omniverse Connectors that expand artist and developer access to generative AI, simulation, and collaborative industrial metaverse 3D workflows, tools, and platforms.
The post details how developers and creators can better realize the massive potential of a host of collaborative creative tools and technologies with new Omniverse Connectors and other updates to NVIDIA Omniverse.
One such update is Omniverse Cloud, a platform-as-a-service unveiled today at NVIDIA GTC that equips users with a range of simulation and generative AI capabilities to easily build and deploy industrial metaverse applications.
New Omniverse Connectors and applications developed by third parties enable enterprises across the globe to push the limits of industrial digitalization.
Omniverse Ecosystem Expansion
Through Omniverse, developers and professionals can create, design, and deploy massive virtual worlds, AI-powered digital humans, and 3D assets.
Its newest additions include:
- New Omniverse Connectors: Elevating connected workflows, new Omniverse Connectors for the Siemens Xcelerator portfolio — including Siemens Teamcenter, Siemens NX and Siemens Process Simulate — Blender, Cesium, Emulate3D by Rockwell Automation, Unity and Vectorworks are now available — linking more of the world’s most advanced applications through the Universal Scene Description (USD) framework. Azure Digital Twins, Blackshark.ai, FlexSim and NavVis Omniverse Connectors are coming soon.
- SimReady 3D assets: Over 1,000 new SimReady assets enable easier AI and industrial 3D workflows. KUKA, a leading supplier of intelligent automation solutions, is working with NVIDIA and evaluating adoption of the new SimReady specifications to make customer simulations easier than ever.
- Synthetic data generation: Lexset and Siemens SynthAI are both using the Omniverse Replicator software development kit to enable computer-vision-aided industrial inspection. Datagen and Synthesis AI are using the SDK to create synthetic digital humans for AI training. And Deloitte is providing synthetic data generation services using Omniverse Replicator for customers across domains ranging from manufacturing to telecom.
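The common thread in the Connector ecosystem above is USD, whose text-based .usda layer format can be produced by any application. As a minimal sketch of what that interchange format looks like (the prim and attribute names below are illustrative, not taken from any particular Connector):

```python
# Minimal sketch of the USD text (.usda) layer format that Omniverse
# Connectors exchange. The prim names here are illustrative; real
# Connectors emit far richer scene descriptions.
from pathlib import Path

USDA = """#usda 1.0
(
    defaultPrim = "World"
)

def Xform "World"
{
    def Cube "Box"
    {
        double size = 2.0
    }
}
"""

def write_stage(path: str) -> Path:
    """Write a tiny USD text layer that any USD-aware app can open."""
    p = Path(path)
    p.write_text(USDA)
    return p

layer = write_stage("box.usda")
print(layer.read_text().splitlines()[0])  # first line identifies the format
```

Because the layer is plain text, the same file can be opened in any of the connected applications listed above, which is what makes USD the shared scene-description backbone of the ecosystem.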
Core Updates Coming to Omniverse
In his GTC keynote this morning, NVIDIA CEO Jensen Huang previewed the next Omniverse release coming this spring, which includes:
Updates to Omniverse apps that enable developers and enterprise customers to build on foundation applications to suit their specific workflows:
- NVIDIA USD Composer (formerly Omniverse Create) — a customizable foundation application for designers and creators to assemble large-scale, USD-based datasets and compose industrial virtual worlds.
- NVIDIA USD Presenter (formerly Omniverse View) — a customizable reference application for showcasing and reviewing USD projects interactively and collaboratively.
- NVIDIA USD-GDN Publisher — a suite of cloud services that enables developers and service providers to easily build, publish and stream advanced, interactive, USD-based 3D experiences to nearly any device in any location.
Improved developer experience — The new public extension registry enables users to receive automated updates to extensions. New configurator templates and workflows, as well as an NVIDIA Warp kernel node for OmniGraph, will enable zero-friction developer workflows for GPU-based coding.
Next-level rendering and materials — Omniverse is offering for the first time a real-time, ray-traced subsurface-scattering shader, enabling unprecedented realism in skin for digital humans. The latest update to Universal Material Mapper lets users seamlessly bring in material libraries from third-party applications, preserving material structure and full editing capability.
Groundbreaking performance — In a major development for large-scene performance, USD’s runtime data transfer technology provides an efficient method to store and move runtime data between modules. The scene optimizer allows users to run optimizations at the USD level, converting large scenes into more lightweight representations for improved interaction.
AI training capabilities — Automatic domain randomization and population-based training make complex robotic training significantly easier for autonomous robotics development.
Generative AI — New text-to-materials and text-to-code extensions allow users to automatically generate high-quality materials and code solely from text prompts, accelerating the use of generative AI within Omniverse. Additionally, updates to the Audio2Face app include headless mode, a REST application programming interface, improved lip-sync quality and more robust multi-language support, including for Mandarin.
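A REST interface means a headless Audio2Face instance can be driven from any scripting environment over HTTP. The sketch below only builds such a request; the endpoint path and JSON field are illustrative assumptions, not the documented interface, so consult NVIDIA's Audio2Face API reference for the real schema:

```python
# Hypothetical sketch of driving a headless Audio2Face instance over its
# REST API. The endpoint path and JSON field below are illustrative
# assumptions, not the documented interface.
import json
from urllib.request import Request

def build_audio_request(base_url: str, wav_path: str) -> Request:
    """Package an audio file path into a JSON POST request (not sent here)."""
    payload = {"file_name": wav_path}           # assumed field name
    return Request(
        url=f"{base_url}/A2F/Player/SetTrack",  # assumed endpoint path
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_audio_request("http://localhost:8011", "hello.wav")
print(req.get_method(), req.full_url)
```

The point of the pattern, regardless of the exact schema, is that lip-sync generation can be batched and automated without opening the Audio2Face UI.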
Developers can also use AI-generated inputs from technology such as ChatGPT to provide data to Omniverse extensions like Camera Studio, which generates and customizes cameras in Omniverse using data created in ChatGPT.
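One practical way an extension can consume such AI-generated input is as structured JSON that it validates before use. The schema below is an illustrative assumption, not Camera Studio's actual format, and the string stands in for a ChatGPT response:

```python
# Sketch of consuming AI-generated camera parameters, as an extension
# like Camera Studio might. The JSON schema is an illustrative
# assumption; the string stands in for a ChatGPT response.
import json

AI_RESPONSE = """
{"cameras": [
    {"name": "hero_cam", "focal_length_mm": 35, "f_stop": 2.8},
    {"name": "wide_cam", "focal_length_mm": 18, "f_stop": 8.0}
]}
"""

def parse_cameras(text: str) -> list[dict]:
    """Validate and return camera definitions from a model response."""
    cams = json.loads(text).get("cameras", [])
    for cam in cams:
        # Reject entries missing required fields rather than guessing values.
        if not {"name", "focal_length_mm", "f_stop"} <= cam.keys():
            raise ValueError(f"incomplete camera entry: {cam}")
    return cams

for cam in parse_cameras(AI_RESPONSE):
    print(cam["name"], cam["focal_length_mm"])
```

Validating the generated data at the boundary, as above, keeps an unreliable language-model output from silently corrupting a scene.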
You can read more on the NVIDIA website here.
Dan Sarto is Publisher and Editor-in-Chief of Animation World Network.