This week, the SIGGRAPH conference, the annual gathering of 3D technologists and artists, is welcoming a new phenomenon that some say is as big a deal as those breakthroughs of 1993.

"This year, I believe, is a special SIGGRAPH, it will probably go down in history as one of the most important SIGGRAPHs, and an inflection point," said Rev Lebaredian, Nvidia's vice president of Omniverse and simulation technology, in a press briefing in advance of the conference.

"1993 was a big SIGGRAPH, where everyone was celebrating," Lebaredian said. "This year marks a similar inflection, one where we are seeing a new era of the Internet that is being called the Metaverse."

Also: Nvidia's Omniverse: The metaverse is a network not a destination

The "foundational technologies" of the Metaverse, said Lebaredian, are "all the things that people at SIGGRAPH have been working toward for decades now."

1993 was also the year Nvidia was founded, noted Lebaredian, and today, "Nvidia is probably the most important and largest company to focus on 3D computer graphics in general."

Tuesday, Lebaredian is among the Nvidia executives taking part in the SIGGRAPH keynote delivered by Jensen Huang, Nvidia's co-founder and CEO, beginning at 9 am Pacific time. The focus of the keynote is the Metaverse and the technologies that enable it, including Nvidia's Omniverse, which the company bills as a platform for building and connecting metaverse worlds based on "USD," or "Universal Scene Description," an industry standard being developed by Nvidia, Apple, and many other companies.

USD is an interchange framework invented by Pixar in 2012 and released as open-source software in 2016. It provides a common language for defining, packaging, assembling, and editing 3D data; a short example of what that looks like in practice appears below.

During the keynote, Huang and team are unveiling updates to the Omniverse technology that build upon USD, with a heavy emphasis on artificial intelligence.

Also: Nvidia expands Omniverse with a new GPU, new collaborations

The first major item is the announcement of a "broad initiative" between Nvidia and partners including Pixar, Adobe, Autodesk, Siemens, and others to "pursue a multi-year roadmap to expand USD's capabilities beyond visual effects," so that it can better serve as infrastructure for applications ranging from architecture and engineering to digital twins.

The centerpiece of the announcements is Nvidia's Omniverse Avatar Cloud Engine (ACE), a suite of machine learning models meant to ease the development of virtual assistants and "digital humans." Nvidia said ACE "enables businesses of any size to instantly access the massive computing power needed to create and deploy assistants and avatars that understand multiple languages, respond to speech prompts, interact with the environment and make intelligent recommendations."

There are a number of additional tool announcements: OmniLive Workflows, an overhaul of the way apps share USD data, intended to make collaboration speedier and "non-destructive"; Audio2Face and Machinima, two Omniverse apps that create facial animations and realistic arm and body motion for avatars, respectively; and DeepSearch, a tool that lets enterprises search through unstructured collections of 3D assets.
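To make the idea of USD as a "common language" for 3D data concrete, here is a minimal sketch of authoring a scene with Pixar's open-source USD Python API (the pxr module, available from the usd-core package on PyPI). The file and prim names are illustrative, not drawn from Nvidia's announcements.

```python
# Author a tiny USD stage: a transform prim containing a sphere.
from pxr import Usd, UsdGeom

# Create a new stage backed by a human-readable .usda file.
stage = Usd.Stage.CreateNew("hello_world.usda")

# USD organizes a scene as a hierarchy of typed "prims."
xform = UsdGeom.Xform.Define(stage, "/HelloWorld")
sphere = UsdGeom.Sphere.Define(stage, "/HelloWorld/Sphere")

# Author an attribute value -- here, the sphere's radius.
sphere.GetRadiusAttr().Set(2.0)

# Save the layer to disk.
stage.GetRootLayer().Save()
```

Any USD-aware application can then open, reference, or layer non-destructive overrides on top of the resulting file, which is what makes the format attractive as interchange infrastructure.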
Nvidia has open-sourced its MDL (Material Definition Language) tool for "physically accurate representation of 3D materials." NeuralVDB, appearing in beta, is an adaptation of OpenVDB, the open-source C++ library for storage and manipulation of "sparse" volumetric data for 3D objects. Nvidia says NeuralVDB, which uses a deep learning technique called neural fields, can cut the storage needed for very large volumetric data sets by a factor of one hundred.
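For context on what "sparse" means here, the sketch below uses OpenVDB's own Python bindings (pyopenvdb), not NeuralVDB, whose API Nvidia has not detailed here. It shows the baseline behavior that NeuralVDB builds on: only voxels that are explicitly touched consume memory, no matter how large the coordinate space is.

```python
import pyopenvdb as vdb

# A float grid with an implicit background value of 0.0: voxels that
# are never touched cost no memory.
grid = vdb.FloatGrid()
grid.name = "density"

# Activate a handful of voxels at wildly distant coordinates.
accessor = grid.getAccessor()
accessor.setValueOn((0, 0, 0), 1.0)
accessor.setValueOn((1000, 200000, -5000000), 0.5)

# Only the touched voxels are actually stored.
print(grid.activeVoxelCount())  # -> 2

# Grids can be written to the .vdb interchange format.
vdb.write("sparse_density.vdb", grids=[grid])
```

NeuralVDB's reported savings come on top of this sparsity, by compressing the stored voxel values themselves into a compact learned representation.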