Digital Twins and Hyper-Realistic Avatars: The 3D Modeling of 2022

Lucid Reality Labs
Jun 6, 2022

Every day we come one step closer to the next major step in global tech development, one that will touch every user worldwide. This step could be complex yet exciting, reshaping how we perceive interaction, collaboration, and physical and digital presence: the establishment of Web 3.0 or, as some call it, the Metaverse. Regardless of the name, it will be the next generation of the internet, enhanced with the latest software and hardware. The next iteration will be a decentralized, data-driven, machine learning (ML) and artificial intelligence (AI) enhanced, ungoverned, open-source global network.

While the vision is there, we also understand that building such a massive new system will require the entire tech community to join forces, providing everything from advanced hardware and software to cybersecurity and blockchain technology. Nevertheless, every related announcement causes major hype among both specialists and enthusiasts. This was most evident when the tech community gasped with excitement over Meta's October 2021 announcement. That particular statement of ambition to build the Metaverse split opinions on whether a single company can solely bring us the next generation of the internet.

The resulting hype has also split the tech community over whether to consider the Metaverse the potential future Web 3.0. Although by mid-2022 we have already witnessed a significant downshift in search trends, evident more distinctly since February 2022, it is still too early to dismiss it as just another wave of overhyped news. The interest around the Metaverse, or more precisely around the next potential iteration of the internet, remains tremendous.

However, as with any news, user attention can only be maintained for a limited amount of time. Regardless of the shift in the search agenda, the Metaverse concept is here to stay, with market estimates of $824.53 billion by 2030. Even though the main drivers of the Metaverse are the gaming, entertainment, and media industries, they have helped set the stage for broader XR adoption and prepared end-users for Web 3.0. With rapid consumer interest in XR-enhanced experiences, businesses are rushing to adapt to the ever-growing demand for cutting-edge infrastructure design, revamped 3D environments, and tech-driven ecosystems.

At the same time, the idea of a new generation of the internet took root in the technological community quite some time ago. Giants like Meta, Apple, Google, Microsoft, Qualcomm, and many more are continuously investing in elements that can bring us closer to Web 3.0 or the Metaverse, whichever name the next iteration carries by then. We can see significant technological advancement in numerous fields, from Extended Reality (XR) and affective haptics to advanced cybersecurity and next-generation immersive hardware.

Among all the elements that make up the Metaverse, XR can be deemed the fastest growing and most dynamic. This has to do primarily with the pandemic-accelerated market need for remote presence, the expanding number of XR devices, the increasing capabilities of developers, and the appearance of more low-code and no-code platforms. The greater the technological leap we make with Extended Reality (XR), the more we wonder about the future opportunities it can bring. Nevertheless, understanding where we stand today can help us envision and explore the technology stack that will develop even further in the near future.

In our previous articles, we talked quite a lot about Extended Reality: the technology itself, the notable trends, its implementation in various industries, including healthcare, pharma, manufacturing, and enterprise ecosystems, and the advantages it offers in hybrid platforms, and much more. Today, we would like to go more in-depth and uncover one of the critical elements that make up XR, the essential driver of realism, immersion, and believability across the entire experience, the backbone of visual fidelity: 3D modeling.

Pure 3D modeling has long gone far beyond its traditional game and movie production roots, emerging in healthcare, 3D printing, and XR. With the coming of the Metaverse and Web 3.0, 3D modeling is no longer solely the domain of developers. More and more low-code software is being released that enables users to create 3D objects, avatars, and entire environments with simplified, ready-to-use tools. This, without doubt, allows users to express themselves and share their creativity and vision without requiring any advanced developer education or skill.

At the same time, when it comes to XR experiences, advanced 3D modeling plays an essential role in creating practically every element, from entire functional environments and complete scenes to 1-to-1 precision objects as well as hyper-realistic AI- and ML-enhanced avatars. As XR experiences have become more immersive and complex, XR developers have long moved past basic 3D modeling that involved merely creating and producing digital objects and environments. We have already reached the point where developers can recreate objects of significant size, like aircraft or cruise liners, or even entire cities, down to the tiniest detail and functionality: the digital twin of practically anything.

Today, we will look at two significant aspects of 3D advancement in 2022: the digital twin technology already mentioned above, and another essential direction in 3D, hyper-realistic avatars. We will take a quick dive into digital twin technology, which enables more immersive and realistic XR interaction, and then talk about hyper-realistic avatars, which will let users be realistically represented in the Metaverse, or Web 3.0 if you prefer, in the near future.

DIGITAL TWINS

Deloitte predicts the global digital twin technology market will hit a whopping $16 billion by 2023, with an estimated annual growth of 38%, driven by industries like healthcare, aerospace, and automotive, as well as the rapid advancement of digital twin technology capabilities.

WHAT IS A DIGITAL TWIN?

Now, to better understand the concept: digital twin technology involves creating digital models, or in other words one-to-one digital replicas. Here we are talking about a software representation of practically anything, from physical objects, systems, networks, operations, and assets to entire environments, 3D-designed to accurately reflect real-life visual and functional properties and recreated in a virtual space. Digital twins are dynamic, real-time, data-driven counterparts of tangible assets, environments, networks, systems, and processes, used to model, test, and predict behavior outcomes based on various scenarios and data inputs.

HOW DOES DIGITAL TWIN TECHNOLOGY WORK?

Digital twins are built from initial data gathered about physical objects, systems, operations, assets, product life cycles, or environments, which is then used to create partially or fully functional digital versions of the real-world items. Completed models can simulate all or some aspects of functionality and characteristics, depending on the project's requirements and final goal. Digital twins can be built from 3D modeling, scanning, BIM, CAD, and GIS models. They can be interacted with in real time by applying the Internet of Things (IoT), Artificial Intelligence (AI), Machine Learning (ML), Big Data, and analytics. Digital twins are often used in XR, where it is possible to clone practically any real-life environment, scene, process, or even life cycle into an entire experience.
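To make this data-driven loop concrete, here is a minimal sketch in plain Python of how a twin might mirror a physical asset. Everything in it (the PumpTwin class, the telemetry fields, the overheat check) is hypothetical and purely illustrative; a production twin would sit behind a real IoT pipeline rather than an in-memory list.

```python
from dataclasses import dataclass, field
import statistics

@dataclass
class PumpTwin:
    """Digital counterpart of a physical pump, kept in sync with telemetry."""
    asset_id: str
    temperature_c: float = 0.0
    rpm: float = 0.0
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        """Update the twin's state from an incoming IoT telemetry message."""
        self.temperature_c = reading["temperature_c"]
        self.rpm = reading["rpm"]
        self.history.append(reading)

    def predict_overheat(self, threshold_c: float = 90.0) -> bool:
        """Naive what-if check: flag if recent temperatures trend past the threshold."""
        recent = [r["temperature_c"] for r in self.history[-5:]]
        return bool(recent) and statistics.mean(recent) > threshold_c

# Simulated telemetry stream standing in for a live IoT feed.
twin = PumpTwin(asset_id="pump-017")
for reading in [{"temperature_c": t, "rpm": 1500} for t in (88, 91, 93, 95, 96)]:
    twin.ingest(reading)

print(twin.predict_overheat())  # True: the virtual pump flags a likely real-world fault
```

The same pattern scales up conceptually: the richer the data fed in, the closer the virtual state tracks the physical one, and the more useful the predictions become.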

TYPES OF DIGITAL TWINS?

Some identify three types of digital twins: Product Digital Twins, Production Digital Twins, and Performance Digital Twins. Product Digital Twins allow for efficient product design, Production Digital Twins support production and manufacturing planning, and Performance Digital Twins serve as a tool to capture and analyze operational data and take the next steps based on it. Today, digital twin technology is used to recreate anything from retail items to aircraft, entire cities, or even ecosystems. Additionally, twins can be complete functional replicas of existing and future objects from around the globe, connected to precise geographical locations. When it comes to digital twins, we often imagine just object recreation, but they cover more than simple assets, spanning multiple levels of granularity, including component/part twins, asset twins, system/unit twins, and even process twins, as sketched below.
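One way to picture the two taxonomies in this section, the purpose-driven types and the granularity levels, is as a pair of enumerations. This is a hypothetical modeling choice for illustration, not an industry-standard schema:

```python
from enum import Enum, auto

class TwinScope(Enum):
    """Granularity a twin can cover, from a single part to a whole process."""
    COMPONENT = auto()   # a single part, e.g. one turbine blade
    ASSET = auto()       # a whole piece of equipment, e.g. the engine
    SYSTEM = auto()      # interacting assets, e.g. the full propulsion unit
    PROCESS = auto()     # end-to-end workflows, e.g. the maintenance cycle

class TwinPurpose(Enum):
    """The three purpose-driven categories described above."""
    PRODUCT = auto()      # product design and validation
    PRODUCTION = auto()   # manufacturing and production planning
    PERFORMANCE = auto()  # capturing and analyzing operational data

# A given twin project combines one purpose with one (or more) scopes.
print(TwinPurpose.PERFORMANCE, TwinScope.ASSET)
```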

WHAT CHALLENGES DOES DIGITAL TWIN TECHNOLOGY SOLVE?

Digital twins can solve quite a spectrum of challenges involving high-value capital assets, expensive product innovation, or intricate processes. The challenges vary by industry, from healthcare and MedTech to defense and aerospace, as well as urban planning and smart cities. They can involve costly, complex, time-sensitive, or high-risk processes, products, activities, ecosystems, and environment re-creation. Digital twins allow teams to prototype, map, analyze, simulate, and test activities in virtual environments in order to predict results and test hypotheses, optimize processes, and plan for potential setbacks and bottlenecks.

BENEFITS OF DIGITAL TWINS?

There are quite a few benefits digital twin technology can deliver, from increasing process reliability and shortening time to outcomes to avoiding risk, optimizing industrial processes, and managing physical assets. While the full scope of benefits depends on the purpose of the digital twin, they can be significantly beneficial for planning, effective cost management, and risk- and time-related activities.

WHERE ARE DIGITAL TWINS USED?

Digital twins can be used in many areas, from innovative products and efficient process design to performance optimization, predictive maintenance, and large-scale infrastructure planning. One of the primary ways digital twins are implemented today is, without a doubt, in the creation of advanced XR experiences, including Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR). The main reason is that XR requires replicating not only the visual but also the functional, physical, and interactive aspects of an object, asset, environment, or system. In XR, digital twins are especially helpful because they give users a completely different level of presence. They allow for much more realistic immersive experiences and scenario creation, where users can interact with specific elements of the experience and see, feel, hear, and sense as if they were physically present in an artificially created setting. Once combined with haptic technology, which can partially replicate and convey the sense of touch, the visual fidelity of digital twin technology allows for unprecedented user immersion in the XR experience.

XR DIGITAL TWIN USE CASES

As the number of industries adopting XR continues to grow, we have seen the technology implemented across the broadest range of experiences. Below, we look at some examples of digital twins in action: use cases where digital twin technology was used to create genuinely immersive XR experiences.

VR HYSTEROSCOPY MULTIPLAYER SIMULATION

VR Hysteroscopy Multiplayer Simulation is a fully immersive VR solution that could replace traditional large-scale electronic PC-based simulators and be easily used at any exhibition or conference for a precise product and procedure demonstration with maximum realism for the user. The core of the solution lies in the "7 degrees of freedom" 1-to-1 visual and tactile replica of the 7 moving parts controlled by the doctor during the actual procedure. The VR solution allowed for faster data transfer and updates, more effective system demonstrations, a post-procedure feeling of accomplishment triggered by tactile memory, and a precise, hands-on, fully immersive experience. Find out more.

AEROSPACE VR MAINTENANCE SIMULATION

Aerospace VR Maintenance Simulation is a VR jet turbine engine maintenance training environment for specialists in charge of regular technical reviews and checks of civil aircraft. It covers every step of the procedure, from disassembling to cleaning, oiling, testing, and reassembling, in order to improve the quality of performed maintenance work. The simulation enables users to train for process precision, reduce the number of corrective steps required, and minimize both engine post-maintenance release time and process lead time. The uniqueness of this project was the need to create a 1-to-1 graphical replica of the hangar working space that also followed industry-standard protocols, complete with work personnel, equipment, vehicles, technicians' working spaces, and safety markings as per a real maintenance procedure. Because security restrictions prevented physically visiting the site, the replica was built from measurements, 360 video and source materials, a 3D model of the full facility, pictures from selected angles, and technical documentation. Find out more.

INSTRUMENTATION LABORATORY HEMOCELL IN VR

The objective of the project was to create a VR environment constructor in which the client could create 1-to-1 virtual visualizations of a future laboratory according to their customer's requirements. The virtual space would allow customers to visit and observe the future laboratory, with pre-installed equipment, first-hand. The developed solution, created for Oculus Quest 2 to ensure a smooth user experience, consisted of two stages: the editor and the VR experience. In the editor, the client creates the full visual of the future laboratory based on the customer's requirements, defining the room and its parameters and selecting the required equipment. One part of the initial equipment was created by the in-house team; the other was integrated from CAD models, processed and prepared for real-time 3D rendering on a portable VR headset. The second stage generates the VR laboratory based on the editor-created plan, allowing users to walk through a 1-to-1 recreated laboratory in a fully immersive manner. Find out more.
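The two-stage editor-to-VR flow described above essentially boils down to a serializable layout plan that one stage writes and the other reads. Here is a minimal Python sketch of that handoff; all names (LayoutPlan, EquipmentPlacement, the model IDs) are hypothetical stand-ins, not the project's actual data model:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class EquipmentPlacement:
    model_id: str       # reference to a prepared 3D asset (in-house or CAD-derived)
    position: tuple     # (x, y, z) in the laboratory floor plan
    rotation_deg: float

@dataclass
class LayoutPlan:
    room_width_m: float
    room_depth_m: float
    placements: list

# Stage 1 (editor): the client assembles the future laboratory and the plan is serialized.
plan = LayoutPlan(
    room_width_m=8.0,
    room_depth_m=12.0,
    placements=[EquipmentPlacement("hemostasis_analyzer", (2.0, 0.0, 3.5), 90.0)],
)
serialized = json.dumps(asdict(plan))

# Stage 2 (VR runtime): the headset app reads the plan and spawns the 1-to-1 scene.
loaded = json.loads(serialized)
for p in loaded["placements"]:
    print(f"spawn {p['model_id']} at {tuple(p['position'])}, rotated {p['rotation_deg']} deg")
```

Separating the plan from the renderer in this way is what lets a lightweight editor drive a standalone headset experience.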

HYPER-REALISTIC AVATARS

The ongoing development of XR technology and the coming of Web 3.0, alternatively the Metaverse, has led many users to strive for realistic and representative depictions of themselves in this new technological ecosystem. This is already reflected in the market size estimate for digital human avatars, which is expected to reach USD 527.58 billion by 2030, today still driven primarily by the media and entertainment industries.

WHAT ARE HYPER-REALISTIC AVATARS?

Some of us still remember when user avatars were purely generic 2D images that you could upload, able to reflect only the tiniest fraction of the user's appearance, personality, or sense of self. We have definitely come a long way since then. Today, even our phones allow us to instantly capture and create 3D avatars in a simple and straightforward way. However, when we speak of hyper-realistic avatars today, we are referring to life-like digital depictions, digital clones, or digital counterparts of real people in virtual environments, created to represent human entities and capable of being animated and integrated into a variety of XR and Metaverse experiences. Alternatively, they can be 3D models of a human being created from scratch, realistically reflecting the anatomy, features, facial structure, and complexion of a real person, and potentially AI-powered or voice-assistant enhanced.

HOW DOES HYPER-REALISTIC AVATAR TECHNOLOGY WORK?

Hyper-realistic avatars are developed using a variety of software involving 3D modeling and facial and full-body scans. While it might seem fairly complex, major tech players have already released software capable of producing hyper-realistic avatars for XR with far less hassle. This still does not mean it is possible to produce truly high-quality 3D avatars without any 3D modeling; however, the process is now much more optimized, enabling developers to create them with the next level of realism. Take Unreal Engine, which launched MetaHuman, its high-fidelity photorealistic digital human creation platform, in 2021, or the recent Unity purchase of Ziva Dynamics, the real-time hyper-realistic 3D character design suite. We are seeing more XR pioneers experimenting with and releasing products that can potentially simplify the creative process for many immersive technology developers worldwide.
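At a high level, the scan-to-avatar workflow these tools streamline can be thought of as three steps: mesh reconstruction, texture baking, and rigging. The Python sketch below is purely conceptual; the function bodies are placeholders, and none of the names correspond to any real tool's API:

```python
from dataclasses import dataclass

@dataclass
class ScanData:
    """Raw capture of the subject: photos and depth scans (hypothetical input)."""
    face_images: list
    body_depth_maps: list

@dataclass
class Avatar:
    mesh: str       # placeholder for the reconstructed 3D mesh
    textures: dict  # albedo / normal / roughness maps for skin realism
    rig: list       # facial and skeletal controls for animation

def reconstruct_mesh(scan: ScanData) -> str:
    """Step 1: fuse scans into a base head/body mesh (photogrammetry in practice)."""
    return "base_mesh"

def bake_textures(scan: ScanData) -> dict:
    """Step 2: project photographic detail into PBR texture maps."""
    return {"albedo": "...", "normal": "...", "roughness": "..."}

def rig_for_animation(mesh: str) -> list:
    """Step 3: attach blendshapes/bones so an engine can drive expressions live."""
    return ["jaw_open", "brow_raise", "smile_l", "smile_r"]

def build_avatar(scan: ScanData) -> Avatar:
    mesh = reconstruct_mesh(scan)
    return Avatar(mesh=mesh, textures=bake_textures(scan), rig=rig_for_animation(mesh))

avatar = build_avatar(ScanData(face_images=["img_001.jpg"], body_depth_maps=["depth_001.exr"]))
print(avatar.rig)
```

What platforms like the ones mentioned above optimize is precisely how much of this pipeline runs automatically versus how much still requires hand modeling.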

TYPES OF HYPER-REALISTIC AVATARS?

Probably the most common types of hyper-realistic avatars are those used by real users and NPCs, or non-playable characters, the gaming-industry term for characters that reside within the experience itself. In the first case, a hyper-realistic depiction of oneself lets users immerse themselves more deeply in and personalize the experience, incorporating self-awareness and resonating emotionally with the user. At the same time, hyper-realistic avatars used as NPCs can be AI-enhanced to improve the level of interaction for users, making experiences more life-like, believable, and immersive.

BENEFITS OF HYPER-REALISTIC AVATARS?

Even though hyper-realistic avatars are used mostly in the media and entertainment industries, they can significantly benefit education and training experiences, primarily because they can be integrated into a much wider scope of industries, from healthcare and MedTech to aerospace, defense, and manufacturing, allowing users to feel more present inside the experience. They can provide more realistic means of interaction, collaboration, training, and communication, removing the boundary between being physically and digitally present.

WHERE ARE HYPER-REALISTIC AVATARS USED?

Looking back, the gaming and movie production industries were, without doubt, among the first to adopt the next level of realism in avatars, or more exactly in character creation, constructing almost every element in 3D from scratch, from textures and facial complexion to hair and clothing, in order to build a universal model ready for animation. Today, hyper-realistic avatars are being widely integrated into XR experiences, making them more believable and engaging and helping create more immersive scenarios for practically any industry.

WHAT CHALLENGES DOES HYPER-REALISTIC AVATAR TECHNOLOGY SOLVE?

The topic of realism has always been important in XR experiences and simulations, from the perspective of environment and experience comprehension, immersion, and believability. With every step we take towards realizing the Metaverse, or Web 3.0, hyper-realism in avatars becomes a much more important subject for developers and users alike. The main challenge hyper-realistic avatars solve is the accurate representation of the user, which resonates more with users and allows for more meaningful interactions with others within the virtual experience, both human and artificial participants. At the same time, they help solve the challenge of feeling present while collaborating remotely within the experience, with both the participants and the entire environment.

Even though digital twins and hyper-realistic avatars are becoming more widely used in XR projects, we will need far more processing power before such high-fidelity, high-capability assets can be easily integrated into Web 3.0 or the Metaverse. Regardless, they already provide the opportunity to simplify some fundamental processes and give developers room to create far more functionally and interactively complex experiences with an unprecedented level of visual fidelity. There is no doubt that these technologies will continue to significantly improve the quality of immersive experiences, giving both enterprises and regular consumers new opportunities to collaborate and feel physically present.

Authors: Alex Dzyuba, Lucid Reality Labs Founder & CEO | Anna Rohi, Lucid Reality Labs Senior Marketing & Communications Manager. You can find the original article published here; for more articles and news on immersive technology, follow the Lucid Reality Labs Blog.

Article Last Edited 31 August 2022.
