
    Video Game Characters Inspired by Real People

    In recent years, videogame developers and computer scientists have been trying to devise techniques that can make gaming experiences increasingly immersive, engaging, and realistic. These include methods to automatically create videogame characters inspired by real people.

    Most existing methods for creating and customizing videogame characters require players to adjust their character’s facial features manually in order to recreate their own face or the face of another person. More recently, some developers have introduced methods that can automatically customize a character’s face by analyzing images of real people’s faces. However, these methods are not always effective and do not always reproduce the faces they analyze realistically.

    Researchers have recently created MeInGame, a deep learning technique that can automatically generate character faces by analyzing a single portrait of a person’s face.

    They proposed an automatic character face creation method that predicts both facial shape and texture from a single portrait and can be integrated into most existing 3D games.

    Some of the automatic character customization systems presented in previous works are based on computational techniques known as 3D morphable face models (3DMMs). While some of these methods have been found to reproduce a person’s facial features with good accuracy, their mesh topology (i.e., the way they represent geometrical properties and spatial relations) often differs from that of the meshes used in most 3D videogames.
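
    To give a rough sense of how a 3DMM represents a face, the sketch below shows the standard linear formulation, in which a new face shape is the model’s mean shape plus weighted combinations of identity and expression basis vectors. The array names and dimensions here are illustrative assumptions, not values taken from the MeInGame paper; a real model learns its bases from 3D face scans and uses far more vertices.

```python
import numpy as np

def reconstruct_3dmm_shape(mean_shape, id_basis, exp_basis, id_coeffs, exp_coeffs):
    """Linear 3DMM: new shape = mean shape + identity offsets + expression offsets.

    mean_shape: (3N,) flattened x/y/z coordinates of N mesh vertices
    id_basis:   (3N, K_id) identity (shape) basis vectors
    exp_basis:  (3N, K_exp) expression basis vectors
    id_coeffs:  (K_id,) coefficients regressed (e.g., by a CNN) from a photo
    exp_coeffs: (K_exp,) expression coefficients
    """
    return mean_shape + id_basis @ id_coeffs + exp_basis @ exp_coeffs

# Toy dimensions purely for illustration.
N, K_id, K_exp = 5000, 80, 64
mean_shape = np.zeros(3 * N)
id_basis = np.random.randn(3 * N, K_id) * 0.01
exp_basis = np.random.randn(3 * N, K_exp) * 0.01
id_coeffs = np.random.randn(K_id)
exp_coeffs = np.random.randn(K_exp)

vertices = reconstruct_3dmm_shape(mean_shape, id_basis, exp_basis,
                                  id_coeffs, exp_coeffs).reshape(N, 3)
print(vertices.shape)  # (5000, 3): one 3D position per mesh vertex
```

    Because the vertex count and connectivity of such a model are fixed by the scan data it was built from, a game engine whose character meshes use a different layout cannot consume its output directly, which is the topology mismatch the article refers to.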

    In order for 3DMMs to reproduce the texture of a person’s face reliably, they typically need to be trained on large datasets of images and on related texture data. Compiling these datasets can be fairly time-consuming. Moreover, these datasets do not always contain real images collected in the wild, which can prevent models trained on them from performing consistently well when presented with new data.

    Given an input face photo, they first reconstruct a 3D face based on a 3D morphable face model and convolutional neural networks (CNNs), then transfer the shape of the 3D face to the template mesh. The proposed network takes the face photo and the unwrapped coarse UV texture map as input, then predicts lighting coefficients and refined texture maps.
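
    The following is a minimal, runnable sketch of that pipeline, assuming simplified placeholder components: every function below is a hypothetical stand-in (random or constant arrays), not the actual MeInGame networks or mesh utilities, and the shapes are arbitrary choices for illustration.

```python
import numpy as np

def predict_3dmm_coefficients(photo):
    # Stand-in for a CNN that regresses 3DMM coefficients from the portrait.
    return np.random.randn(144)

def reconstruct_3dmm_face(coeffs):
    # Stand-in: a reconstructed face "mesh" of 5000 vertices.
    return np.random.randn(5000, 3)

def transfer_shape_to_template(face_mesh, template_mesh):
    # Deform the game-ready template so it follows the reconstructed shape
    # while keeping the template's vertex count and connectivity.
    return template_mesh + 0.1 * face_mesh[: len(template_mesh)]

def unwrap_coarse_texture(photo, face_mesh):
    # Project the photo onto the mesh and unwrap it into a coarse UV map.
    return np.zeros((256, 256, 3))

def texture_and_lighting_network(photo, coarse_uv):
    # Stand-in for the second network, which predicts lighting coefficients
    # and a refined UV texture map free of lighting and occlusion artifacts.
    lighting_coeffs = np.random.randn(27)
    refined_uv = coarse_uv + 0.01 * np.random.rand(*coarse_uv.shape)
    return lighting_coeffs, refined_uv

def create_game_character(photo, template_mesh):
    coeffs = predict_3dmm_coefficients(photo)
    face_mesh = reconstruct_3dmm_face(coeffs)
    game_mesh = transfer_shape_to_template(face_mesh, template_mesh)
    coarse_uv = unwrap_coarse_texture(photo, face_mesh)
    lighting, refined_uv = texture_and_lighting_network(photo, coarse_uv)
    return game_mesh, refined_uv, lighting

photo = np.zeros((224, 224, 3))      # input portrait (placeholder)
template = np.zeros((5000, 3))       # game engine's template mesh (placeholder)
mesh, texture, lighting = create_game_character(photo, template)
print(mesh.shape, texture.shape, lighting.shape)
```

    The key design point the article highlights is the shape-transfer step: by deforming the game’s own template mesh rather than using the 3DMM mesh directly, the output keeps the topology the game engine expects.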

    They evaluated their deep learning technique in a series of experiments, comparing the quality of the game characters it generated with that of character faces produced by other existing state-of-the-art methods for automatic character customization. Their method performed remarkably well, generating character faces that closely resembled those in input images.

    The proposed method not only produces detailed and vivid game characters that resemble the input portrait, but also eliminates the influence of lighting and occlusions. Experiments show that it outperforms state-of-the-art methods used in games.

    In the future, the character face generation method devised by this team of researchers could be integrated within a number of 3D videogames, enabling the automatic creation of characters that closely resemble real people.
