
    Intelligent Carpet Gives Insight into Human Poses

    The sentient magic carpet from ‘Aladdin’ might have a new competitor. While it can’t fly or speak, a new tactile sensing carpet from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) can estimate human poses without using cameras, in a step towards improving self-powered personalized healthcare, smart homes, and gaming.

    Many of our daily activities involve physical contact with the ground: walking, exercising, or resting. These embedded interactions contain a wealth of information that helps us better understand people’s movements.

    Previous research has leveraged single RGB cameras (think Microsoft Kinect), wearable omnidirectional cameras, and even plain old off-the-shelf webcams, but with the inevitable byproducts of camera occlusion and privacy concerns.

    The CSAIL team’s system used cameras only to create the dataset it was trained on, and those cameras captured only the moments when a person was performing an activity. To infer the 3D pose, a person simply has to get on the carpet and perform an action; the team’s deep neural network, using just the tactile information, can then determine whether the person is doing sit-ups, stretching, or performing some other action.

    The carpet itself, which is low-cost and scalable, was made of commercial, pressure-sensitive film and conductive thread, with over nine thousand sensors spanning thirty-six by two feet. (Most living room rugs are eight by ten or nine by twelve feet.)

    Each sensor on the carpet converts the pressure applied through physical contact between a person’s feet, limbs, or torso and the carpet into an electrical signal. The system was trained on synchronized tactile and visual data, such as a video and the corresponding tactile heatmap of someone doing a push-up.

    The model takes the pose extracted from the visual data as the ground truth, uses the tactile data as input, and finally outputs the 3D human pose.
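
    The team has not published reference code alongside this description, but the setup maps onto a standard supervised learning loop. The sketch below is a minimal, illustrative PyTorch version: the sensor grid resolution, keypoint count, and network layers are assumptions for the example, not CSAIL’s actual architecture.

        import torch
        import torch.nn as nn

        # Hypothetical shapes: the real sensor grid and keypoint count may differ.
        GRID_H, GRID_W = 96, 96        # tactile sensor grid (assumed)
        NUM_KEYPOINTS = 21             # 3D body keypoints (assumed)

        class TactilePoseNet(nn.Module):
            """Illustrative CNN: tactile pressure heatmap -> 3D keypoints."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(4),
                )
                self.head = nn.Linear(64 * 4 * 4, NUM_KEYPOINTS * 3)

            def forward(self, x):
                z = self.features(x).flatten(1)
                return self.head(z).view(-1, NUM_KEYPOINTS, 3)

        model = TactilePoseNet()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

        # One synthetic training step: tactile frames are the input, and the
        # pose extracted from the synchronized camera view is the label.
        tactile = torch.rand(8, 1, GRID_H, GRID_W)    # pressure heatmaps
        pose_gt = torch.rand(8, NUM_KEYPOINTS, 3)     # camera-derived 3D poses

        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(tactile), pose_gt)
        loss.backward()
        optimizer.step()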

    In practice, this means that after a person steps onto the carpet and does a set of push-ups, the system can produce an image or video of them performing those push-ups.

    In fact, the model was able to predict a person’s pose with an error margin (measured by the distance between predicted human body key points and ground-truth key points) of less than ten centimeters. For classifying specific actions, the system was accurate 97 percent of the time.
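
    That error figure corresponds to a mean per-keypoint Euclidean distance between the predicted skeleton and the camera-derived ground truth. Here is a minimal sketch of such a metric, using a hypothetical 21-keypoint skeleton with coordinates in centimeters:

        import numpy as np

        def mean_keypoint_error(pred, gt):
            """Mean Euclidean distance between predicted and ground-truth
            3D keypoints; pred and gt have shape (num_keypoints, 3)."""
            return float(np.linalg.norm(pred - gt, axis=1).mean())

        # Example with synthetic data: ground truth plus ~5 cm of noise.
        rng = np.random.default_rng(0)
        gt = rng.uniform(0, 180, size=(21, 3))           # coordinates in cm
        pred = gt + rng.normal(scale=5.0, size=(21, 3))
        print(f"mean keypoint error: {mean_keypoint_error(pred, gt):.1f} cm")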

    Based solely on tactile information, the system can recognize the activity, count the number of reps, and estimate the number of calories burned.
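
    The article does not describe how repetitions are counted; one plausible approach, sketched here purely as an assumption, is to track a normalized vertical coordinate of a predicted keypoint over time and count full up-down cycles:

        import math

        def count_reps(heights, low=0.3, high=0.7):
            """Count repetitions as full low -> high -> low cycles in a
            normalized vertical trajectory (e.g. hip height during sit-ups).
            The thresholds are illustrative, not tuned values."""
            reps, went_up = 0, False
            for h in heights:
                if h > high:
                    went_up = True
                elif h < low and went_up:
                    reps += 1
                    went_up = False
            return reps

        # Synthetic trajectory containing three up-down cycles.
        trajectory = [0.5 + 0.5 * math.sin(t / 5.0) for t in range(95)]
        print(count_reps(trajectory))    # -> 3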

    Since much of the pressure distribution was produced by movement of the lower body and torso, those predictions were more accurate than the upper-body ones. The model was also unable to predict poses without more explicit floor contact, such as free-floating legs during sit-ups or a twisted torso while standing up.

    While the system can understand a single person, the scientists want, down the line, to improve the metrics for multiple users, where two people might be dancing or hugging on the carpet. They also hope to extract more information from the tactile signals, such as a person’s height or weight.

    ELE Times Research Desk