Deep neural network grabs full-body scans from wrist camera

Using a tiny camera and a customised deep neural network, researchers have developed a first-of-its-kind wristband that tracks the wearer’s entire body posture in 3D

A deep neural network that allows a single wrist-mounted camera to produce a full-body representation of the wearer’s actions in real time could be coming to regular smartwatches and phones.

Cornell University researchers have developed BodyTrak, a wristband that tracks the wearer’s entire body posture in 3D and the first wearable to track full-body pose with a single camera. BodyTrak could be used to monitor physical activities where precision is critical, says Cheng Zhang, Assistant Professor of Information Science and the paper’s senior author.

“Since smartwatches already have a camera, technology like BodyTrak could understand the user’s pose and give real-time feedback,” says Zhang. “That’s handy, affordable and does not limit the user’s moving area.”

A corresponding paper, “BodyTrak: Inferring Full-body Poses from Body Silhouettes Using a Miniature Camera on a Wristband”, was published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, and presented in September at UbiComp 2022, the ACM international conference on pervasive and ubiquitous computing.

BodyTrak is the latest body-sensing system from Cornell’s SciFiLab, which previously developed similar deep-learning models to track hand and finger movements, facial expressions and silent-speech recognition.

Partial images fleshed out by deep neural network

BodyTrak uses a coin-sized camera on the wrist and a customised deep neural network behind it. This deep neural network, an AI method in which a computer learns from its own mistakes, reads rudimentary images, or “silhouettes”, of the user’s body in motion. The model then accurately completes the partial images captured by the camera, says Hyunchul Lim, a doctoral student in the field of information science and the paper’s lead author.

“Our research shows that we don’t need our body frames to be fully within camera view for body sensing,” says Lim. “If we are able to capture just a part of our bodies, that is a lot of information to infer to reconstruct the full body.”
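To illustrate the idea, the sketch below shows, in PyTorch, what a silhouette-to-pose regressor of this kind could look like: a small convolutional encoder reads a partial silhouette frame and a regression head predicts 3D coordinates for a fixed set of body joints. This is a hypothetical example, not the authors’ code; the joint count, layer sizes and input resolution are assumptions made purely for illustration.

# Hypothetical sketch (not the authors' code) of a silhouette-to-pose regressor.
# Assumption: the wrist camera supplies a low-resolution, single-channel silhouette
# frame, and the network regresses 3D coordinates for a fixed set of body joints.
import torch
import torch.nn as nn

class SilhouetteToPose(nn.Module):
    def __init__(self, num_joints: int = 14):  # joint count is illustrative
        super().__init__()
        self.num_joints = num_joints
        # Convolutional encoder: compresses the partial silhouette into a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regression head: maps the features to (x, y, z) per body joint.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, num_joints * 3),
        )

    def forward(self, silhouette: torch.Tensor) -> torch.Tensor:
        # silhouette: (batch, 1, H, W) image from the wrist-mounted camera
        features = self.encoder(silhouette)
        pose = self.head(features)
        return pose.view(-1, self.num_joints, 3)  # (batch, joints, xyz)

model = SilhouetteToPose()
frame = torch.rand(1, 1, 64, 64)  # one dummy 64x64 silhouette frame
print(model(frame).shape)         # torch.Size([1, 14, 3])

A real system would also have to cope with sequences of frames, camera motion and occlusion, none of which this toy example addresses.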

Privacy for bystanders near someone wearing a BodyTrak device is a legitimate concern when developing these technologies, say Zhang and Lim, so the camera points toward the user’s body and captures only partial images of the wearer.

The researchers also acknowledge that today’s smartwatches don’t yet have cameras small and powerful enough, or adequate battery life, to support full-body sensing, but they expect this to change.

The paper was co-authored by Matthew Dressa, Jae Hoon Kim and Ruidong Zhang, a doctoral student in the field of information science; Yaxuan Li of McGill University; and Fang Hu of Shanghai Jiao Tong University, in addition to Lim and Zhang.
