Deep neural network infers full-body poses from a wrist camera

Using a tiny camera and a customised deep neural network, researchers have developed a first-of-its-kind wristband that tracks the wearer’s entire body posture in 3D

A deep neural network that allows a single wrist-mounted camera to produce a real-time, full-body representation of the wearer’s actions could be coming to regular smartwatches and phones.

Cornell University researchers have developed BodyTrak, a wristband that tracks the wearer’s entire body posture in 3D and is the first wearable to track full-body pose with a single camera. BodyTrak could be used to monitor physical activities where precision is critical, says Cheng Zhang, Assistant Professor of Information Science and the paper’s senior author.

“Since smartwatches already have a camera, technology like BodyTrak could understand the user’s pose and give real-time feedback,” says Zhang. “That’s handy, affordable and does not limit the user’s moving area.”

A corresponding paper, “BodyTrak: Inferring Full-body Poses from Body Silhouettes Using a Miniature Camera on a Wristband”, was published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) and presented in September at UbiComp 2022, the ACM international conference on pervasive and ubiquitous computing.

BodyTrak is the latest body-sensing system from Cornell’s SciFiLab, which previously developed similar deep-learning models to track hand and finger movements and facial expressions, and to recognise silent speech.

Partial images fleshed out by deep neural network

BodyTrak uses a coin-sized camera on the wrist and a customised deep neural network behind it. This deep neural network – an AI method that trains computers by learning from their mistakes – reads rudimentary images, or “silhouettes”, of the user’s body in motion. The model accurately fills out and completes the partial images captured by the camera, says Hyunchul Lim, a doctoral student in the field of information science and the paper’s lead author.

“Our research shows that we don’t need our body frames to be fully within camera view for body sensing,” says Lim. “If we are able to capture just a part of our bodies, that is a lot of information to infer to reconstruct the full body.”
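The article does not describe BodyTrak’s actual network architecture, but the core idea – regressing a full 3D pose from a partial body silhouette – can be illustrated with a minimal sketch. The PyTorch model below is purely illustrative: the 64x64 input size, 14-joint output and layer sizes are assumptions, not details from the paper.

```python
# Illustrative sketch only (not the authors' model): a small convolutional
# network that maps a low-resolution body silhouette from a wrist camera to
# estimated 3D joint positions. Input size, joint count and layer widths are
# assumed for demonstration.
import torch
import torch.nn as nn

NUM_JOINTS = 14  # assumed number of body joints for illustration

class SilhouetteToPose(nn.Module):
    def __init__(self, num_joints: int = NUM_JOINTS):
        super().__init__()
        self.num_joints = num_joints
        # Encoder: compress a 1x64x64 silhouette into a feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # -> 16x32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # -> 32x16x16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # -> 64x8x8
            nn.Flatten(),
        )
        # Regressor: predict (x, y, z) coordinates for each joint
        self.regressor = nn.Sequential(
            nn.Linear(64 * 8 * 8, 256), nn.ReLU(),
            nn.Linear(256, num_joints * 3),
        )

    def forward(self, silhouette: torch.Tensor) -> torch.Tensor:
        # silhouette: (batch, 1, 64, 64) grayscale partial-body image
        features = self.encoder(silhouette)
        return self.regressor(features).view(-1, self.num_joints, 3)

# Example: one partial silhouette frame -> estimated 3D pose
model = SilhouetteToPose()
frame = torch.rand(1, 1, 64, 64)  # placeholder for a wrist-camera silhouette
pose = model(frame)               # shape: (1, 14, 3)
```

In a real system such a model would be trained on silhouette frames paired with ground-truth poses from a motion-capture setup, so that the network learns to complete the parts of the body the camera cannot see.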

Privacy for those standing near someone wearing a BodyTrak device is a legitimate concern when developing these technologies, say Zhang and Lim, so the camera is pointed toward the user’s body and collects only partial body images of the user.

The researchers also acknowledge that today’s smartwatches don’t yet have cameras small and powerful enough, or battery life long enough, to support full-body sensing, but they expect this to change.

The paper was co-authored by Matthew Dressa, Jae Hoon Kim and Ruidong Zhang, a doctoral student in the field of information science; Yaxuan Li of McGill University; and Fang Hu of Shanghai Jiao Tong University, in addition to Lim and Zhang.
