Deep neural network grabs full-body scans from wrist camera

Using a tiny camera and a customised deep neural network, researchers have developed a first-of-its-kind wristband that tracks the wearer’s entire body posture in 3D

A deep neural network that allows a single wrist-mounted camera to produce a full-body representation of the wearer’s actions in real time could be coming to regular smartwatches and phones.

Cornell University researchers have developed BodyTrak, a wristband that tracks the entire body posture in 3D, the first wearable to track the full body pose with a single camera. BodyTrak could be used to monitor physical activities where precision is critical, says Cheng Zhang, Assistant Professor of Information Science and the paper’s senior author.

“Since smartwatches already have a camera, technology like BodyTrak could understand the user’s pose and give real-time feedback,” says Zhang. “That’s handy, affordable and does not limit the user’s moving area.”

A corresponding paper, “BodyTrak: Inferring Full-body Poses from Body Silhouettes Using a Miniature Camera on a Wristband”, was published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies and presented in September at UbiComp 2022, the Association for Computing Machinery’s (ACM) international conference on pervasive and ubiquitous computing.

BodyTrak is the latest body-sensing system from Cornell’s SciFiLab, which previously developed similar deep-learning models to track hand and finger movements and facial expressions, and to recognise silent speech.

Partial images fleshed out by deep neural network

BodyTrak pairs a coin-sized camera on the wrist with a customised deep neural network, a form of AI that trains computers by learning from their mistakes. The network reads rudimentary images, or “silhouettes”, of the user’s body in motion, and accurately fills out and completes the partial images captured by the camera, says Hyunchul Lim, a doctoral student in the field of information science and the paper’s lead author.

“Our research shows that we don’t need our body frames to be fully within camera view for body sensing,” says Lim. “If we are able to capture just a part of our bodies, that is enough information to infer and reconstruct the full body.”
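At its core, this is a regression problem: map a partial silhouette image to a set of 3D joint positions. The minimal sketch below illustrates that idea with a single randomly initialised linear layer in pure Python; the silhouette resolution, joint count, and all names here are hypothetical assumptions for illustration, not the actual BodyTrak architecture, which is a customised deep neural network trained on real data.

```python
import random

# Hypothetical sketch of silhouette-to-pose regression (not BodyTrak's model).
SIL_W, SIL_H = 8, 8          # assumed silhouette resolution
NUM_JOINTS = 14              # assumed number of tracked body joints
IN_DIM = SIL_W * SIL_H
OUT_DIM = NUM_JOINTS * 3     # (x, y, z) per joint

random.seed(0)
# Random weights stand in for a trained network's parameters.
weights = [[random.uniform(-0.1, 0.1) for _ in range(IN_DIM)]
           for _ in range(OUT_DIM)]
bias = [0.0] * OUT_DIM

def infer_pose(silhouette):
    """Map a 2D binary silhouette to NUM_JOINTS (x, y, z) triples."""
    flat = [float(p) for row in silhouette for p in row]
    out = [sum(w * x for w, x in zip(row, flat)) + b
           for row, b in zip(weights, bias)]
    return [tuple(out[i:i + 3]) for i in range(0, OUT_DIM, 3)]

# A toy "partial" silhouette: only the top two rows of the grid are lit,
# mimicking a wrist camera that sees just part of the body.
silhouette = [[1 if r < 2 else 0 for _ in range(SIL_W)] for r in range(SIL_H)]
pose = infer_pose(silhouette)
print(len(pose))  # one 3D coordinate per joint
```

In the real system the linear layer would be replaced by a deep network trained so that partial views still map to accurate full-body poses.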

Privacy for those standing near someone wearing a BodyTrak device is a legitimate concern when developing these technologies, say Zhang and Lim, so the camera is pointed toward the user’s body and collects only partial body images of the user.

The researchers acknowledge that today’s smartwatches don’t yet have cameras small and powerful enough, or batteries long-lasting enough, to support full-body sensing, but they expect this to change.

The paper was co-authored by Matthew Dressa, Jae Hoon Kim and Ruidong Zhang, a doctoral student in the field of information science; Yaxuan Li of McGill University; and Fang Hu of Shanghai Jiao Tong University, in addition to Lim and Zhang.
