Deep neural network grabs full-body scans from wrist camera

Using a tiny camera and a customised deep neural network, researchers have developed a first-of-its-kind wristband that tracks the wearer’s entire body posture in 3D

A deep neural network that allows a single wrist-mounted camera to produce a full-body representation of the wearer’s actions in real time could be coming to regular smartwatches and phones.

Cornell University researchers have developed BodyTrak, a wristband that tracks the wearer’s entire body posture in 3D and the first wearable to track full-body pose with a single camera. BodyTrak could be used to monitor physical activities where precision is critical, says Cheng Zhang, Assistant Professor of Information Science and the paper’s senior author.

“Since smartwatches already have a camera, technology like BodyTrak could understand the user’s pose and give real-time feedback,” says Zhang. “That’s handy, affordable and does not limit the user’s moving area.”

A corresponding paper, “BodyTrak: Inferring Full-body Poses from Body Silhouettes Using a Miniature Camera on a Wristband”, was published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies and presented in September at UbiComp 2022, the ACM international conference on pervasive and ubiquitous computing.

BodyTrak is the latest body-sensing system from Cornell’s SciFiLab, which previously developed similar deep-learning models to track hand and finger movements, facial expressions and silent-speech recognition.

Partial images fleshed out by deep neural network

BodyTrak pairs a coin-sized camera on the wrist with a customised deep neural network – an AI technique in which computers learn by correcting their own mistakes – that reads rudimentary images, or “silhouettes”, of the user’s body in motion. The model accurately fills in and completes the partial images captured by the camera, says Hyunchul Lim, a doctoral student in the field of information science and the paper’s lead author.
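
The paper’s model is not reproduced here, but the basic idea – regressing full-body 3D joint positions from a partial silhouette image – can be sketched in a few lines. The following minimal PyTorch example is illustrative only: the layer sizes, 64×64 input resolution and 14-joint output are assumptions for clarity, not the architecture the Cornell team actually used.

```python
# Illustrative sketch only: a small convolutional network that regresses
# 3D joint positions from a low-resolution body-silhouette image.
# Layer sizes, input resolution and joint count are assumptions,
# not details taken from the BodyTrak paper.
import torch
import torch.nn as nn

class SilhouetteToPose(nn.Module):
    def __init__(self, num_joints: int = 14):  # hypothetical joint count
        super().__init__()
        self.num_joints = num_joints
        # Encoder: compress a 1-channel 64x64 silhouette into a feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),  # 16x16 -> 8x8
            nn.ReLU(),
            nn.Flatten(),
        )
        # Regressor: map the features to (x, y, z) coordinates per joint
        self.regressor = nn.Sequential(
            nn.Linear(64 * 8 * 8, 256),
            nn.ReLU(),
            nn.Linear(256, num_joints * 3),
        )

    def forward(self, silhouette: torch.Tensor) -> torch.Tensor:
        # silhouette: (batch, 1, 64, 64) frame from the wrist camera
        features = self.encoder(silhouette)
        pose = self.regressor(features)
        return pose.view(-1, self.num_joints, 3)  # (batch, joints, xyz)

# Usage sketch: one fake silhouette frame -> predicted 3D joint positions
model = SilhouetteToPose()
frame = torch.rand(1, 1, 64, 64)
print(model(frame).shape)  # torch.Size([1, 14, 3])
```

In the real system the input would be the partial view of the wearer’s body seen from the wrist, and the network’s job is precisely the “filling in” Lim describes: inferring the joints the camera cannot see from the ones it can.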

“Our research shows that we don’t need our body frames to be fully within camera view for body sensing,” says Lim. “If we are able to capture just a part of our bodies, that is a lot of information from which to infer and reconstruct the full body.”

Privacy for bystanders near someone wearing a BodyTrak device is a legitimate concern when developing these technologies, say Zhang and Lim, so the camera points toward the wearer’s body and collects only partial images of the wearer.

The researchers also acknowledge that today’s smartwatches lack cameras small and powerful enough, and battery life long enough, to support full-body sensing, but they expect this to change in the future.

The paper was co-authored by Matthew Dressa, Jae Hoon Kim and Ruidong Zhang, a doctoral student in the field of information science; Yaxuan Li of McGill University; and Fang Hu of Shanghai Jiao Tong University, in addition to Lim and Zhang.
