Dense Motion Captioning

¹University of Trento  ²LIGM, Ecole des Ponts, IP Paris, Univ Gustave Eiffel, CNRS


Abstract

Recent advances in 3D human motion and language integration have primarily focused on text-to-motion generation, leaving the task of motion understanding relatively unexplored. We introduce Dense Motion Captioning, a novel task that aims to temporally localize and caption actions within 3D human motion sequences. Current datasets fall short in providing detailed temporal annotations and predominantly consist of short sequences featuring few actions. To overcome these limitations, we present the Complex Motion dataset (CompMo), the first large-scale dataset featuring richly annotated, complex motion sequences with precise temporal boundaries. Built through a carefully designed data generation pipeline, CompMo includes 60,000 motion sequences, each composed of two to ten actions, accurately annotated with their temporal extents. We further present DEMO, a model that integrates a large language model with a simple motion adapter, trained to generate dense, temporally grounded captions. Our experiments show that DEMO substantially outperforms existing methods on CompMo as well as on adapted benchmarks, establishing a robust baseline for future research in 3D motion understanding and captioning.
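As a rough illustration of the integration described above, the sketch below projects per-frame motion features into an LLM's token embedding space so the motion sequence can be consumed by the language model alongside text tokens. All names, dimensions, and layer choices are assumptions made for illustration, not DEMO's actual implementation.

    import torch
    import torch.nn as nn

    class MotionAdapter(nn.Module):
        # Hypothetical adapter: maps per-frame motion features (e.g. 263-D
        # HumanML3D-style vectors) into the LLM embedding space. The real
        # DEMO adapter may differ in dimensions and architecture.
        def __init__(self, motion_dim: int = 263, llm_dim: int = 4096):
            super().__init__()
            self.proj = nn.Linear(motion_dim, llm_dim)

        def forward(self, motion: torch.Tensor) -> torch.Tensor:
            # motion: (batch, frames, motion_dim) -> (batch, frames, llm_dim)
            return self.proj(motion)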

Gallery of the CompMo Dataset

In this gallery, we show samples from our proposed CompMo dataset. Different colors in each motion correspond to different actions. Below each sample, the dense captions are displayed in the format 'start - description'.
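To make the 'start - description' format concrete, a dense caption list could be represented as in the hypothetical snippet below; the timestamps and action descriptions are invented for illustration and are not taken from CompMo.

    # Hypothetical dense captions (start time in seconds, description);
    # values are illustrative only, not actual CompMo annotations.
    dense_captions = [
        (0.0, "a person walks forward"),
        (3.5, "the person sits down on a chair"),
        (7.0, "the person stands up and stretches both arms"),
    ]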

Main Results

Here we show DEMO's results on our proposed CompMo dataset. Below each sample, the ground-truth (GT) dense captions and the captions generated by DEMO are displayed in sequence.

Comparison Results with UniMotion

Here we show comparison results with UniMotion on the CompMo and H3D+BABEL datasets. Below each sample, the ground-truth (GT) dense captions and the captions generated by DEMO and UniMotion are displayed in sequence.