Diffusion-Based Human Motion Generation
Keywords: Software Engineering, Human Motion Generation
Efficiently generating realistic human motion presents a significant challenge across various domains, including animation and robotics. Traditional handcrafting of motion sequences for animation is notoriously time-intensive and skill-demanding. Motion capture technology, on the other hand, while very effective for acquiring real-world data, often incurs high equipment costs and may produce noisy data that requires further processing. This project focused on the development of autoregressive conditional diffusion models tailored to human motion generation. A comprehensive examination of existing state-of-the-art motion models that utilize diffusion and normalizing flows was conducted, while acknowledging other generative approaches. Limitations and opportunities for enhancement in these models were identified, and generalizable solutions were proposed. The research contributes to the ongoing evolution of generative diffusion techniques, particularly autoregressive generative models. Furthermore, it provides a tangible demonstration of autoregressive diffusion using a toy model, presented in an intuitive way that does not require animated sequences. This further illustrates the model's potential for time-series tasks and its applicability to other domains while producing convincing results. By thoroughly evaluating the model and its capabilities, the aim is to provide a valuable contribution to the field through explorations of important hyperparameters and model architectures. This work underscores the importance of understanding and addressing challenges in predictive time-series tasks, thereby advancing collective knowledge in this area.
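As a rough illustration of the idea behind autoregressive conditional diffusion, the sketch below runs a standard DDPM reverse process to generate each "pose" conditioned on the previous one. This is a hypothetical toy, not the project's model: `toy_denoiser` is a hand-written stand-in for a trained noise-prediction network (it simply predicts noise consistent with the conditioning frame), and all names, dimensions, and the noise schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 50  # number of diffusion steps (illustrative choice)
betas = np.linspace(1e-4, 0.02, T)  # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_denoiser(x_t, t, cond):
    """Stand-in for a trained eps-prediction network (hypothetical).

    From x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps, solve for eps
    assuming the clean sample x0 equals the conditioning frame.
    """
    return (x_t - np.sqrt(alpha_bars[t]) * cond) / np.sqrt(1.0 - alpha_bars[t])

def sample_frame(cond, dim=3):
    """One full reverse-diffusion pass, conditioned on the previous frame."""
    x = rng.standard_normal(dim)  # start from pure Gaussian noise
    for t in reversed(range(T)):
        eps = toy_denoiser(x, t, cond)
        # Standard DDPM posterior mean update
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # add noise at every step except the last
            x += np.sqrt(betas[t]) * rng.standard_normal(dim)
    return x

# Autoregressive rollout: each new pose is generated conditioned on the last.
poses = [np.zeros(3)]  # seed pose
for _ in range(5):
    poses.append(sample_frame(poses[-1]))

print(len(poses))  # prints 6 (the seed plus five generated poses)
```

The autoregressive structure is what makes this a time-series generator: the diffusion model only ever learns the conditional step "next frame given previous frames", and arbitrarily long sequences are produced by chaining those steps.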