SinMDM: Single Motion Diffusion
Meet SinMDM: your go-to tool for creating diverse and high-quality animations of humans, animals, and creatures with unique skeletons and motions! Easily generate crowds, perform style transfers, and more with this efficient and innovative AI model. Check it out: https://sinmdm.github.io/SinMDM-page/ #AI #Animation #Innovation
- SinMDM is a model that learns internal motion motifs from a single motion sequence and generates motions faithful to those motifs.
- It specializes in synthesizing animations of humans, animals, and imaginary creatures with unique skeletons and motion patterns.
- SinMDM is a lightweight denoising-diffusion architecture; its deliberately narrow receptive field encourages motion diversity and prevents overfitting to the single input sequence.
- Applications of SinMDM include spatial/temporal in-betweening, motion expansion, style transfer, and crowd animation.
- The model can generate crowds performing varied motions from a single input sequence, such as hopping ostriches, jaguars, or breakdancing dragons (a batch-sampling sketch follows this list).
- SinMDM outperforms existing methods in both quality and time-space efficiency, and all of its applications run at inference time, with no additional training per application.
- It uses a shallow UNet with a QnA local attention layer to generate diverse synthesized motions while retaining the core motion motifs (a simplified architecture sketch follows this list).
- SinMDM can compose motions temporally, by completing missing parts, or spatially, by controlling selected joints, as demonstrated with upper-body control examples; see the in-betweening sketch after this list.
- For style transfer, SinMDM adjusts a given content motion to match the learned style motion; it can also generate long animations (up to 60 seconds) without further training.
- Single Motion Diffusion, by Raab et al., was presented at ICLR 2024.
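
The following is a minimal, hypothetical sketch of the architectural idea: a shallow denoiser whose attention is confined to local temporal windows, so its receptive field spans motion motifs rather than the whole sequence. The window size, channel widths, and the simplified windowed attention below are illustrative assumptions; the paper's actual QnA layer and UNet are more involved.

```python
# A sketch (not the authors' code) of SinMDM's core idea: a shallow
# denoiser with attention restricted to local temporal windows.
import torch
import torch.nn as nn

class LocalTemporalAttention(nn.Module):
    """Self-attention computed independently inside fixed-size windows."""
    def __init__(self, dim, window=8, heads=4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                      # x: (batch, frames, dim)
        b, t, d = x.shape
        pad = (-t) % self.window               # pad so frames divide evenly
        x = nn.functional.pad(x, (0, 0, 0, pad))
        w = x.shape[1] // self.window
        x = x.reshape(b * w, self.window, d)   # one attention per window
        x, _ = self.attn(x, x, x)
        return x.reshape(b, w * self.window, d)[:, :t]

class ShallowDenoiser(nn.Module):
    """Predicts the clean motion x0 from a noised motion and a timestep."""
    def __init__(self, n_feats, dim=128, window=8):
        super().__init__()
        self.inp = nn.Linear(n_feats, dim)
        self.t_emb = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.blocks = nn.ModuleList(
            [LocalTemporalAttention(dim, window) for _ in range(2)]  # shallow: 2 blocks
        )
        self.out = nn.Linear(dim, n_feats)

    def forward(self, x_t, t):                 # x_t: (batch, frames, n_feats)
        h = self.inp(x_t) + self.t_emb(t.float()[:, None, None] / 1000.0)
        for blk in self.blocks:
            h = h + blk(h)                     # residual local-attention block
        return self.out(h)
```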
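Temporal in-betweening can be framed as diffusion inpainting at inference time: at each denoising step the observed frames are replaced with a correspondingly noised copy of the input, so the model only fills in the masked frames. The sampler below is a hedged sketch using a generic linear beta schedule and DDIM-style updates; it is not a transcription of the authors' sampler.

```python
# Hedged sketch of inference-time in-betweening via diffusion inpainting.
import torch

def inbetween(model, x_known, mask, steps=1000):
    """x_known: (1, frames, feats) reference motion; mask: 1 where frames are given."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = torch.cumprod(1.0 - betas, dim=0)          # cumulative alpha-bar
    x = torch.randn_like(x_known)                       # start from pure noise
    for t in reversed(range(steps)):
        a_t = alphas[t]
        # Keep observed frames consistent with the current noise level.
        noised_known = a_t.sqrt() * x_known + (1 - a_t).sqrt() * torch.randn_like(x_known)
        x = mask * noised_known + (1 - mask) * x
        x0_hat = model(x, torch.tensor([t]))            # model predicts clean motion
        if t > 0:
            a_prev = alphas[t - 1]
            noise = (x - a_t.sqrt() * x0_hat) / (1 - a_t).sqrt()
            x = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * noise  # DDIM-style step
        else:
            x = x0_hat
    return mask * x_known + (1 - mask) * x              # return exact known frames
```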
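Crowd animation then needs no extra machinery: because sampling starts from random noise, repeated draws from the same trained model yield distinct variations of the learned motion. The snippet below reuses the sketches above with an all-zero mask (no frames constrained), which reduces `inbetween` to unconditional sampling; the frame and feature counts are arbitrary placeholders.

```python
# Crowd generation as repeated unconditional sampling (illustrative sizes).
frames, feats = 120, 69                                  # hypothetical feature size
model = ShallowDenoiser(n_feats=feats)                   # would be trained on one motion
empty = torch.zeros(1, frames, feats)
crowd = [inbetween(model, empty, mask=torch.zeros(1, frames, 1)) for _ in range(8)]
```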