
DreamFusion: Text-to-3D using 2D Diffusion
🚀 Transform text into stunning 3D objects effortlessly with DreamFusion! 🖥️✨ This AI tool uses 2D Diffusion & Neural Radiance Fields to create high-fidelity, relightable 3D models from text captions. No need for 3D training data! 🌟🔮 #AI #DreamFusion #TextTo3D
- DreamFusion is a text-to-3D synthesis method leveraging a pretrained 2D text-to-image diffusion model.
- It optimizes a Neural Radiance Field (NeRF) model via gradient descent using a loss based on probability density distillation.
- The approach generates high-fidelity 3D objects from text captions without the need for 3D training data or modifications to the image diffusion model.
- Objects are relightable, can be composed into scenes, and can be exported to meshes for integration into 3D renderers or modeling software.
- DreamFusion uses Score Distillation Sampling (SDS) to perform sampling via optimization in NeRF parameter space, combined with additional regularizers for improved geometry (see the sketch after this list).
- The resulting NeRFs exhibit coherent geometry, high-quality normals, and depth, and can be relit with a Lambertian shading model.
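
A minimal sketch of how an SDS update could look, assuming an epsilon-prediction diffusion UNet and a DDPM-style `alphas_cumprod` schedule. The names (`sds_loss`, `diffusion_model`, `text_embedding`, `guidance_scale`) are illustrative, not DreamFusion's actual code; the point is that the gradient skips the diffusion model's Jacobian and flows into the NeRF only through the rendered image.

```python
import torch

def sds_loss(rendered_rgb, diffusion_model, text_embedding, alphas_cumprod,
             guidance_scale=100.0):
    """One Score Distillation Sampling (SDS) step on a rendered NeRF image.

    Assumptions (illustrative, not the paper's implementation):
      - `diffusion_model(x_t, t, cond)` predicts the noise added at timestep t,
        with `cond=None` giving the unconditional prediction.
      - `alphas_cumprod` is a 1D tensor of cumulative noise-schedule terms.
    """
    b = rendered_rgb.shape[0]
    device = rendered_rgb.device

    # Sample a random diffusion timestep and Gaussian noise.
    t = torch.randint(20, 980, (b,), device=device)
    noise = torch.randn_like(rendered_rgb)
    a_t = alphas_cumprod[t].view(b, 1, 1, 1)

    # Forward-diffuse the rendering: x_t = sqrt(a_t) * x + sqrt(1 - a_t) * eps.
    x_t = a_t.sqrt() * rendered_rgb + (1 - a_t).sqrt() * noise

    with torch.no_grad():
        # Classifier-free guidance: mix conditional and unconditional predictions.
        eps_cond = diffusion_model(x_t, t, text_embedding)
        eps_uncond = diffusion_model(x_t, t, None)
        eps_hat = eps_uncond + guidance_scale * (eps_cond - eps_uncond)

    # SDS gradient w.r.t. the rendering: w(t) * (eps_hat - eps).
    # Injecting it as a detached residual means backprop reaches the NeRF
    # parameters only through `rendered_rgb`, never through the diffusion model.
    w_t = 1 - a_t
    grad = w_t * (eps_hat - noise)
    return (grad.detach() * rendered_rgb).sum()
```

In a training loop, `rendered_rgb` would come from differentiably rendering the NeRF at a random camera pose; calling `sds_loss(...).backward()` then updates only the NeRF parameters, leaving the pretrained 2D diffusion model frozen.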