Generative modeling tool renders 2D sketches in 3D
Apr. 07, 2023.
Novice and professional designers could use it to customize items in virtual reality environments and video games, or add effects to films
Researchers at Carnegie Mellon University’s Robotics Institute have developed a machine learning tool that could potentially allow beginner and professional designers to create 3D virtual models of everything from customized household furniture to video game content.
The pix2pix3D tool allows anyone to create a realistic 3D representation of a simple or rough 2D sketch, using generative artificial intelligence tools similar to those powering popular AI photo generation and editing applications.
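Under the hood, the paper frames this as 3D-aware conditional image synthesis: a generator is conditioned on a 2D label map (the sketch) plus a camera pose, so the same drawing can be rendered from different viewpoints. The toy PyTorch sketch below only illustrates that conditioning pattern; every class and module name here is hypothetical, and the real pix2pix3D pipeline relies on neural-field rendering and adversarial training that are not shown.

```python
# Illustrative only: a toy "3D-aware" conditional generator.
# All names are hypothetical; the real system is far more involved.
import torch
import torch.nn as nn

class SketchTo3DAwareGenerator(nn.Module):
    def __init__(self, num_label_classes: int = 6, latent_dim: int = 128):
        super().__init__()
        # Encode the 2D sketch / label map into a global latent code.
        self.encoder = nn.Sequential(
            nn.Conv2d(num_label_classes, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Decode latent code + camera pose into an RGB view. A real system
        # would render a neural radiance field here, not a plain decoder.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + 3, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, label_map: torch.Tensor, camera_pose: torch.Tensor) -> torch.Tensor:
        z = self.encoder(label_map)                               # (B, latent_dim)
        return self.decoder(torch.cat([z, camera_pose], dim=1))   # (B, 3, 32, 32)

# Usage: one sketch, rendered from two different (toy) viewpoints.
gen = SketchTo3DAwareGenerator()
sketch = torch.zeros(1, 6, 64, 64)        # one-hot 2D label map
front = torch.tensor([[0.0, 0.0, 1.0]])   # toy camera direction vectors
side = torch.tensor([[1.0, 0.0, 0.5]])
img_front, img_side = gen(sketch, front), gen(sketch, side)
```

Because the 3D content is held in the latent representation rather than in any single rendered view, changing only the camera input yields a different view of the same underlying object.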
So far, pix2pix3D has been trained on datasets of cars, cats and human faces, and the team is working to expand those capabilities. In the future, it could be used to design consumer products, such as letting people customize furniture for their homes. Both novice and professional designers could use it to customize items in virtual reality environments or video games, or to add effects to films.
Once pix2pix3D generates a 3D image, the user can modify it in real time by erasing and redrawing the original two-dimensional sketch.
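Conceptually, such an edit is just a change to the input label map followed by another forward pass. Continuing the toy sketch above (again with hypothetical names and a rectangular edit region chosen purely for illustration), redrawing part of the sketch might look like this:

```python
# Illustrative only, reusing `gen`, `sketch` and `front` from the toy model
# above: "erasing and redrawing" amounts to editing the 2D label map and
# re-running the generator on the edited map.
import torch

def apply_edit(label_map: torch.Tensor, region: tuple, new_class: int) -> torch.Tensor:
    """Assign every pixel in a rectangular region to a new semantic class."""
    y0, y1, x0, x1 = region
    edited = label_map.clone()
    edited[:, :, y0:y1, x0:x1] = 0.0           # erase the old one-hot labels
    edited[:, new_class, y0:y1, x0:x1] = 1.0   # redraw with the new class
    return edited

# Redraw a patch of the sketch and regenerate the view immediately.
edited_sketch = apply_edit(sketch, region=(10, 30, 10, 30), new_class=2)
updated_view = gen(edited_sketch, front)
```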
Citation: Deng, K., Yang, G., Ramanan, D., & Zhu, J.-Y. (2023). 3D-aware Conditional Image Synthesis. arXiv:2302.08509. https://arxiv.org/abs/2302.08509