Adobe Substance 3D debuts AI-driven 'Text to Texture' feature
Adobe has unveiled new beta features for its Substance 3D design software suite, designed to simplify the 3D design workflow. Announced at the Game Developers Conference, the tools use Adobe's Firefly AI model to generate object textures and staging environments from text descriptions. The "Text to Texture" feature can produce either photorealistic or stylized textures from prompts such as "scaled skin" or "woven fabric," eliminating the need for designers to hunt for reference materials.
Generative background tool for 3D staging
Adobe's second new offering is the "Generative Background" tool, designed specifically for Substance 3D Stager. It lets designers craft background images for objects being placed into 3D scenes using text prompts. Notably, both the "Text to Texture" and "Generative Background" features rely on 2D imaging technology, much like Adobe's earlier Firefly-driven tools in Photoshop and Illustrator. They don't create 3D models or files; instead, they apply 2D images generated from text descriptions in a way that gives a three-dimensional impression.
Beta versions of the new features are now accessible
The "Text to Texture" and "Generative Background" features are now available in the beta versions of Substance 3D Sampler 4.4 and Stager 3.0, respectively. Sebastien Deguy, Adobe's head of metaverse and 3D, confirmed in an interview with The Verge that the features are free during their beta phase. Both tools were developed using Adobe-owned assets, including company-created reference materials and licensed Adobe Stock content.