While Meta deals with artificial intelligence in the form of its constantly-changing content tagging system, the company's research wing is hard at work on novel generative AI technology, including a new Meta 3D Gen platform that delivers text-to-3D asset generation with high-quality geometry and texture.
"This system can generate 3D assets with high-resolution textures and material maps end-to-end, with results that are superior in quality to previous state-of-the-art solutions, at 3-10x the speed of previous work," Meta AI explains on Threads.
Meta 3D Gen (3DGen) can create 3D assets and textures from a simple text prompt in under a minute, per Meta's research paper. That is functionally similar to text-to-image generators like Midjourney and Adobe Firefly, but 3DGen builds fully 3D models with underlying mesh structures that support physically-based rendering (PBR). That means the 3D models generated by Meta 3DGen can be used in real-world modeling and rendering applications.

"Meta 3D Gen is a two-stage method that combines two components, one for text-to-3D generation and one for text-to-texture generation, respectively," Meta explains, adding that this approach results in "higher-quality 3D generation for immersive content creation."
3DGen combines two of Meta's foundational generative models, AssetGen and TextureGen, focusing on the relative strengths of each. Meta says that, based on feedback from professional 3D artists, its new 3DGen technology is preferred over competing text-to-3D models "a majority of the time" while being three to 60 times faster.

It is worth noting that by separating mesh models and texture maps, 3DGen promises significant control over the final output and allows for the iterative refinement common to text-to-image generators. Users can adjust the texture style prompt without tweaking the underlying model.
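Meta has not released a public API for 3DGen, but the decoupled workflow the paper describes can be sketched roughly as below. The `asset_gen` and `texture_gen` functions and the `Mesh`/`Texture` stubs are hypothetical placeholders standing in for the two stages, shown only to illustrate how a texture can be regenerated from a new style prompt without touching the geometry.

```python
# Illustrative sketch only: Meta has not published a 3DGen API, so the two
# stages below are placeholder stubs mirroring the pipeline described in the
# paper (AssetGen for geometry, TextureGen for PBR texture/material maps).
from dataclasses import dataclass


@dataclass
class Mesh:
    prompt: str  # geometry described by the original text prompt


@dataclass
class Texture:
    style: str  # texture/material maps for a given style prompt


def asset_gen(prompt: str) -> tuple[Mesh, Texture]:
    # Stage I: text -> 3D mesh plus an initial texture.
    return Mesh(prompt), Texture(prompt)


def texture_gen(mesh: Mesh, style_prompt: str) -> Texture:
    # Stage II: (re)generate texture maps for an existing mesh.
    return Texture(style_prompt)


# Generate once, then swap the texture style without regenerating geometry.
mesh, initial = asset_gen("a carved chess knight")
walnut = texture_gen(mesh, "polished walnut")
steel = texture_gen(mesh, "brushed steel")
```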

Meta's full technical paper about 3DGen goes into considerably more detail and shows evaluation results compared against other text-to-3D models.
Image credits: Meta AI