Nicholas Williams

OpenAI Releases a New AI That Generates 3D Models — Point-E


This week, OpenAI open-sourced Point-E, a machine learning system that generates 3D objects from simple text input. According to a paper describing the new tool, released alongside the code base, Point-E can produce 3D models in one to two minutes on a single Nvidia V100 GPU.


Point-E is faster than previous approaches, and it can capture an object’s texture

Point-E (the “E” stands for “efficiency”) is seemingly faster than previous 3D object generation approaches because it doesn’t generate 3D objects in the traditional sense.


Instead, Point-E generates point clouds: discrete sets of data points in space that represent a 3D shape.





A mesh, by contrast, is a collection of vertices, edges, and faces that defines an object.
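To make the distinction concrete, here is a minimal sketch of how the two structures are typically stored in code. The toy data is purely illustrative and is not Point-E’s internal representation:

```python
import numpy as np

# A point cloud is just a set of points in 3D space: an (N, 3) array of XYZ
# coordinates, optionally with per-point colors. There is no connectivity
# information saying which points belong to the same surface.
num_points = 4096
cloud_xyz = np.random.uniform(-0.5, 0.5, size=(num_points, 3))
cloud_rgb = np.random.uniform(0.0, 1.0, size=(num_points, 3))  # optional colors

# A triangle mesh adds that connectivity: vertices are points in space, faces
# are triples of vertex indices, and edges are implied by the faces. This unit
# square is made of two triangles.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 0.0],
], dtype=np.float32)
faces = np.array([
    [0, 1, 2],
    [0, 2, 3],
], dtype=np.int64)

print(cloud_xyz.shape, vertices.shape, faces.shape)  # (4096, 3) (4, 3) (2, 3)
```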


While point clouds are easier to synthesize from a computational standpoint, they don’t capture an object’s fine-grained shape or texture. To get around this limitation, the Point-E team created an additional AI system that converts Point-E’s point clouds to meshes.


Meshes are often used in 3D modeling and design. However, the paper states that the conversion model can sometimes miss certain parts of an object, resulting in distorted or blocky shapes (more on this in a bit).
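In the released code base, this conversion step appears to be implemented as a model that predicts a signed distance function (SDF), which is then meshed with marching cubes. The sketch below is adapted from memory of the point-cloud-to-mesh example in the openai/point-e repository; the module paths, config name ("sdf"), function signatures, and file names are assumptions that should be checked against the repository, and the dummy input cloud exists only so the snippet is self-contained:

```python
import numpy as np
import torch

# NOTE: the imports, config name ("sdf"), and function signatures below are
# taken from memory of the openai/point-e repository and may not match the
# current code exactly; verify against the repo before use.
from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint
from point_e.util.pc_to_mesh import marching_cubes_mesh
from point_e.util.point_cloud import PointCloud

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the model that turns point clouds into meshes by predicting an SDF.
sdf_model = model_from_config(MODEL_CONFIGS["sdf"], device)
sdf_model.eval()
sdf_model.load_state_dict(load_checkpoint("sdf", device))

# Stand-in cloud so the snippet is self-contained; in practice `pc` would be
# the output of Point-E's sampler (see the text-to-point-cloud sketch below).
coords = np.random.uniform(-0.5, 0.5, size=(4096, 3)).astype(np.float32)
channels = {k: np.ones(4096, dtype=np.float32) for k in ("R", "G", "B")}
pc = PointCloud(coords=coords, channels=channels)

# Query the SDF on a voxel grid and extract a surface with marching cubes.
mesh = marching_cubes_mesh(
    pc=pc,
    model=sdf_model,
    batch_size=4096,
    grid_size=32,      # coarser grids are faster but can look blocky
    progress=True,
)

with open("mesh.ply", "wb") as f:
    mesh.write_ply(f)
```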


Here are the two models that make up Point-E:

  1. A text-to-image model

  2. An image-to-3D model

The mesh-generating model is a separate, standalone component.


The text-to-image model — trained on labeled images to understand the associations between visual concepts and words — is similar to systems that generate art like DALL-E 2 and Stable Diffusion.


The second model, image-to-3D, was trained on a set of images paired with 3D objects, learning to translate between them.
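For readers who want to try the released code, the openai/point-e repository includes example notebooks for sampling point clouds from a text prompt. The sketch below is adapted from memory of that text-to-point-cloud example and uses what appears to be the repository’s text-conditioned shortcut model rather than the full text-to-image-to-3D pipeline described above; the module paths, config names, prompt, and arguments are assumptions that should be checked against the repository:

```python
import torch
from tqdm.auto import tqdm

# NOTE: module paths, config names, and argument names below follow my reading
# of the openai/point-e text-to-point-cloud example and may differ from the
# current code base; treat this as a sketch, not a verified recipe.
from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
from point_e.diffusion.sampler import PointCloudSampler
from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Base model: text-conditioned diffusion that produces a coarse 1,024-point cloud.
base_name = "base40M-textvec"
base_model = model_from_config(MODEL_CONFIGS[base_name], device)
base_model.eval()
base_model.load_state_dict(load_checkpoint(base_name, device))
base_diffusion = diffusion_from_config(DIFFUSION_CONFIGS[base_name])

# Upsampler: refines the coarse cloud to 4,096 points.
upsampler_model = model_from_config(MODEL_CONFIGS["upsample"], device)
upsampler_model.eval()
upsampler_model.load_state_dict(load_checkpoint("upsample", device))
upsampler_diffusion = diffusion_from_config(DIFFUSION_CONFIGS["upsample"])

sampler = PointCloudSampler(
    device=device,
    models=[base_model, upsampler_model],
    diffusions=[base_diffusion, upsampler_diffusion],
    num_points=[1024, 4096 - 1024],
    aux_channels=["R", "G", "B"],
    guidance_scale=[3.0, 0.0],
    model_kwargs_key_filter=("texts", ""),  # only the base model sees the prompt
)

prompt = "a red motorcycle"  # illustrative prompt only

# Run the diffusion sampling loop and keep the final step's output.
samples = None
for x in tqdm(sampler.sample_batch_progressive(batch_size=1, model_kwargs=dict(texts=[prompt]))):
    samples = x

pc = sampler.output_to_point_clouds(samples)[0]
print(pc.coords.shape)  # expected: (4096, 3)
```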


Point-E could be used in animation and game development

OpenAI researchers note that Point-E’s point clouds could be used to fabricate real-world objects, for example through 3D printing.
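As a rough illustration of that printing workflow, the sketch below assumes the mesh from the earlier conversion step was saved as "mesh.ply" and uses the third-party trimesh library (not part of Point-E) to export it to STL, the format most slicer software expects:

```python
import trimesh  # third-party mesh-processing library, not part of Point-E

# Load the mesh written by the earlier point-cloud-to-mesh sketch
# ("mesh.ply" is the hypothetical filename used there).
mesh = trimesh.load("mesh.ply")

# 3D printing generally needs a watertight (hole-free) surface; Point-E's
# occasionally blocky or incomplete meshes may fail this check and need repair.
print("watertight:", mesh.is_watertight)

# Export to STL for slicing and printing.
mesh.export("model.stl")
```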


Once more polished, the system could also find its way into animation and game development, thanks to its additional mesh-generating model.


While 2D art generators currently get most of the attention, model-synthesizing AI could be the next big thing, as 3D models are used in fields like science, TV, interior design, engineering, and architecture.


For example, architectural firms use 3D models to demo proposed landscapes and buildings, while engineers use them as designs for new vehicles, devices, and structures.


Point-E Isn’t a Flawless AI System

According to the OpenAI team, Point-E isn’t perfect because its image-to-3D model sometimes fails to understand the image generated by the text-to-image model, resulting in a shape that doesn’t match the text.


However, it is faster than the previous state of the art. As the team writes in the paper:

“While our method performs worse on this evaluation than state-of-the-art techniques, it produces samples in a small fraction of the time.”


They went on to say:

“This could make it more practical for certain applications, or could allow for the discovery of higher-quality 3D objects.”


Besides these failure modes, the team also expects Point-E to suffer from other problems, like the lack of safeguards against generating “dangerous objects” and biases inherited from the training data.


Copyright infringement is another big issue set to face Point-E

Getty Images recently banned AI-generated art over copyright concerns, a potential issue Point-E could face in time and one that neither the Point-E paper nor its GitHub repository addresses.


While Getty Images is a marketplace for photos and artwork, 3D artists can sell the content they create on several other online marketplaces like CreativeMarket and CGStudio.


The system doesn’t give credit to the artists

If models generated by Point-E make it to these marketplaces, 3D artists might try to protect their intellectual property, products, and brands by pointing to evidence that generative AI borrows heavily from its training data.


Like DALL-E, Point-E doesn’t cite or credit any artists who might have influenced its generations.





Is AI going to take over the world of 3D modeling, or is it just a gimmick? Let us know what you think down below!


