OpenAI releases Point-E, an AI that generates 3D models

3D model generators may be the next big thing in artificial intelligence. This week, OpenAI released Point-E, a machine learning system that generates a 3D object from a text prompt, as open-source software. According to the accompanying paper, Point-E can produce 3D models in under two minutes on a single Nvidia V100 GPU.
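
For anyone who wants to experiment, the open-sourced repository ships pretrained checkpoints alongside sampling utilities. The sketch below follows the pattern of the repository’s text-to-point-cloud example; the configuration names (“base40M-textvec”, “upsample”) and helpers are taken from the release as of this writing and may change.

```python
# Minimal text-to-point-cloud sketch, modelled on the example in the
# open-sourced point-e repository; config/checkpoint names may change.
import torch

from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
from point_e.diffusion.sampler import PointCloudSampler
from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# A text-conditioned base model, plus an upsampler that densifies the cloud.
base_model = model_from_config(MODEL_CONFIGS['base40M-textvec'], device)
base_model.eval()
base_model.load_state_dict(load_checkpoint('base40M-textvec', device))

upsampler = model_from_config(MODEL_CONFIGS['upsample'], device)
upsampler.eval()
upsampler.load_state_dict(load_checkpoint('upsample', device))

sampler = PointCloudSampler(
    device=device,
    models=[base_model, upsampler],
    diffusions=[diffusion_from_config(DIFFUSION_CONFIGS['base40M-textvec']),
                diffusion_from_config(DIFFUSION_CONFIGS['upsample'])],
    num_points=[1024, 4096 - 1024],         # coarse cloud, then densified
    aux_channels=['R', 'G', 'B'],           # Point-E predicts colours too
    guidance_scale=[3.0, 0.0],
    model_kwargs_key_filter=('texts', ''),  # only the base model sees the text
)

# Run the diffusion sampler to completion; the last iterate is the result.
samples = None
for x in sampler.sample_batch_progressive(
        batch_size=1, model_kwargs=dict(texts=['a red motorcycle'])):
    samples = x
pc = sampler.output_to_point_clouds(samples)[0]  # a coloured point cloud
```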

Point-E does not produce “real” 3D objects in the conventional sense. Instead, it generates point clouds: “clouds” of discrete data points in space that collectively describe a 3D shape. (The “E” in Point-E stands for “efficiency,” because the system is meant to be faster than conventional approaches to 3D object generation.) Point clouds are computationally cheap to synthesise, but that economy comes with a major shortcoming: a point cloud cannot capture an object’s fine-grained shape or texture.
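
Concretely, a point cloud is just an unordered set of points in space, optionally carrying per-point attributes such as colour. The toy values below are invented purely for illustration:

```python
import numpy as np

# An N x 6 point cloud: XYZ coordinates plus RGB colour for each point.
cloud = np.array([
    #  x     y     z     r    g    b
    [0.00, 0.00, 0.00, 0.9, 0.1, 0.1],
    [0.10, 0.00, 0.05, 0.9, 0.2, 0.1],
    [0.05, 0.12, 0.02, 0.8, 0.1, 0.2],
    [0.02, 0.05, 0.15, 0.9, 0.1, 0.1],
])
xyz, rgb = cloud[:, :3], cloud[:, 3:]

# The shape is defined only collectively: there are no edges, faces or
# surfaces, which is why fine-grained form and texture are hard to encode.
print(xyz.mean(axis=0))  # centroid of the (tiny) cloud
```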

To work around this limitation, the Point-E team trained a second AI system to convert Point-E’s point clouds into meshes, the collections of vertices, edges and faces commonly used in 3D modelling and design. The researchers acknowledge, however, that the conversion sometimes misses relevant parts of an object, producing inaccurate or distorted shapes.
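
This point-cloud-to-mesh step ships with the repository as a separate model that regresses signed distances, followed by a marching-cubes pass. The sketch below again follows the repo’s own example; the model name (“sdf”) and the helper function are taken from the release and may differ:

```python
# Point cloud -> mesh, modelled on the open-source pointcloud2mesh example.
import torch

from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint
from point_e.util.pc_to_mesh import marching_cubes_mesh

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

sdf_model = model_from_config(MODEL_CONFIGS['sdf'], device)
sdf_model.eval()
sdf_model.load_state_dict(load_checkpoint('sdf', device))

# 'pc' is a point cloud sampled as in the earlier sketch; marching cubes
# extracts a surface (vertices and faces) from the model's predicted SDF.
mesh = marching_cubes_mesh(pc=pc, model=sdf_model, batch_size=4096,
                           grid_size=32, progress=True)
with open('mesh.ply', 'wb') as f:
    mesh.write_ply(f)
```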

Beyond the standalone mesh-generating model, Point-E consists of two models: a text-to-image model and an image-to-3D model. The text-to-image model, a relative of generative art systems such as DALL-E 2 and Stable Diffusion, was trained on labelled images to learn the associations between words and visual concepts. The image-to-3D model, by contrast, was trained on a collection of 3D objects paired with images of them, so that it learned to translate between the two representations.

Given a text prompt, for example “a 3D printed gear, a single gear 3 inches in diameter and half an inch thick,” Point-E’s text-to-image model produces a synthetic rendered view of the object. That render is then fed to the image-to-3D model, which generates a point cloud.
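
Put schematically, the whole system is two stages glued together. The stubs below are hypothetical stand-ins for the two models, not Point-E’s real API; they exist only to make the data flow explicit:

```python
# Hypothetical stand-ins for Point-E's two stages; illustrative only.
import numpy as np

def text_to_image(prompt: str) -> np.ndarray:
    """Stage 1 stand-in: a text-to-image diffusion model would return a
    synthetic rendered view (H x W x 3) of the described object."""
    return np.zeros((64, 64, 3))  # placeholder render

def image_to_point_cloud(image: np.ndarray) -> np.ndarray:
    """Stage 2 stand-in: an image-conditioned model would return an
    N x 6 coloured point cloud (XYZ + RGB)."""
    return np.zeros((4096, 6))  # placeholder cloud

def text_to_3d(prompt: str) -> np.ndarray:
    return image_to_point_cloud(text_to_image(prompt))

cloud = text_to_3d("a 3D printed gear, a single gear 3 inches in "
                   "diameter and half an inch thick")
print(cloud.shape)  # (4096, 6)
```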

Because its models were trained on a dataset of “several million” 3D objects and associated metadata, Point-E can generate coloured point clouds that frequently match text prompts. It isn’t flawless: Point-E’s image-to-3D model sometimes misinterprets the image it receives from the text-to-image model, producing a shape that doesn’t match the prompt. Even so, the OpenAI researchers maintain that their method is orders of magnitude faster than the previous state of the art.

“While our method performs worse on this evaluation than state-of-the-art techniques, it produces samples in a small fraction of the time,” they wrote in the paper. “This could make it more practical for certain applications, or could allow for the discovery of higher-quality 3D objects.”

What might this be used for? The OpenAI team notes that Point-E’s point clouds could be used to fabricate physical objects, for instance through 3D printing. And once the mesh-converting model is more polished, the technique could find its way into game and animation production pipelines.

OpenAI is hardly the first company to build a 3D object generator, only the latest. Google unveiled Dream Fields, a generative 3D system, back in 2021, and followed it this year with DreamFusion, an improved version. Unlike Dream Fields, DreamFusion can build 3D representations of objects without requiring 3D training data.

The spotlight may be on 2D art generators for now, but model-synthesising AI could shake up a far wider set of industries before long. 3D models are used extensively in film and television, interior design, architecture and many scientific fields: architecture and landscaping firms use them to showcase proposed buildings and grounds, while engineers use them to prototype new tools, machinery and structures.