OpenAI reveals an AI tool for creating 3D models

OpenAI, the AI research lab, has just revealed its newest system, one that could significantly speed up the generation of 3D models.

OpenAI, creator of the DALL-E text-to-image generator, has developed a new system, POINT-E, that converts text prompts into 3D point clouds.

In contrast to existing techniques, which can take hours and require several GPUs, POINT-E “produces 3D models in about 1-2 minutes on a single GPU,” according to the paper released by OpenAI.

It’s OpenAI’s POINT-E

Here’s a snippet from the study that explains where POINT-E stands in the realm of 3D modelling right now:

Although our approach is not yet state-of-the-art in terms of sample quality, it is one to two orders of magnitude quicker to sample from, making it a viable compromise in certain applications.

POINT-E first uses a text-to-image diffusion model to generate a single synthetic view of the object; a second model then conditions on that image to produce the 3D point cloud. The paper goes on to discuss the trade-off involved: a point cloud is quicker to synthesise, and so lighter on GPUs, but it captures less fine detail than other 3D representations.
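To make the output format concrete: a point cloud is simply a set of XYZ samples on an object's surface, each optionally carrying an RGB colour. The sketch below (an illustration of the data structure, not of POINT-E's actual code) builds a coarse 1,024-point cloud and writes it to the common ASCII PLY format that most 3D tools can open; the point count and the `save_point_cloud_ply` helper are illustrative choices, not something specified by OpenAI.

```python
import math
import random

def save_point_cloud_ply(points, colors, path):
    """Write an ASCII PLY file: one XYZ position plus an RGB colour per point."""
    header = [
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "property uchar red",
        "property uchar green",
        "property uchar blue",
        "end_header",
    ]
    with open(path, "w") as f:
        for line in header:
            f.write(line + "\n")
        for (x, y, z), (r, g, b) in zip(points, colors):
            f.write(f"{x:.6f} {y:.6f} {z:.6f} {r} {g} {b}\n")

# Example: a coarse 1,024-point unit sphere, a point count of the same
# order of magnitude as the low-resolution clouds described in the paper.
random.seed(0)
points, colors = [], []
for _ in range(1024):
    v = [random.gauss(0, 1) for _ in range(3)]
    n = math.sqrt(sum(c * c for c in v)) or 1.0
    x, y, z = (c / n for c in v)  # project onto the unit sphere
    points.append((x, y, z))
    # colour each point by its position, mapped from [-1, 1] to [0, 255]
    colors.append(tuple(int((c + 1) / 2 * 255) for c in (x, y, z)))

save_point_cloud_ply(points, colors, "sphere.ply")
```

Because the representation is just a list of surface samples rather than a watertight mesh, thin structures can end up under-sampled, which is exactly the failure mode discussed below.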

Even though a secondary model has been trained to mitigate this limitation, the paper admits that it can still “occasionally overlook thin/sparse sections of objects,” such as the stems of a plant, giving flowers the misleading appearance of floating in midair.

OpenAI says it trained the system on millions of 3D models and their associated metadata, though its practical applications remain limited for now.

Rendering real-world objects for 3D printing is one obvious use, but as the technology matures it is likely to find utility in more demanding contexts such as PC and console games and even broadcast television.