Blender can now use AI to create images and effects from text descriptions


Even 3D modelling software is using AI art generators. Stability AI has introduced a Stability for Blender tool that, as the name implies, brings Stable Diffusion’s image creation tech to the open-source 3D tool. You can create AI-based textures, effects and animations, whether using source material from your renders or nothing more than a text description. You may not need to be (or hire) a skilled 2D artist to put the finishing touches on a project.

Stability for Blender requires an API (application programming interface) key and an internet connection, but it’s free to use. It doesn’t require any software dependencies or a dedicated GPU, since the image generation runs on Stability’s servers rather than your machine. This might help if you need to complete some texture or video work on a laptop that isn’t as robust as your main workstation.
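To give a sense of why only a key and a connection are needed, here is a minimal Python sketch of a text-to-image request against Stability AI’s hosted REST API. This is an illustration of the hosted service, not the add-on’s own code, and the engine ID, prompt and parameters are assumptions chosen for the example.

```python
import base64
import os

import requests  # third-party HTTP library: pip install requests

# The same kind of key the Blender add-on asks for, read from the environment.
API_KEY = os.environ["STABILITY_API_KEY"]
ENGINE_ID = "stable-diffusion-v1-5"  # assumed engine; others are available

# Ask the hosted API to generate one 512x512 image from a text prompt.
response = requests.post(
    f"https://api.stability.ai/v1/generation/{ENGINE_ID}/text-to-image",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
        "Accept": "application/json",
    },
    json={
        "text_prompts": [{"text": "weathered bronze surface, seamless texture"}],
        "width": 512,
        "height": 512,
        "steps": 30,
        "samples": 1,
    },
    timeout=120,
)
response.raise_for_status()

# The API returns generated images as base64 strings; decode and save them.
for i, artifact in enumerate(response.json()["artifacts"]):
    with open(f"texture_{i}.png", "wb") as f:
        f.write(base64.b64decode(artifact["base64"]))
```

Because all of the heavy lifting happens remotely, the add-on itself only needs to send prompts (and optionally your render as source material) and receive images back, which is why a modest laptop is enough.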

The addition theoretically saves time and money by streamlining your workflow, and Stability says it can also help you create truly custom content. It’s most likely to be useful if you were already planning to use AI-generated art, as it saves you from jumping between apps and services.

This isn’t likely to give Stable Diffusion a major advantage over rivals like OpenAI’s DALL-E, and it won’t create 3D objects from scratch; you’ll need a tool like Point-E for that. However, it does hint at a way AI image generation can help creatives with less risk of copyright issues: if Stability for Blender relies solely on your own renders for source material, you shouldn’t have to worry about legal trouble.
