New AI research tool turns ideas into art

Imagine creating a digital painting without ever picking up a paintbrush, or instantly generating storybook illustrations to accompany your words. Today, we’re showcasing an exploratory artificial intelligence (AI) research concept called Make-A-Scene that will empower people to bring their visions to life.

Make-A-Scene allows users to create images using text prompts and free-form sketches. Earlier image-generating AI systems typically used textual descriptions alone as input, but the results could be difficult to predict. For example, entering the text “a painting of a zebra riding a bicycle” may not produce an image that matches exactly what you imagined; the bike may be turned sideways, or the zebra may be too big or too small.

With Make-A-Scene, this is no longer the case. It shows how people can use both text and simple drawings to convey their visions with greater specificity using a variety of elements.

Make-A-Scene captures the scene layout to enable nuanced sketches as input. It can also generate its own layout from a text-only prompt, if that’s what the creator chooses. The model focuses on learning key aspects of the imagery that are more likely to be important to the creator, such as objects or animals.
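
To make that interaction concrete, here is a minimal, purely illustrative Python sketch of the workflow described above. Make-A-Scene’s actual implementation is not part of this post, so every name below (SceneLayout, layout_from_sketch, generate_image) is a hypothetical stand-in; the point is simply that a text prompt can drive generation on its own, or be combined with a coarse layout derived from a free-form drawing.

```python
# Purely illustrative sketch of the text + sketch workflow.
# All names here are hypothetical; Make-A-Scene's real API is not public in this post.

from dataclasses import dataclass
from typing import Optional


@dataclass
class SceneLayout:
    """A coarse, segmentation-style layout: one label per grid cell (hypothetical)."""
    labels: list[list[str]]  # e.g. [["sky", "sky"], ["zebra", "bicycle"]]


def layout_from_sketch(strokes: list[tuple[str, list[tuple[int, int]]]]) -> SceneLayout:
    """Stand-in for the step that turns labeled free-form strokes into a scene layout.
    Here we simply place each labeled stroke into a 2x2 grid by its first point."""
    grid = [["background", "background"], ["background", "background"]]
    for label, points in strokes:
        x, y = points[0]
        grid[min(y // 128, 1)][min(x // 128, 1)] = label
    return SceneLayout(labels=grid)


def generate_image(prompt: str, layout: Optional[SceneLayout] = None) -> str:
    """Stand-in for the generative model: a text-only prompt works on its own,
    while an optional layout constrains where things appear in the scene."""
    if layout is None:
        return f"image from text only: {prompt!r}"
    flat = ", ".join(cell for row in layout.labels for cell in row)
    return f"image from text {prompt!r} constrained by layout [{flat}]"


if __name__ == "__main__":
    # Text-only: the model invents its own layout.
    print(generate_image("a painting of a zebra riding a bicycle"))

    # Text + sketch: the creator pins down where the zebra and the bicycle go.
    sketch = [("zebra", [(40, 200)]), ("bicycle", [(200, 200)])]
    print(generate_image("a painting of a zebra riding a bicycle",
                         layout=layout_from_sketch(sketch)))
```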

Empowering creativity for artists and non-artists

As part of our research and development process, we’ve shared access to our Make-A-Scene demo with AI artists including Sofia Crespo, Scott Eaton, Alexander Reben, and Refik Anadol.

Crespo, a generative artist who focuses on the intersection of nature and technology, used Make-A-Scene to create new hybrid creatures. Using the tool’s free-form drawing capabilities, she found she could begin iterating on new ideas quickly.

“As a visual artist, sometimes you just want to be able to create a basic composition by hand, draw a story for the eye to follow, and this allows just that.” — Sofia Crespo, AI Artist

A GIF of the output generated by Make-A-Scene from artist Sofia Crespo’s text prompt, “a painting of an alien jellyfish with flower petals at night.”

Make-A-Scene isn’t just for artists – we think it could help everyone express themselves better. Andy Boyatzis, program manager at Meta, used Make-A-Scene to generate art with his children, aged two and four. They used playful drawings to bring their ideas and imaginations to life.

“If they wanted to draw something, I would just say, ‘What if…?’ and it led them to create some wild things, like a blue giraffe and a rainbow airplane. It just shows the limitlessness of what they could imagine.” — Andy Boyatzis, Program Manager, Meta

An image of the output generated by Make-A-Scene via the text prompt "a monster robot bear on a train."

Building the next generation of creative AI tools

It is not enough for an AI system to generate content. To realize AI’s potential to advance creative expression, people must be able to shape and control the content a system generates. These tools should be intuitive and easy to use so people can take advantage of whichever modes of expression work best for them, whether that’s speech, text, gestures, eye movements, or even sketches, to bring their vision to life.

Through projects like Make-A-Scene, we continue to explore how AI can expand creative expression. We are making progress in this area, but this is only the beginning. We will continue to push the boundaries of what is possible using this new class of generative creative tools to create methods for more expressive messaging in 2D, mixed reality, and virtual worlds.
