- Canva has blocked its AI tools from generating images of political candidates and content based on medical terms.
- CEO Melanie Perkins told The Verge that the decision was aimed at preventing harmful or inappropriate content.
- Canva’s AI policies appear to be more artist-friendly than those of Adobe and Meta, which have faced backlash.
Design giant Canva has drawn clear lines about what can and can’t be created with its AI tools.
Canva’s AI feature, Magic Media, doesn’t work with medical or political terms because that content could be harmful or inappropriate, CEO Melanie Perkins said in an interview with The Verge published Monday. Canva’s software can be used to create everything from party invitations to social media content to presentation templates.
“Canva is designed to be a platform where users can turn their ideas into designs, some of which we shouldn’t generate,” said Perkins, co-founder of the 11-year-old company.
For example, if the tool is asked to create an image of a political candidate, it simply tells the user, “You can’t do that,” Perkins said.
Users can still create their own designs on the platform, including political and health content.
Canva also does not permit the use of AI to generate contracts, legal or financial advice, spam, or adult content, per the terms of use for its AI products.
The company also has a clear policy on AI scraping: According to a company blog post, Canva will not train its AI on creators’ content without permission, and users can always opt out of having their designs used to train the AI.
A Canva spokesperson told Business Insider that all users are opted out by default from having their private design content used to train AI models.
The company set up a $200 million fund last year to pay users who take part in AI training over the next three years.
Canva’s stance on AI is in stark contrast to that of other content creation giants Adobe and Meta, which have come under fire within the creative community in recent months.
Last month, Meta faced backlash from artists over its use of photos published on Instagram and Facebook to train an artificial intelligence model. Several artists told BI they were moving to platforms like Cara, which bans the use of AI. Meta did not respond to a request for comment at the time.
Around the same time, artists protested after Adobe asked users to accept updated Terms of Use, which led some to wonder whether their art and content would be used to train AI. Many artists boycotted Adobe, driving increased sign-ups for alternatives such as Linearity and Affinity, which Canva acquired earlier this year.
At the time, Adobe said in a blog post that the content belongs to its users and is never used to train generative AI tools.
An Adobe spokesperson referred BI to the company's AI guidelines, which instruct users not to create hateful or adult content or seek medical advice from its AI features, without addressing whether such content can even be generated.