The future of the generative AI hype cycle is unclear, especially since Goldman Sachs released a report questioning the actual value of AI tools. Still, whether built with generative AI or hyped-up machine learning, these tools and platforms are flooding the market. In response, agencies are validating these tools and platforms through sandboxes (safe, isolated, and controlled spaces for testing), in-house AI task forces, and client engagements.
While artificial intelligence itself has been around for a long time, the industry launched an arms race into generative AI last year, with the hope that the technology would make marketers’ jobs easier and more efficient. But the jury is still out on this promise, as generative AI is still in its early stages and faces challenges such as hallucinations, bias, and data security (not to mention the energy costs associated with AI). What’s more, AI companies hold large amounts of data, which raises concerns about hacking.
“There are a lot of ad platforms out there. Everything is AI now. But is it really? We spend a lot of time vetting it and thinking carefully beforehand,” said Tim Lippa, global chief product officer at marketing company Assembly.
Today, generative AI has moved beyond large language models like OpenAI’s ChatGPT to permeate everything from search features on Google and social media platforms to image creation. Agencies are also rolling out their own AI experiences for both in-house use and client-facing work. For example, in April, Digitas rolled out its own generative AI operating system for clients, Digitas AI. (You can find a comprehensive timeline of generative AI’s breakout year here.)
For all the noise around generative AI, it’s all still in the testing phase, agency executives say. It’s worth remembering that some AI efforts are aimed more at grabbing immediate attention, or at easing executives’ fear of missing out on generative AI, than at delivering results.
“Some of these solutions still have open questions around [intellectual property] and copyright and how to protect it, and whether you can disclose the datasets that you’re using or training on,” said Elav Horowitz, executive vice president and global head of applied innovation at McCann Worldgroup. Recall, for example, that OpenAI’s chief technology officer Mira Murati made headlines in March when she declined to reveal details about the data used to train Sora, OpenAI’s text-to-video generation tool.
Horowitz said one of the big problems with generative AI is hallucinations, an issue that has come up time and time again, and he said McCann has been talking to OpenAI to find out exactly what the company is doing to address it.
McCann has signed enterprise-level contracts with leading platforms in the field, including ChatGPT, Microsoft Copilot, Claude.ai and Perplexity AI, all of which have been vetted as safe environments by the company’s legal, IT and finance departments (financial details of these contracts have not been made public). Only once a platform is deemed safe is the solution made available to internal stakeholders. Horowitz added that the company also builds its own sandbox environments on its own servers to keep sensitive information secure before signing on with AI partners.
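To make that vetting concrete: a gate of the kind agencies describe, pairing an approved-vendor allowlist with scrubbing of sensitive data before anything leaves the sandbox, might look something like the minimal Python sketch below. The platform names, redaction patterns, and function are illustrative assumptions, not any agency’s actual system.

```python
import re

# Hypothetical allowlist: platforms that legal, IT, and finance have signed off on.
APPROVED_PLATFORMS = {"chatgpt-enterprise", "microsoft-copilot", "claude", "perplexity"}

# Simple patterns for obviously sensitive strings (emails, US-style phone numbers).
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
]

def sandbox_prompt(platform: str, prompt: str) -> str:
    """Refuse unapproved vendors and scrub sensitive data before anything leaves the sandbox."""
    if platform not in APPROVED_PLATFORMS:
        raise PermissionError(f"{platform} has not cleared legal/IT/finance review")
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt  # now safe to forward to the approved platform's API

print(sandbox_prompt("claude", "Draft copy for jane.doe@client.com, cell 555-123-4567"))
# Draft copy for [EMAIL], cell [PHONE]
```

A real deployment would use far more robust PII detection than two regexes, but the shape is the same: the allowlist and the scrubbing sit between employees and the vendor’s API.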
McCann is also currently testing Adobe Custom Models, Adobe’s content creation tool. “You can actually use your own visual assets as part of it. It’s safe and secure because it’s trained on your own data. And we know it can be used commercially,” Horowitz said. He added that the agency sources its own data through research and client information.
Razorfish is in a similar situation, having inked deals with major platforms to sandbox its own data and that of its clients. Christina Lawrence, evp of consumer and content experience at Razorfish, said the company has an approved vendor list to ensure that the AI platforms it works with are trained only on licensed or royalty-free assets.
“Or they should ensure that sensitive data used in the tool is not fed back as training material for large language models, which we all know happens,” she added.
Going a step further than sandboxing, Razorfish is introducing legal protections that require clients to acknowledge that generative AI is being used in their work. “You have to understand that there are multiple layers of checks in place, because this is something very new and we want to be completely open and transparent,” Lawrence said.
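For illustration only, an acknowledgment of the kind Razorfish describes could be logged with a simple record like the sketch below. The fields, class name, and example values are hypothetical; Razorfish has not published its actual process or tooling.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDisclosure:
    """One client acknowledgment that generative AI was used on a deliverable."""
    client: str
    deliverable: str
    model_used: str
    acknowledged_by: str  # the client-side signatory
    acknowledged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def record(self) -> str:
        """Flatten the acknowledgment into a single auditable log line."""
        return (f"{self.acknowledged_at.isoformat()} | {self.client} | "
                f"{self.deliverable} | {self.model_used} | signed: {self.acknowledged_by}")

entry = AIDisclosure("AcmeCo", "Q3 banner concepts", "image-generation model",
                     acknowledged_by="j.smith@acmeco.example")
print(entry.record())
```

The point of such a record is auditability: if questions about AI-generated work arise later, the agency can show which client signed off on which deliverable, and when.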
Again, generative AI is still new territory for marketers. Tools like ChatGPT were only recently released to the public, and platforms are learning as the technology advances and changes, Lawrence said. There is still no societal consensus on how AI should be regulated; lawmakers have only recently begun weighing the issue, with concerns over privacy, transparency, and copyright protection.
Until that consensus is reached, the onus is on brands and their agency partners to set guardrails and parameters that ensure data security and scalability, and to address the inherent bias in AI, agency executives said.
“What I’m always concerned about is making sure that whatever comes out of the imagery and the creative side has the right number of fingers and toes and everything in between,” Lippa said. “For the past year, people have slapped the AI label on everything they do, and sometimes that’s true and sometimes it’s not.”