
If your agency or marketing team is anything like Borshoff, you’ve got thoughts about AI. If you’re a lot like us, you’re curious, intrigued and maybe a little uncertain about how to leverage these new powers. Our team has embraced the technology, and we enjoy experimenting with Generative AI in just about every capacity, from content to imagery to strategy briefs to research and discovery. There is real value in, and demand for, this technology, but how do we get the most out of it while still making authentic human connections? How can it help us inspire, move and motivate real human behavior?

In discussing Generative AI’s strengths and weaknesses with friends and colleagues in marketing tech, as well as with agencies and brands, we’ve noticed a shift: What started as remarkably spirited, borderline polarizing arguments has mellowed a bit… now we’re talking more collaboratively about the best ideas and use cases for getting the most out of this technology.

While we continue to adopt, learn and develop our Generative AI skills to help brands connect with humans in meaningful ways, we’re finding it important to document our thoughts on responsible usage as we continue doing what we do best: Creating meaningful connections between people, and doing work that helps brands make a positive impact on the world.

These are our guidelines. What are yours? How are you getting the most out of Generative AI to help make human connections? Please do share your thoughts. Let’s collaborate and debate.

WE WILL NOT

use large language models (LLMs), artificial intelligence (AI) image generators, or any other type of AI to generate – or attempt to generate – completed original content for the agency or our clients. We’re proud of what we make, and we genuinely enjoy making it, so the idea of passing off a robot’s work as our own just strikes us as not only cheating, but boring.

WE WILL

on occasion use AI output as a starting point to explore potential copy, design and imagery directions, or as a jumping-off point for research and briefs.

BUT ONLY

after we’ve done initial ideation using our squishy, inventive, unpredictable human brains, because that’s where truly unexpected ideas actually come from.*

WE WILL ALSO

diligently fact-check, verify, validate, scrutinize, modify and transform anything AI spits out, because sometimes those robots are just crazy.

WE WILL

generally treat AI as if it were a very smart, hyper-fast intern: an entity with amazing capacity for research and imitation, but little to no real-world experience or perspective, who almost always makes mistakes to one degree or another. Such an intern is unquestionably valuable to have around, but not exactly fit to produce work without oversight, present to clients, or make key decisions.

WE WILL NOT

…heaven forbid, use AI to write a condolence letter after a tragic event, or a legal brief that will go before a real human judge. Not even if some crisis arises that catches everyone off guard. In fact, especially not if a crisis arises that catches everyone off guard. We also won’t use AI to generate imagery that represents specific people, products, locations, etc.

WE WILL NOT

overlook or underestimate the potential of this new tool, and we will continually explore its expanding capabilities.

WE WILL

protect the agency and our clients by staying vigilant about copyright and other laws concerning AI, as well as related ethical concerns about intellectual property.

Have thoughts or want to discuss? Let’s connect.

*Even the robots know they’re unoriginal. In response to the prompt “What are some examples of AI writing erroneous content?” on June 21, 2023, the AI included this statement: “AI writing tools can sometimes generate text that is very similar to other text that it has been trained on. This can happen because the AI is simply copying the patterns of the text that it has been trained on, rather than generating truly unique content.” As of now, anyway, that’s still our job.