Why your values still matter more than your prompts

AI makes output cheap. Trust is the expensive part now. If your marketing standards live inside someone’s “magic prompt”, you’re one rushed draft away from a brand mess. This is why values matter more than prompts.
Georgina

Georgina brings clarity and strategy to everything she writes. She creates content that feels good to read and gets results. She also supports our copy team, sharing ideas and helping them sharpen their skills. Outside of work, she’s usually deep in a Dungeons & Dragons campaign or fussing over her cat.

If you’re Googling “best AI prompts for marketing”, you’re not alone. You want output that doesn’t embarrass the brand or create a messy risk you only discover when a complaint lands. 

Marketing suddenly feels easy to produce and weirdly hard to trust. When AI makes words cheap, the cost shifts elsewhere. Into credibility. Into compliance. Into that gut-drop moment when you realise the “helpful draft” slipped into the wild with a claim you can’t back up. 

So yes, prompts matter. They can save time. They can even make work better when used for research. But prompts are not the thing that keeps your marketing steady when output is fast and plentiful.  

“AI prompts” aren’t your marketing strategy

Most teams chasing “the perfect prompt” are really chasing a shortcut around decisions they haven’t made yet. 

What do we stand for in this market? 

What’s our line on exaggeration? 

What counts as proof? 

What do we refuse to do, even if competitors do it? 

If those answers are unclear, your prompts will be unclear too. When the prompt is vague, AI fills in the blanks with generic language. That’s how you end up with copy that sounds fine but could belong to anyone. 

Ethan Mollick (a professor who writes a lot about practical AI use) makes a useful point here: calling it “prompt engineering” makes it sound predictable, like there’s a reliable formula. But we’re not there. People are still arguing about what works consistently, across tools and contexts. So the “magic prompt” idea is more comforting than true. 

Google is blunt about this too. It warns that AI-generated search features can be wrong: “AI Overviews can and will make mistakes.” If the platform serving the answers is warning you upfront, it’s a sign not to base your brand on a few clever instructions. 

Values are what you refuse to do

Values show up most clearly in the things you refuse to do:  

  • The claims you won’t make unless you can prove them 
  • The things you won’t imply with an image, even if you could get away with it 
  • The tone you won’t slip into just to sell faster 
  • The customer data you won’t feed into tools, even if it would help the output

This is why values matter more when AI speeds things up.  

When work is slower, judgment happens naturally because humans have time to think. When work is fast, you need a shared standard that travels with the work. Otherwise the “standard” becomes whoever wrote the last prompt. 

Why prompts break when values are missing

Prompting fails in marketing because it can’t replace judgment.  

One person prompts for “bold”. Another prompts for “professional”. Someone else prompts for “salesy but not salesy”. You end up with a brand that shape-shifts by channel and by mood. The work looks inconsistent because the decision-making is inconsistent. 

You also get “plausible nonsense”, which is the most dangerous kind. OpenAI’s own research on hallucinations describes the problem: “models can confidently generate an answer that isn’t true.” In marketing that shows up as: 

  • invented proof 
  • shaky stats 
  • made-up customer pain 
  • confident claims that don’t match reality 

Then the team blames the tool. But the tool didn’t decide what “good” looks like. You didn’t either. The tool simply exposed the gap. 

AI and brand trust are now tied together

Trust used to be the soft bit you talked about in brand workshops. Now it is a buying factor people actively use to filter options. 

Edelman’s 2025 research on brand trust says: “Trust is as much of a purchase consideration as quality and price.”  

AI raises the stakes because it multiplies output. If you publish twice as much, you double the surface area for mistakes. Not just factual mistakes, but trust mistakes. The too-confident “results” language. The case study that reads like it was written by a stranger. 

Once you start feeling like you need to publish constantly to keep up, it’s tempting to let the machine drive. That’s the moment values stop being a brand exercise and become an operating system. 

AI in marketing compliance is no longer theoretical

In the UK, the ASA has said there is “no blanket legal requirement” to disclose AI use in ads. That doesn’t mean “do whatever you want”. It means the rules you already have still apply.

Misleading claims are still misleading. If AI makes it easier to generate realistic content quickly, it also makes it easier to generate the kind of content that gets you in trouble quickly.

If you’re processing personal data, you need a lawful basis and you need to treat AI tools as part of your data handling, not as a magical separate zone.

In the EU, the AI Act adds another layer for organisations operating across borders. Transparency obligations and accountability are becoming more formal, not less.

Basically, if your internal AI plan is “people can use ChatGPT if they want”, you’re exposed.

This is exactly where values help, because values turn “we want to be ethical” into “here is what we do, every time, before anything goes out”. 

Using AI in marketing safely is mostly about process, not prompts

The most useful mental shift is that prompts are a tool for drafting. Process is what keeps the output true and on-brand.

Tools like ChatGPT and Copilot can help you draft. Tools like Perplexity can speed up research because it’s built around citations. Google Search still matters for checking live intent and what’s actually showing up now.

But tools don’t decide your strategy, they don’t define your audience and they don’t set the standard for what “good” looks like. Someone still has to check tone and truth. Someone still has to ask, “Can we prove this?”

That’s where your values, your strategy, your tone of voice and your audience knowledge need to show up in the work, every time (through simple habits and clear guardrails).

Prompts change. Values stay.

Prompts are useful. They will get better. Your team will get better at them too (especially if you invest in training).  

But the thing that protects your brand in an AI-saturated world is not a clever instruction. It’s a clear standard for what you refuse to do and what you need to see before you publish. 

If AI is going to speed up your marketing, start by speeding up your clarity. Then let the tools follow. 

Curious to learn more? Read this 

Sources

  • OpenAI, Why language models hallucinate 
  • Google Search Help, AI Overviews can and will make mistakes 
  • ASA, Disclosure of AI in Advertising 
  • ICO, Generative AI: eight questions developers and users need to ask 
  • UK Government, A pro-innovation approach to AI regulation (white paper) 
  • UK Government, AI regulatory principles 
  • EUR-Lex, Regulation (EU) 2024/1689 (AI Act)  
  • Edelman, 2025 Edelman Trust Barometer Special Report: Brand Trust  
  • Ethan Mollick, One Useful Thing (prompting and “prompt engineering” reliability) 
  • Meredith Whittaker (Signal), comments on AI and surveillance model