Quick note: your team probably has plenty of creative ideas. What they're missing is a system that remembers what they tested, why, and what it actually told them.
ON YOUR MEMO THIS EDITION
Why "Just Test More" Is Broken
The real reason creative velocity isn't translating into creative intelligence.
The 3 Gaps That Keep Teams Stuck
Where the process falls apart between having ideas and actually learning from them.
What a Real Testing System Looks Like
The shift from spaghetti testing to structured iteration that compounds.
The Sticky Note
If you remember one thing.
From Me to You
AI Training Checklist.
Acronym Therapy
CBO, redefined: Can’t Build On….
INTRO
Why “Just Test More” Is Broken
We've seen this pattern play out dozens of times.
A team launches eight new creatives.
Two perform well for a week. Performance dips.
Someone says “we need fresh creatives.”
They launch eight more. Two work. Repeat.
Six months later, they've burned through 80+ assets and can't tell you a single thing they’ve learned.
That looks like testing, but it's closer to gambling with a content budget.
These teams are working hard and producing volume. They just have no system for capturing why something worked, what variable actually moved, or what to iterate on next.
Every round starts from scratch because nothing from the last one carried forward.
And this gets expensive in a specific way: Andromeda rewards creative diversity, but only meaningful diversity.
Throwing ten variations of the same flat-lay with different headlines into an Advantage+ campaign doesn't give Meta's system enough differentiated signal to optimize against. You end up diluting spend across assets that are functionally identical, and Andromeda treats them that way.
The teams we've seen win have figured out how to test with memory, so each round builds on the last one.
(I built a creative testing tracker for exactly this problem. The full template is linked below.)
THE 3 GAPS THAT KEEP TEAMS STUCK
Diagnosing The Break
When we sit down with a team that says “creative testing isn't working,” the issue almost always lives in one of three gaps.
Gap 1️⃣: No Testing Hypothesis.
"Let's try UGC" is a direction, not a hypothesis.
"Let's try UGC because our hook rate is below 20% and static ads are getting skipped in under 2 seconds" gives you something you can actually evaluate.
Without a hypothesis, you end up looking at ROAS after the fact and shrugging.
Most teams skip this because it feels like overhead.
But a one-line hypothesis per creative takes 15 seconds and saves you from the most common trap in creative testing: declaring a winner without knowing what it won at.
Gap 2️⃣: No Record of What Was Tested or What The Iterations Were.
This is the one that really costs teams.
We've watched people re-test the same angle three times in six months because nobody documented what they'd already tried.
Or a creative works, they want to iterate on it, and nobody can reconstruct what made it different from the version that flopped.
If your creative history lives in a Slack thread, a Notion doc nobody updates, and someone's memory, you don't have a testing process. You have institutional amnesia.
(The tracker template solves this at the structural level. Every test gets logged with the hypothesis, the variables, the results, and the recommended next move. I'll link it at the bottom.)
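To make that structure concrete, here's a rough sketch of what a single logged test could look like. It's an illustration, not the template itself: the field names follow the structure just described, and every value shown is hypothetical.

```python
# Hypothetical example of one logged creative test. Illustrative only;
# not the actual tracker template. Fields follow the structure described
# above: hypothesis, variables changed, results, recommended next move.

test_log_entry = {
    "test_id": "ugc-hook-round-2",  # made-up identifier
    "hypothesis": ("A UGC talking-head hook lifts hook rate above 20% "
                   "vs. static flat-lays that get skipped in under 2 seconds"),
    "variable_changed": "format only (static image to UGC video); offer and CTA held constant",
    "results": {  # placeholder numbers
        "hook_rate": 0.24,
        "ctr": 0.011,
        "cvr": 0.018,
        "roas": 1.6,
    },
    "learning": "Hook improved, CVR dipped: the landing page doesn't match the UGC promise.",
    "next_move": "Keep the UGC hook; test a landing page that mirrors the creative's claim.",
}
```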
Gap 3️⃣: No System For Turning Results Into The Next Test.
Even teams that do track results often stall here.
They know Ad 7 beats Ad 4, but they can't explain why. Was it the hook? The narrative arc? The offer framing? The format itself?
Without that framework, iteration is just guesswork wearing a nicer outfit.
What I've seen work: after each testing round, force a 10-minute read-out where you answer three questions.
What did we learn about the hook (first 3 seconds)?
What did we learn about the story (the argument or narrative)?
What did we learn about the offer frame (how the CTA or value prop landed)?
That maps directly to the metrics that matter.
Hook rate tells you if people stopped.
CTR tells you if the story earned the click.
CVR tells you if the landing experience matched the promise.
When your hook rate holds but CTR drops, the first 3 seconds grab attention but the story doesn't carry.
You can act on that. You can't act on "the ad didn't perform."
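If it helps to see that read-out written down as a rule, here's a minimal sketch. The metric definitions in the comments (hook rate as 3-second views over impressions, and so on) and the threshold floors are my assumptions for illustration, not benchmarks from this memo.

```python
# A minimal sketch of the hook / story / offer read-out as a decision rule.
# Assumed metric definitions: hook rate = 3-second views / impressions,
# CTR = link clicks / impressions, CVR = purchases / link clicks.
# The threshold floors are illustrative placeholders, not benchmarks.

def read_out(impressions, three_sec_views, link_clicks, purchases,
             hook_floor=0.20, ctr_floor=0.01, cvr_floor=0.02):
    """Map the three funnel metrics to the layer that most likely broke."""
    hook_rate = three_sec_views / impressions
    ctr = link_clicks / impressions
    cvr = purchases / link_clicks if link_clicks else 0.0

    if hook_rate < hook_floor:
        diagnosis = "Hook: the first 3 seconds aren't stopping people."
    elif ctr < ctr_floor:
        diagnosis = "Story: people stop, but the argument isn't earning the click."
    elif cvr < cvr_floor:
        diagnosis = "Offer/landing: clicks happen, but the promise doesn't convert."
    else:
        diagnosis = "All three layers are holding; iterate on the winning variable."

    return {"hook_rate": hook_rate, "ctr": ctr, "cvr": cvr, "diagnosis": diagnosis}

# Example: hook rate holds (28%) but CTR is weak, so the story is the problem.
print(read_out(impressions=50_000, three_sec_views=14_000,
               link_clicks=300, purchases=12))
```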
WHAT A REAL TESTING SYSTEM LOOKS LIKE
The shift isn't complicated; it just takes discipline.
Spaghetti testing looks like this: launch a bunch of creatives, see what sticks, kill the losers, start over.
Structured iteration looks like this: define a hypothesis, test one variable, log the result, use the learning to design the next test, compound your understanding of what your audience responds to over time.
Spaghetti testing is fast but circular.
Structured iteration is slightly slower per round but linear, because each test builds on the last. The gap between the two grows wider every month.
In a post-Andromeda world, this matters more than it used to.
Meta's system evaluates creative relevance dynamically. When you feed it genuinely differentiated assets with different arguments, different emotional angles, different formats, it can find pockets of your audience that respond to each one.
But if your "diverse" creative set is really just five versions of the same message, Andromeda sees through it. Your learning phase drags, delivery concentrates, and cost per result creeps up.
The teams scaling right now are producing intentionally different creative, informed by a documented history of what they've already tried. That documented history is what makes the difference between volume and velocity.
THE FULL CREATIVE TESTING TRACKER + AI GUIDE
I've put together two things to make this usable for your team.
A Meta creative testing tracker. It's a plug-and-play template where every test gets logged with the hypothesis, the variables changed, the performance results (hook rate, CTR, CVR, ROAS), and the recommended next iteration. No more reconstructing what you tested last month from memory and Slack searches.
An advanced AI guide. It walks you through feeding your tracker data into AI. Coming soon (hint! hint!).
If your current creative process feels like throwing spaghetti at a wall and hoping something sticks, this replaces the wall with a whiteboard.
Forward this to whoever's managing your creative pipeline.
📝 THE STICKY NOTE
(for our goldfish memory)
Testing without tracking is just spending money to forget what you learned.
The teams that compound creative performance are the ones that remember what they already tried.
🎧 FROM ME TO YOU
I've seen too many teams struggle with AI implementation because they skip the fundamentals: clear goals and solid training. That's one of the biggest reasons AI adoption stalls. This AI Training Checklist from You.com pinpoints the common pitfalls and guides you through building a confident, capable team that can make the most of your AI investment.
What you'll get:
Key steps for building a successful AI training program
Guidance on overcoming employee resistance and fostering adoption
A structured worksheet to monitor progress and share across your organization
🧩 ACRONYM THERAPY
CBO
Campaign Budget Optimization
Or in this case,
Can’t
Build
On what you don’t track
Because the algorithm optimizes delivery.
Only you can optimize what you learn.
💌 Before you vanish:
Wait a second…
Was this memo great? 🥺🥺
If it landed for you, hit reply and say “Yes.”
More Ad-ventures coming next week!
The Creative Strategist
at The Marketer’s Memo





