Media Measurement: How to Do Good by Doing Better
The continuing quest to reveal better answers to "what works?"
More than 25 years into my career, I continue to see media executives struggle with how to answer an enduring question: "what works?"
The measurement of consumer experiences has always been tricky, and divining the effectiveness of media spend is especially so.
My own personal quest for answers started early, when I was fresh out of school, working as a digital designer and then creative director. As a committed creative, I invested a lot of thought and consideration into my work, and was perplexed when clients made big decisions based on personal preference instead of knowledge and evidence.
As I progressed into agency roles, I met individuals who demystified the art of data collection and measurement, and I became a sponge. The analyses and recommendations of these colleagues were essential to executing strategy at companies like Fidelity Investments and P&G, where bets worth millions were on the line.
Seeing this firsthand, I started gravitating toward strategy and executive leadership roles, propelled by a natural curiosity and motivation to explore new ideas that just won’t quit. Over time, I found that I also had a knack for coaxing answers from data, and learned how to drive strategy with intelligence, insights, and more recently, data science and AI. These skills turned out to be a fundamental ingredient in my career's longevity and success.
The Stakes and Challenges
In media there's a lot riding on getting "what works" right. Estimates of annual global ad spend topped $752 billion for the first time in 2024, with the U.S. expected to capture about 47% of that for a total of $353 billion.1
At the end of the yearly cycle, when all the money is spent, will we be able to determine which investments had an impact on profits or a purchase decision?
The answer is often no. Does this mean advertising doesn't work? Absolutely not, and I believe it's the opposite: advertising, along with marketing and other types of media, is essential to the functioning of our economy.
If this seems like a disconnect, well…let's do a quick retro on how we got here.
Before digital, media consumption was measured with tallies of subscribers for print2 and ratings for broadcast, both of which had flaws from the start.
Traditional television ratings, for example, were first used in the 1950s.3 Viewership was estimated based on surveys gathered from household panels that reported (or later, had monitored) what they watched on TV. The data from these households were then used to model the viewing behavior of all households in designated market areas (DMAs).
At a market aggregate level, this provided a picture of the whole audience pie, but slicing the pie by multiple factors (e.g., age, gender, race, income, etc.) would often render the figures less useful. This is in part because the household panels were not representative, and because of issues with sample size.
I’ll illustrate with a hypothetical example based on stories shared with me by frequent users of ratings data.
Let's say a programmer wanted to know the share of adult females in African American households who watched a specific live TV show on a specific day in a specific DMA. If that demographic represented only a handful of households in the panel, the show's audience share could change dramatically if just one of those households moved out of the market or skipped the show that day. While actual viewer behavior would change very little, the shift in ratings could affect both commercial ad rates and future decisions on which programs to carry on air.
I’ve personally seen ratings for TV programs with small audiences reported as zeros because none of the households in a market panel tuned in. And despite many advancements, including the addition of more households to panels and the rollout of digital metering solutions, technical issues over the years have undermined the credibility of the industry's dominant television ratings provider.4,5
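The volatility described above is easy to demonstrate. Below is a minimal simulation (all numbers are invented for illustration and are not drawn from any real panel) of how a small demographic subgroup within a panel produces wildly swinging share estimates, including measurements that come back as zero:

```python
import random

random.seed(7)

# Hypothetical setup: assume the subgroup's true viewing share is 20%,
# but only 15 panel households match the demographic in this market.
TRUE_SHARE = 0.20
SUBGROUP_PANEL_SIZE = 15

def panel_share_estimate(n_households, true_share):
    """Simulate one measurement: each panel household tunes in with
    probability true_share; the estimate is the observed fraction."""
    tuned_in = sum(random.random() < true_share for _ in range(n_households))
    return tuned_in / n_households

# Repeat the measurement many times to see how much the estimate swings
# even though the underlying viewing behavior never changes.
estimates = [panel_share_estimate(SUBGROUP_PANEL_SIZE, TRUE_SHARE)
             for _ in range(10_000)]

print(f"true share: {TRUE_SHARE:.2f}")
print(f"estimates ranged from {min(estimates):.2f} to {max(estimates):.2f}")
print("measurements reported as zero:", sum(e == 0 for e in estimates))
```

With a panel slice this small, a meaningful fraction of measurements report a share of zero, exactly the "zero rating" effect described above, and the spread between the lowest and highest estimates dwarfs any real change in behavior.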
Digital and the measurement of media consumption on screens delivered rapid advancements in the state of the art, but the ecosystem's dizzying array of technologies, formats, walled gardens, exchanges, currencies, and data gaps makes it nearly impossible to stitch engagement data into a holistic view across platforms and touchpoints.
If you're not familiar with the digital measurement and attribution space, I'll offer an analogy.
“Welcome to today's media targeting and attribution test kitchen! Instead of a single big ratings pie to slice, the digital ecosystem offers thousands of little pies to be chopped up, sifted through, and mashed together into new pieces of pie we call audiences. Mmmm, looks great and sure to be tasty!”
Kidding aside, this slicing and dicing makes measurement a huge challenge, even before separating out bad-faith actors and outright fraud. Tech like Automatic Content Recognition (ACR) offers the promise of a more comprehensive view, but conflicting standards and limited access stand in the way of widespread adoption today.
Perhaps the most significant barrier to a holistic view is that we lack a common denominator for sizing audiences and for quantifying consumption – traditional ratings are still in use alongside new digital metrics like impressions, clicks, and attention.
And there’s one more thing: incentives. Many in the ad and marketing ecosystem are allergic to lowering spend because it impacts commissions and service fees. If you’ve ever managed a department budget of any kind, I’d be willing to bet you’ve thought twice about coming in materially under budget for the year, lest that result in less funding next year. This may be the toughest nut to crack of all.
Finding a Better Way Forward
Despite the many hurdles, digital and traditional advertising spend is expected to grow through 2027.6 Brands still need to get their message out; artists, writers, and talent need to be paid to create all the great shows and experiences we enjoy; technology marches forward; and the economy continues to expand.
We must work in the system we have now, and at the same time, I know we can be better at measuring consumer experiences and media effectiveness. I’ve done it. My view is that it will require a both-and approach.
We must both re-anchor on the fundamental questions to solve for and embrace the best-suited quantitative approaches to reveal the answers.
One example of how we can do better comes from a groundbreaking case study on Budweiser that I came across in b-school. The Vice President of Marketing requested a "significant increase" in advertising spend to boost sales across twelve markets. Rather than blindly approve the expenditure, the company’s CEO commissioned a research team to estimate the increase in sales that should be expected from the additional investment.7
Budweiser found that it wasn't a case of "spend more, sell more." Instead, reducing spend lifted sales for a period after advertising was paused.
Intrigued by this result, the team conducted more experiments. By modeling interactions across channels and other factors, Budweiser learned which media types were most effective, and developed a new strategy to pulse its campaigns – ultimately reducing overall spend by more than 50% while increasing market share by 480 basis points (bps).8
You can read all about it in The Art of Problem Solving: Accompanied by Ackoff's Fables by Russell L. Ackoff, which was published in…1978! The book also reveals that the leader of Anheuser-Busch who commissioned the study, August A. Busch, Jr., first approached Ackoff in 1961…more than 60 years ago!7
One might argue that Ackoff had it easier than we do. He didn't have to deal with today’s proliferation of devices, market fragmentation, incompatible currencies, identity resolution challenges, cookie deprecation, data clean rooms, and a patchwork of privacy regulations.
Yet Ackoff and his team clearly answered "what works" with a level of coherence that drove major changes in Budweiser's media strategy and delivered material benefits to the company. And they did so at a time when data moved slowly, consumer research was much sparser, and desktop calculators were the most advanced form of computing available to the general public.
The case in The Art of Problem Solving detailed two principles that guided the design of the study’s experiments, offering key lessons:
The KPI that mattered was sales; therefore, sales is what was measured as a function of spend, instead of “recall of messages or attitudes toward the product” which may have been easier to measure.7
Ackoff’s team sought to move beyond statistical correlations to answer why there appeared to be a relationship between advertising and sales. Going against a common inference, their hunch was that forecasts of increased sales often led to increased advertising, not the other way around.7
A few additional points stand out to me:
Budweiser’s CEO initiated the study, not marketing, which the case states was initially resistant to evaluations of its request to increase funding
The measure of success was clearly defined
Structured experiments were used to test varying hypotheses
Ackoff’s team had the freedom to continuously test different levels of ad spending in test markets, regardless of sales forecasts, to eliminate the link between the two
The team’s research hypotheses evolved over time, and indeed it took time to first understand the behavior of the markets; that understanding enabled the later, successful efforts to optimize media spend
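The core of Ackoff's design can be sketched in miniature. The toy experiment below (synthetic data and hypothetical spend levels; nothing here comes from the actual Budweiser study) sets ad spend exogenously across test markets, severing the forecast-to-spend link, and then regresses the sales outcome directly on spend, measuring the KPI that matters rather than a proxy like message recall:

```python
import random

random.seed(42)

# Assumed experiment: each test market is assigned a spend change
# (percent vs. baseline) regardless of its sales forecast, and we
# observe the resulting sales change.
spend_changes = [-50, -25, 0, 25, 50, 100]   # % change in ad spend per market

# In this toy, the true lift from extra spend is zero ("spend more"
# does not mean "sell more"); observed results are market noise.
TRUE_LIFT_PER_PCT = 0.0
sales_changes = [TRUE_LIFT_PER_PCT * s + random.gauss(0, 2)
                 for s in spend_changes]

def ols_slope(xs, ys):
    """Ordinary least squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

slope = ols_slope(spend_changes, sales_changes)
print(f"estimated sales lift per +1% spend: {slope:.3f}")
```

Because spend is assigned by design rather than driven by forecasts, a slope near zero is evidence that the marginal dollar is not moving sales, the kind of finding that justified Budweiser's pulsing strategy. A correlational study on historical data could never separate that from "forecasts drive spend."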
Reading about this study and its impact provides evidence for what I’ve come to believe – that a focus on what works, combined with statistical thinking, behavioral and economic theory, best practice, engineering, and operations can be integrated into a strategy engine that drives business forward.
When we work to advance knowledge, step by step, we do good with each turn. Exercising and building the muscles of curiosity, inquiry, learning, and experimentation leads to more sustained innovation and value creation. Integrating these behaviors creates a flywheel to power a whole system of strategic discipline, and the benefits become multiplicative.
Put simply, we do good by doing better.
I believe we should apply this principle to media measurement, and experience measurement more broadly, to reveal "what works." This is where I'm focused as I prepare to move into the next leg of my career journey – striving to help brands, executive leaders, and myself to do good by doing better.
I’ll share more examples in upcoming articles and I invite you to join me on the journey by following along. "Stay tuned. We'll be right back."
About the Author
Eric Duell is an executive intrapreneur who works at the nexus of data, analytics, AI, technology, creativity, and strategy.
Eric’s efforts have led to improved profitability, better customer experiences, enhanced media performance, new product development, and growth in customer CLV at leading brands and organizations for more than 25 years, including in leadership roles at Comcast, The E.W. Scripps Company, advertising agencies, and management consulting companies. He is recognized as a Top Community Voice in Advertising and Strategy on LinkedIn.
His accrued experience spans a wide range of industries including digital, media, creative, content, marketing, advertising, brand, television, telecom, technology, publishing, manufacturing, retail, CPG, energy, financial services, healthcare, pharma, non-profits, and performance-improvement consultancies.
Eric holds an M.S. in Innovation Management and Entrepreneurship and an Executive M.B.A., both from Temple University’s Fox School of Business. Previously, he was an assistant adjunct professor at the University of Cincinnati, where he co-created the graduate-level data visualization class in the Lindner College of Business. Eric also attended Ohio University, where he received a B.S. in Communication.
Eric is an avid photographer, as well as a former photojournalist, musician, designer, creative director, and producer for television and video. He writes the Do Good by Doing Better Substack and contributes to The Goldmine from Philadelphia, Pennsylvania, where he resides with his family and their four Tonkinese cats.
Dentsu (2023). “Global Ad Spend Forecasts, December 2023.” Accessed 11-Mar-2024 at Dentsu.com. https://bit.ly/3PiX7d1
The audience of printed media and periodicals is typically estimated with subscriber or circulation figures by counting paid subscribers. In reality, such claims are difficult to verify without a system to track whether publications are received and read by the subscriber. Based on the author’s direct knowledge.
Nielsen. “Celebrating 95 Years of Innovation.” Accessed 12-Mar-2024 at Nielsen.com. https://bit.ly/3VbOmVV
Adgate, Brad (2021). “For Nielsen Ratings Complaints And Potential Competitors Is Nothing New.” Forbes. 01-Sept-2021. Accessed 11-Mar-2024 at Forbes.com. https://bit.ly/3wNP1mq
Cahillane, Mollie (2024) “Q&A: Nielsen's CEO on Rolling With Massive Changes in 2024.” Accessed 13-Mar-2024 at Adweek.com. https://bit.ly/4cfcBsl
EMARKETER Inc. (2024). “Worldwide ad spending growth will accelerate across the board in 2024.” Accessed 6-Mar-2024 at insiderintelligence.com. https://bit.ly/3Pe2Lxd
Ackoff, R.L. (1978). The Art of Problem Solving: Accompanied by Ackoff's Fables, pp. 162-173. John Wiley & Sons, Inc.
Hoerl, Roger and Snee, Ron (2012). Statistical Thinking: Improving Business Performance, Second Edition, pp. 24-29. John Wiley & Sons, Inc.