Part 1 of this series prompted some useful pushback on LinkedIn, especially around terminology. Fair enough. The language of measurement in the arts is notoriously unstable. What one person calls an outcome, another calls an output, a goal, or an indicator. But that confusion only reinforces the larger problem. We have built an entire measurement infrastructure on terms the field itself does not use consistently. Strategic plans organize around metrics. Evaluation frameworks track one version of “outcomes” against another. The actual purpose of the organization sits in the mission statement, mostly undisturbed.
Which leads to the question this essay tries to answer: if measurement is pointed at the wrong things, what should we be paying attention to instead?
We need to look at other things and look at them differently. Getting there takes more honesty than the current system asks for, which is part of why the current system stays in place. Tracking what’s countable is straightforward. Understanding how the work changed people requires judgment, time, context, and a willingness to sit with ambiguity. The field has mostly chosen the version that fits in a spreadsheet.
The cost of the current approach is visible to anyone willing to look. I’ve sat across from hundreds of arts leaders who know their work is changing lives and can’t prove it in the language their funders or boards often require. They know the teenager who discovered something essential in a dance workshop. They know the neighborhood that started talking to itself differently after a theater performance. They know these things the way a teacher knows which students learned something: in their bones, from watching it happen. And then they go home and try to fit all of that into a logic model.
Part 1 described the structural distortion. Here’s what it feels like on the ground. Programming gets designed to generate good data. Artists develop a low-grade chronic anxiety about relevance, measuring their worth through frameworks that were never built to assess artistic merit. Whole institutional cultures form around defensive justification. Ask people in the sector privately and most of them wouldn’t disagree with any of this.
Start with intentions
We’ve gotten the sequence backwards. We measure outcomes and hope they reveal intentions, when we should be starting from the other end entirely.
During my years in philanthropy, I pushed every prospective grantee to get specific about what they intended. I wasn’t interested in boilerplate about “community impact” or “artistic excellence.” I wanted to know what they were trying to do and why it mattered in their particular context. The process was often uncomfortable. It forced organizations to confront gaps between their stated missions and their actual priorities, sometimes for the first time.
Consider what this approach replaces. The standard model asks: what impact did you have? The question sounds reasonable until you realize what it actually demands. It asks organizations to prove causation in a domain where causation is almost never provable. Did your after-school program reduce dropout rates, or did six other things happening simultaneously in that neighborhood contribute? Impact measurement asks artists and cultural organizations to make scientific claims without scientific controls. And when the claims inevitably fall short, the conclusion isn’t that the framework is flawed. The conclusion is that the work didn’t perform.
Measuring intention asks something more useful. Intentions describe the real work. Outcomes are what may follow if that work is done well. Did the organization know what it was trying to do? Were those goals appropriate to context and capacity? Did they emerge from genuine engagement with the community being served, or were they manufactured to satisfy a grant application? That’s where evaluation earns its legitimacy.
Take a contemporary theater company led for decades by its founder. It may describe its future in terms of outcomes: a new artistic director, a refreshed board, new donors. But the real intention may be to steward a graceful founder transition without losing artistic identity, organizational stability, or the trust built over years. And once you name that intention honestly, you start learning things that outcome metrics would never surface. You discover that the founder holds every major donor relationship personally. That nobody on staff has been empowered to make programming decisions. That the board has deferred to one person for so long it’s lost the muscle for independent governance. Those are the things that determine whether a transition succeeds or destroys an organization, and none of them show up in a strategic plan organized around fundraising targets and audience growth.
The difference between vague and specific was stark. One orchestra came in talking about “audience engagement” and “community relevance.” When I pressed, what they actually wanted was to build a pipeline between their youth education program and the local public school system, so that kids in under-resourced neighborhoods would have sustained exposure to orchestral music over multiple years. The explicit goal was creating the next generation of ticket-buyers, yes, but also the next generation of school board members and city council members who understood why the arts mattered. That’s a specific intention. It leads to programming decisions, partnership choices, and evaluation criteria that “audience engagement” never could.
One performing arts center I worked with was in a neighborhood where the demographics had shifted dramatically over fifteen years. The people walking past the building every day didn’t recognize anything inside it as theirs. In its proposal, the organization described this as wanting to “serve diverse communities.” That language obscured the real intention, which was far more radical: to become a place where the Somali and Latino families now living in that neighborhood could see their own stories performed by artists from their own communities, in their own languages, and feel ownership of the institution. That’s a fundamentally different project than “diversity.” It implies a different season of programming and different hiring. The board would need to change. So would every measure of success.
A dance company said they were focused on “artistic innovation.” What they really wanted was to commission three choreographers working at the intersection of West African movement traditions and contemporary technique, and to document the creative process so that the resulting vocabulary could be taught and transmitted. They were trying to build something that would outlast the performances themselves. When we named that, the evaluation practically wrote itself: did the vocabulary get created, documented, and picked up by other practitioners? You would never arrive at that question from “artistic innovation.”
What I found, across hundreds of these conversations, was consistent enough to be worth stating plainly. Organizations that could articulate clear, contextually grounded intentions produced more meaningful work. Every time. Clarity of intention turned out to be a better predictor of quality than budget size or institutional prestige. It wasn’t even close.
What this comes down to, I think, is respecting something the field knows but rarely says out loud: different artistic practices serve different functions. No universal metric is going to honor that. The attempt to build one is part of what broke the system in the first place.
Process and ecosystem
Naming intention is necessary but not sufficient. Once intentions are clear, the question shifts to process. How did the work get made? Did the project engage meaningfully with the community it claimed to serve, or did it consult and then proceed as planned? Did something actually develop during the making? Did the relationships that formed outlast the project itself?
A residency program that builds deep relationships between artists and a neighborhood may produce unimpressive attendance numbers. It may also create the conditions for sustained cultural engagement that extends years beyond any measurement window. The attendance figure captures none of that. I funded a program like this in the rural South. The artist was embedded in a small town for eighteen months, working with local storytellers and high school students to create a performance piece drawn from the town’s history. By the numbers, the project was modest: a few hundred people saw the final performance. But two of the students went on to study theater. The storytellers kept meeting on their own after the residency ended. The school incorporated oral history into its curriculum. None of that was in the grant report, because none of it happened within the reporting window.
At a systemic level, the questions get bigger. Cultural change is diffuse and compounds over time. Trying to isolate the contribution of one intervention is like asking which raindrop filled the reservoir. If individual attribution is impossible, we should be tracking the health of the whole system instead. That means paying attention to whether local artists can stay in their communities and advance their practice. It means looking at whether arts organizations are building real partnerships with schools, health systems, social services. We should be asking whether experimental work is getting supported, or whether only the programming that guarantees safe numbers survives. And we should be honest about whether barriers to participation are coming down or just getting acknowledged in annual reports.
We can still count things that tell meaningful stories. Track who returns, who brings others, who takes on leadership roles. Pay attention to behavioral change: do participants seek out other cultural experiences afterward? And follow trajectory indicators over longer time horizons. How do people describe an experience a decade later? Those numbers don’t fit neatly into a logic model, but they tell you something real.
And we have to make space for what can’t be counted at all. The child who sees a dance performance and becomes a movement therapist fifteen years later. The woman who joins a community choir after her husband’s death and rebuilds a social life around it. The college student who volunteers at a literary festival and changes her major. These transformations happen in the space between experience and its eventual expression. They unfold across years, through encounters no survey will capture. I don’t have a tidy solution for this. The honest answer is that some of the most important things the arts do will never be measurable. Any assessment framework that can’t make peace with that is lying to itself.
Learning as the point
The goal should be learning. What’s working, what isn’t, why. But in practice, the field has built an evaluation infrastructure almost entirely oriented toward justification. The reports we generate are designed to prove that money was well spent. They’re written for funders. They document activity, and almost never insight. Because the underlying purpose is defensive, the people writing them are defensive too. Nobody puts their failures in an annual report. Nobody writes a grant narrative that says: we tried something ambitious and it didn’t work, and here’s what we learned that will make the next attempt better.
That’s the culture we’ve built. It is the opposite of learning.
The infrastructure I described in Part 1 reinforces it at every turn. The whole apparatus of outcome-setting, outcome-tracking, outcome-reporting creates a closed loop. Organizations learn to perform for the evaluation. The performance becomes the point. Opting out feels like professional suicide, so the cycle continues.
Real learning requires honesty about what didn’t work, and that only happens when people feel safe enough to be candid. Funders have to be willing to hear that their money supported a productive failure. Organizations have to treat evaluation as something they do for themselves, to get better, not as a performance for an external audience. I saw this clearly as a funder. The organizations that learned the fastest were the ones that asked hard questions of their own work, adjusted mid-course when the answers were uncomfortable, and brought their communities into the assessment process as partners. The ones that struggled were often producing beautiful reports full of impressive numbers that told me very little about whether anyone’s life had actually changed.
The shift is easy to describe and hard to execute. Funders have to stop demanding proof of impact and start asking what people learned. Organizations have to let go of the performance of certainty and make room for genuine curiosity instead. The more useful question about any piece of artistic work isn’t just “did it succeed?” It’s also “what did it teach us?”
That reorientation changes what gets funded and how organizations relate to their own programming. It also changes something harder to name: the emotional climate of the work. I’ve seen what happens in organizations where people feel permission to be curious instead of defensive. The conversations get better. So does the art. You can feel it.
Getting it right
Somewhere right now, an artistic director is standing in the back of a half-empty house watching an extraordinary performance that can still look like a failure in the report she files. She knows what happened in that room tonight. She could tell you what the intention was, how the process unfolded, what the artists and the audience found together. She has all the evidence that matters. Tomorrow, she’ll write it up as 279 tickets sold.