I’ve heard the accomplished ED of a successful organization refer to cost-per-vote (CPV[1]) metrics as “bulls**t.” Others value this metric quite highly. I believe there’s more common ground than there appears to be.
Strengths & Weaknesses of CPV
All of us investing time and money into the movement seek to maximize our impact; CPV provides a quantitative framework for that effort. As a scientist at heart, I find this approach compelling, even obvious. CPV is a critical tool that played an important role in winning the presidency; without it we’d be dumber, weaker, and less able to optimize our investments. In addition, CPV analysis has mobilized many individuals as donors, who appreciate the validation it provides.
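In its simplest reading, CPV is just dollars spent divided by net votes generated. The sketch below illustrates that arithmetic; the program names and all dollar and vote figures are made up for illustration, not drawn from any real analysis.

```python
# Illustrative only: the figures below are hypothetical, and this is the
# simplest possible reading of cost-per-vote (spend / net votes generated).

def cost_per_vote(total_spend: float, net_votes_generated: float) -> float:
    """Dollars spent per net vote a program is estimated to generate."""
    return total_spend / net_votes_generated

# Hypothetical comparison of two programs with equal budgets:
door_knocking = cost_per_vote(total_spend=500_000, net_votes_generated=2_500)  # $200/vote
digital_ads = cost_per_vote(total_spend=500_000, net_votes_generated=1_000)    # $500/vote

print(f"Door knocking: ${door_knocking:.0f}/vote")
print(f"Digital ads:   ${digital_ads:.0f}/vote")
```

On these (invented) numbers, door knocking looks more than twice as cost-effective, which is exactly the kind of within-tactic comparison CPV is good at.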
But CPV has weaknesses that are easy for its users (including me) to underweight. I’ll discuss just the most glaring limitation: CPV ignores future impact.[2] That’s a big gap.
CPV and the Stack
Just as Rome and Paris rest atop centuries of their own ruins, we stand atop (i.e. are beneficiaries of) a mountain built by past political investments. Let’s call that mountain “the Stack.”
The Stack consists of two categories: (1) Movement assets, and (2) Baseline voter mindsets.
1. Movement Assets, for example:
Human capital (both professionals and candidates)
Quality of data and tools
Local organizing networks
2. Baseline Voter Mindsets, for example:
Starting impressions of parties and candidates
Starting likelihood to vote[3]
Starting likelihood to participate in other ways (e.g. volunteer or donate)
CPV ignores a project’s contribution to the Stack. Therefore we can comfortably agree:
Unless the Stack is irrelevant, CPV is an incomplete measure.
CPV is a powerful tool to analyze and optimize within a certain set of tactics. But undue emphasis on it risks the “streetlight effect,” which is a “bias that occurs when people only search for something where it is easiest to look.”[4]
Augmenting CPV
To optimize investment across the full opportunity set, we need a new mental model to sit alongside CPV, something like “Cumulative contribution to the Stack.” I suspect that greater efforts to quantify such a metric would benefit the movement, even without randomized-trial-level rigor. Thoughtful back-of-the-envelope numbers are usually better than none at all.
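To make the idea concrete, here is one possible back-of-the-envelope shape such a metric could take. Everything in this sketch is an assumption for illustration: the discount factor, the idea of converting Stack contributions into “future vote equivalents,” and all of the numbers are hypothetical, not an established methodology.

```python
# A hypothetical sketch of augmenting CPV with an estimated Stack
# contribution, expressed as discounted future-vote equivalents.
# The discount factor and all figures are illustrative assumptions.

def augmented_cost_per_vote(
    total_spend: float,
    votes_this_cycle: float,
    est_future_vote_equivalents: float,
    discount: float = 0.5,  # assumption: weight future impact at 50%
) -> float:
    """Spend divided by this-cycle votes plus discounted future impact."""
    effective_votes = votes_this_cycle + discount * est_future_vote_equivalents
    return total_spend / effective_votes

# Hypothetical: a local-organizing program that looks expensive on plain
# CPV but builds durable assets (trained organizers, better data).
plain_cpv = 500_000 / 1_000  # $500/vote on plain CPV
augmented = augmented_cost_per_vote(
    total_spend=500_000,
    votes_this_cycle=1_000,
    est_future_vote_equivalents=2_000,
)  # $250 per effective vote once Stack contribution is counted

print(f"Plain CPV:     ${plain_cpv:.0f}/vote")
print(f"Augmented CPV: ${augmented:.0f}/vote")
```

Even a crude discount factor like this forces the key question into the open: how much is a program’s contribution to the Stack worth relative to votes this cycle?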
Frameworks to analyze a broader set of projects would help practitioners and donors better allocate their time and money. In addition, by quantitatively analyzing a broader swath of projects, more of the movement would become fundable by donors seeking quantifiable impact.
Conclusion
CPV is a critical tool and I’ll keep using it. But it’s easy to overweight because our other quantitative tools are so limited, and I’ll try to keep that in mind as well.
I hope that we improve our ability to measure what CPV misses, so that CPV can become just one of several key evaluation metrics.
Closing questions
What metrics besides CPV have you used to compare projects?
What have you found to be the best descriptions of the CPV framework’s strengths and weaknesses?
Footnotes

[1] For readers far up the learning curve: I mean CPNDV, but say CPV for greater accessibility. All points apply equally.

[2] Another important weakness of CPV is that it obfuscates the importance of earning harder-to-get votes. Weaknesses raised by others are that CPV is only usefully expressed as the cost per change in probability of winning an election, and that some ways it’s used contribute to problematic racial dynamics within the movement.

[3] Multi-cycle turnout effects are the only part of the Stack where I’m aware of any RCT evidence. That evidence (just a small handful of studies) shows that these effects are positive but not game-changing.

[4] For more, visit https://en.wikipedia.org/wiki/Streetlight_effect