Think about how much time and energy people dedicate to finding investment opportunities in the market. Obviously not everyone is the cartoon version of the day trader with multiple trading chart screens open and CNBC blaring financial commentary in the background. But anyone mildly concerned about, say, being able to retire with some savings is likely to spend a few days a year thinking about and researching what to invest in. Even if you don’t do it yourself, you most likely spent some time looking for money managers with a track record of successful investments (and for these people, investing is a full-time job!).
Thanks to this collective effort, money flows efficiently through the economy, toward the companies and assets that are likely to be most productive - or at least that’s what economists want us to believe.
How Efficient is the Economy?
“Efficiency,” of course, is a relative term. Obviously it’s better if money goes toward ventures that actually produce something beneficial to society. We don’t want to see money wasted on vanity projects that benefit nearly no one while the economy lacks basic goods or services. But we also want people to have an economic incentive to research and build things that can have a great impact on society but are not easy to monetize. In fact, we probably want a lot of money going toward those things, especially if we know which ones are likely to have massive impact.
The question then is, how are we going to know what those “things” are? How much time and effort are we - as individuals and as a society - dedicating to finding the next transformative innovation in medicine, open source software, or science? Maybe we’d share a post about it if it comes across our feed. Maybe we’d mention it to a friend in passing, if it’s a really exciting innovation. But for the most part, that’s about it. That’s the extent of most people’s contribution to surfacing the best and most impactful projects.
Social media platforms are of little use here either. Their algorithms don’t care how many children will be cured of a disease thanks to sublingual immunotherapy. They care whether your post is popular or not, and will boost it accordingly. Unfortunately, every hour you spend online coming up with catchy memes and clever ways of phrasing your posts is an hour you’re not spending on building something actually useful. What’s more, you’re competing with about 5 million other people who may be just as good at creating memes and posts but whose work may have little impact on society.
None of this matters anyway if your efforts to boost an impactful project are not somehow tied to resources going toward that project. The chance that people boosting the signal of potentially impactful projects on social media will somehow influence bureaucrats or philanthropies to allocate money toward those projects is quite low.
Fortunately, we now have mechanisms that allow money to be distributed toward projects directly based on community input. One such mechanism is Quadratic Funding (QF). In QF, money from a funding pool is distributed based on a formula in which the number of people contributing to a project carries greater weight than the total amount contributed. If one project receives $100 from 50 people, for instance, it would get significantly more funding than another project that received $100 from just 2 people. The result is that funding is distributed more in line with the community’s “votes.”
But while QF guarantees that we’d have more influence over which impact projects get funded, it still doesn’t solve the problem of how the community knows which projects to fund. How does it know which projects are likely to have the most impact (or have already had the most impact)? If people mainly contribute to the projects they see most often on social media, and mistake those for the most impactful ones, then it’s hardly an efficient mechanism. And it’s even worse if project teams end up spending more time boosting their own signal on social media than doing impactful work.
What we want then is for impact-makers to spend their time creating impact, and for contributors to have better insight into project impact. But how do we get there? Sure, if you tell all your friends to dedicate at least one extra hour a week to researching potentially impactful projects that would help. Unfortunately, that’s not an approach that is likely to scale.
Incentives for Impact Signaling & Evaluation
What we really need is an incentive structure for impact evaluation comparable to the one at work in the market: people spend so much time on investing because they expect a return, and they only get a return if their investment evaluation is correct. In turn, money is distributed efficiently in the economy. Most people don’t get paid merely for investing; they profit only when their predictions prove right. Even those who do get paid merely for the job of investing are unlikely to keep getting paid if they’re wrong too many times.
So why not have a similar incentive structure for impact projects? Wouldn’t it be great if people could evaluate the expected impact of projects, and get rewarded based on the accuracy of their evaluation? The incentive structure could look something like this: if a project is worth $100K in funding, the impact estimate proposer could receive 3% of that ($3,000), and would need to spend $1,000 (1% of the expected value). Now suppose the true impact of the project is only $10K. Then the proposer would receive $300 (3% of $10K), thus losing $700 of the initial investment. On the other hand, if the true impact of the project is $300K, the proposer would still receive the maximum of $3,000, thus foregoing $6,000 in profits.
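The arithmetic above can be sketched as a small function. The names and the cap (the payout never exceeds 3% of the proposed value) are my reading of the scheme, not a specification:

```python
def proposer_net_return(proposed_value, true_impact,
                        reward_rate=0.03, stake_rate=0.01):
    """Net return for an impact-estimate proposer under the scheme
    sketched above: stake 1% of the proposed value up front, then
    receive 3% of the true impact, capped at 3% of the proposal.
    (Assumed reading of the text, not a formal spec.)"""
    stake = stake_rate * proposed_value
    payout = reward_rate * min(true_impact, proposed_value)
    return payout - stake

proposer_net_return(100_000, 100_000)  # accurate: $3,000 - $1,000 stake
proposer_net_return(100_000, 10_000)   # overshot: $300 - $1,000, a $700 loss
proposer_net_return(100_000, 300_000)  # undershot: payout capped at $3,000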
The incentive then is for proposers to assess the impact of projects as accurately as possible (to maximize their ROI). It also motivates proposers to look for the most impactful projects, since those generate the most returns. To further incentivize seeking the most impactful projects, we can even tweak this mechanism a bit; for instance, projects with greater impact can require the proposer to stake proportionately less money. So maybe a project expecting $100K in funding requires the proposer to spend $1,000 (or 1%), but a project expecting $1M in funding requires spending only $6,000 (or 0.6%). The return in both cases would still be 3%.
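One way to realize such a sliding stake schedule - purely an illustrative assumption, since the text gives only two data points - is to interpolate the rate between those two anchors on a log scale of project size:

```python
import math

def stake_rate(proposed_value):
    """Illustrative sliding stake schedule (an assumption, not from
    the text beyond its two examples): interpolate the rate linearly
    in log10 of the proposed value between the anchors
    1.0% at $100K and 0.6% at $1M, clamping outside that range."""
    lo, hi = math.log10(100_000), math.log10(1_000_000)
    t = (math.log10(proposed_value) - lo) / (hi - lo)
    t = max(0.0, min(1.0, t))  # clamp outside the two anchors
    return 0.01 + t * (0.006 - 0.01)

stake_rate(100_000) * 100_000      # $1,000 stake on a $100K proposal
stake_rate(1_000_000) * 1_000_000  # $6,000 stake on a $1M proposal
```

This reproduces both examples from the text while keeping the absolute stake growing with project size, so larger proposals still carry more skin in the game even as the rate falls.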
What should we do with the money spent by proposers? That money can go toward the income of independent impact evaluators, who can then more accurately determine the credibility of the proposal.
There are still unanswered questions here about how the “true impact” of a project is assessed, how evaluators are prevented from colluding with proposers, and so on. These are topics for another post. But for now, let us open the discussion on not only why impact evaluation is important, but how we incentivize it in a manner that scales while maintaining the integrity of the process.
Ultimately, if we want more money going toward impactful projects, we need incentive structures that can both prioritize the most impactful work and credibly evaluate the impact of that work. Nailing down both of these objectives will signal to institutions and governments that funding impact can be done efficiently and with broad public support.
(Originally posted here)