Thank you for this post. I agree with you that we have a long distance to travel from the current centralized & highly subjective process to a protocolized version of grants review. It’s essential to have a credibly neutral grants curation process that achieves a level playing field for all eligible grantees.
IMHO, “credible curation” is a necessary primitive – arguably on par with Sybil resistance – but much less understood. We need this to be trustware.
For GR15, I worked closely with the Public Goods team on the DeSci round – setting eligibility criteria, doing grantee outreach, and reviewing grant submissions. Here are a few thoughts from my experience, which build on your suggestions for progressive decentralization:
- Automation: A number of eligibility checks can be automated. For instance, determining the amount of funding received in the past, screening for contributors' conflicts of interest, pulling heuristics on the age of the project / number of contributors, screening for duplicate submissions, etc. There could be an interesting use case for Gitcoin Passport to score contributors' trustworthiness too. (See the first sketch after this list.)
- Batching: Grant reviews can be batched more intelligently for the humans in the loop. For example, grants that have very similar keywords or share core contributors can be batched, and a reviewer can use the batch to make important gating decisions, e.g., "are these five grants duplicates?" There are a lot of UX improvements that would make work easier for human reviewers while at the same time training models to aid human decisions. (See the batching sketch after this list.)
- Biases: Every reviewer has their own sources of bias, which over time can be controlled for by comparing their reviews with others in a pool. As a reviewer, I shared my heuristics with other reviewers, and doing so likely had some influence on their decisions. It could be helpful to do blind assessments in order to calibrate reviewers and ensure the right perspectives are represented. Over time, we can develop heuristics for trusted reviewers and either reward them or give their decisions increased weight, as you describe. (See the agreement sketch after this list.)
- Feedback loops and notifications: There needs to be a stronger feedback loop among FDD, round eligibility reviewers, and grantees making updates / clarifications. As a reviewer, I would only look at each grant once, and it's possible that grantees made improvements or changes to their profiles that might have changed my decision if I'd been notified. I don't know how, or if, my or my peers' observations were funneled back to FDD or to grantees.
- Consensus mechanism: It wasn't clear to me as a reviewer what the ideal consensus mechanism is for determining eligibility. Should a grant require a unanimous "hell yes" from all reviewers to be accepted? Should a single "hell no" from one reviewer make it ineligible? Should it require a consensus score above a certain threshold? Presumably you'd want to support a variety of consensus mechanisms that communities can choose from, with the rules of the game made explicit to grantees at the start of the round. (The last sketch below expresses these three rules as interchangeable strategies.)
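To make the automation point concrete, here's a minimal sketch of what automated pre-screening could look like. The `Grant` shape, check names, and thresholds are all hypothetical, not Gitcoin's actual data model:

```typescript
// Hypothetical shape of a grant submission; field names are illustrative.
interface Grant {
  id: string;
  title: string;
  createdAt: Date;
  contributors: string[];   // contributor addresses or handles
  priorFundingUsd: number;  // funding received in past rounds
}

interface CheckResult {
  check: string;
  passed: boolean;
  note?: string;
}

// Each automated check is a pure function over the grant (and, where
// needed, the full submission pool), so new heuristics can be added
// without touching the rest of the pipeline.
type EligibilityCheck = (grant: Grant, pool: Grant[]) => CheckResult;

const underFundingCap: EligibilityCheck = (g) => ({
  check: "funding-cap",
  passed: g.priorFundingUsd <= 500_000, // cap is a made-up example
});

const minimumAge: EligibilityCheck = (g) => ({
  check: "project-age",
  passed: Date.now() - g.createdAt.getTime() > 30 * 24 * 60 * 60 * 1000,
  note: "projects younger than 30 days go to manual review",
});

const noDuplicateTitle: EligibilityCheck = (g, pool) => ({
  check: "duplicate-title",
  passed: !pool.some(
    (other) =>
      other.id !== g.id &&
      other.title.trim().toLowerCase() === g.title.trim().toLowerCase()
  ),
});

// Failed checks flag a grant for human review rather than auto-rejecting it.
function preScreen(grant: Grant, pool: Grant[]): CheckResult[] {
  return [underFundingCap, minimumAge, noDuplicateTitle].map((c) => c(grant, pool));
}
```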
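Batching grants that share a core contributor is essentially a connected-components problem. Here's a sketch under the same hypothetical grant shape; a keyword variant could swap in, say, Jaccard overlap on keyword sets as the linking condition:

```typescript
// Group grants that share at least one core contributor, so a reviewer
// can answer a gating question like "are these five grants duplicates?"
// in one sitting.
function batchBySharedContributors(
  grants: { id: string; contributors: string[] }[]
): string[][] {
  // Union-find over grant ids, with path compression.
  const parent = new Map<string, string>();
  const find = (x: string): string => {
    const p = parent.get(x)!;
    if (p !== x) parent.set(x, find(p));
    return parent.get(x)!;
  };
  const union = (a: string, b: string) => parent.set(find(a), find(b));

  for (const g of grants) parent.set(g.id, g.id);

  // Link any two grants that share a contributor.
  const firstGrantOf = new Map<string, string>(); // contributor -> grant id
  for (const g of grants) {
    for (const c of g.contributors) {
      const prior = firstGrantOf.get(c);
      if (prior !== undefined) union(prior, g.id);
      else firstGrantOf.set(c, g.id);
    }
  }

  // Each connected component becomes one review batch.
  const batches = new Map<string, string[]>();
  for (const g of grants) {
    const root = find(g.id);
    batches.set(root, [...(batches.get(root) ?? []), g.id]);
  }
  return [...batches.values()];
}
```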
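For reviewer calibration, even a crude "how often do I side with the pool's majority?" rate is a starting point. This metric is purely illustrative, not something FDD computes today, and a persistently low score should trigger a conversation, not an automatic penalty:

```typescript
type Verdict = "eligible" | "ineligible";

interface Review {
  grantId: string;
  reviewer: string;
  verdict: Verdict;
}

// Fraction of a reviewer's verdicts that match the majority of their
// peers on the same grant. Ties count as "eligible" here, which is an
// arbitrary choice for the sketch.
function agreementWithPool(reviews: Review[], reviewer: string): number {
  const mine = reviews.filter((r) => r.reviewer === reviewer);
  let matches = 0;
  let counted = 0;
  for (const r of mine) {
    const peers = reviews.filter(
      (p) => p.grantId === r.grantId && p.reviewer !== reviewer
    );
    if (peers.length === 0) continue; // no one to compare against
    const yes = peers.filter((p) => p.verdict === "eligible").length;
    const majority: Verdict = 2 * yes >= peers.length ? "eligible" : "ineligible";
    counted += 1;
    if (r.verdict === majority) matches += 1;
  }
  return counted === 0 ? 1 : matches / counted; // 1.0 = always with the pool
}
```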
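And the three consensus rules above, written as interchangeable strategies a round could choose from and publish up front (vote shape and names are hypothetical):

```typescript
type Vote = "yes" | "unsure" | "no";
type ConsensusRule = (votes: Vote[]) => boolean;

// Unanimous "hell yes": every reviewer must explicitly approve.
const unanimousYes: ConsensusRule = (vs) => vs.every((v) => v === "yes");

// Veto model: one "hell no" sinks the grant; "unsure" votes don't block.
const vetoable: ConsensusRule = (vs) => !vs.some((v) => v === "no");

// Threshold model: a minimum share of explicit "yes" votes.
const threshold =
  (minShare: number): ConsensusRule =>
  (vs) =>
    vs.filter((v) => v === "yes").length / vs.length >= minShare;

// A round declares its rule when it opens, so grantees know the game.
const roundRule: ConsensusRule = threshold(2 / 3); // illustrative choice
```

One design note worth making explicit: the three-valued vote matters. With plain yes/no votes, "unanimous yes" and "single veto" collapse into the same rule.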
The Grants 2.0 endgame likely needs to involve two sets of reviewers:
- True independents who have no skin in the game apart from wanting to develop a good ratings reputation over time, i.e., "professional reviewers"
- Deep domain experts who have skin in the game in the sense that they want the round to succeed by having a high-quality grant mix, i.e., "peer reviewers"
There can be some compelling interplay and game dynamics between these two groups of reviewers, a bit like the relationship between triage teams and specialist teams in a hospital.
Anyway, thanks again for shining light on such a critical piece of the puzzle!