Problem Statement
“The Fraud Detection & Defense workstream (FDD) aims to minimize the impact of fraud on our community.”
GR14 Governance Brief
To the extent that FDD has thought about optimizing capital allocation, it has been built on the assumption that a Sybil-free grants round is an optimal one. However, this overlooks important factors such as bribery or quid pro quo between founders and voters, the use of quadratic funding for non-monetary gains (such as attracting users/followers), and any other game-theoretic quirk of the system that causes reality to diverge from the ideal model of quadratic funding. Ultimately this challenges a foundational assumption of FDD - that a Sybil-free grants round (GR) is an optimal one.
This means fraud detection and defense should really be one of several foundational pillars of the FDD stream, all of which support the general aim of optimizing the capital flow through each GR.
As Grants 2.0 approaches, and after a bruising round of budget discussions in GR14, this seems like an opportune moment to reconfigure FDD into a more holistic operation. At the same time, one of the pertinent criticisms of FDD in the past has been that its objectives and processes have been somewhat opaque and difficult to appraise. A reconfiguration must therefore start with a clear and unambiguous definition of a specific remit, a clear set of performance indicators, and demonstrable alignment with the priorities of the wider community.
This post is intended to stimulate discussion around evolving FDD so that it becomes more of an optimization layer than a defense widget. Rethinking FDD as an optimization layer flips the narrative from defensive and adversarial to constructive and enabling. It also provides a clear opportunity to start addressing non-Sybil inefficiencies in the grant system alongside the existing Sybil defenses.
The tl;dr for this post is that FDD should stop asking “How do we stop fraud?” and start asking “How can we optimally allocate capital?”
What does optimal look like?
Promoting the good
The optimal capital allocation does the maximum “good” with the minimum of waste. What constitutes a “good” project is highly subjective, but perhaps we can define “good” as “closely aligned with the community’s preferences”. With some a priori assumptions about what the Gitcoin community values, we might define “good” to be something like:
goodness = usefulness + fairness + inclusiveness + sustainability
In reality the community’s preferences are probably a dynamic cloud of concepts that shift over time, but usefulness, fairness, inclusiveness and sustainability seem like core properties that can be fairly well relied upon. That said, these categories are based on my own assumptions and interpretation from my experience in FDD, and it would be better to gather some baseline empirical data that demonstrates what the community really values - this could be as simple as a poll or interactive word cloud. The community values can be encoded in a set of grant eligibility requirements, as is currently managed by the “GIA” squad.
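To make this concrete, here is a minimal sketch of how the “goodness” formula above could be operationalized as a weighted score. The criteria come from the formula itself, but the numeric weights, the scoring scale and the function names are my own assumptions for illustration, not an existing FDD implementation - in practice the weights would be derived from the community data described below.

```python
# Illustrative sketch only: the numeric weights and scoring scale are assumptions, not FDD's model.
from typing import Dict

# Hypothetical community-derived weights for each "goodness" criterion,
# e.g. normalized from survey responses. They sum to 1.
COMMUNITY_WEIGHTS: Dict[str, float] = {
    "usefulness": 0.30,
    "fairness": 0.25,
    "inclusiveness": 0.20,
    "sustainability": 0.25,
}

def goodness_score(reviewer_scores: Dict[str, float],
                   weights: Dict[str, float] = COMMUNITY_WEIGHTS) -> float:
    """Weighted sum of per-criterion reviewer scores (each in [0, 1])."""
    return sum(weights[c] * reviewer_scores.get(c, 0.0) for c in weights)

# Example: a grant scored by reviewers against each criterion.
example_grant = {"usefulness": 0.9, "fairness": 0.7, "inclusiveness": 0.6, "sustainability": 0.8}
print(round(goodness_score(example_grant), 3))  # -> 0.765
```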
Fine-tuning of the DAO’s understanding of the community’s preferences can be achieved using repeated snapshots of the aforementioned polls/word clouds, or more formal pre- and post-round surveys of applicants, reviewers and observers. These surveys should be designed in such a way as to provide metrics against which a GR can be compared post-hoc, for example:
- “I am interested in funding environmental projects” (strongly agree / agree / neither agree nor disagree / disagree / strongly disagree)
- “I am interested in funding Ethereum infrastructure” (strongly agree / agree / neither agree nor disagree / disagree / strongly disagree)
- etc.
The resulting data could then provide a semi-quantitative heatmap of community interests. Reviewers could mark grants with complementary tags, which could then be analyzed to see whether the capital was distributed in alignment with the community preferences approximated from the survey results. Consequently, FDD’s processes could be course-corrected to improve the allocation round-on-round.
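As an illustration of how that comparison might work, the sketch below compares survey-derived interest weights with the tag-weighted capital distribution. The tag names, example figures and the choice of cosine similarity as the alignment metric are all assumptions for discussion, not an existing FDD process.

```python
# Illustrative sketch: tag names, figures and the similarity metric are assumptions.
import math

# Hypothetical community interest weights derived from survey responses,
# normalized to sum to 1 (a "heatmap" of community interests).
survey_interest = {"environment": 0.35, "eth_infrastructure": 0.40, "education": 0.25}

# Hypothetical capital allocated per reviewer-assigned tag in a round.
capital_by_tag = {"environment": 120_000, "eth_infrastructure": 310_000, "education": 70_000}

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

def alignment(preferences, allocation):
    """Cosine similarity between preference and allocation vectors (1.0 = perfect alignment)."""
    tags = sorted(set(preferences) | set(allocation))
    p = [preferences.get(t, 0.0) for t in tags]
    a = [allocation.get(t, 0.0) for t in tags]
    dot = sum(x * y for x, y in zip(p, a))
    norm = math.sqrt(sum(x * x for x in p)) * math.sqrt(sum(y * y for y in a))
    return dot / norm if norm else 0.0

print(f"alignment score: {alignment(survey_interest, normalize(capital_by_tag)):.3f}")
```

Tracking a single alignment score like this round-on-round would give FDD one simple, legible metric for how closely capital flow follows stated community preferences.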
There is potential for round-on-round learning by updating the weightings for each of the criteria by which the “goodness” of a grant is measured, according to the evolving community responses. Survey respondents would have to be anti-Sybil checked somehow - perhaps with a Gitcoin Passport. They could also be incentivized by, for example, boosting their Trust Bonus for completing the survey, or with monetary or non-monetary rewards (such as POAPs).
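One simple way to implement that round-on-round learning is a moving-average update that blends the previous round’s criterion weights with the latest survey-derived weights. This is only a sketch: the update rule and smoothing factor are assumptions, not an agreed FDD mechanism.

```python
# Illustrative sketch: the update rule and smoothing factor are assumptions.
def update_weights(previous, survey_derived, alpha=0.3):
    """Blend last round's criterion weights with fresh survey-derived weights.

    alpha controls how quickly the weights track the community's evolving preferences.
    """
    updated = {c: (1 - alpha) * previous[c] + alpha * survey_derived[c] for c in previous}
    total = sum(updated.values())
    return {c: w / total for c, w in updated.items()}  # re-normalize to sum to 1

gr14_weights = {"usefulness": 0.30, "fairness": 0.25, "inclusiveness": 0.20, "sustainability": 0.25}
gr15_survey = {"usefulness": 0.40, "fairness": 0.20, "inclusiveness": 0.15, "sustainability": 0.25}
print(update_weights(gr14_weights, gr15_survey))
```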
These are relatively simple steps that could be taken to ensure the processes implemented within FDD are really well aligned with the values and preferences of the DAO and the wider community.
This takes care of the “good”; what about the “bad”?
Minimizing the bad
The “bad” is capital going to waste. Assuming obviously invalid or fraudulent grants are filtered out effectively by the eligibility scoring, there are two main ways capital can be wasted within a round:
- Inefficiencies within the grant reviewing process
- Capital capture by Sybils and airdrop farmers
At face value it is easy to reduce the cost per review - simply pay existing reviewers less or hire cheap temps to conduct reviews. However, there is also a cost associated with low-quality reviews, as they either require multiple rounds of reviewing or increase the likelihood that grants that are not “good” end up being funded, undermining one of the core principles.
The value added by reviewing experience and prolonged active engagement with FDD has not yet been quantified, but it could be measured simply by paying temporary reviewers to review grants in parallel with the existing set of FDD reviewers for a fixed period of time and comparing the grant outcomes. One aspect that I have not seen discussed so far is the cost associated with reviewer attrition - each experienced reviewer who becomes dissatisfied and leaves either increases the burden on those who remain, adding “dissatisfaction contagion” risk and accelerating attrition, or incurs a cost to train and onboard replacements. Assuming that retaining experienced reviewers is a net value-add to the grant review process, the question then pivots to the best way to remunerate those reviewers without overpaying (wasting capital) and without underpaying (risking attrition).
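To make the trade-off explicit, here is a toy cost model comparing experienced reviewers with cheaper temporary reviewers once re-review rates and attrition/onboarding overheads are included. Every figure and parameter name below is a hypothetical placeholder for illustration, not real FDD data.

```python
# Toy cost model: every figure here is a hypothetical placeholder, not real FDD data.
def expected_cost_per_accepted_review(pay_per_review: float,
                                      rework_rate: float,
                                      attrition_rate: float,
                                      onboarding_cost: float,
                                      reviews_per_round: int) -> float:
    """Expected cost per usable review, including re-reviews and replacement costs.

    Note: the downstream cost of mistakenly funding a "bad" grant is NOT included here.
    """
    # Each review may need re-doing with probability `rework_rate`.
    review_cost = pay_per_review / (1 - rework_rate)
    # Attrition spreads a one-off onboarding cost over the reviews a reviewer completes.
    attrition_overhead = (attrition_rate * onboarding_cost) / reviews_per_round
    return review_cost + attrition_overhead

experienced = expected_cost_per_accepted_review(pay_per_review=8.0, rework_rate=0.05,
                                                attrition_rate=0.10, onboarding_cost=500.0,
                                                reviews_per_round=200)
temporary = expected_cost_per_accepted_review(pay_per_review=4.0, rework_rate=0.50,
                                              attrition_rate=0.60, onboarding_cost=500.0,
                                              reviews_per_round=200)
print(f"experienced: {experienced:.2f} per accepted review")
print(f"temporary:   {temporary:.2f} per accepted review")
```

With these (made-up) numbers, the nominally cheaper temporary reviewers end up costing more per accepted review - the point being that the headline pay rate is not the right metric to optimize on its own.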
Finding the optimal incentivization mechanism that maximizes review quality and minimizes review cost is the primary objective of the FDD “Rewards” team. In GR14 they prototyped simulations using an agent-based model to extract insights about the optimal set of conditions for grant reviewing. The model was developed and calibrated on synthetic data in GR14 and will be fed with empirical data in the coming months. The insights from these experiments should provide an initial foothold into optimizing the grant review mechanism that can then be iterated on in successive rounds.
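For readers unfamiliar with the approach, the sketch below shows the general shape of such an agent-based model. This is not the Rewards team’s actual model - the agent behaviour, parameters and pay grid are all assumptions for illustration: reviewer agents with varying skill and pay expectations respond to a pay level, and the simulation reports aggregate review quality against total cost.

```python
# Minimal agent-based sketch, NOT the FDD Rewards team's model: all behaviour is assumed.
import random

class Reviewer:
    def __init__(self, rng):
        self.skill = rng.uniform(0.5, 0.95)        # probability of a correct review
        self.reservation_pay = rng.uniform(2, 10)  # pay below which the reviewer drops out

    def review(self, pay, rng):
        if pay < self.reservation_pay:
            return None                            # reviewer quits this round
        return rng.random() < self.skill           # True = correct review

def simulate_round(pay, n_reviewers=100, reviews_each=20, seed=42):
    rng = random.Random(seed)
    reviewers = [Reviewer(rng) for _ in range(n_reviewers)]
    correct = total = cost = 0
    for r in reviewers:
        for _ in range(reviews_each):
            outcome = r.review(pay, rng)
            if outcome is None:
                break
            total += 1
            cost += pay
            correct += outcome
    quality = correct / total if total else 0.0
    return quality, cost

# Sweep a grid of pay levels to see how quality and cost trade off.
for pay in (2, 4, 6, 8, 10):
    quality, cost = simulate_round(pay)
    print(f"pay={pay:>2}: quality={quality:.2f}, total cost={cost}")
```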
Finally, Sybil attacks aim to skew FDD’s view of the community preferences by amplifying individuals’ voting power. Minimizing Sybil attacks is therefore a way to maximize the alignment between the capital allocated and the community’s preferences. Sybil attacks exploded in number in GR14 - at the same time, FDD has developed real expertise in dealing with Sybil attacks over several rounds. However, there are also areas for expansion, for example tackling “airdrop farming” Sybils, who are difficult to distinguish from genuine new users. This was a major issue in GR14: the general model is that influencers rally new users to vote for a specific project on the expectation of an airdrop in return, and encourage them to maximize their airdrop potential by splitting their votes across multiple wallets. This turns the users into unwitting Sybils. It is far less obvious how to classify, and how to treat, airdrop farmers who only use a single wallet - are they distinguishable from genuine new users?
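As a purely illustrative example of the kind of heuristic features that could feed such a classification (the feature names, thresholds and scoring are assumptions for discussion, not FDD’s actual Sybil pipeline), a wallet’s donation pattern could be summarized and flagged when it concentrates heavily on a single grant from a freshly funded, freshly created address:

```python
# Illustrative heuristic only: features and thresholds are assumptions, not FDD's pipeline.
from dataclasses import dataclass

@dataclass
class WalletActivity:
    wallet_age_days: int           # days since first on-chain transaction
    n_unique_grants: int           # distinct grants donated to this round
    share_to_top_grant: float      # fraction of donated value going to a single grant
    funded_by_fresh_address: bool  # was the wallet funded by another newly created wallet?

def airdrop_farming_risk(w: WalletActivity) -> float:
    """Crude risk score in [0, 1]; higher = more consistent with airdrop-farming behaviour."""
    score = 0.0
    if w.wallet_age_days < 14:
        score += 0.3
    if w.n_unique_grants == 1 and w.share_to_top_grant > 0.95:
        score += 0.4
    if w.funded_by_fresh_address:
        score += 0.3
    return min(score, 1.0)

suspect = WalletActivity(wallet_age_days=3, n_unique_grants=1,
                         share_to_top_grant=1.0, funded_by_fresh_address=True)
print(airdrop_farming_risk(suspect))  # -> 1.0
```

Note that a single-wallet farmer with an organic funding history would score low on a heuristic like this - which is exactly the ambiguity raised above, and why this class of behaviour needs its own discussion rather than being folded into existing Sybil detection.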
Outlook and outstanding questions
FDD could have a more holistic approach to GR security if it reconfigures with “capital optimization” as its overarching goal rather than “Sybil defense”. This reconfiguration is an excellent opportunity to realign FDD with the DAO and the wider community, to set some new baselines built from empirical data, and to enable closer tracking of FDD’s progress against clearly defined KPIs.
The answers are not all presented in this post; it is intended to start a discussion around how FDD should configure itself going into the Grants 2.0 era. Some of the more nuanced questions that could be pertinent to the discussion but have not been mentioned in the outline above are:
- What are the sensible KPIs FDD can define to track its own progress?
- Can an optimization framework eventually extend to allocating social, emotional and intellectual capital as well - or do we get these things for free by optimally allocating monetary capital?
- How can the $GTC token be used to best effect in a new capital-optimization framework?
- How can we ensure FDD’s capital optimization processes are transferable across rounds in Grants 2.0?