[Proposal] Ratify the Results of GG18 and Formally Request the Community Multisig Holders to Payout Matching Allocations

Thank you to @Joel_m, @ghostffcode, @gerald, @jeremy, @owocki, @Sov, @M0nkeyFl0wer, and @Connor for getting us to final round results!

Payments have now gone out!

This proposal has passed on Snapshot.


GG18 matching results are live here! We propose five days for discussion and review, followed by a five-day Snapshot vote to ratify the results before processing final payouts. In GG18, we moved to a variant of QF that uses cluster-matching to introduce sybil and collusion resistance natively into the mechanism and to reward projects with more diverse and pluralistic communities.

Edit: Results have been updated.
Version 2 detailed
Version 2
Version 1

Round Results

This round saw crowdfunded contributions surpass the previous two rounds on Grants Stack & Allo, making it the biggest round ever on the new decentralized tech stack.

  • With $680k crowdfunded, we saw a 12% increase from the Beta round :tada:
  • With 328k contributions, we saw a 65% increase from the Alpha round :tada:

This round also saw the Grants Stack team make significant improvements to the product, including Multi Round Checkout, which makes it easy to donate across rounds and chains.

Every core round was also on an L2, including the first round on the new Public Goods Network. This round, fittingly, funded Ethereum Infrastructure. It ended up being the second-largest round of the season by crowdfunding despite having the smallest matching fund and fewest grantees of the four core rounds. This is a testament to the ease of bridging and the focused interests of our community.

Kudos to everyone who worked hard to make this round a success!

Round and Results Calculation Details

The complete list of matching results & payout amounts can be found here. Below, we’ll cover how these results were calculated and other decisions.

Post-round analysis had a $300k total financial impact: $150k was removed from projects that showed sybil or collusive activity, and that $150k was redistributed to other projects ($150k in reductions plus $150k in increases).

Core Rounds:

| Round | Matching Pool | Matching Cap |
| --- | --- | --- |
| Open Source Software | $300,000.00 | 5% |
| Ethereum Infrastructure | $200,000.00 or 106.9 ETH | 10% |
| Web3 Community & Education | $250,000.00 | 6% |
| Climate | $350,000.00 | 10% |

In the Climate round, grantees were given the option to opt in to an extra $100k of matching funding from Shell. Over 64% of projects chose to opt in to this funding.

Next Gen Quadratic Funding

In theory, quadratic funding combines democracy and markets to create an optimal mechanism for communities to fund what matters. Under this mechanism, a project with many different supporters contributing some amount will receive much more funding than another project that gets the same total contribution from a single “whale.” However, Quadratic Funding’s optimality relies on assumptions that don’t hold in the real world.
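For intuition, the simple QF score (square the sum of the square roots of individual contributions) can be sketched in a few lines of Python; the numbers are illustrative, not from the round:

```python
from math import sqrt

def qf_score(contributions):
    # standard QF: square the sum of the square roots of individual contributions
    return sum(sqrt(c) for c in contributions) ** 2

# the same $100 total, raised two different ways
crowd = qf_score([1] * 100)  # 100 donors giving $1 each -> (100 * 1)^2 = 10,000
whale = qf_score([100])      # one whale giving $100     -> (sqrt(100))^2 = 100
```

The broad-based project earns a score 100x larger, which is exactly the property sybils try to exploit by splitting one donation across many fake wallets.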

It assumes that each donor is entirely distinct from every other donor and perfectly rational when deciding which projects will create the most value for them. However, we have users who will produce hundreds or even thousands of fake wallets to support themselves. We also know of users who will conspire to vote a particular way based on others voting the same way. Further, we're not all completely distinct; many of us are members of the same social circles or communities.

Cluster-Match Quadratic Funding takes a step toward solving the sybil and collusion problems by embracing the meaning of our social connections. It was developed by Joel (who joined Gitcoin as part of the QED program), E. Glen Weyl (who co-authored the original QF paper with Vitalik), and our very own Erich (who has been working on pluralism since at least 2019 and most recently on Gitcoin Passport).

Cluster-Match QF takes the projects you vote for as signals of the communities you belong to. It then calculates matching amounts for each supporter and unique community combination. This method provides more significant matching funding to projects that receive support from more diverse communities.

The outcome is clear: sybils and colluders receive fewer matching funds, while grants that create value for the broadest range of communities receive the most. By implementing this method, we reduced the match of the most suspicious projects by up to 70% and redirected those funds to other projects.
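As a toy illustration of the clustering step (hypothetical numbers, not GG18 data): once donors with identical donation profiles are pooled, their combined contribution sits under a single square root:

```python
from math import sqrt

def cluster_qf_score(cluster_totals):
    # cluster-match QF: one square root per cluster of identical donation profiles,
    # rather than one per individual donor
    return sum(sqrt(t) for t in cluster_totals) ** 2

# four $1 donations to the same project, two ways:
independent = cluster_qf_score([1, 1, 1, 1])  # four distinct clusters -> (4 * 1)^2 = 16
colluding   = cluster_qf_score([4])           # one pooled cluster     -> (sqrt(4))^2 = 4
```

Independent supporters score four times higher than the same dollars from a single coordinated bloc, which is how the mechanism discounts sybils and colluders without excluding them outright.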

For more details about pluralistic QF methods, check out this paper and/or this podcast.

Sybil Detection

The primary anti-sybil mechanism for this round is Gitcoin Passport. Passport aggregates identity signals from across web2 and web3 to estimate the likelihood that an individual is a unique human. If an individual's score is below a set threshold, they're less likely to be a real human because they lack the identity signals a real human typically has. Individual donations have previously received matching multipliers of anywhere from 2x to 25x, which makes our system a target for bad actors and worth making hard to break into.

73.2% of wallets that donated reached a score of 20 or higher.
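In code terms, the eligibility check reduces to a simple threshold; the constant and function name here are illustrative, not Passport's actual API:

```python
PASSPORT_THRESHOLD = 20  # the score cutoff used for GG18 matching eligibility

def counts_for_matching(passport_score: float) -> bool:
    # donations from wallets below the threshold are excluded from match calculations
    return passport_score >= PASSPORT_THRESHOLD
```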

As with most QF rounds, we saw some sophisticated Sybil attack patterns that Passport did not stop (yet!). We can learn from these attack patterns and modify the stamp scoring mechanism so that Sybils can't get in the same way again.

We were able to detect these sybil attacks through programmatic analysis built specifically for anti-sybil work. We leveraged the new regendata database to build new Python-based tools for finding Sybils. These tools can continue to be used and iterated on so that they get better every round.

After our on-chain data analysis, donations from addresses associated with these behaviors were excluded from match calculations. This includes the following:

  • Enhanced analysis of passport stamps to flag evidence of abuse between different wallets
  • Flagging known sybil networks/addresses from our 40k address blacklist
  • Suspected bot activity based on anomalies detected in transaction patterns
  • Suspected bot activity based on anomalies detected in donation patterns


We ask our Community Stewards to ratify the GG18 payout amounts as being correct, fair, and abiding by community norms, including the implementation of Passport scoring as well as Sybil/Fraud judgments, squelching, and quadratic funding parameters made by the Public Goods Funding workstream.

If stewards and the community approve after this discussion, we suggest voting on Snapshot from Wednesday, September 26th to Sunday, October 1st. If the vote passes, the multisig signers can then approve a transaction to fund the round contracts, the results will be finalized on-chain, and payouts will be processed promptly afterward.

Options to vote on:

1. Ratify the round results

You ratify the results as reported by the Public Goods Funding workstream and request the keyholders of the community multisig to payout funds according to the GG18 final payout amounts.

2. Request further deliberation

You wait to ratify the results and request keyholders of the community multi-sig wallet to delay payment until further notice.


meme credit to @McKennedy


You ser are a gift to this community. Thank you!


Congrats on a successful round!

One idea, process-wise here for the future:

I always found it unintuitive that we have the community ratify results & approve payouts for something that it really makes more sense for core team members to validate. Those of us outside of the weeds just don’t have the context/process required to say “yes these results adhere to the intent laid out for the round” or “no they don’t”.

  • Maybe in the future this gets excluded from governance surface area? (i.e., the stewards agree that this is something dedicated DAO contributors can decide on themselves — and we get an FYI on what the end payout amounts were, rather than having to formally ratify them)

  • Better yet, perhaps there’s a low-fi “dual control” type process that a handful of stewards could run to say “yes this looks good”? (e.g., spot checks that there are no projects above matching cap thresholds, etc)

Just food for thought!


Fully agreed with the previous post. I see no way to reproduce any of this without being led behind the curtain by someone. In an ideal world, the report would be crafted in a way that gives confidence in the matching calculations while obscuring the details of sybil defense.

I understand the algorithm is new and the link between raw donations and matching results may be more muddled, but is something preventing you from sharing the algorithm-agnostic information that used to be present in past reports:

  • how many distinct non-sybil donations were made for each grant?
  • how much “valid, non-sybil” money was donated to each grant (before matching calculations)?

This is information only Gitcoin may know, so it is especially valuable when looking at results.

Since you did the counting, I assume you have an actual implementation of the novel algorithm on hand, not just the paper, so why not share it? That way we could at least see what kind of results the algorithm produces when run on example data and judge whether the matching results look plausible in light of that.

Past reports (Alpha/Beta rounds) gave stewards, donors, voters, and anyone else the opportunity to look at the results and intervene if they spotted something strange in the data presented. That lent extra credibility to the vote-counting process, which the current single-column spreadsheet does not provide, especially when coupled with a new method of counting votes.


Thanks for sharing your perspective from leading PGF, Annika! I really agree with this, especially given that this process slows down payouts by 10 days. In an ideal world, payouts would be nearly instant after the round ends, with all sybil defense either handled by Passport or automated.

I hear you. We really want to move this to a transparent mechanism without giving away the game to sybils.

We could and should share more high-level information that sheds more light on what we do for those community members interested in the details.

Let me get some sleep and then pull together + share the following:

  • Code/formula used for cluster match QF
  • Pre / Post Sybil Analysis Donor #s and Donation $'s
  • Pre/ Post Cluster-Match QF Matching $'s

If anyone from DAO core has objections to the above information being shared please let me know!


As social centrality comes into focus, cluster-match QF is going to break some online friendships for the greater good (jk…:sweat_smile:).

Kudos and gratitude to the entire team and the supporting community crew for this analysis and protecting the funds for public goods!


Thank you for all the work and for showing the extensive details about how they were calculated.

Love to see the dedication to transparency and fairness in the process. Kudos to everyone involved in making this round a success.


Thanks for all the work on this Umar <3

Will be voting yes on this + agree with annika’s comments.


Congratulations to the team and all the projects on GG18.

It’s a vote of yes for me!

However, I observed that the results don't show the number of contributions for each project or the amounts contributed; only the matching amount is shown.


I intend to bring a proposal to ratify the round structure and release the funds pre-round moving forward. This would speed up the payouts dramatically.

I do think it's good to share the process and details of Sybil defense. Hopefully we can do that in more detail as a WIP as the process unfolds, so folks can provide context and feedback sooner after the end of the round. @umarkhaneth has built some fantastic tools, along with others on the team and in the community, which should speed this process up and increase credible neutrality.


Good point, perhaps that should be changed. We will discuss how best to do this so people understand how cluster match QF and the rules of the round are applied and are represented in the results.


Hey, it’s Yazdani from Unitap. When the previous round finished, we were in 10th place, and now we’re in 17th place. Honestly, it’s really weird to us because we didn’t do anything suspicious; we only made one or two announcements.


Thanks @Yazdani - @owocki wouldn’t be able to answer right now but I’m sure the team on the ground could.

@M0nkeyFl0wer are there logistics that come into play after the round closes that the community needs to account for?

I suspect, @Yazdani, that the results you were seeing (10th) dropped to (17th) due to the manual work the Gitcoin crew did after round close. @Connor, I’m making things up here, can you speak to why?

I could be WAYYY off base here but want to continue the convo… cc @umarkhaneth @Sov

We would love to hear more on how that’s not an optimal user flow but will also let the team talk about processes that I’m (literally) assuming at this point. This is how we backlog improvements to Governance. Thanks!


Hey, happy to help. It sucks that this can be confusing. It sounds like we’re talking about a drop from the previous round to this round? Any number of factors affect the level of support one grantee gets compared to the rest, both within a given round and from one round to the next. Happy to take a look, but it doesn’t seem unusual to me that a grantee would move a fair amount from one round to the next, given all the changes in the other grantees in the round as well as in the market overall.

If we’re talking about “standings” at the end of this round compared to now, that first depends on what dashboard you were looking at. It likely wouldn’t account for how many of the supporters had Passport scores that qualified them to count toward matching funds, or whether they donated at least the minimum amount to be considered (generally at least a dollar). Finally, it wouldn’t account for any squelching of sybil attacks.

Hope that helps. Apologies for any confusion.


Hi, this is Priyank from Nawonmesh. We were in the Climate Round.

I am super perplexed about Nawonmesh’s matching amount calculations. It seems to be way off from our expectations.

Two questions -

  1. We had 300 votes as per our Gitcoin grants page, and 240 passport votes as per the ChainEye dashboard. Considering this, our matching amount seems too low compared to the others in the Climate round. We want to understand the reasons for this huge gap.

  2. I am very sure we opted into the Shell funding while filling out the grant application. I even sent an (unanswered) message to Jon post-GG18 to check whether the email ID we shared with the Gitcoin team was correct, because we were expecting the Shell KYC email. But I don’t see Nawonmesh in the Shell funding list. Was it really a silly miss on our end, or did something else go into deciding who was eligible for that funding?

I raised these concerns to @M0nkeyFl0wer and he told me to post them here.



Any follow-up on efforts to deliver updated columns?

  • Code/formula used for cluster match QF
  • Pre / Post Sybil Analysis Donor #s and Donation $'s
  • Pre/ Post Cluster-Match QF Matching $'s

Community dashboards and on-chain information are insufficient for correctly comparing donations against matching amounts, as they do not account for manual interventions by the Gitcoin team.

I believe that explaining what happened to @priyank and @Yazdani would be clearer, more persuasive, and more quantitative if they were given access to the dataset containing the grant-level information promised by @umarkhaneth.

From a voter/grantee perspective, it is impossible to analyse what exactly influenced the final matching without knowing the starting point. “How much money was donated to this particular grant (that actually counts for purposes of matching)?” feels like a question deserving a clear, unambiguous, and quantitative answer.


Hey, thanks for your answer. My question was about our place at the end of this round compared to now, so now it’s clear. As a product manager/designer, I would say that Gitcoin should make the rules much clearer for users. I know they can find the rules, but they aren’t highlighted in the user flow.

And it’s simple to highlight them, because there are only two rules:

  1. Minimum amount
  2. Minimum GP score

To this point, I wonder if these results posts could continue the precedent of sharing the high-level “rules” that sybil defense is based on (i.e., the logic and ideally code snippets used to surface related accounts, the minimum donation definition, etc.). Then, if a particular project has additional questions about the rules/logic their community triggered, this can be addressed in greater detail for their instance.

Since sybil action shouldn’t be conflated with self-attack, I think it would be helpful for groups like @Yazdani and @priyank who might want more info on the squelching, and who can then verify for themselves which donors were hitting these rules.

Overall, though, I support ratifying these results. They are in line with expectations and past rounds’ sybil rates.


Just wanted to express my best wishes to the team behind GG18, what a great round this was! Thank you to @M0nkeyFl0wer @umarkhaneth @jon-spark-eco @MathildaDV and the rest for making sure to support the community whenever and however possible throughout those two weeks :slight_smile:


Delivering here on my promise to share more details.

How to Calculate Cluster Match QF:

  • First, a quick review of simple QF:

    • Sum the square roots of each individual’s contribution to a project
    • Square that sum to get a per-project value
    • Distribute the matching fund proportional to the relative size of each project’s square (and enforce a matching cap so that no project takes too much of the pool by itself)
  • Next, cluster-match QF. Cluster-match QF orients matching funds around communities rather than individuals. The overall process is largely the same, except that before taking square roots of contributions, we cluster them together.

    • Cluster based on the donation profile of a donor. A donation profile is defined as the set of decisions you made on each project: donate or don’t donate. Donors who made all the same decisions are clustered together.
    • The contributions to a project from the same cluster are added together as if they came from a single voting bloc. Then the square root is taken.
    • After that, the process is the same: sum the square roots of all clusters grouped by project, square the sums, and pay out the matching fund proportionally.

In Code:

Thank you to @Joel_m for writing this Python function:

from math import sqrt

def donation_profile_clustermatch(donation_df):
  # run cluster match, using donation profiles as the clusters
  # i.e., everyone who donated to the same set of projects gets put under the same square root.

  # donation_df is expected to be a pandas DataFrame where rows are unique donors, columns are projects,
  # and entry i,j denotes user i's total donation to project j

  # we'll store donation profiles as binary strings.
  # i.e. say there are four projects total. if an agent donated to project 0, project 1, and project 3, they will be put in cluster "1101".
  # here the indices 0,1,2,3 refer to the ordering in the input list of projects.

  projects = donation_df.columns

  clusters = {} # a dictionary that will map clusters to the total donation amounts coming from those clusters.

  # build up the cluster donation amounts
  for (wallet, donations) in donation_df.iterrows():

    # figure out what cluster the current user is in
    c = ''.join('1' if donations[p] > 0 else '0' for p in projects)

    # now update that cluster's donation amounts (or initialize new donation amounts if this is the first donor from that cluster)
    if c in clusters:
      for p in projects:
        clusters[c][p] += donations[p]
    else:
      clusters[c] = {p: donations[p] for p in projects}

  # now do QF on the clustered donations.
  funding = {p: sum(sqrt(clusters[c][p]) for c in clusters) ** 2 for p in projects}

  return funding

More Numbers

Here are the calculation details including both matching formulas and pre/post squelching voter numbers and donation amounts.

The ‘base’ totals are the numbers after applying our basic rules: have a passport score over 20 and donate at least $1.
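A minimal sketch of that base filter, assuming a donation-level table with hypothetical column names (the data here is made up for illustration):

```python
import pandas as pd

# hypothetical donation-level data; column names and values are illustrative
donations = pd.DataFrame({
    "project":        ["A", "A", "B", "B"],
    "amount_usd":     [5.0, 0.5, 3.0, 8.0],
    "passport_score": [22.0, 30.0, 15.0, 20.0],
})

# base rules: Passport score of at least 20 and a donation of at least $1
base = donations[(donations["passport_score"] >= 20) & (donations["amount_usd"] >= 1.0)]

# per-project crowdfunding totals before and after the filter
pre_totals  = donations.groupby("project")["amount_usd"].sum()
base_totals = base.groupby("project")["amount_usd"].sum()
```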

The ‘eligible’ totals are the numbers after applying our sybil squelching based on the rules stated above:

I’ll note that while pulling this data together, I found a bug in how my data was being aggregated. I fixed it, and that affected the results. To me, this underscored the importance of transparency. We need to move rapidly toward turning off post-round squelching and relying only on Passport plus better QF.