[Discussion & Proposal] Ratify the Results of Gitcoin’s Beta Round and Formally Request the Community Multisig Holders to Payout Matching Allocations

EDIT #3 - The snapshot vote to ratify these results is live - thank you to everyone who helped us get to this point, and for your patience during the process. Please vote here: Snapshot


EDIT #2 - We have recently posted the second iteration of revised results (and hopefully the last) :slight_smile: You can see these final results here and more information on what additional Sybil detection was done in the comment below. Thank you to all in the community who helped to review and flag discrepancies in the results, helping us prevent more Sybils. We plan to move forward with the ratification of these results via a Snapshot vote in the coming days.


EDIT - We have posted revised results with more robust Sybil detection here - you can see the full write up towards the bottom of this thread (direct link)


TLDR: Beta Round final results are live here! We propose ~5 days for discussion and review followed by a 5-day snapshot to ratify, before processing final payouts.

Background

First of all, I’d like to thank @ale.k @koday @umarkhaneth @M0nkeyFl0wer @jon-spark-eco @nategosselin @gravityblast (and many more from the PGF, Allo, and other workstreams) for all the work that went into running this round and helping to get these final QF calculations.

The Beta Round was the second Gitcoin Grants Program QF round on Grants Stack. It ran from April 25th to May 9th and, unlike the Alpha Round, was open for anyone to participate (as long as projects met the eligibility criteria* of one of the Core Rounds). $1.25 million in matching was available across 5 distinct rounds (and, correspondingly, 5 smart contracts). There were also 10 externally operated Featured Rounds; however, this post only pertains to the Core Rounds operated by Gitcoin. These rounds and matching pools are broken down below:

  1. Web3 Open Source Software [Explorer | Contract]
  • Matching pool: 350,000 DAI
  • Matching cap: 4%
  2. Climate Solutions [Explorer | Contract]
  • Matching pool: 350,000 DAI
  • Matching cap: 10%
  3. Web3 Community & Education [Explorer | Contract]
  • Matching pool: 200,000 DAI
  • Matching cap: 6%
  4. Ethereum Infrastructure [Explorer | Contract]
  • Matching pool: 200,000 DAI
  • Matching cap: 10%
  5. ZK Tech [Explorer | Contract]
  • Matching pool: 150,000 DAI
  • Matching cap: 10%

*To view and/or weigh in on the discussion of platform and core-round specific eligibility criteria, see the following posts in the Gov Forum:

Results & Ratification

The full list of final results & payout amounts can be found here. Below we’ll cover how these results were calculated and other decisions that were made.

We ask our Community Stewards to ratify the Beta Round payout amounts as being correct, fair, and abiding by community norms, including the implementation of Passport scoring as well as Sybil/fraud judgments and squelching made by the Public Goods Funding workstream.

If stewards and the community approve after this discussion, we suggest a Snapshot vote running from Friday, June 2nd, to Wednesday, June 7th. If the vote passes, the multisig signers can then approve a transaction to fund the round contracts, the results will be finalized on-chain, and payouts will be processed shortly after.

Options to vote on:

1. Ratify the round results

You ratify the results as reported by the Public Goods Funding workstream and request the keyholders of the community multisig to pay out funds according to the Beta Round final payout amounts.

2. Request further deliberation

You do not ratify the results and request keyholders of the community multi-sig wallet to delay payment until further notice.

Round and Results Calculation Details

*Note: The final results you see here show data that has been calculated after imposing various eligibility controls and donor squelching (described below). The numbers may not match exactly what you see on-chain or on the platform’s front end. For example, donations from users without a sufficient Passport score or donations under the minimum will not be counted in the aggregate “Total Received USD” or “Contributions” columns for your grant.

To summarize:

  • The Gitcoin Program Beta Round was conducted on Grants Stack from April 25th to May 9th, 2023.
  • It consisted of 5 Core Rounds with their own matching pools: Web3 Open Source Software, Climate Solutions, Web3 Community & Education, Ethereum Infrastructure, and ZK Tech.
  • There were 470 grants split between the rounds and approved based on round-specific and general platform eligibility requirements.
  • A total of ~$600k was donated across the 5 core rounds and 10 featured rounds (see this Dashboard for detailed stats).

Key Metrics by Round


Matching Calculation Approach

The results shared above differ from the raw output of the QF algorithm due to a few variables:

  • Donors must have a sufficient passport score (threshold for all rounds = 15)
  • $1 donation minimum
  • Removal of Sybil attackers and bots
  • Round-specific percentage matching caps imposed

Passport scores
The Gitcoin Passport minimum score threshold was set to 15 for all Beta rounds, notably lower than in the previous Alpha Round. As a result, we saw a higher Passport pass rate (shared in the data above). A donor’s highest score at any point during the 2-week round was used for all of their donations, even if they donated first and raised their score later. Only donations from users with a Passport and a sufficient score were counted in the matching calculations.

Donation minimum
Past cGrants rounds used a $1 donation minimum, while the Alpha Round cut off a variable bottom percentile of donations after manual analysis. Now that the Allo Protocol has a minimum donation feature built into Round Manager, we decided to go back to using a predetermined and public minimum value of $1.

An interesting result of using a USD-denominated minimum was that a 1 DAI donation was not always sufficient, depending on the DAI conversion rate at the time of the transaction. There were thousands of 1 DAI donations, and while they typically ranged from $0.998 to $1.002, over half fell below the $1 USD threshold and were not initially counted towards matching. We decided that a user should reasonably expect a 1 DAI donation to count, and that their vote should not hinge on arbitrarily small price fluctuations, so we used $0.98 as the minimum for the final calculations.
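To make these two controls concrete, here is a minimal sketch of how the Passport and minimum-donation filters described above might be applied before the QF calculation. Field names and data shapes are hypothetical illustrations, not the production pipeline:

```python
# Minimal sketch of the Passport and donation-minimum filters described above.
# Field names and data shapes are hypothetical, not the actual pipeline.
PASSPORT_THRESHOLD = 15    # minimum Passport score, all Beta rounds
MIN_DONATION_USD = 0.98    # relaxed from $1.00 so 1 DAI donations count

def eligible_donations(donations, passport_scores):
    """Keep only donations that count toward matching.

    donations: list of dicts like {"donor": "0xabc...", "amount_usd": 1.001}
    passport_scores: dict mapping a donor address to the highest score
                     observed at any point during the round
    """
    return [
        d for d in donations
        if passport_scores.get(d["donor"], 0) >= PASSPORT_THRESHOLD
        and d["amount_usd"] >= MIN_DONATION_USD
    ]
```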

Sybil attacks and suspected bots
As with most QF Rounds, we saw some sophisticated Sybil attack patterns which were not stopped by Passport (yet!). After on-chain data analysis and a manual sampling process, donations from addresses that were associated with these types of behaviors were excluded for the purposes of matching calculations. This includes things like:

  • Suspected bot activity based on specific transaction patterns and similarities
  • Flagging known Sybil networks/addresses from prior rounds
  • Enhanced analysis of Passport stamps and other data to flag evidence of abuse between different wallets
  • Self-donations from grantee wallets
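Most of these checks require bespoke analysis, but the last one is mechanical. A minimal sketch of what a self-donation filter could look like (hypothetical field names, not the actual tooling):

```python
# Illustrative self-donation check: flag donations whose donor address is one
# of the grant's own payout addresses. Field names are hypothetical.
def flag_self_donations(donations, grantee_payout_addresses):
    """Return donations made from a grantee's own wallet, for squelching."""
    payout_set = {addr.lower() for addr in grantee_payout_addresses}
    return [d for d in donations if d["donor"].lower() in payout_set]
```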

Matching caps
Matching caps were introduced many rounds ago on cGrants: each grant can capture at most a fixed percentage of the total matching pool. Once a grant hits its cap, it earns no more, and any excess is redistributed proportionally among the other grants (a sketch of this redistribution follows below). Round caps were:

  • 4% for Web3 OSS
  • 10% for Climate
  • 6% for Web3 Community & Education
  • 10% for Eth Infra
  • 10% for ZK Tech

These amounts were selected based on the number of grants expected in each round and the size of each matching pool, using prior rounds as a guide. We are always looking for community feedback on matching caps for future rounds!
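For anyone curious how the redistribution works mechanically, below is a sketch of the general cap-and-redistribute idea (an illustration under the rules stated above, not the production code): any grant over the cap is frozen at the cap, and the surplus is re-split proportionally among the remaining grants, repeating until no grant exceeds its cap.

```python
def apply_matching_cap(raw_weights, pool, cap_fraction):
    """Distribute `pool` proportionally to `raw_weights`, capping each grant
    at cap_fraction * pool and redistributing any excess proportionally
    among the uncapped grants. A sketch, not the production algorithm."""
    cap = cap_fraction * pool
    final = {}
    remaining = dict(raw_weights)   # grant_id -> uncapped QF weight
    remaining_pool = pool
    while remaining:
        total = sum(remaining.values())
        if total == 0:
            break
        scaled = {g: remaining_pool * w / total for g, w in remaining.items()}
        over = [g for g, amt in scaled.items() if amt > cap]
        if not over:
            final.update(scaled)    # no grant exceeds the cap; done
            break
        for g in over:
            final[g] = cap          # freeze capped grants at the cap
            remaining_pool -= cap   # re-split the rest on the next pass
            del remaining[g]
    # If every grant hits the cap, leftover pool handling is out of scope here.
    return final
```

For example, a 10% cap on the 150,000 DAI ZK Tech pool means no single grant can receive more than 15,000 DAI in matching, with anything above that flowing back to the other grants.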

Please note: in an effort to ship a transparent minimum viable product of the QF formula, simple quadratic funding has been deployed here. In the future, we hope to offer pairwise matching and other customization options that will be set and published on-chain at the time of round creation.
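For reference, the textbook quadratic funding weight for a grant $j$ receiving donations $c_{ij}$ from donors $i$ (before pool scaling and the caps above) is:

```latex
M_j \propto \Big( \sum_i \sqrt{c_{ij}} \Big)^{2} - \sum_i c_{ij}
```

with the weights then normalized so that total matching equals the pool. This is the standard formulation; the exact implementation used for the round may differ in details.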

Future Analysis & Takeaways

DAO contributors will soon share detailed analyses, statistics, and takeaways from the Beta Round in addition to these results. You will find these posts in the governance forum in the coming weeks. For takeaways from the Alpha Round, see here for data, operations, and Passport retros.

Again, we thank everyone for your patience and for participating in the Beta Round. We’re excited for the next iteration of the Gitcoin Program and to begin supporting many other organizations as they run their own rounds.

For questions, comments, or concerns on any of the above, please comment below or join the Gitcoin Discord. Thanks for reading!

17 Likes

Hello, DuckDegen from JobStash here.
The first donation we received doesn’t seem to be included in the totals.

Here is the tx hash:
0xb6909082e607aadde4dbd46b80f4b77732c88e6ede47b609777e99a89e928511

I’ve asked the donor if they had a sufficiently high score at the time, and they’ve confirmed they indeed did.
Could anyone please clarify?
Thanks <3

2 Likes

Congratulations, fellow grantees! It was our first time taking part in a Gitcoin round, and I must say it’s been a bit of a bittersweet affair. We managed to raise ~$213 from 9 donations, only for the final tally to show that we raised $193 through 5 donations, which unlocked $186 in matching funds.

We are grateful for the opportunity to take part in the beta round, but I believe there is much room for improvement. Here are the key pain points:

  • The top 3 managed to raise a combined total of $5,000 and are walking away with $105,000, leaving the remaining 102 grantees, who raised a combined total of $30,000, to split the $245,000 in matching funds.
  • Supporting one project with $35,000 and another with $4 seems a bit off to me. I think funding a climate solutions project/founder with a $4 grant is an absolute joke.
  • We should have a minimum support amount from the matching funds for Gitcoin to truly be impactful. This will help eliminate the current image of the “big brother finishing the food for the younger siblings.”
    Nicholas From Sungura Mjanja Refi
4 Likes

Hi Connor, thanks for all the work you and the team have put into this! The round has been incredible, not just for allowing us to be paid by an algorithm (so much more empowering) but also for the boost in Twitter growth for many projects.

I did want to flag one concern that was raised by some of the other projects too. For all the talk around $1 donations being what matters, it appears that the absolute amount given matters much more. Consider some of these stats from the Web3 Community & Education Round:

404DAO: 9 contributors giving $2340 = $7130 matching
ZachXBT: 41 contributors giving $500 = $5909 matching

Why is ZachXBT getting a smaller grant despite having nearly 5x the votes? Similar story below:

CryptoCurious: 7 contributors giving $2550 = $3078 matching
GreenPill Podcast: 42 contributors giving $236 = $3103 matching (roughly half of ZachXBT’s matching, despite one more vote and about half the amount in community contributions)

I don’t mean to single out any particular project; it’s a pattern I’ve noticed across projects & rounds. This really sucks because it encourages wash trading: cycle large amounts of the funds you’ve earned from other sources through Gitcoin and come out with a higher amount.

I did see the Gitcoin proposal by @Joel_m on rethinking the QF formula, is it taking these factors into account? How do we as a community plan to stop wash trading if absolute amount given per vote to each grant matters as much as it did in this round?

8 Likes

Thanks Connor!

Lots of work and a great step forward for the protocols.

I’m concerned because the Sybil squelching and grant review results seem to differ from the results of the OpenData Community hackathon - although it is hard to tell, because while the hackathon participants’ calculations are transparent, so far at least that’s not the case for whatever Sybil squelching was done.

In addition to publishing the code, algorithms, and analysis, we are also conducting several deeper dives in the next several days, in which hackathon contestants volunteer their time to explain exactly how they arrived at their calculations.

It would be best if Gitcoin was similarly transparent.

Just as an example, one of the submissions to the OpenData Community hackathon pointed out the presence in our Gitcoin Beta Round of many Sybils identified from the Hop airdrop. Were the results impacted by these apparent attackers?

Based on the little information I have I cannot vote in favor of approval. While I’m sure everyone that worked on the round did so in good faith, the lack of transparency is concerning.

I’m not sure of the best path forward, to be honest. Perhaps as a starting point the analysis that was done could be shared?

5 Likes

Agreed. Isn’t the point of QF to distribute funding to the edges? It seems those without much power, sway, or existing funds haven’t really had much of a chance in this round or any prior round I’ve been involved with. This seems contrary to the proposed intent of QF while undermining the credibility and integrity of not just Gitcoin but QF more broadly.

Considering the prohibitively high gas fees this round, I expect it’s even worse than usual.

3 Likes

You, ser, have a point - thank you so much for surfacing this.

It appears in our own docs (done offline) that the vote you reference should be counted; no Sybil-defense rules were triggered, and you’re right - a Passport score of 18.46…

We will continue to dig into this with the product team’s help tomorrow to find out what’s happened here between our submission of vote coefficients (0 or 1 based on suspected sybil) and the match calculations. This is why we post, I guess… really appreciate your speedy call-out.

3 Likes

@nick_smr and @thedevanshmehta - Really appreciate this line of questioning, and I’m inclined to agree. To me, a “good” allocation method means more money to more people: as a function of matching_funds over number_of_grants_funded, the closer that ratio gets to 1, the more successful - at least when we know all our grantees are high quality and making a good effort to build in the space… Although I think there are other cases where “good” allocation may mean the most funds possible going to projects with the most community sentiment.

In any event, evening out the distribution doesn’t seem to me to be inherent in a QF method, though… and I think this raises the question of whether Gitcoin Grants will continue to use pure Quadratic Funding or adopt new mechanisms for its core grants program.

Re: @Joel_m’s work - in particular, it will look to establish mathematically sound ways of identifying collusion between community members. While I think this could show up in wash-trading cohorts (and we have provided some examples of these - especially certain DeFi day-traders using rewards programs optimized for daily volume), it is a bit broader - but I remain extremely excited for this work, too!

@Joel_m if you get a chance to weigh in for yourself, would love to hear thoughts on “success” of allocation in our case…

3 Likes

Heya Evan -

Couple things to respond to here.

(1) Surprised to hear the accusations of non-transparency, as it seems so far Gitcoin is the only DAO that has taken time from its contributors and dedicated significant resources to giving the ODC datasets to play with :slight_smile: I’d also push back on the idea of “volunteerism” when, again, Gitcoin has given significant funds to incentivize the hackathon experiment… As I told you in person: I’d highly recommend, in the best interest of the ODC, that use cases outside of Gitcoin be explored to add some legitimacy to this work and credibility for its quality.

(2) As I do for all other auditors and friends of the DAO, I’m happy to provide cleaned datasets if there’s something you’re missing that you want. These forum posts have always served as a high-level reporting opportunity, not a deep dive. As @connor mentioned here, we are moving towards an automatic rule system by which any rule triggered by a voter could be queried or otherwise looked up. (Same for grantee applications - moving towards “Kerckhoffs compliant” a la @disruptionjoe’s thought leadership.) In the meantime, again, happy to provide these scripts where there’s interest; the timestamp checks and Passport checks mentioned are also all derived from public, on-chain data that can be pretty easily replicated with good fidelity.

Again - please feel free to follow up with me with questions about our methodology, but that seems a bit beyond scope here: we’ve always done significant Sybil silencing, and a couple hundred MBs of voter data doesn’t fit easily into a governance post… We aspire to regularly publish internal and external data for the whole community to learn from.

2 Likes

In the final report there is one grant, “WGMI community DAO” [Web3 Community Round], with exactly 0 eligible contributions that nonetheless reports $8.81 of total USD received in donations.

This contradicts the explanation post, which made me assume that the “Total USD received” column refers to the final subset of donations eligible for matching, that is:

The final results you see show data that has been calculated after imposing various eligibility controls and donor squelching. For example, donations from users without a sufficient Passport score or donations under the minimum will not be counted in the aggregate “Total Received USD” or “Contributions” column for your grant.

Can someone explain what happened here?

4 Likes

Hey, I know the “burn” felt during/after the Gitcoin rounds, so I just want to clarify one aspect. Evan said that the hackathon participants are volunteering their time, because the prizes were awarded to them for the work they did during the hackathon. They are not obligated to dedicate more time to help or explain their analyses, as they have already received their prizes.

2 Likes

Kudos for all the work, everyone, and congrats on finalizing yet another amazing Gitcoin Round :robot:

My only worry (kinda related to @thedevanshmehta’s) is that projects with relatively few contributors got huge matching amounts (some even got the max amount). That’s concerning because it seems that in rounds with few donors (like the Web3 Community & Education round) it can encourage people (could be friends of the project, could be Sybils, could be actual members of their community - we cannot know for certain) to donate larger amounts to get a larger matching.

so… I either don’t understand how the calculations are being done, or this will prove to be just another lesson to learn for the next round, I guess :saluting_face:

5 Likes

Thank you for posting this; as a first-time grantee, this information is enlightening.

Great work.

2 Likes

Thanks for your response. I guess this could be cleared up if we knew how the squelching was done.

Maybe I missed that?

I see the following text:

After on-chain data analysis and a manual sampling process, donations from addresses that were associated with these types of behaviors were excluded for the purposes of matching calculations. This includes things like:

  • Suspected bot activity based on specific transaction patterns and similarities
  • Flagging known Sybil networks/addresses from prior rounds
  • Enhanced analysis of Passport stamps and other data to flag evidence of abuse between different wallets
  • Self-donations from grantee wallets

Any details on the Legos used to identify the transaction patterns and similarities, the addresses squelched (we could put this in a private location if that was preferred - the OpenData Community keeps certain suspect Sybil addresses access controlled for example), and other explanations of the “enhanced analysis” and so on would be useful.

Soap box - and the nuance may be lost here - I’m 100% confident that great analysis was done. I’m also pretty sure that non-transparent analysis puts at risk the credibility we are all seeking to build or maybe rebuild in the space. By sharing more of how the analysis was done we can all gain confidence while learning more about how to protect rounds in the future.

4 Likes

Thank you for appreciating our concerns! From my understanding, the spirit of QF is letting the wisdom of the crowds decide which projects should be funded and in what amount. The results from the current round contravene that spirit, as pointed out by @ZER8, @flowscience & many others on Twitter who have privately messaged me. The folks in the climate round are particularly aggrieved at mini meadows nearly hitting the matching cap despite having only 13 (!) contributors.

For me to vote in favor of ratifying these results, I would need to see some discussion around an alternative formula (even one as simple as all votes being equal) and a spreadsheet showing how the funds in the Beta Round would have been allocated had we used it. Only with this comparison can we intelligently decide the path ahead and whether to stick with pure QF or some other version thereof.

4 Likes

Hey @duckdegen thank you for pointing this out - it sounds like the product team discovered a bug with the calculations impacting a handful of transactions, likely including this one, that has now been fixed. We’ll be sharing updated results soon and this should be resolved. Thank you again for bringing this to our attention!

@DistributedDoge thank you as well for flagging this - I believe this should also be fixed with the new patch mentioned above (and as you noted is a contradiction and not intended). Appreciate everyone who is digging into the data and helping us spot discrepancies like this!

2 Likes

Hey @nick_smr thank you for sharing these ideas. Looking at your grant, it appears 1 donor did not have a Passport, one donation was ~$0.90, and 2 others were flagged in the Sybil squelching.

I think this point is something that could be directly addressed by matching caps. Before each round, we do try to gather community feedback on eligibility criteria and other parameters like matching caps. If most of the Climate grantees felt the cap should be lower, it is certainly something that could be done (CC @M0nkeyFl0wer)

I do think part of the reason the Climate round was very skewed was that the total contributions and donation amounts were generally lower than we’ve seen in prior rounds. So with fewer “votes”, each vote (and its weight) matters more.

I do get your point about minimum support and low matching amounts. It is primarily the nature of the QF mechanism and letting the community “decide” where funding should be allocated. We do plan to add a variety of new and different funding mechanisms to the Allo protocol, which would likely provide allocations more in line with what you are thinking.

2 Likes

In response to this post and similar concerns from @thedevanshmehta, @flowscience, and others -

I agree that at a glance many of the results do seem “off” when comparing contributions and total amounts against the associated matching (versus what might be expected). I think this can be broken into two distinct conversations:

  1. Why do these results seem different than usual?
  2. Is QF the optimal funding allocation mechanism?

Regarding 1 - my personal view here is that due to smaller numbers of donors and total amounts donated, specific large donations can have a larger impact than one might expect. This is still the same QF algorithm used in many past rounds, so the math is not different.

In this specific case, it is an interesting outcome, which I think is due to 9 and 41 being fairly small “sample sizes” to calculate from. If this were 10xed (i.e. 90 and 410 contributors), I believe the classic QF effect of “many small donations outweighing fewer large donations” would be much more pronounced.
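As a rough illustration of that sensitivity, assume (purely for simplicity; this was not actually the case) that every donor to each project gave an equal amount. The raw quadratic sums for the two grants cited above then come out nearly identical despite the roughly 4.5x difference in donor counts:

```python
import math

def raw_qf_weight(donations):
    # Raw (sum of square roots)^2 term; ignores the subtracted
    # direct-donation term and any caps or pool normalization.
    return sum(math.sqrt(d) for d in donations) ** 2

# Hypothetical equal-split assumption for the two grants discussed above.
donations_404dao = [2340 / 9] * 9      # 9 donors, $2,340 total
donations_zachxbt = [500 / 41] * 41    # 41 donors, $500 total

print(round(raw_qf_weight(donations_404dao)))   # 21060
print(round(raw_qf_weight(donations_zachxbt)))  # 20500
```

Under an equal-split assumption this term reduces to (number of donors) × (total donated), so 9 × $2,340 lands almost exactly on 41 × $500 - which is one way to see why a few large donations can offset many small ones when donor counts are this low.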

Regarding 2 - this is a larger discussion happening in many places, so I won’t get into it much here, but I do think we should be experimenting with various tweaks to “classic” QF, as well as entirely new funding mechanisms. As Allo gets built out, there should be many more options, and we will hopefully be doing more retroactive data studies on how results may change.

Finally, re: wash trading - this is something we are certainly aware of and are looking for in our Sybil and fraud analysis. We need better tooling and more automation, but that work is in progress, and any wash trading discovered will 100% be dealt with.

6 Likes

Hi @connor, thank you for taking the time to respond to my queries. It’s our first time participating in a Gitcoin round, so I’m not sure if this ever happens, but is it usually possible to know about, and seek a review of, the votes that were considered a Sybil squelch? Whatever the outcome, I believe this will put us in a better position as we prepare for future rounds.

3 Likes

Hey @epowell101 I really appreciate the constructive criticism and productive line of questioning, both in your initial post and your follow-up comment. I know you have the DAO and the community’s best interests at heart. I would love to collaborate more on this round and going forward.

So I actually had no idea that the ODC was looking at data and calculating results for these Beta rounds - this is news to me. And great news at that! We absolutely could use more eyes on the results here, and I am very open to collaborating. I’d love to learn more about what has been built in the hackathon and what differences you are seeing. I’ll reach out privately to chat about next steps.

This is great to hear - as I mentioned in one of my posts above, we have discovered a bug in the product impacting certain transactions and the resulting match calculations. We do plan to post an updated version of the round results in the coming days, taking this fix into account along with any other issues or beneficial tactics that arise. Let’s combine forces - we have a chance to improve our Sybil defense and reach a “fairer” outcome before this goes through ratification and payouts.

This is an area I am somewhat torn on. We can definitely do a better job of openly sharing data, details of our methodologies, etc., but I am also not sure 100% transparency is the ideal solution right now. If we published data on every single donation and whether it was counted, we’d create an environment where users may feel specifically targeted and where we’d have to justify every decision that was made (often by automated tools). Ideally, we could hear out objections and investigate each one, but I’m not sure that can work at scale (over 100k txs in the Beta), and we also don’t want to make it so only those with the loudest objections get rewarded.

But my bigger concern is if we publish every algorithm and tactic used for Sybil and fraud prevention, it makes it much easier for bad actors to use that knowledge to game the system in the future. There’s a reasonable argument to be made that “if your Sybil defense is strong enough it doesn’t matter if the methods are fully transparent, it still cannot be cheated” but frankly I don’t think Gitcoin (or Web3 identity systems as a whole) are anywhere near that point yet.

IMO there is a sweet spot between being completely transparent and being a black box, where we maximize community knowledge sharing while minimizing bad-actor empowerment; we just need to find where that is. Perhaps we’re leaning too far in the “black box” direction right now, but I’d love to get your opinion and work together on a path forward.


I just want to reiterate, since it may get buried in my string of posts: we do expect to re-run matching calculations based on new findings in the coming days and will share updated results here. If/when this happens, the proposed timeline will be moved back, so there will be 5 additional days after updated results are posted before anything moves to a Snapshot vote.

9 Likes