[Discussion & Proposal] Ratify the Results of Gitcoin’s Beta Round and Formally Request the Community Multisig Holders to Pay Out Matching Allocations

TLDR: Beta Round final results are live here! We propose ~5 days for discussion and review, followed by a 5-day Snapshot vote to ratify the results before processing final payouts.

Background

First of all, I’d like to thank @ale.k @koday @umarkhaneth @M0nkeyFl0wer @jon-spark-eco @nategosselin @gravityblast (and many more from the PGF, Allo, and other workstreams) for all the work that went into running this round and helping to get these final QF calculations.

This Beta was the second Gitcoin Grants Program QF round run on Grants Stack. It ran from April 25th to May 9th and, unlike the Alpha Round, was open for any project to participate (as long as it met the eligibility criteria* of one of the Core Rounds). $1.25 million in matching was available across 5 distinct rounds (and correspondingly 5 smart contracts). There were also 10 externally operated Featured Rounds; however, this post only pertains to the Core Rounds operated by Gitcoin. These rounds and matching pools are broken down below:

  1. Web3 Open Source Software [Explorer | Contract]
  • Matching pool: 350,000 DAI
  • Matching cap: 4%
  2. Climate Solutions [Explorer | Contract]
  • Matching pool: 350,000 DAI
  • Matching cap: 10%
  3. Web3 Community & Education [Explorer | Contract]
  • Matching pool: 200,000 DAI
  • Matching cap: 6%
  4. Ethereum Infrastructure [Explorer | Contract]
  • Matching pool: 200,000 DAI
  • Matching cap: 10%
  5. ZK Tech [Explorer | Contract]
  • Matching pool: 150,000 DAI
  • Matching cap: 10%

*To view and/or weigh in on the discussion of platform and core-round specific eligibility criteria, see the following posts in the Gov Forum:

Results & Ratification

The full list of final results & payout amounts can be found here. Below we’ll cover how these results were calculated and other decisions that were made.

We ask our Community Stewards to ratify the Beta Round payout amounts as being correct, fair, and abiding by community norms, including the implementation of Passport scoring as well as Sybil/fraud judgments and squelching made by the Public Goods Funding workstream.

If stewards and the community approve after this discussion, we suggest a Snapshot vote running from Friday, June 2nd, to Wednesday, June 7th. If the vote passes, the multisig signers can then approve a transaction to fund the round contracts, the results will be finalized on-chain, and payouts will be processed shortly after.

Options to vote on:

1. Ratify the round results

You ratify the results as reported by the Public Goods Funding workstream and request the keyholders of the community multisig to pay out funds according to the Beta Round final payout amounts.

2. Request further deliberation

You do not ratify the results and request the keyholders of the community multisig wallet to delay payment until further notice.

Round and Results Calculation Details

**Note:** The final results you see here show data that has been calculated after imposing various eligibility controls and donor squelching (described below). The numbers may not match exactly what you see on-chain or on the platform’s front end. For example, donations from users without a sufficient Passport score or donations under the minimum will not be counted in the aggregate “Total Received USD” or “Contributions” column for your grant.

To summarize:

  • The Gitcoin Program Beta Round was conducted on Grants Stack from April 25th to May 9th, 2023.
  • It consisted of 5 Core Rounds with their own matching pools: Web3 Open Source Software, Climate Solutions, Web3 Community & Education, Ethereum Infrastructure, and ZK Tech.
  • There were 470 grants split across the rounds, approved based on round-specific and general platform eligibility requirements.
  • A total of ~$600k was donated across the 5 core rounds and 10 featured rounds (see this Dashboard for detailed stats).

Key Metrics by Round


Matching Calculation Approach

The results shared above differ from the raw output of the QF algorithm due to a few adjustments, each described in more detail below:

  • Donors must have a sufficient passport score (threshold for all rounds = 15)
  • $1 donation minimum
  • Removal of Sybil attackers and bots
  • Round-specific percentage matching caps imposed

Passport scores
The Gitcoin Passport minimum score threshold was set to 15 for all Beta rounds, notably lower than in the previous Alpha Round. As a result, we saw a higher Passport pass rate (shared in the data above). A donor’s highest score at any point during the two-week round was used for all of their donations, even if they donated before later raising their score. Only donations from users with a Passport and a sufficient score were counted in the matching calculations.
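
To make this concrete, here is a minimal sketch of the max-score-during-the-round check, assuming Passport score snapshots are available as (address, timestamp, score) records. The function and field names are illustrative, not the production pipeline:

```python
from collections import defaultdict

PASSPORT_THRESHOLD = 15  # Beta Round minimum score, same for all 5 Core Rounds

def eligible_donors(score_events, round_start, round_end, threshold=PASSPORT_THRESHOLD):
    """Addresses whose *highest* Passport score observed during the round
    window meets the threshold (a later score increase still counts)."""
    best = defaultdict(float)
    for address, timestamp, score in score_events:
        if round_start <= timestamp <= round_end:
            best[address] = max(best[address], score)
    return {addr for addr, score in best.items() if score >= threshold}

def filter_by_passport(donations, eligible):
    """Keep only donations whose donor address is in the eligible set."""
    return [d for d in donations if d["donor"] in eligible]
```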

Donation minimum
Past cGrants rounds used a $1 donation minimum, while the Alpha Round cut off a variable bottom percentile of donations after manual analysis. Now that the Allo Protocol has a minimum donation feature built into Round Manager, we decided to go back to using a predetermined and public minimum value of $1.

An interesting result of using a USD-denominated minimum was that a 1 DAI donation was not always sufficient, depending on the conversion rate of DAI at the time of the transaction. There were thousands of 1 DAI donations, and while they typically ranged from $0.998 to $1.002, over half fell below the $1 USD threshold and were not initially counted towards matching. We decided that a user should reasonably expect a 1 DAI donation to count, and to ensure their vote was not dropped by arbitrarily small price fluctuations, we used $0.98 as the minimum for the final calculations.
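
A minimal sketch of that floor, assuming we know the USD price of the donated token at transaction time (names are illustrative):

```python
# Nominal floor is $1; relaxed to $0.98 in the final calculations so that
# 1 DAI donations are not dropped by tiny price fluctuations.
EFFECTIVE_MIN_USD = 0.98

def meets_minimum(token_amount, usd_price_at_tx, min_usd=EFFECTIVE_MIN_USD):
    """True if the donation's USD value at transaction time clears the floor."""
    return token_amount * usd_price_at_tx >= min_usd

assert meets_minimum(1.0, 0.998)     # 1 DAI at $0.998 still counts
assert not meets_minimum(0.5, 1.0)   # a $0.50 donation does not
```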

Sybil attacks and suspected bots
As with most QF rounds, we saw some sophisticated Sybil attack patterns which were not (yet!) stopped by Passport. After on-chain data analysis and a manual sampling process, donations from addresses associated with these types of behaviors were excluded for the purposes of matching calculations (a minimal sketch of this exclusion step follows the list below). This includes things like:

  • Suspected bot activity based on specific transaction patterns and similarities
  • Flagging known Sybil networks/addresses from prior rounds
  • Enhanced analysis of Passport stamps and other data to flag evidence of abuse between different wallets
  • Self-donations from grantee wallets
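
Here is a minimal sketch of how that exclusion step could look, assuming we already have a set of flagged addresses produced by the analyses above and a mapping from each grant to its payout address (all names here are illustrative, not the actual tooling):

```python
def exclude_flagged_donations(donations, flagged_addresses, payout_address_by_grant):
    """Drop donations from addresses flagged as Sybil/bot activity, plus
    self-donations where the donor is the grant's own payout wallet.
    `flagged_addresses` is assumed to hold lowercased addresses."""
    kept = []
    for d in donations:
        donor = d["donor"].lower()
        if donor in flagged_addresses:
            continue  # excluded by the heuristics listed above
        if donor == payout_address_by_grant.get(d["grant_id"], "").lower():
            continue  # self-donation from the grantee's wallet
        kept.append(d)
    return kept
```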

Matching caps
Matching caps were introduced many rounds ago on cGrants: a cap sets the maximum percentage of the total matching pool that any one grant can capture. Once a grant hits its cap, it earns no additional matching, and any excess is redistributed proportionally to all other grants. Round caps were:

  • 4% for Web3 OSS
  • 10% for Climate
  • 6% for Web3 Community & Education
  • 10% for Eth Infra
  • 10% for ZK Tech

These amounts were selected based on the number of grants expected in each round and the size of each matching pool, using prior rounds as a guide. We are always looking for community feedback on matching caps for future rounds!
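
For readers who want to see the mechanics, here is a minimal sketch of a percentage cap with proportional redistribution. It is illustrative only; the actual payout scripts may differ in details such as what happens to excess once every grant is capped:

```python
def apply_matching_cap(matches, pool, cap_fraction):
    """Cap each grant's match at cap_fraction * pool and redistribute the
    excess proportionally among uncapped grants, repeating until no grant
    exceeds the cap. `matches` maps grant id -> uncapped match, already
    scaled so the values sum to `pool`."""
    cap = cap_fraction * pool
    matches = dict(matches)
    capped = set()
    while True:
        over = [g for g in matches if g not in capped and matches[g] > cap]
        if not over:
            return matches
        excess = sum(matches[g] - cap for g in over)
        for g in over:
            matches[g] = cap
            capped.add(g)
        uncapped_total = sum(m for g, m in matches.items() if g not in capped)
        if uncapped_total <= 0:
            return matches  # everything capped; remaining excess stays in the pool
        for g in matches:
            if g not in capped:
                matches[g] += excess * matches[g] / uncapped_total
```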

Please note: In an effort to create a transparent minimum viable product of the QF formula, simple quadratic voting has been deployed here. In the future, we hope to offer pairwise matching and other possible customizations that will be set and published on-chain at the time of round creation.
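
For reference, a minimal sketch of the simple QF calculation described above: each grant’s raw weight is the square of the sum of the square roots of its eligible contributions, and the pool is split pro rata by those weights. This is illustrative only; some formulations also subtract the raw sum of contributions, and the production implementation (plus the caps above) may differ in details:

```python
from math import sqrt

def simple_qf_matches(donations, pool):
    """`donations` is a list of (grant_id, usd_amount) pairs that have already
    passed the Passport, minimum-donation, and Sybil filters described above."""
    sqrt_sums = {}
    for grant_id, amount in donations:
        sqrt_sums[grant_id] = sqrt_sums.get(grant_id, 0.0) + sqrt(amount)
    weights = {g: s * s for g, s in sqrt_sums.items()}
    total_weight = sum(weights.values())
    return {g: pool * w / total_weight for g, w in weights.items()}
```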

Future Analysis & Takeaways

DAO contributors will soon share detailed analyses, statistics, and takeaways from the Beta Round in addition to these results. You will find these posts in the governance forum in the coming weeks. For takeaways from the Alpha Round, see here for data, operations, and Passport retros.

Again, we thank everyone for your patience and for participating in the Beta Round. We’re excited for the next iteration of the Gitcoin Program and to begin supporting many other organizations as they run their own rounds.

For questions, comments, or concerns on any of the above, please comment below or join the Gitcoin Discord. Thanks for reading!

4 Likes

Hello, DuckDegen from JobStash here.
The first donation we received doesn’t seem to be included in the totals.

Here is the tx hash:
0xb6909082e607aadde4dbd46b80f4b77732c88e6ede47b609777e99a89e928511

I’ve asked the donor if they had a sufficiently high score at the time, and they’ve confirmed they indeed did.
Could anyone please clarify?
Thanks <3

1 Like

Congratulations, fellow grantees! It was our first time taking part in a Gitcoin round, and I must say it’s been a bit of a bittersweet affair. We managed to raise ~$213 from 9 donations, only for the final tally to show that we raised $193 through 5 donations, which unlocked $186 in matching funds.

We are grateful for the opportunity to take part in the beta round, but I believe there is much room for improvement. Here are the key pain points:

  • The top 3 managed to raise a combined total of $5,000 and are walking away with $105,000, leaving the remaining 102 grantees, who raised a combined total of $30,000, to split the $245,000 in matching funds.
  • Supporting one project with $35,000 and another with $4 seems a bit off to me. I think funding a climate solutions project/founder with a $4 grant is an absolute joke.
  • We should have a minimum support amount from the matching funds for Gitcoin to truly be impactful. This will help eliminate the current image of the “big brother finishing the food for the younger siblings.”
Nicholas From Sungura Mjanja Refi
2 Likes

Hi Connor, thanks for all the work you and the team have put into this! The round has been incredible, not just for allowing us to be paid by an algorithm (so much more empowering) but also for the boost in Twitter growth for many projects.

I did want to flag one concern that was raised by some of the other projects too. For all the talk around $1 donations being what matters, it appears that the absolute amount given matters much more. Consider some of these stats from the Web3 Community & Education Round:

404DAO: 9 contributors giving $2340 = $7130 matching
ZachXBT: 41 contributors giving $500 = $5909 matching

Why is ZachXBT getting a smaller grant despite having nearly 5x the votes? Similar story below:

CryptoCurious: 7 contributors giving $2550 = $3078 matching
GreenPill Podcast: 42 contributors giving $236 = $3103 matching (about half of ZachXBT’s matching despite having one more vote, but with roughly half their amount in community contributions)
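
(A rough back-of-envelope, assuming for simplicity that every donor to a project gave the same amount: under the simple QF weight, a project with n donors and T total raised gets a raw weight of (n * sqrt(T/n))^2 = n * T, so donor counts and totals trade off directly. The snippet below only illustrates that identity and is not the actual calculation.)

```python
from math import sqrt

def approx_qf_weight(num_donors, total_usd):
    """Raw QF weight if every donor gave the same amount: (n*sqrt(T/n))^2 = n*T."""
    return (num_donors * sqrt(total_usd / num_donors)) ** 2

print(approx_qf_weight(9, 2340))   # 404DAO:  ~21,060
print(approx_qf_weight(41, 500))   # ZachXBT: ~20,500 (nearly equal despite ~5x more donors)
```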

I don’t mean to single out any particular project; it’s a pattern I’ve noticed across projects & rounds. This really sucks because it encourages wash trading: cycle large amounts of the funds you’ve earned from other sources through Gitcoin and come out with a higher match.

I did see the Gitcoin proposal by @Joel_m on rethinking the QF formula; is it taking these factors into account? How do we as a community plan to stop wash trading if the absolute amount given per vote to each grant matters as much as it did in this round?

4 Likes

Thanks Connor!

Lots of work and a great step forward for the protocols.

I’m concerned because the Sybil squelching and grant review results seem to differ from the results of the OpenData Community hackathon, although it is hard to tell: the hackathon participants’ calculations are transparent, but so far, at least, that’s not the case for whatever Sybil squelching was done.

In addition to publishing the code, algorithms, and analysis, we are also holding several deeper dives with hackathon participants over the next several days, in which contestants volunteer their time to explain exactly how they arrived at their calculations.

It would be best if Gitcoin were similarly transparent.

Just as an example, one of the submissions to the OpenData Community hackathon pointed out that many Sybils identified in the Hop airdrop were also present in the Gitcoin Beta Round. Were the results impacted by these apparent attackers?

Based on the little information I have, I cannot vote in favor of approval. While I’m sure everyone who worked on the round did so in good faith, the lack of transparency is concerning.

I’m not sure of the best path forward, to be honest. Perhaps as a starting point the analysis that was done could be shared?

1 Like

Agreed. Isn’t the point of QF to distribute funding to the edges? It seems those without much power, sway, or existing funds coming in haven’t really had much of a chance in this round or any prior round I’ve been involved with. This seems contrary to the proposed intent of QF while undermining the credibility and integrity of not just Gitcoin but QF more broadly.

Considering the prohibitively high gas fees this round, I expect it’s even worse than usual.

1 Like

You, ser, have a point - thank you so much for surfacing this.

It appears from our own docs (done offline) that the vote you reference should be counted; no Sybil-defense rules were triggered and you’re right, the Passport score was 18.46…

We will continue to dig into this with the product team’s help tomorrow to find out what happened between our submission of vote coefficients (0 or 1, based on suspected Sybil status) and the match calculations. This is why we post, I guess… really appreciate your speedy call-out.

@nick_smr and @thedevanshmehta - Really appreciate this line of questioning, and I’m inclined to agree. To me, a “good” allocation method means more money to more people, as a function of matching_funds over number_of_grants_funded; the closer that gets to 1, the more successful - at least when we know all our grantees are high quality and making a good effort to build in the space… Although I think there are other cases where “good” allocation may mean the most funds possible going to the projects with the most community sentiment.

In any event, evening out distribution doesn’t seem to me to be inherent in a QF method, though… and I think this raises questions of whether Gitcoin Grants will continue to use pure Quadratic Funding or adopt new mechanisms for its core grant program.

Re: @Joel_m’s work: it will look in particular to establish mathematically sound ways of identifying collusion between community members. While I think this could surface wash-trading cohorts (and we have provided some examples of these, especially certain DeFi day-traders using rewards programs that are optimized for daily volume), it is a bit broader - but I remain extremely excited for this work, too!

@Joel_m if you get a chance to weigh in for yourself, would love to hear thoughts on “success” of allocation in our case…

1 Like

Heya Evan -

Couple things to respond to here.

(1) Surprised to hear the accusations of non-transparency, as so far Gitcoin seems to be the only DAO that has taken time from its contributors and dedicated significant resources to giving the ODC datasets to play with :slight_smile: I’d also push back on the idea of “volunteerism” when, again, Gitcoin has given significant funds to incentivize the hackathon experiment… As I told you in person, I’d highly recommend, in the best interest of the ODC, that use cases outside of Gitcoin be explored to add legitimacy to this work and credibility to its quality.

(2) As I do for all other auditors and friends of the DAO, I’m happy to provide cleaned datasets if there’s something you’re missing that you want. These forum posts have always served as a high-level reporting opportunity, not a deep dive. As @connor mentioned here, we are moving towards an automatic rule system by which any rule triggered by a voter could be queried or otherwise looked up. (Same for grantee applications - moving towards being “Kerckhoffs-compliant,” a la @disruptionjoe’s thought leadership.) In the meantime, again, I’m happy to provide these scripts where there’s interest, but the timestamp checks and Passport checks mentioned are also all derived from public, on-chain data that can be pretty easily replicated with good fidelity.

Again, please feel free to follow up with me with questions about our methodology, but that seems a bit beyond scope here: we’ve always done significant Sybil silencing, and a couple hundred MB of voter data doesn’t fit easily into a governance post… We aspire to regularly publish internal and external data for the whole community to learn from.

In the final report there is one grant, “WGMI community DAO [Web3 Community Round]”, with exactly 0 eligible contributions that nevertheless reports $8.81 of total USD received in donations.

This contradicts the explanation post, which made me assume that the “Total USD received” column refers to the final subset of donations eligible for matching, that is:

The final results you see show data that has been calculated after imposing various eligibility controls and donor squelching. For example, donations from users without a sufficient Passport score or donations under the minimum will not be counted in the aggregate “Total Received USD” or “Contributions” column for your grant.

Can someone explain what happened here?