Thank you to everyone who puts their effort into making Gitcoin and Quadratic Funding a meaningful way to fund public goods and projects. Every time the Tor Project participates in these funding rounds, I am impressed by the amount of collective effort that goes into making them run, communicating clearly with the community, and improving over time.
Cluster-Match QF takes the projects you vote for as signals of the communities you belong to. It then calculates matching amounts for each supporter and unique community combination. This method provides more significant matching funding to projects that receive support from more diverse communities.
Awesome. Thank you for clearly laying out the evolution of QF and the reasoning behind the changes.
There might be one possibility with cluster-matching QF that could benefit local communities. The new algorithm increases the total cost for Sybil attackers and tilts the scale so the system is "cheaper to defend than to attack." It might be a worthwhile exercise (possibly a prospectively funded project in the Citizens Round, if anyone is interested) to evaluate whether we can lower the Passport Score requirement in the frontend, with cluster-matching QF as an additional rear-guard mechanism to nullify Sybil contributions.
To validate this, someone would need to rerun the squelching with one or two lower Passport scores and analyze the impact on the final distribution. If the data supports this hypothesis, a lower score would reduce some of the friction local communities face in onboarding contributors to Gitcoin Grants.
Here is some background in the 2-minute snippet from @owocki's conversation with Joel Miller:
hey @umarkhaneth, cluster QF def feels like a step in the right direction.
Just trying to wrap my head around what's happening. Cluster QF filters donors who have voted for multiple projects and counts them for QF, so does it also exclude the donors a project might have had who donated only to that single project, or does that figure need to cross a certain threshold?
Post-cluster Sybil analysis, is the QF formula applied the same? And you are not yet tweaking the matching multiple depending on the multiplicity of votes from a donor?
Does the difference between base and eligible voters represent the number of votes that projects received from donors voting only for that particular project?
First, thanks for all the hard work on Cluster-Match QF.
I just have a question about "the same cluster are added together as if they were the same voting bloc": I do not quite understand it. Say one cluster/community has 100 voters; is it then considered as one voter?
I understand that votes from the same cluster should be given less weight, but treating them as coming from one voter is not quite fair, especially for some local communities.
Hi Umar, I agree 100% with your statement. But as of today, none of the climate projects write their GHG reduction potential on their Gitcoin grant pages. The simple reason for this is that it is very difficult to quantify accurately at the early stage of a project. Because of that, the global appeal of a project does not mean that it has the highest GHG reduction potential. It might just be due to an interesting product, e.g. medicinal herbs, or the likability of the founder in Twitter interactions. Many grassroots orgs like Nawonmesh work on not-so-interesting things like local regeneration, and are run by senior citizens who are not digitally savvy and are non-native English speakers, so spending time on Twitter Spaces to showcase their charisma is not their forte (which is the main reason I am representing Nawonmesh in all the online interactions). It is much easier for them to interact with their local community for support.
Unfortunately, the experience of Nawonmesh's founder says that the fundraising opportunities for regenerative activities are limited in their region.
Fair logic.
I also feel that we need to increase the donor base of the whole climate round. Compared to the other core rounds, the amount donated and the number of unique donors are far lower for the climate round. One of the easy ways to achieve that could have been to let climate projects onboard their communities; then a few members from a particular project's community would have started cross-funding other projects in subsequent rounds. Imagine if each of 100 climate projects could bring in just 50 new people: it would have almost doubled the "unique donors" count for the climate round. But due to Cluster-Match QF, the strategy of onboarding new communities has been somewhat disincentivized.
Why are the "Eligible Voters" numbers different for the same project in the "Climate" and "Climate - Shell" sheets in the updated results?
It's not just a mathematical formula but a social consensus on the best way to leverage the wisdom of the crowds. And the post-analysis, pre-payout period is when some of the most active discussions take place. It would be tragic to let go of this tradition.
My main concern with this round's distribution is just how closely it mirrors the "winner takes all" approach of the real world. Consider this chart I found showing the distribution in the open source round: the inequality is worse than in any capitalist nation.
I wonder if we could develop a Gini coefficient or some such metric capturing inequality among projects as a first step toward possibly reducing it in future rounds. Here's some interesting research on progressive taxation in quadratic funding systems from DoraHacks that's worth exploring.
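As a sketch of what such a metric could look like, here is a minimal Gini coefficient over per-project match amounts. This is plain Python; the function name and input format are my own for illustration, not an existing Gitcoin tool:

```python
def gini(amounts):
    """Gini coefficient of a list of non-negative match amounts.

    0.0 means every project received the same amount; values near 1.0
    mean a single project captured almost everything.
    """
    xs = sorted(amounts)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula using the rank-weighted sum of the sorted values.
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n

# A perfectly equal round vs. one project taking the whole pool:
print(gini([100, 100, 100, 100]))  # 0.0
print(gini([0, 0, 0, 400]))        # 0.75
```

Running this on each round's final match amounts would give a single comparable number to track across rounds.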
I also don't know how closely we're following the Gitcoin Beta round squelching, but the difference in what my project received from the first spreadsheet to the last was over 30%. These window periods are valuable for getting the community's assistance in identifying Sybil attackers, such as how Mini Meadows and some others got caught in this window period last round.
I will say that, contrary to my expectations, the teams active on Gitcoin Radio have performed better under cluster QF. Maybe it is because we each gave to so many different projects, which increased the value of our votes. So while it won't initially help local convergence, it is certainly helping digital coordination!
I agree with this point, and I urge the team to consider making 10 cents the minimum vote for matching. $1 while living in the West is very different from $1 in the global south. Also, 20-35% of my project's votes came from those giving less than a dollar, sometimes 10 cents and, tragically, even a few 95 cents.
Finally, I request that the team not publicly list the payout addresses of projects, as many operate in hostile environments where this information could be used against them.
Following some email exchanges with Ben, he made me realize it would be more beneficial for us to shift our discussions to the governance forums to embrace a "build in public" approach.
We also see some similar points as @priyank's regarding our project.
After extensive internal discussions, we made the decision to participate in the Climate-Shell round, so we opted in during the application. However, we have noticed that our project is not listed in the Excel file.
@umarkhaneth, would you please kindly look into our project again? The numbers show some unfairness I can't comprehend.
We marked our Earthist - Decentralize the Seeds project in magenta in the climate round, just to understand the numbers in comparison:
We have the second-highest number of eligible voters and eligible crowdfunding, yet our match is the lowest on the example sheet below, while the average contribution for our project is $1.66.
We are eager to gain a better understanding of why the matching is so low even with supporters holding high Passport scores. Your guidance and support in this matter would be greatly appreciated.
Fully in support of this statement. This can help reduce the turnaround time in payouts.
I can relate to these challenges faced in the global south. I'd like to highlight a Climate Solutions project that has been working extremely hard in a country where the minimum wage is $5 and 108 minimum wages are needed to sustain a family of 4 with basic needs. Mi Costa de Oro has spent months onboarding their community members to web3 tools (Snapshot voting, paying for basic needs with crypto, sending/receiving tokens when compensated for beach cleanups, minting Mirror articles, etc). Despite all their hard work since April, not one of them is able to obtain a Gitcoin Passport score that enables matching. This means they didn't even attempt to vote in the round.

They were able to create impressive results this round by providing constant and transparent proof of their work and working incredibly hard to promote their grant during the Shill spaces. None of their contributors speak English. This means they mustered up the courage to participate in English-speaking spaces, request the mic, shill their project in Spanish, and hope people understood or that someone present could translate for them.

I'm pointing this out because your project can take a page from their playbook. 100% of the images published by the Nawonmesh Twitter account are AI generated. They look pretty compared to the low-quality images published by Mi Costa de Oro, but they don't do a great job showing the work and impact being pitched in the grant application.
I'm really interested in learning how you were able to get contributors to Passport scores above 20 points, because it has been a huge barrier for the communities I work with. I haven't been able to get a single contributor in these communities above 8 points.
I can understand the frustration with this, but I also think it's healthy for the ecosystem. It creates a pluralistic and regenerative environment where people looking to be funded also take the time to become immersed in the ecosystem, learn more about other projects, and potentially collaborate or copy pasta some of their work to benefit their local efforts.
I'm in favor of these conversations happening between rounds, in an attempt to establish a structure that doesn't require debate after every round. It's important to consider that many of the smaller projects are living day to day. Continuously delaying payouts for the sake of big-brain back and forth seems like torture to many of these projects. Let's come up with a more streamlined process and stick to it until something serious breaks and needs fixing.
I'd be interested in seeing how this correlates to userbase. For example: do Lenster, revoke, and JediSwap have a much bigger userbase or transactional volume than projects on the lower end of match funding? If yes, I think this pays out fairly. I don't know those figures, but my gut tells me the funding received reflects the size of their userbase as well. It would be really interesting to identify projects that didn't perform well but house big userbases.
These might be bot donors. It was something that also occurred in the cGrants platform, even between rounds. It always confused grantees. I don't think this is a case of donors giving 95 cents, or less than $1, because that's all they could afford.
My big question looking ahead is - Wen Cluster Match + trust bonus based on passport score?
Hey @priyank, I really appreciate you sharing your unique perspective. As we have these discussions about what funding allocations should look like, I want to give my interpretation of the problem QF was originally designed to solve, which I think is different from the problem you're pointing out.
QF was designed to solve the problem of public goods funding under imperfect coordination. The first iteration of QF solves this problem in a world where people are maximally uncoordinated (i.e. everyone is completely selfish and isolated). In contrast, if everyone was perfectly coordinated, we wouldn't need QF at all.
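For readers following along, that first-iteration mechanism is the standard QF rule: a project's ideal funding level is the square of the sum of the square roots of its contributions. A minimal sketch (my own naming; real rounds then scale these weights to fit a fixed matching pool):

```python
from math import sqrt

def qf_weight(contributions):
    """Standard quadratic funding weight for one project:
    (sum of the square roots of each contribution) squared.
    The matching amount is derived from this weight, pro-rated
    against the size of the matching pool."""
    s = sum(sqrt(c) for c in contributions)
    return s * s

# Many small donors beat one whale giving the same total:
print(qf_weight([1] * 100))  # 10000.0 (100 donors giving $1 each)
print(qf_weight([100]))      # 100.0 (a single $100 donor)
```

This is why broad support matters more than total dollars under QF, and also why Sybil attacks (splitting one donor into many fake ones) are so profitable without defenses.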
But the real world has a mix of coordination and isolation. We have local communities with internal communication channels (coordination), but people in different communities may still be isolated from each other. So the new algorithms like Cluster Match try to make funding work in this world by giving less money to projects supported by just one community, and instead favoring projects with diverse bases of support. If a project only has local support from one community, Cluster Match assumes that project doesn't need as much extra funding, since the people in that community should be able to figure out how much to fund it on their own.
Correct me if I'm wrong, but I think you're pointing out that this isn't the whole picture. It may be the case that a local community knows how much money a project should get, but doesn't have the cash to fund it. IMO this is kind of an orthogonal issue which is important to address, but needs different tools and different analysis. Of course, it's important to be aware of a case where trying to be more optimal along one axis (accounting for coordination) may have been less optimal along another axis (accounting for differences in ability to pay). But I think being clear about the microeconomic foundations of what's going on here can help us move forward in the best way.
For what it's worth though, I think the picture around how Cluster Match impacts communities with differences in ability to pay isn't so simple. I actually think that with all else held equal, switching to Cluster Match tends to help communities with less ability to pay. But this post is already too long so I'll leave out that explanation for now.
Thank you for the work you do to get these results, and thanks to Gitcoin for the transparency. This is excellent work, and the fact that not everything is automated shows the level of dedication the team put in. Thanks a lot.
I have a question regarding the Eligible Crowd Funding, Simple QF Match, Matching Difference, and the rest: do they sum up to what a project would receive? I've had these questions from some of my community members, and I'm unsure how to respond.
I want to give a huge shoutout to @umarkhaneth for driving this and everyone else who helped detect Sybils and implement cluster mapping QF.
I am personally very excited about (and bullish on) cluster mapping and other varieties of QF that can effectively dampen collusion and Sybil attacks across the board in an objective fashion. The old cGrants platform had been using pairwise QF for years (a similar modification to cluster mapping, with similar impact/results). It was first tried in GR5 and used through GR15 (almost 3 years, from 2020 to 2023). We only went back to "traditional QF" for the Alpha and Beta rounds.
Most users probably weren't even aware of this, and there wasn't a push to publish the match differences between pure QF and pairwise; it was just the method found to work best. Similarly, while I'm glad Umar shared matching calcs for both methods in this case, given it's the first time it's being tried, I don't think it's productive to publicize all alternatives every round. If we calculated results with pure QF, pairwise, cluster mapping, Sybil or no Sybil squelching, and shared them all to compare, almost everyone would be able to find a scenario where they would get more funding and thus would not be happy.
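For context, the pairwise mechanism mentioned above attenuates the matching generated by any two donors in proportion to how similar their overall giving is. Here is a deliberately simplified sketch after Buterin's "pairwise coordination subsidies" idea; the constant `k`, the data layout, and the function names are my assumptions, not the exact cGrants implementation:

```python
from math import sqrt

def pairwise_weights(contribs, k=1.0):
    """contribs: {donor: {project: amount}}.

    Each donor pair (a, b) adds sqrt(c_a) * sqrt(c_b) of matching weight
    to a project, discounted by k / (k + similarity), where similarity is
    the dot product of the two donors' sqrt-contribution vectors. Donors
    who fund many of the same projects generate less matching together."""
    donors = list(contribs)

    def similarity(a, b):
        shared = set(contribs[a]) & set(contribs[b])
        return sum(sqrt(contribs[a][p]) * sqrt(contribs[b][p]) for p in shared)

    projects = {p for gifts in contribs.values() for p in gifts}
    weights = {}
    for p in projects:
        total = 0.0
        for i, a in enumerate(donors):
            for b in donors[i + 1:]:
                ca, cb = contribs[a].get(p, 0), contribs[b].get(p, 0)
                if ca and cb:
                    total += k / (k + similarity(a, b)) * sqrt(ca) * sqrt(cb)
        weights[p] = total
    return weights

# Two donors who only ever co-fund project A get a discounted cross term:
print(pairwise_weights({"alice": {"A": 1}, "bob": {"A": 1}}))  # {'A': 0.5}
```

The cross term would be 1.0 under plain QF; the overlap between the two donors halves it here, which is the collusion-dampening effect being described.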
I do trust the team doing deep data analysis on Sybils, passports, voting patterns, etc, to find and use the best method to prevent collusion across the board (objectively without manual subjective judgments). We should absolutely be as transparent as possible though about the methods used and decisions made, and I believe weâre only getting better in that regard compared to prior rounds.
So all that said, I really appreciate the hard work and hours that went into this from many people, and although not everyone is happy with the outcome, I am in favor of moving forward to a snapshot vote to ratify these results.
Strongly agree. We should debate and discuss the methods and their trade-offs, along with which ones are suitable for the current scale of Gitcoin Grants and which we should run as low-stakes experiments for the future. I am certain Cluster-Matching QF, too, shall outlive its utility at some point, and we will need to keep evolving. Tethering design choices to the outcomes of a single round in order to cherry-pick among them will add fragility.
Ideally, I would love to see more feature rounds operating variations of QF (and other allocation mechanisms) and sharing their learnings across the community. However, there is no one allocation mechanism "to rule them all".
We at Pollen Buzz Initiative had "opted in" via email for the additional matching funds from Shell on the 21st of August, but we do not see our project's name in the Shell matching round results.
I, too, would like to see a less steep curve. I am not in favor of any form of taxation, yet. This likely deserves a separate thread since it is independent of the GG18 results.
My concern with a direct intervention like taxation is that it leaves the endemic issues driving inequity unaddressed, which dampens the effectiveness of the taxation itself. Moreover, there is a risk that taxation diminishes input signals that are performance-based: grantees who show up round after round, sharing their impact and raising support from the community, will have a disproportionate share of the funding for the right reasons.
Viewing the allocation of the funding pool in the context of respective eligible crowdfunding contributions adds more to the picture. The top 15 projects in the web3 OSS round received a share of 67.5% of eligible contributions and were allocated 70.3% of the funding pool.
However, I would still support the case for a less steep curve. Here are some not-so-exhaustive measures which, if they don't make a dent, make the case for taxation stronger.
Improve discoverability of grantees on the platform who may not have as strong a marketing muscle as larger projects (a lot is happening in this direction already; also, a shameless self-plug for AI-driven discoverability is here).
Find ecosystem partners who can pool funds to run dedicated feature rounds exclusively for smaller projects (based on a consensus definition), similar to the opportunities that accelerators and seed-stage funding offer to young start-ups.
Run QF with weighted votes for subject matter expertise so smaller projects making large strides can see the gains in allocation based on curation from people in-the-know (Token Engineering Commons already did this in a feature round here).
Integrate with a protocol like Hypercerts where, first, proof of impact and, then, evaluations help divert dollars to where the action is (food for thought here).
Unfortunately, none of these are silver bullets that will change things overnight, but I am hopeful that ease of discoverability can make an impact in the near term.
The funding dashboard and our grants page showed 990 votes, but the final results show only 177 base voters. Obvi a pretty big gap. Would appreciate it if you could take a look, @umarkhaneth. Would that be possible?
We can assure you that only 3 people from our community voted, so the vast majority of those votes were from people who genuinely supported our project.
Our vote is to not ratify this decision yet. We only received credit for 18% of the votes we received, and we worked extremely hard on this last grant round, so it's disheartening for us.
Tipping the scales to make it more expensive to attack than to defend is definitely one of the properties we like about it a lot. Down to run this test you mention as well!
Hey solarpunkmaxi! This can be a little confusing, but Cluster QF doesn't filter any donors out. Instead, it groups together donors who vote identically (who support the same projects and skip the same projects) and treats them as a community. Then, each community gets matched instead of each individual.
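Concretely, the grouping described above could be sketched like this. It's a deliberately simplified toy, assuming "vote identically" means supporting exactly the same set of projects; the real Cluster-Match implementation has more nuance than this:

```python
from math import sqrt
from collections import defaultdict

def cluster_qf_weights(votes):
    """votes: list of (donor, {project: amount}) pairs.

    Donors who support exactly the same set of projects form one
    cluster. Each cluster's contributions are pooled, and the pooled
    amount is matched as if it came from a single voter."""
    clusters = defaultdict(lambda: defaultdict(float))
    for donor, gifts in votes:
        key = frozenset(gifts)  # the set of supported projects defines the cluster
        for project, amount in gifts.items():
            clusters[key][project] += amount

    # Run the usual sum-of-square-roots over clusters instead of donors.
    sums = defaultdict(float)
    for pooled in clusters.values():
        for project, amount in pooled.items():
            sums[project] += sqrt(amount)
    return {project: s * s for project, s in sums.items()}

# Two identical voters pool into one community (weight ~2.0 for A),
# while two voters with distinct vote sets stay separate:
print(cluster_qf_weights([("a", {"A": 1}), ("b", {"A": 1})]))
print(cluster_qf_weights([("a", {"A": 1}), ("b", {"A": 1, "B": 1})]))  # {'A': 4.0, 'B': 1.0}
```

This shows the effect raised earlier in the thread: a project backed only by one tightly clustered community earns less matching weight than one backed by voters from different clusters.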
Not quite: the difference is due to our Sybil squelching. Does this quick diagram help?
Dear sir, I made a Dune dashboard back in the day, and even my most basic analysis shows that there are far fewer donations above 1 USD than the total count.
Please look here:
dune [com] /queries/2946938
Specifically, for your project my Dune query tallies 135 donors who gave more than 1 USD. But keep in mind that I don't work for Gitcoin and my data may be off.
What is evident from this is that there was a huge amount of airdrop farming going on from donors who dusted a bunch of grantees, and sadly the UI reflected this total tally even when it didn't qualify.
I hope that seeing this data convinces you that the problem is not with the matching, but rather with the fact that legitimate donors were far fewer than the UI showed.
It is good to keep in mind that the UI's purpose per se isn't to show filtered, cleaned, and refined data, but rather just to show the total of the transactions that came in.
It's good to note that this is a process and tooling will be improved. But ultimately the data was there to look at, even from within your recipient wallet, and considering that the minimum $1 criterion was defined from the start, this specific outcome could have been foreseen from the raw data itself.