Hey @priyank, I really appreciate you sharing your unique perspective. As we have these discussions about what funding allocations should look like, I want to give my interpretation of the problem QF was originally designed to solve, which I think is different from the problem you're pointing out.
QF was designed to solve the problem of public goods funding under imperfect coordination. The first iteration of QF solves this problem in a world where people are maximally uncoordinated (i.e. everyone is completely selfish and isolated). In contrast, if everyone were perfectly coordinated, we wouldn't need QF at all.
But the real world has a mix of coordination and isolation. We have local communities with internal communication channels (coordination), but people in different communities may still be isolated from each other. So the new algorithms like Cluster Match try to make funding work in this world by giving less money to projects supported by just one community, and instead favoring projects with diverse bases of support. If a project only has local support from one community, Cluster Match assumes that project doesn't need as much extra funding, since the people in that community should be able to figure out how much to fund it on their own.
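To make the contrast concrete, here's a toy sketch in Python. This is my own simplification, not Gitcoin's implementation (the real Cluster Match formula from the research paper handles partially overlapping communities much more carefully): standard QF matches each donor individually, while a cluster-style variant pools each community's donations first, so a fully coordinated community earns no extra matching.

```python
import math

def qf_match(contribs):
    """Standard QF score: (sum of sqrt of contributions)^2 minus total raised."""
    return sum(math.sqrt(c) for c in contribs) ** 2 - sum(contribs)

def cluster_match(clusters):
    """Toy cluster-style score: pool each cluster's contributions first,
    so a perfectly coordinated cluster is matched like a single large donor."""
    return qf_match([sum(c) for c in clusters])

# Ten isolated $1 donors earn a large match...
print(qf_match([1] * 10))                   # 90.0
# ...but the same $10 from one perfectly coordinated community earns ~0.
print(round(cluster_match([[1] * 10]), 6))  # 0.0
```

The point of the toy: under pure QF, many small donors are assumed independent and get a big subsidy; once they're recognized as one coordinated community, the mechanism treats them like a single donor, who under QF needs no subsidy at all.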
Correct me if I'm wrong, but I think you're pointing out that this isn't the whole picture. It may be the case that a local community knows how much money a project should get, but doesn't have the cash to fund it. IMO this is kind of an orthogonal issue which is important to address, but needs different tools and different analysis. Of course, it's important to be aware that trying to be more optimal along one axis (accounting for coordination) may make us less optimal along another axis (accounting for differences in ability to pay). But I think being clear about the microeconomic foundations of what's going on here can help us move forward in the best way.
For what it's worth though, I think the picture around how Cluster Match impacts communities with differences in ability to pay isn't so simple. I actually think that with all else held equal, switching to Cluster Match tends to help communities with less ability to pay. But this post is already too long so I'll leave out that explanation for now.
Thank you for the work that went into these results, and thanks to Gitcoin for the transparency. This is excellent work, and the fact that not everything is automated shows the level of dedication the team put in. Thanks a lot.
I have a question regarding Eligible Crowd Funding, Simple QF Match, Matching Difference, and the rest: do these sum up to what a project would receive? I've had these questions from some of my community members, and I'm unsure how to respond.
I want to give a huge shoutout to @umarkhaneth for driving this and everyone else who helped detect Sybils and implement cluster mapping QF.
I am personally very excited about (and bullish on) cluster mapping and other varieties of QF that can effectively dampen collusion and Sybil attacks across the board in an objective fashion. The old cGrants platform had been using pairwise QF for years (a similar modification to cluster mapping, with similar impact/results). It was first tried in GR5 and used through GR15 (so almost 3 years, from 2020 to 2023). We only went back to "traditional QF" for the Alpha and Beta rounds.
Most users probably weren't even aware of this, and there wasn't a push to publish the match differences between pure QF and pairwise; it was just the method found to work best. Similarly, while I'm glad Umar shared matching calcs for both methods in this case, given it's the first time it's being tried, I don't think it's productive to publicize all alternatives every round. If we calculated results with pure QF, pairwise, cluster mapping, and Sybil squelching on or off, and shared them all to compare, almost everyone would be able to find a scenario where they would get more funding and thus would not be happy.
I do trust the team doing deep data analysis on Sybils, passports, voting patterns, etc, to find and use the best method to prevent collusion across the board (objectively, without manual subjective judgments). We should absolutely be as transparent as possible though about the methods used and decisions made, and I believe we're only getting better in that regard compared to prior rounds.
So all that said, I really appreciate the hard work and hours that went into this from many people, and although not everyone is happy with the outcome, I am in favor of moving forward to a snapshot vote to ratify these results.
Strongly agree. We should debate and discuss the methods and their trade-offs, along with which ones are suitable for the current scale of Gitcoin Grants and which we should run as low-stakes experiments for the future. I am certain Cluster-Matching QF, too, will outlive its utility at some point, and we will need to keep evolving. Cherry-picking design choices based on the outcomes of a single round will add fragility.
Ideally, I would love to see more feature rounds operating variations of QF (and other allocation mechanisms) and sharing their learnings across the community. However, there is no one allocation mechanism "to rule them all".
We at Pollen Buzz Initiative had "opted in" via email for the additional matching funds from Shell on the 21st of August, but we do not see our project's name in Shell's matching round results.
I, too, would like to see a less steep curve. I am not in favor of any form of taxation, yet. This likely deserves a separate thread since it is independent of GG18 results.
My concern with a direct intervention like taxation is that it leaves the endemic issues driving inequity unaddressed, which dampens the effectiveness of the taxation itself. Moreover, there is a risk that taxation diminishes performance-based input signals: grantees who show up round after round, sharing their impact and raising support from the community, will have a larger share of the funding for the right reasons.
Viewing the allocation of the funding pool in the context of respective eligible crowdfunding contributions adds more to the picture. The top 15 projects in the web3 OSS round received a share of 67.5% of eligible contributions and were allocated 70.3% of the funding pool.
However, I would still support the case for a less steep curve. Here are some non-exhaustive measures; if they don't make a dent, they make the case for taxation stronger.
Improve discoverability of grantees on the platform who may not have as strong a marketing muscle as larger projects (a lot is happening in this direction already; also, shameless self-plug for AI-driven discoverability is here)
Find ecosystem partners who can pool funds to run dedicated feature rounds exclusively for smaller projects (based on consensus on definition) similar to opportunities that accelerators and seed-stage funding offer to young start-ups.
Run QF with weighted votes for subject matter expertise so smaller projects making large strides can see the gains in allocation based on curation from people in-the-know (Token Engineering Commons already did this in a feature round here).
Integrate with a protocol like Hypercerts, where proof of impact first, and evaluations after, help divert dollars to where the action is (food for thought here)
Unfortunately, none of these are silver bullets that will change things overnight, but I am hopeful that ease of discoverability can make an impact in the near term.
The funding dashboard & our Grants page showed 990 votes, but the final results are showing only 177 base voters. Obvi a pretty big gap. Would appreciate it if you could take a look, @umarkhaneth. Would that be possible?
We can assure you that only 3 people from our community voted, so the vast majority of those votes were from people who genuinely supported our project.
Our vote is to not ratify this decision yet. We received credit for only 18% of the votes we received, and we worked extremely hard on this last Grant round, so it's disheartening for us.
Tipping the scales to make it more expensive to defend than attack is definitely one of the properties we like about it a lot. Down to run this test you mention as well!
Hey solarpunkmaxi! This can be a little confusing, but Cluster QF doesn't filter any donors out. Instead, it groups together donors who vote identically (who support the same projects and don't support the same projects) and treats them as a community. Then, each community gets matched instead of each individual.
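To illustrate just the grouping step, here's a toy sketch (my own simplification, not the production code: the real algorithm also has to reason about partially overlapping ballots, which this ignores). Donors are bucketed by the exact set of projects they supported:

```python
from collections import defaultdict

def group_identical_voters(votes):
    """Group donors by the set of projects they supported.
    votes: {donor: {project: amount}}. Returns {signature: [donors]}."""
    clusters = defaultdict(list)
    for donor, ballot in votes.items():
        signature = frozenset(ballot)  # which projects were supported, ignoring amounts
        clusters[signature].append(donor)
    return clusters

# Hypothetical donors and projects for illustration only.
votes = {
    "alice": {"tree_dao": 5, "solar_fund": 2},
    "bob":   {"tree_dao": 1, "solar_fund": 1},
    "carol": {"tree_dao": 3},
}
clusters = group_identical_voters(votes)
# alice and bob vote identically, so they form one community; carol is her own.
print(len(clusters))  # 2
```

So no donation is dropped here; the matching calculation just runs over communities instead of individuals.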
Not quite: the difference is due to our sybil squelching. Does this quick diagram help?
Dear sir, I made a Dune dashboard back in the day, and even my most basic analysis shows that there are far fewer donations above 1 USD than the total count.
Please look here:
dune [com] /queries/2946938
Specifically, for your project my Dune query tallies 135 donors who gave more than 1 USD, but keep in mind that I don't work for Gitcoin and my data may be off.
What is evident from this is that there was a huge amount of airdrop farming going on from donors who dusted a bunch of grantees, and sadly the UI reflected this total tally even when it didn't qualify.
I hope that seeing this data will convince you that the problem is not with the matching, but rather with the fact that the number of legit donors was far lower than what the UI showed.
It is good to keep in mind that the UI's purpose isn't to show filtered, cleaned, and refined data, but rather just to show a total of the transactions that came in.
It's good to note that this is a process and tooling will improve, but ultimately the data was there to look at, even from within your recipient wallet. Considering that the minimum 1 USD criterion was defined from the start, this specific outcome could have been anticipated from the raw data itself.
Thanks for looking into it. It's just such a massive drop-off that the other top 3 projects (Silvi & Earthist) did not seem to experience (at least not to as severe a degree). It doesn't make sense that "airdrop farmers" would have chosen to disproportionately focus on our project.
I would also call into question the minimum 1 USD donation criterion. Can someone remind me of the logic behind it? Why should people & projects be penalized for giving less than 1 USD? This is especially relevant for projects like ours that focus on the Global South.
Time to vote… Some amazing discussions have taken place above, and after observing the results along with the sentiment of participants, it appears that a significant improvement can be made to this system.
It's obvious that projects with a stronger social media following from past successful grant rounds, and with more capital allocated up front, were able to drive donations with ease and achieve the highest possible ranks for matching funds. They have large teams that allow for low-effort campaigns, which are not intertwined with the rest of the community and seem out of touch with the smaller teams and projects who are participating fully.
My suggestion about hosting spaces to highlight other projects, made to one team I won't name here, was completely ignored on social media when they posted about how they could help other projects during this last round.
I do NOT see the camaraderie of participants helping each other in each round, shilling it forward for the other impact makers in the round.
The competition between the larger projects to overtake the entire round, without some system of checks and balances in place, will continue to hinder the overall growth and evolution of the regenerative movement that has sparked the flame of many passionate individuals who have joined Gitcoin.
Meanwhile, projects with low visibility but a ton of impactful potential are not able to campaign without putting in a massive amount of effort, energy, and time that could otherwise be used for development work during the round.
It is important to keep in mind that the pie will not continue to grow larger if a majority of it is consumed by one entity. The math is simple and plain. No one else will be able to sustain themselves, their projects' development will dwindle, and builders' ideas will continue to struggle to survive, along with their livelihoods. Every last drop of energy put into a project then becomes a waste.
Questions will be asked about why they didn't do more between rounds with what little funding they received from the previous round. Scrutiny of smaller teams during the intake filter is also a concern, because they are questioned more heavily about their proof of impact than a large establishment is.
I am curious… Where are the proof-of-impact threads with onchain data for the biggest projects that received the most funds in the Beta round?
Do we have any updates from any of the projects that received QF matching showing their impact onchain?
I'll share a different experience… I've self-funded my project for the last three years, and submitted it for a grant hoping to get a little funding to help offset the costs. The project doesn't have a huge community or a lot of marketing capacity, but based on the organic traffic to the GG18 round alone I was able to raise a very meaningful amount of money.
While I can't speak to what extent large active communities and coordinated marketing efforts may have allowed some projects to secure significant grants in the round, I can say that at least in my case the process worked as intended, where organic traffic chose to support a project that seemed promising.
First, I want to give a big thanks to @umarkhaneth and the rest of the team who worked tirelessly to evaluate GG18. I believe the new tools you deployed along with utilizing cluster matching provided solid results and I have voted to ratify them.
In watching the round progress, Silvi & Earthist received support earlier in the round, and then your project gained significant donations a bit later. This leads me to believe that your donors were different from the donors of the other two projects, which is likely why those projects were squelched less than yours: their donors had different behaviors. Most of the donations your project received were less than $1, which is often an indication of a sybil attack or airdrop farming (though I am not suggesting it was you or your community that attacked). Since the Red team is not transparent about what they do and why they do it, I cannot tell you why your project was chosen by these donors.
Regarding the $1 minimum, this is another sybil protection. Our goal is always to make it more expensive for the Red team to attack and cheaper for us, the Blue team, to defend. This minimum allows us to remove some of the sybil attackers or airdrop farmers from the start, because it forces them to pay more for their vote to count towards matching. I do hear you that for communities in the Global South this can be a gating factor. As our Passport system continues to improve, there may be a point where we can rely more on it and remove this barrier. For now, this has been a part of our criteria for many rounds.
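As a rough illustration of how a minimum-donation squelch interacts with the raw UI totals (a hypothetical sketch: the field names and threshold handling here are my assumptions, not Gitcoin's actual pipeline):

```python
def squelch_dust(donations, minimum_usd=1.0):
    """Drop donations below the minimum before matching is computed.
    Squelched donations may still appear in raw UI totals, which is one
    reason a project's displayed vote count can exceed its matched votes."""
    return [d for d in donations if d["amount_usd"] >= minimum_usd]

# Hypothetical donation records for illustration only.
raw = [
    {"donor": "a", "amount_usd": 0.25},  # dust: squelched
    {"donor": "b", "amount_usd": 5.00},  # counts toward matching
    {"donor": "c", "amount_usd": 0.90},  # dust: squelched
]
eligible = squelch_dust(raw)
print(len(raw), len(eligible))  # 3 1
```

The asymmetry is the point: a real supporter clears the bar by donating $1 once, while an attacker must clear it for every sybil account, multiplying the cost of the attack.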
Since you participated in the Community & Education round, I will speak specifically about that round, though some of what I write may apply to other rounds. In this round, the objective of the projects is to grow community, educate community, or be a resource to community, and if a project is successful, that community will fund what matters to them during a round. I would also note that several of the top projects are run by small teams. The fact that they are at the top tells me they are fulfilling that objective.
On the flip side, it is difficult for QF mechanisms not to become popularity contests, and I think that may be what you are pointing at. I think cluster matching helps with this but does not completely solve it. In that respect, you are right that the system will continue to evolve to be fairer over time. Even the authors of the cluster matching research paper do not believe this is the final evolution of this funding mechanism.
To this point, the round has a 6% matching cap, so a project can't earn more than 6% of the pie, or $15k. If you assume these projects participate quarterly, that is $60k to support a project, which for most projects is not sustainable and likely doesn't support their livelihood. We could reduce the percentage, but given the size of the matching pool, I am not sure that would be the right direction if we want to fund what matters to the community. We could make the eligibility requirements stricter, but I suspect this would not make the community happy. In general, it isn't easy running a grants program, and in the end, you can never make everyone happy.
Impact is clearly an area where all of web3 is struggling. It is something we are always discussing internally, and something I am passionate about and working on outside of Gitcoin. We hope to see more review and impact tools emerge to track impact on-chain. For now, I would suggest that the top 5 projects in the community round likely have significant on-chain activity to back up the work they are doing, but I leave it to you to do that research.
I appreciate the thinking and work around cluster matching. But seeing the length of replies to this idea brings me to 2 very succinct points:
Isn't this adding even more to the issue of "over-engineering the solution to the problem" which Gitcoin seems to be obsessed with, i.e. preventing Sybils vs. just encouraging more donations/votes?
Why wasn't this communicated before the round so that voters knew the rules of the game? Applying it retroactively just punishes projects who encouraged people to get out and vote/donate for them.
I would not vote to ratify this result mostly based on point 2 and request further deliberation.
I would also encourage more discussion about where attention is placed, based on point 1.
Jon, thanks for taking your time to respond to us today. Yes, I think the space held today, with security experts joining together, was a positive sign.
That was a great example of how the energy can be laser-focused on initiatives, and security is definitely number one.
On this note, I definitely understand what you mean & respect everyone for their hard work during and between rounds. It's truly amazing. I assume Passport is constantly being improved for matching-fund eligibility over time.
Have there been any proposals from projects offering to pay back a portion of their donations into the matching pool at the end of the round if they received "x" amount of funding? This is an idea I think would be interesting to experiment with, if builders who are heavily committed to the Grants Stack are also willing to turn around and help spread the distribution more evenly.
Dashboards for analytics research are a great way for anyone to learn more about blockchain in general. We will do our best to add value to the GTC ecosystem by doing our own research.
Thank you, @umarkhaneth, and the entire team for your efforts and for providing clarity. Would it be possible for you to add a column indicating the initial number of votes?
From my quick calculations based on the OSS round data, it appears that 58% of the base votes were not counted. How does this compare to previous rounds?
In my opinion, these numbers are staggering. Additionally, considering that 73% didn't surpass the passport score, how many total voters and votes does this account for? It seems that fewer than a third of the votes are getting counted. Is that an accurate estimate? Does this also represent a third of the unique voters?
I also have some questions regarding the cluster QF. Do you consolidate votes from a single donor before the clustering process? For instance, if a donor votes multiple times within a round, are all their donations counted as one? If this isn't the case, I imagine the results might vary, especially as voters familiarize themselves with new projects and subsequently vote for them during the round.
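For clarity, here is what per-donor consolidation would look like, as a hypothetical sketch (I don't know whether the actual pipeline does this; the record layout and names are my assumptions):

```python
from collections import defaultdict

def consolidate(donations):
    """Merge multiple donations from the same donor to the same project
    into one total before any clustering or matching step.
    donations: list of (donor, project, amount_usd) tuples."""
    totals = defaultdict(float)
    for donor, project, amount in donations:
        totals[(donor, project)] += amount
    return totals

# Hypothetical example: alice donates to the same project twice in a round.
donations = [
    ("alice", "tree_dao", 2.0),
    ("alice", "tree_dao", 3.0),  # second vote later in the round
    ("bob",   "tree_dao", 4.0),
]
print(consolidate(donations)[("alice", "tree_dao")])  # 5.0
```

Whether alice counts as one $5 donor or two smaller ones matters: under QF-style matching, many small contributions are worth more than one large one, so skipping this step could inflate a project's match.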