I was intrigued by this and thought of digging deeper into the data. I would like to think the following supports @Joel_m's assertion about donor diversity.
I profiled BerryLab's top 50 donors against those of the other project (the one with fewer contributors/contributions; let's call it Project A). The top 50 donors of Project A collectively supported 152 out of the 153 projects in the dApps & Apps round, while the top 50 donors of BerryLab contributed to 34 out of the 153 projects. In relative terms, the top contributors for Project A are likely to be part of far more diverse clusters.
While this is not conclusive, it is a directionally indicative reason behind the difference. I spot-checked this with 3 other projects that had fewer than 100 unique voters contributing less than $350 in crowdfunding but received average matching per voter > $15. In each case, their top 50 donors collectively support at least 120 projects in the round.
(Query to support this analysis is available in the Metabase instance of Regendata here).
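For anyone who wants to reproduce this outside Metabase, here is a rough sketch in pandas. The column names (donor, project_id, amount_usd) and the "top 50 by total USD given to the project" definition are my assumptions, not the exact query.

```python
import pandas as pd

def top_donor_project_coverage(donations: pd.DataFrame, project_id: str, n: int = 50) -> int:
    """How many distinct projects in the round were supported by this
    project's top-n donors (ranked by what they gave to this project)."""
    to_project = donations[donations["project_id"] == project_id]
    top_donors = (
        to_project.groupby("donor")["amount_usd"]
        .sum()
        .nlargest(n)
        .index
    )
    # Count every project those donors touched anywhere in the round
    return donations[donations["donor"].isin(top_donors)]["project_id"].nunique()

# e.g. top_donor_project_coverage(donations, "BerryLab") vs. the same call for Project A
```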
Hey! 100 is good. We apply a sliding scale to donations based on the passport score. If your score is 0 you get no matching. If your score is 1 you get 50% matching, and this increases linearly until a score of 25 at which point you get 100% matching. The amountUSD is the value after this scale is applied while the startingAmountUSD is the original donation.
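In code, the scale looks roughly like this. How scores between 0 and 1 are treated is an assumption on my part; everything else follows the description above.

```python
def scaled_amount(starting_amount_usd: float, passport_score: float) -> float:
    """Apply the Passport sliding scale to a donation.

    Score below 1 -> no matching (assumption for fractional scores),
    score 1 -> 50% matching, rising linearly to 100% at score 25.
    """
    if passport_score < 1:
        multiplier = 0.0
    elif passport_score >= 25:
        multiplier = 1.0
    else:
        multiplier = 0.5 + 0.5 * (passport_score - 1) / 24
    return starting_amount_usd * multiplier

# e.g. a $10 donation with a score of 13 counts as $7.50 toward matching
```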
I'm looking into the bug in the registration process. If you DM me a good email address I can add you manually for now. I'm @umarkhaneth on tg
We've mentioned above and in other communications how COCM prioritizes projects with diverse sets of donors. The charts below give one way of visualizing that on GG20 data.
Each dot on these scatterplots is one project. The X axis is the average diversity of that project's donors, and the Y axis shows the amount that project benefited from moving from standard QF to COCM.
In more detail: to calculate the X axis number, we looked at all pairs of donors to a project. For each pair, we found the number of other projects that just one person out of the pair donated to, and averaged all those numbers. To calculate the Y axis number, we found the percent of the matching pool that project got under COCM and divided that by the percent they would've captured under standard QF. Also, since the DApps round was so big, we used 10k randomly sampled pairs per project instead of all pairs.
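For concreteness, here is a rough sketch of both calculations. The data structures and names are placeholders rather than our actual pipeline: donor_projects maps each donor to the set of projects they donated to, and cocm_share / qf_share hold each project's fraction of the matching pool under the two algorithms.

```python
import random
from itertools import combinations

def avg_pair_diversity(project_id: str,
                       donors: list[str],
                       donor_projects: dict[str, set[str]],
                       max_pairs: int = 10_000) -> float:
    """X axis: over (sampled) pairs of this project's donors, the average
    number of other projects that exactly one donor in the pair supported."""
    n = len(donors)
    if n * (n - 1) // 2 <= max_pairs:
        pairs = combinations(donors, 2)                                # all pairs
    else:
        pairs = (random.sample(donors, 2) for _ in range(max_pairs))   # sampled pairs
    diffs = []
    for a, b in pairs:
        only_one = donor_projects[a] ^ donor_projects[b]  # projects exactly one of them gave to
        only_one.discard(project_id)                      # count "other" projects only
        diffs.append(len(only_one))
    return sum(diffs) / len(diffs) if diffs else 0.0

def cocm_benefit(project_id: str, cocm_share: dict, qf_share: dict) -> float:
    """Y axis: matching-pool share under COCM divided by share under standard QF."""
    return cocm_share[project_id] / qf_share[project_id]
```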
These scatterplots show a correlation between this particular measure of donor diversity and project success. They might also help to explain why BerryLab, WalletX, and Stogram performed less well than expected: with regard to this measure, they're on the lower end of the distribution, coming in at 9.57, 4.97, and 10.64 respectively.
Hopefully these charts can be helpful to everyone trying to understand why funding results look the way they did, and can help all of us in the Gitcoin community understand how we want to align the algorithm in the future. I would love everyone's feedback on whether or not this is the type of behavior we want out of the algorithm, and why or why not.
Thank you for providing this data and taking the time to explain all the details. I think it might be beneficial to exclude the outliers for each project. This could help to eliminate any potential bias if a random wallet only donated to one project.
Hey @umarkhaneth, since $EARTH was rejected initially and then got through after we made the necessary changes for the OSS round, there were 2 grants LIVE: the one that got rejected and the one with which we reapplied.
I have been given to believe that votes to only one of the 2 live grants are being counted.
Any particular reason not to include votes to both?
Hey @solarpunkmaxi, our program automatically pulls data on accepted projects and calculates the matching results. Including the votes from a rejected project and/or combining votes on two projects requires manual intervention.
With that said, we've gone ahead and processed this for $EARTH in the Dapps round and for DSPYT (who reached out via telegram and also had a duplicate project situation) in the Hackathon round.
The updated results are live in the results sheet.
I re-ran the results with the "carpet bombers" you indicated removed from the donor pool. Removing them doesn't seem to significantly change outcomes: in particular, your project does get more matching, but only an additional 0.00000027 of the funding pool.
I also tried stress-testing the results by adding in up to five thousand more fake carpet bombers, to see if this can become a viable attack with many such accounts. With 5k carpet bombers added in, your project's share of the matching pool does decrease, but only by a fraction of 0.0000021 (compared to the world with all carpet bombers removed).
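To illustrate the shape of that stress test, here is a simplified sketch. Plain QF stands in for COCM (which isn't reproduced here), and the column names and synthetic-donor setup are my assumptions:

```python
import pandas as pd

def qf_share(donations: pd.DataFrame, project_id: str) -> float:
    """Project's share of the matching pool under simple QF:
    (sum of square roots of each donor's total)^2, normalized across projects."""
    weights = (
        donations.groupby(["project_id", "donor"])["amount_usd"].sum()
        .pow(0.5)
        .groupby(level="project_id").sum()
        .pow(2)
    )
    return weights[project_id] / weights.sum()

def add_carpet_bombers(donations: pd.DataFrame, k: int, amount: float = 1.0) -> pd.DataFrame:
    """Append k synthetic donors who each give `amount` to every project in the round."""
    projects = donations["project_id"].unique()
    fake = pd.DataFrame(
        [{"donor": f"bomber_{i}", "project_id": p, "amount_usd": amount}
         for i in range(k) for p in projects]
    )
    return pd.concat([donations, fake], ignore_index=True)

# baseline = qf_share(clean_donations, "SomeProject")
# stressed = qf_share(add_carpet_bombers(clean_donations, 5_000), "SomeProject")
```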
That being said, these seem to be very different results from what you noted in your original post. If you're comfortable emailing me with further information about what numbers you crunched to get the results you did, please shoot me another email.
Joel… I think this is just a case of most of my contributions being at opposite ends of the spectrum. Most of my donors were coming from outside the ETH ecosystem. Hopefully they are more established now and will get better scores in the next go around. We'll just have to build it up over time.