Hey Joel!
This was a really interesting read. Contrasting the Snapshot vote and the grants round really shows how different the outcomes were, and I wonder why that is. It’s possible they simply appealed to two different audiences. For example, the DeSci community did not participate much in the Snapshot vote, but this analysis shows they took a larger share of donations than the size of their matching pool would suggest. It’s possible they’re less engaged in Gitcoin governance but care more about the grantees in the round.
This suggests to me that if we were to use a Snapshot vote to pick core rounds again, we’d have to do so differently, and that maybe we should take QF-as-governance more seriously. If there’s strong community support for a featured round, as measured by donations in the round, then perhaps it should become a core round.
I really like how you’re contrasting different QF mechanisms to find which one delivers better results. It would be really interesting to zoom in on a single round, see how the results vary by mechanism, and ask whether those results align with what we expect.
For example, in the recent Citizens Round, the results were surprising. This was primarily a round for the Gitcoin community and in the initial proposal I called out examples of who I hoped this round would fund:
Yet my favorite grants, and the favorites of many high-context Gitcoin contributors, did not earn much of the matching pool. In the end, the top four projects in the round were all organizations rather than Citizens. We also found significant evidence that the round was attacked by a small army of airdrop-farming robots.
I’m wondering how different these results would be under a different mechanism. Instead of linear QF, how would pairwise matching, cluster matching, or connection-oriented cluster matching perform? Would the results be more in line with what we expect, and more resistant to sybil/bot attacks?
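To make the contrast concrete, here’s a toy sketch (not Gitcoin’s actual implementation) of the two ideas. Linear QF matches the square of summed square-root contributions, so many small donors beat one whale; pairwise-bounded matching (in the style of Buterin’s pairwise coordination subsidies) attenuates each cross-term between two donors by how correlated their overall giving is, which is exactly what dampens a bot army donating to the same grants. The `k` parameter and the donation amounts below are illustrative assumptions.

```python
import math

def qf_match(contribs):
    """Linear QF for one project: (sum of sqrt-contributions)^2 minus the
    raw total, so the match rewards breadth of support over size."""
    s = sum(math.sqrt(c) for c in contribs)
    return s * s - sum(contribs)

def pairwise_match(contribs_by_donor, k=1.0):
    """Pairwise-bounded matching sketch. contribs_by_donor maps
    {donor: {project: amount}}. Each cross-term sqrt(ci * cj) is scaled by
    k / (k + overlap(i, j)), where overlap measures how much donors i and j
    agree across ALL projects, so highly correlated (sybil-like) donors
    earn a discounted match."""
    projects = {p for d in contribs_by_donor.values() for p in d}
    donors = list(contribs_by_donor)

    def overlap(i, j):
        return sum(
            math.sqrt(contribs_by_donor[i].get(p, 0) * contribs_by_donor[j].get(p, 0))
            for p in projects
        )

    matches = {}
    for p in projects:
        m = 0.0
        for a in range(len(donors)):
            for b in range(a + 1, len(donors)):
                i, j = donors[a], donors[b]
                ci = contribs_by_donor[i].get(p, 0)
                cj = contribs_by_donor[j].get(p, 0)
                if ci and cj:
                    # 2x because the i,j and j,i cross-terms are symmetric
                    m += 2 * math.sqrt(ci * cj) * k / (k + overlap(i, j))
        matches[p] = m
    return matches
```

For example, four donors who each give 1 unit only to project `g` earn more matching under this sketch than four accounts that donate in lockstep to `g` and `h`, even though project `g` receives the same total either way.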
