Thanks for this suggestion. We also made this budget review tool on Ethelo and would love for you all to check it out to review all of the Season 14 budget requests! https://gitcoinbudget.ethelo.net/
Yes. This is directly a goal of ours with the Catalyst squad (formerly called Matrix).
We actually found that this one bubbled up for us from all the contributors. We weren’t doing a good job of sharing context within the workstream between squads. Most of your other criticisms here seem to highlight the need for some level of comms.
This is a very small budget, and the outcome owner is already a Source Council member with another outcome, so there isn't a full-time person dedicated to it. We simply felt the need to be intentional about this.
One answer to this is that we raised the round management budget substantially and put the grant policy work within this stream. We believe the policy errors were more process-driven than due to an unclear policy (although it's just a weighting, not a binary).
Another answer is that we are bearish on policy in general. Instead, we are solving for how any community, including our own, can collectively set criteria and review grant eligibility status against those criteria in a decentralized way that maximizes for trusted outcomes first, then minimizes cost for scalability.
You can see this in our Rewards Modeling OKR report here, or if you want a high-level overview, watch this 8-minute video.
This workstream was formerly called Matrix. We believe its work is crucial because without it we are not preparing for a Grants 2.0 future. Funds removed from fraudulent allocation is the most relevant metric for them.
Without them, we do not have a high-level overview of why we are focusing where we are. Last season, Kylin's review of the matching caps informed the Grants Ops decision on how to handle them.
Additionally, they identified an entirely new vector of attack. Their work is needed in my opinion, and their budget is now lower than the sybil model's only because innovation and research are a hard sell; unfortunately, FDD is tasked with solving two unsolved research problems for Gitcoin.
In season 14, they are tasked with coming up with Grants 2.0 components which can minimize fraudulent allocation. They will map the composable components and communicate an understanding of the tradeoffs in different component stacks. They will also be able to inform decisions based on simulations.
Grants 2.0 is going to be a main customer for this squad because it will need a collection of components to start. The Grants 2.0 team is currently focused on building the first components as the ones used in cgrants, but this squad can simulate and suggest new legos with code that can be dropped into production.
They can also plug in synthetic data to the community model to help train it faster. Another way they can reduce costs is in prototyping NLP solutions. Lastly, they can help other FDD squads with simulations. Deliverables include reports, graphics, and even code for Grants 2.0 components.
I will be directly leading this effort. The details are in flux, but we need to begin building some of our standards into protocols which will serve as microservice protocol DAOs. The entire solution can be generalized to serve all of web 3. Here is the Sybil Detection DAO deck.
Overall, thank you for the thoughtful response.