Community Round Gov Process Retrospective
I'm excited to post this retrospective on the newly implemented governance process for community rounds during Gitcoin Grants rounds. With GG20 wrapping up this week, I wanted to bring some learnings from this process to light. This post will include:
- Retro on Community Council and their experience
- Retro on the selection criteria, additional learnings and how we might move forward differently.
Thank you to @Sov for spearheading the initial idea of the evolution of Community Rounds, and to the 7 members of the council, @mashal, @thedevanshmehta, @azeem, @ZER8, @lanzdingz, @wasabi, @feems, who embarked on this experiment with us! It was an honour to coordinate this undertaking and learn alongside everyone throughout the process.
As a recap, the new governance process included enabling communities to upload proposals to our forum, and applying for extra matching funds from Gitcoin to run in a Gitcoin Grants round (GG20 was the first iteration). The Community Council then reviewed and voted on the proposals, with the top 5 being selected for matching funds. If you're unfamiliar with this process, read the following posts for further context: GG20 Eligibility Criteria - Community Rounds, GG20 Community Rounds Announced, GG20 Community Council.
Community Council retro:
This being the first iteration of Gitcoin electing an external GG Community Council, I have been personally very pleased with the results and the council that was elected. The council's diversity and experience put us in a strong position to govern GG20.
A detailed survey was presented to the council to gather feedback on their experience and ways we could improve the process. Members rated their overall experience on a scale of 1-5 (1 being "poor" and 5 being "excellent"). The results revealed an average rating of approximately 4.71 out of 5 for the council's overall experience, indicating a highly positive experience among the council members. Each member also spent roughly 10 hours on this initiative, in line with what we predicted.
"For how fast everything moved and the care needed to effectively carry this out with a tight timeline I thought it was done as well as it could have been when you factor in everything. We were able to identify what needed to be changed in the future, without letting that affect our ability to get the current task done on time now. Everything was handled in a way that gives me a lot of confidence for the foundation of what we're building/Gitcoin is working towards helping create." - Lana
Retro on the selection criteria, learnings:
Using CharmVerse as a platform to review & vote:
Other options we explored before selecting CharmVerse included ranked voting through Google Sheets, which we tested unsuccessfully because producing reliable results would have required more than our 7 voters and reviewers. We also looked at manual reviews, which are labour-intensive and not favourable. We wanted a platform where the council could easily review and comment on proposals as well as vote using a rubric, which is why CharmVerse was a great option.
Within the same survey, the council was asked to rate their experience with the chosen platform on a scale of 1-5 (1 being "poor" and 5 being "excellent"). The average rating for CharmVerse was 4.14 out of 5. This suggests a generally positive consensus towards continuing to use it, with most members indicating satisfaction, though a couple saw room for improvement. We have given the CharmVerse team feedback on how we would like the UI to be improved, and given this survey outcome, we will most likely keep CharmVerse as the platform moving forward, making sure those improvements are implemented by GG21, barring any changes required by adaptations of the rubric and eligibility criteria detailed below.
Improvements to the Rubric:
Here are some suggestions that surfaced in the survey that we would like to implement in the rubric for future rounds. NOTE: these are not set in stone; we will circle back with the council and ensure the rubrics are created in collaboration with them.
- Include "community reach/impact" and details about community leaders' public/social impact.
- Introduce a rubric for round deliverables or milestones, possibly weighing the network effects.
- Expand and refine rubric definitions to ensure clarity, such as defining terms like âleverageâ more explicitly.
- Introduction of mechanics that determine the ratio of matching being received by a Community Round; this could be a subset of the rubric focused on the financial aspects of the application.
- Add more measurement criteria tied to application questions for easier referencing.
- Incorporate individual or group sessions with applicants, and add a measure of project diversity to prevent the same well-known, popular projects from monopolizing funding.
Changes to Eligibility Criteria:
Strong eligibility criteria that are also closely linked to the rubrics used for evaluation can ensure high-quality, values-aligned funding rounds that serve the community well. The councilâs input will set the eligibility criteria for future rounds.
Here are some suggestions:
- Consider restrictions such as disallowing back-to-back funding rounds for previous recipients, with a potential reinstatement after a cycle or annually.
- Require previous recipients to provide impact tracking metrics and improvement plans for subsequent rounds.
- Standardize metrics more comprehensively to enhance comparability across different projects and rounds.
- Require rounds to share progress, community size, and all other relevant stats.
- Address transparency and conflict of interest concerns, such as disclosure of relationships between applicants and round participants.
- Require disclosure of all forms of funding.
Those not selected for matching felt "excluded":
One thing we picked up on is that the rounds not selected for matching funds felt excluded and disempowered from continuing to run a round during GG20. This is not something we intentionally set up, and it is one of the biggest pieces of this process I would like to change. How might we ensure each round that applies feels empowered, while still upholding the resources that Gitcoin can give community rounds during a round?
For example, when it comes to marketing rounds outside of those selected for matching, there are many factors to consider: resources, vetting of the rounds, intentional coordination, etc. We are actively working on creating a system that is inclusive and impactful for all involved and making sure the process is fair, vetted, and well resourced.
Moving forward:
On the topic of making the process more inclusive for all involved, I suggest the following:
- The top 5 rounds are still selected for matching through the same review and voting process, but the top [ X ] will be included in our marketing efforts. The exact number of rounds we can support and what this marketing support will look like will be discussed internally and fleshed out in another post leading up to GG21.
- We will update the rubrics and eligibility criteria with the council, as we would like to give them more autonomy in this regard.
- An updated Roles & Responsibilities doc will be posted to the forum, outlining the new and updated responsibilities of the council as their roles evolve moving forward.
"The comms part by the Gitcoin team was excellent. On the whole, I've taken this role pretty seriously and have felt that it was a big responsibility to make sound decisions with intentionality and countering bias that comes with decision making. I'd love to stay updated re the results of each round, and to understand how they were run ++ learnings et al from them (would esp like to understand challenges and what went well + didn't from each). This will help in improving decision making for the next one." - Mashal
The goal of Community Rounds is for Gitcoin to continue empowering those within the ecosystem to fund what matters to them most, creating an inclusive and collaborative environment during GG, and further decentralizing our grants program.
We have taken a great first step, and through these iterations and learnings, we can continue to improve over time.