Regen Coordination GG23 - Community Feedback Report
This feedback report synthesizes responses collected from participants and applicants across all three Regen Coordination GG23 rounds: Regen Coordination Global, ReFi Mediterranean, and Regen Rio de Janeiro. Feedback was gathered via a dedicated Tally form, where respondents were invited to rate their experiences regarding the application process, the fairness of outcomes, the quality of communication and support, and the overall program. In addition to quantitative ratings, the form solicited rich qualitative feedback, including reasons for their ratings, suggestions for improving the Gitcoin and Karma GAP experience, comments on impact reporting and evaluation processes, and any other ideas or reflections participants wished to share. This report aims to cluster and synthesize these insights to inform future improvements and highlight key themes emerging from the community’s experience.
Overview Summary
Regen Coordination Global
- Number of responses: 9
- Average Score: 4.65 / 5
- Breakdown:
  - The application process: 4.8
  - The fairness of outcome: 4.3
  - The communication and support: 4.7
  - The overall program: 4.8
Most participants rated the Regen Coordination Global round very highly across all categories, highlighting a well-documented program, clear communication, and strong support from organizers. Respondents appreciated the clarity of expectations and the resources provided for both donors and grantees. However, some concerns were raised about the fairness of outcomes, particularly regarding the COCM (Connection-Oriented Cluster Matching) and Quadratic Funding mechanisms. One participant felt that these systems could be manipulated by coordinated groups or fake accounts, potentially disadvantaging genuine projects and creating “donor fatigue.” Despite this, the majority of feedback was positive, with participants noting the transparency, helpfulness, and overall effectiveness of the program.
ReFi Mediterranean
- Number of responses: 1
- Average Score: 5 / 5
With only a single response, all aspects of the ReFi Mediterranean round received perfect scores. While this is a positive indicator, more responses would be needed to draw representative conclusions or identify specific themes.
Regen Rio de Janeiro
- Number of responses: 5
- Average Score: 4.7 / 5
- Breakdown:
  - The application process: 4.6
  - The fairness of outcome: 4.6
  - The communication and support: 4.8
  - The overall program: 4.8
Participants in the Regen Rio de Janeiro round also gave high marks, especially for communication, support, and the overall program. Respondents described the experience as inspiring, with strong community engagement and effective support from organizers. The timely delivery of resources and the organization of follow-up activities were particularly appreciated. Some feedback pointed to areas for improvement, such as occasional confusion in the process, communication issues among round operators, and the need for better onboarding and security protocols for new users unfamiliar with Web3 wallets. Overall, the round was seen as transparent, fair, and impactful, with a few suggestions for enhancing clarity and user support in future rounds.
The following sections present a deeper thematic analysis of the richer qualitative feedback provided by respondents in the feedback form:
1. Application Process & User Experience
Positive Feedback:
- Many respondents found the application process clear, well-documented, and user-friendly.
“It was a well documented program with all the resources for donors and grantees to make informed decisions every step of the process.”
“Overall very user friendly and great support!”
Areas for Improvement:
- Some found the process confusing or inaccessible, especially for those less familiar with Web3 or digital tools.
“I couldn’t quite figure any of that out, which is one of the reasons I didn’t apply… that and knowing that because our community doesn’t have that strong of a crypto presence… the amount of work required for the potential payout would just not make the numbers line up.”
Suggestions:
- Improve onboarding, navigation, and language support (especially for non-English speakers).
“It’s still too early to say, but my feedback is mainly about making the platform more accessible to Portuguese-speaking users. I noticed that there is no support for Portuguese.” (translated)
- Make the process more accessible for analog, global south, and non-Web3 native participants.
2. Fairness, Evaluation, and Outcome Perception
Positive Feedback:
- The majority of participants felt the process was fair, transparent, and balanced.
“It was a very detailed process with multiple layers of evaluation, a balanced approach.”
“The team organized multiple support rounds and were very efficient… really transparent, consistent and fair with the outcome.”
Critical Feedback:
- Others expressed perceived bias or issues with the evaluation criteria.
“The current impact reporting and evaluation process doesn’t feel entirely fair. It often favors those who are more experienced in framing their work in technical or metrics-heavy language, rather than those actually creating meaningful on-the-ground impact.”
One respondent rated fairness of outcome a 2, citing an expressed lack of faith in the COCM mechanism.
Concerns Raised:
- The process may favor projects skilled in digital/metrics-heavy reporting over grassroots, analog, or less technical projects.
- Potential for conflicts of interest (e.g., council members with projects in their own rounds).
- Risk of competition and “popularity contest” dynamics, rather than true impact assessment.
Suggestions:
- Increase contextual understanding and diversity in evaluation panels.
- Separate governance and grantee/operator roles to avoid conflicts of interest.
- Consider anonymizing projects during voting to reduce bias and focus on project qualities.
3. Karma GAP Platform & Impact Reporting
Positive Feedback:
- Many found the platform useful for tracking and reporting impact.
“Everything was clear and well articulated, no surprises what we were being judged on and our impact.”
Critical Feedback:
- Some found Karma GAP difficult to use, reported bugs in the software, and noted difficulty tracking non-digital or ecosystem work.
“Karma GAP needs significant improvements before it can be considered a mandatory tool. It is difficult to track ecosystem work outside the main GitHub or DAO tools.”
“Karma GAP was a bit clunky to fill out metrics. Also there were sometimes many similar indicators in the dropdown menu/autocomplete, which may or may not be suitable. A lack of consistency makes it hard to compare metrics like for like.”
“Often it feels like it takes more time to prove impact than to make impact, which does not make sense. A new system should be created.”
Suggestions:
- Improve UI/UX, onboarding, and navigation.
- Add features for better status updates and progress tracking for projects.
- Automate impact proofs and reduce the reporting burden, especially for intangible or qualitative impact.
4. Communication & Support
Positive Feedback:
- Many praised the support and communication from organizers.
“The guys were always available to assist, if we had questions or needed help, the communication was as clear as daylight, especially as to how the money would be paid out.”
“The team organized multiple support rounds and were very efficient of organizing everything.”
Areas for Improvement:
- Some noted occasional confusion or communication breakdowns.
“There was some noise in the communication between the Round operators that hindered the overall understanding.” (translated)
Suggestions:
- Provide more real-time updates and status tracking for projects and donors.
“Would be nice to keep donors better informed as the round progresses - not sure how we could automate round updates? For example, crowdfunding apps let you send emails to all your donors.”
5. Inclusivity, Diversity, and Accessibility
Concerns:
- The process may unintentionally exclude analog, grassroots, or global south projects, and those not engaging with AI or digital tools.
“I would recommend that you guys ensure that you have analog, global south participants when you decide how to frame future rounds and also that you provide some kind of parity for projects who are choosing not to engage with AI.”
Suggestions:
- Broaden accessibility and support for non-digital, non-English, and less technical participants.
- Ensure evaluation criteria and processes are inclusive of diverse types of impact and project approaches.
6. Quadratic Funding, COCM Algorithm, and Voting Dynamics
Critical Feedback:
- Concerns about the COCM algorithm and QF mechanisms leading to unintended consequences:
“Encouraging donors to vote for more projects creates the opposite effect of Gitcoin growth. It creates ‘Donor Fatigue’ making it harder for voters, making them put more money when they really do not want to, and discouragement of the more genuine, ethical and honest projects / stewards / donors.”
Suggestions:
- Re-evaluate the goals and outcomes of the COCM system.
- Consider mechanisms to reduce gaming, donor fatigue, and to better align incentives with genuine impact.
7. Security, Resource Management, and Onboarding
Suggestions:
- Provide security protocols and onboarding for new users, especially those unfamiliar with Web3 wallets and resource management.
“My suggestion is to create security protocols for the projects to follow as soon as they receive the resource, directing them to a possible off-ramp with security.”
8. General Praise and Encouragement
- Many respondents expressed gratitude, appreciation, and encouragement for the organizers and the process.
“Congrats for this beautiful work, is very important support who is invisible for the system, but do a great impact for every one.”
“Keep up the good work!”
9. Meta-Reflection: Tensions and Trade-offs
- There is a recurring tension between digital/metrics-based evaluation and the recognition of intangible, qualitative, or grassroots impact.
- The need for both transparency and inclusivity, and for systems that are robust against manipulation but not overly burdensome or exclusive.
- The challenge of scaling impact evaluation while maintaining trust, fairness, and community alignment.
Feedback Summary Table
| Theme | Positive Feedback | Critical Feedback / Concerns | Suggestions / Requests |
|---|---|---|---|
| Application Process & UX | Clear, user-friendly for some | Confusing, inaccessible for others | Better onboarding, navigation, language support |
| Fairness & Evaluation | Transparent, balanced for some | Perceived bias, favoring digital/metrics | Diverse panels, anonymized voting, separate roles |
| Karma GAP & Impact Reporting | Useful for some | Buggy, unintuitive, hard for non-digital | UI/UX improvements, automation, easier editing |
| Communication & Support | Responsive, clear | Occasional confusion | More real-time updates, status tracking |
| Inclusivity & Accessibility | - | Excludes analog, non-English, grassroots | Broaden accessibility, inclusive criteria |
| QF/COCM & Voting | - | Manipulation, donor fatigue, misaligned incentives | Re-evaluate mechanisms, reduce gaming |
| Security & Onboarding | - | - | Security protocols, onboarding for new users |
Conclusion
The feedback from GG23 Regen Coordination participants reveals a vibrant, committed community with a strong desire for fairness, inclusivity, and meaningful impact. While many aspects of the process are praised, there are clear calls for improvements in accessibility, evaluation fairness, platform usability, and the alignment of funding mechanisms with real-world impact. Addressing these themes will help strengthen trust, participation, and the overall effectiveness of future rounds.