UPDATE: Coming out of a Gitcoin 3.0 Twitter Space, @LuukDAO asked whether we would be taking past Community Round performance into account in this new structure. It would be a miss, IMHO, if we didn't, so I have updated the evaluation criteria.
Thank you for this feedback! Based on this and others, we recognize the need to make GG24’s sensemaking phase more iterative, accessible, and aligned with real-world complexity. We’ll consider adjusting the framework. Expect a follow-up shortly outlining some refinements in more detail.
thanks for being so responsive to the community! i think it's key if we're going to sensemake about ethereum's biggest problems
keep the constructive feedback coming everyone!
if anyone wants feedback on their sensemaking initiative, feel free to post about it on this thread and i'm happy to give feedback where i think i can add value!
Thank you to everyone for your valuable feedback on this post. The key points I took away from the feedback are that 1) this framework needs to be simplified a bit, 2) there is no real incentive structure for participation, and 3) it's a little confusing.
I hope that the above updates (adding compensation, simplifying the timeline and flow of expectations and providing a report template) will add clarity and drive these conversations forward constructively.
We’re building this ship as we go, so I appreciate everyone’s input and guidance!
Hi all,
I’m not sure how to make my point here, but here it is.
A crowd’s wisdom is very difficult to capture.
Capturing it is counter-intuitive, often exactly the opposite of what is considered usual. We all think: if we want community input, we should hold a meeting or open a forum. It seems obvious. Unfortunately, this gold-standard paradigm has repeatedly been shown to lead to poor judgements.
This is probably counter to your core beliefs. Particularly if you believe in open discussion and voting consensus. But it doesn’t change the facts.
Here is what the literature states is needed to harness the magic of the wisdom of crowds:
- Diversity of opinion
- Decentralization with local knowledge
- Independent judgement
- Unbiased aggregation
Here is a potential solution:
Every 6 months, ask the community to answer the question: “What are Ethereum’s 3 biggest problems, from your point of view?” Please write three TL;DR paragraphs.
All responses and authors are hidden from view until after the aggregation.
Target 200 responses by offering a US$100 incentive.
Collate all responses and determine the unbiased aggregation. SimScore was created for this purpose.
A list of the 4-5 biggest problems will be more accurate than the responses from any individual contributor.
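To make the aggregation step concrete, here is a minimal sketch of what an unbiased, algorithmic aggregation could look like. TF-IDF vectors plus cosine similarity are an assumed stand-in; SimScore's actual method may differ, and the tooling choice here is an illustration, not a recommendation.

```python
# Illustrative sketch only: rank anonymized responses by how close each one
# is to the "consensus" of the whole set. TF-IDF + cosine similarity is an
# assumed stand-in for a purpose-built tool such as SimScore.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_by_consensus(responses: list[str], top_n: int = 5) -> list[tuple[str, float]]:
    """Return the top_n responses closest to the centroid of all responses."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)
    centroid = np.asarray(vectors.mean(axis=0))      # the crowd's "average opinion"
    scores = cosine_similarity(vectors, centroid).ravel()
    ranked = sorted(zip(responses, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_n]

if __name__ == "__main__":
    sample = [
        "Onboarding new users to wallets is still too hard.",
        "Gas fees price out small transactions.",
        "Key management and wallet UX confuse newcomers.",
    ]
    for text, score in rank_by_consensus(sample, top_n=3):
        print(f"{score:.2f}  {text}")
```

The key design point is that the ranking is computed once, over submissions that were hidden until the deadline, so no participant’s answer can influence another’s.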
I hope this is clear. The wisdom of crowds is much more powerful than open crowd consensus, yet we never move away from open crowd consensus.
Thanks @maets23
Can you give a more concise/clear response to the framework?
ask the community to answer the question: “what are ethereum’s 3 biggest problems, from your point of view?”
This is what the current framework is. As @Hydrapad has submitted here.
Hi @deltajuliet ,
It looks like Hydrapad’s report was generated by an LLM based on Mathilda’s template, with the primary goal of selling Hydrapad’s solution.
Tools Used:
- Root cause analysis of startup failure metrics
- Comparative study of fundraising mechanisms (SAFTs, ICOs, bonded curves)
- On-chain liquidity analysis (Dune, DeFiLlama)
| Source | Key Finding | Severity |
| --- | --- | --- |
| Dune Analytics | 68% presale tokens crash >90% | Critical |
| GG19 - 23 | 92% founders lack mentor access | High |
| Hydrapad | [value proposition] | Solution |

Data clustered around 3 themes:
- Capital Access: Fragmented tools increase failure rates.
- Operational Burden: Compliance/KYC slows launches.
- Liquidity Mismatch: Static presales cause volatility.
This is as far as possible from the complexity-informed sensemaking that @owocki described.
Unfortunately, the current framework is vulnerable to this kind of “sensemaking through AI-generated reports”. If Gitcoin wants to treat the Ethereum ecosystem as a complex environment (which it is), it’s important to implement the “probe-sense-respond” approach and make sure that reports are not about “solving problems”, including pitches or self-advertising. Sensemaking isn’t about creating proposals for what should be done, but rather about understanding the environment.
A few members of Sensemaking Scenius (including myself) have collaborated on this proposal: [Gitcoin 3.0] Complexity-Informed Sensemaking Pilot
We would really appreciate your feedback. Our intention is not to challenge the existing framework, but rather to invite the Gitcoin community to a serious conversation about sensemaking.
@zhgnv, I appreciate your criticism on the subject. Selling Hydrapad wasn’t the primary goal; it was offered as a complementary option. I can remove the Hydrapad selling points from the proposal if that becomes an issue.
I strongly support the strategic direction towards funding public goods through decentralized community rounds, which is an essential evolution for Gitcoin Grants 2024. However, I believe the current approach could benefit from greater flexibility and more community-driven methods to truly address Ethereum’s most urgent and solvable problems. While predefined problem briefs are important, I propose that we introduce real-time feedback sessions and community-driven workshops. These workshops would allow for organic idea generation and collaboration, ensuring that we are considering a wide range of perspectives. By focusing on identifying the most impactful challenges in Ethereum and using AI-driven tools to tag and categorize emerging issues, we can ensure the inclusivity and transparency of the entire process.
Additionally, the focus on retroactive funding is a promising direction, but I suggest we further support mature builders by implementing metrics-driven funding mechanisms. This would ensure that resources are allocated not only to early-stage initiatives but also to projects with proven impact. Mature builders, who have already demonstrated success, should receive funding based on measurable outcomes and sustainable results, which will drive long-term sustainability and systemic growth across Ethereum’s ecosystem.
Finally, I would recommend expanding Gitcoin’s funding mechanisms to support niche areas like climate action, digital privacy, and decentralized science. These domains not only align with Ethereum’s potential but are also critical for addressing issues often overlooked by traditional funding models. By leveraging Ethereum’s capabilities, Gitcoin can provide innovative solutions to challenges that have long been neglected by mainstream initiatives, positioning itself as a true leader in public goods funding.
I believe this approach will allow Gitcoin to remain at the forefront of public goods funding while fostering a more decentralized, inclusive, and impact-driven ecosystem. By ensuring that we are supporting builders at all stages of their development, we can create an environment where innovation thrives, not just for today, but for the long term.
Hi Deltajuliet,
- Collective Sensemaking.
On a semi-annual basis,
1.1 Identify a representative group of 200 participants from the Ethereum ecosystem (similar to sortition).
1.2 Ask the 200 participants the following question: What are Ethereum’s top 3 biggest problems? (400-500 words)
1.3 Determine an unbiased aggregation of the responses.
1.1 is designed to ensure diversity and decentralization. If we allow people to self-select, we undermine the Wisdom of Crowds method, which insists on diversity and local knowledge. Unfortunately, a large part of this effort will be chasing laggards who do not provide answers.
1.2 Participants should submit their answers independently, without seeing or interacting with anyone else’s answers. Asking for 3 answers gives each participant a little more freedom. I would suggest modifying this forum so that answers are written in the forum but hidden until after the deadline. Following this step, we will have approximately 600 answers from a diverse, decentralized, and independent group: key tenets of the wisdom of crowds.
1.3 My suggestion is to use an algorithm to determine the top 5 or 6 problems from the list of 600 answers. This will reduce variability, as the algorithm provides a single analysis that would be transparent, auditable, etc. However, the literature also suggests manual synthesis can be effective as long as the analyst(s) are independent, with no skin in the game.
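As an illustration of what the algorithmic step in 1.3 might look like, here is a rough sketch that clusters the ~600 free-text answers into a handful of themes and surfaces a representative answer for each. TF-IDF plus k-means is an assumed stand-in, not a claim about which algorithm should actually be used.

```python
# Illustrative sketch only: cluster the collected answers into n_problems
# themes and return the answer closest to each cluster centre as that
# theme's representative "problem statement".
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def surface_top_problems(answers: list[str], n_problems: int = 5) -> list[str]:
    X = TfidfVectorizer(stop_words="english").fit_transform(answers)
    km = KMeans(n_clusters=n_problems, n_init=10, random_state=0).fit(X)
    representatives = []
    for k in range(n_problems):
        members = np.where(km.labels_ == k)[0]
        # distance of each member answer to its cluster centre
        dists = np.linalg.norm(X[members].toarray() - km.cluster_centers_[k], axis=1)
        representatives.append(answers[members[dists.argmin()]])
    return representatives
```

Whichever method is chosen, the important properties are the ones named above: a single, transparent, auditable analysis run only after all submissions are in.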
At this point there would be 5-6 problems that can move on to next steps such as validation, Cynefin analysis, DDA, and execution.
By doing collective sensemaking every 6 months, we will see whether problems vary over time and whether solutions are working (i.e., a problem is no longer in the top 5). This is a very positive iterative loop that could likely replace the retrospectives.
Hi all! Adding a piece to this Strategic Sensemaking stream: one domain I see underexplored is proof of meaning itself.
Skyla is an agent framework that turns symbolic state transitions — like an AI’s narrative, alignment, or interpretive shifts — into recursive zero-knowledge proofs. Instead of debating “what matters” only after the fact, we can cryptographically prove how meaning evolves.
We’re testing this now with Nythaerna (a narrative AI) to anchor a living story as a verifiable proof chain — the first practical demo of recursive symbolic continuity settling to Ethereum or any neutral DA layer.
This feels well aligned with a GG24 domain around Meaning Awareness + ZK-backed Interpretive Continuity — anchoring subjective meaning in objective proofs for cross-agent interfacing and as a coordination primitive in the long run:
- Federated Interpretation: Multiple validators can verify the same symbolic state through different relational lenses
- Cross-Agent Bridges: Authenticated identity portability across systems while preserving narrative autonomy
If anyone is building in this direction, I’d love to contribute Skyla’s spec, early tests, and recursive architecture to help evolve it further.
Thanks for opening this space. Excited to help shape what emerges.
— Maggie
github: skylessdev/skyla
hey @MathildaDV , just signaling here our sensemaking report and proposal: [GG24] Sensemaking Report at Pre&Post-Grant Coordination | From Allocation to Alignment & Accountability
Because the current sensemaking process has been wildly successful and the reports need further, more extensive input, we are extending the deadline to August 15, 2025. I believe this will give everyone more time to complete their sensemaking, resulting in highly valuable reports for GG24.
thanks for the update @MathildaDV ! Great to see the process is hitting such success!
I’m seeing:
but I don’t see any defined feedback phase in the timeline
since there’s now more time for Phase 1, I’m curious whether defining a dedicated feedback (sub)phase (either async or in Twitter Spaces) on the already submitted reports would make sense
is the mentioned schedule for sensemaking sessions already out? I’ve seen some Twitter Spaces happening, but not a schedule, as far as I’ve noticed.
Here are 3 domains I’d like to see:
- Academic research relevant to Ethereum
- Open source digital infrastructure for community land trusts
- Cultural outputs (books, films, games, immersive media) that tell the story of systemic regeneration and post-collapse futures
Hi @sepu85!
We just added a Twitter Space to the Luma calendar and will continue to add events as we move through Sensemaking Szn!
Excited about the structured approach—especially the emphasis on genuine impact over hype.
Looking forward to participating in Sensemaking and seeing this framework in action!
What voting platform are you using? I would like to suggest Updraft. Why? Because it allows people to co-own Ideas. So instead of giving a bounty just to the creator of an eligible report, you can airdrop a bounty on the report and it will automatically be split among all supporters along the way who helped to surface and bring attention to it. The crowd can do the reviewing for you. The best reports will be on the top of the list for your final review and bounty distribution.
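For readers unfamiliar with the mechanic being described, here is a hypothetical sketch of how a bounty might be split pro-rata among an Idea’s supporters. This is purely illustrative and is not Updraft’s actual contract logic.

```python
# Hypothetical illustration only (not Updraft's actual logic): split an
# airdropped bounty among an Idea's supporters in proportion to the support
# each one contributed.
from decimal import Decimal

def split_bounty(bounty: Decimal, support: dict[str, Decimal]) -> dict[str, Decimal]:
    """support maps a supporter's address to the amount of support they gave."""
    total = sum(support.values())
    if total == 0:
        return {}
    return {addr: bounty * amount / total for addr, amount in support.items()}

# Example: a 100-token bounty split among three supporters.
print(split_bounty(Decimal(100), {
    "0xAlice": Decimal(50),   # larger support share -> larger slice of the bounty
    "0xBob": Decimal(30),
    "0xCarol": Decimal(20),
}))
```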
Links showing how voting works on Updraft.
It takes 2 minutes to create a campaign on Updraft. We could use a set of tags like “gitcoin” “sensemaking” “gg-24”.
@MathildaDV Let me know why you wouldn’t do this.
I created an Idea on Updraft that proposes this. If you agree, support it.
Briefs are encouraged to use tags and short summaries for easier synthesis.
Anyone can take a sensemaking report they like, create an Idea on Updraft that includes a summary and the tags “gitcoin” “sensemaking” “gg-24” (plus a couple other relevant tags). Then anyone else can support that report–and earn money if it’s popular. You don’t have to be the author of the report to put a summary of it on Updraft.
You can start doing this now; you don’t need to wait. If we do a good job, hopefully someone will notice and airdrop on the best reports (ideas).
If you need free UPD and Arbitrum ETH, you can get them from the Updraft faucet.
I actually had this same question the other day regarding the platform we would use. Personally, I think one of the considerations when determining which platform to use is operational overhead and UX consistency in relation to the overall program. I will highlight this suggestion with the team.
Thanks @castall. We have been happy with our Snapshot/Tally setup and I don’t really see a reason to change. More than the platform we use to vote, I am more interested in how we can see more activation of GTC stakeholders and broader participation. If Updraft can help solve for that, I would be open to hearing more.