I plan to use this scorecard to quickly assess the strength of any GG24 domain proposal.
I hope that by articulating my criteria I will help domain proposers understand the criteria they will be judged on (at least by me; I can’t speak for other stewards). The goal of doing this is to make my voting less arbitrary/capricious and more legible/transparent.
Owocki’s scorecard
Score each criterion 0–2 points (0 = not met, 1 = partially met, 2 = fully met).
Max score: 16 points. Higher scores = stronger proposals.
| # | Criterion | 0 | 1 | 2 | Notes |
| --- | --- | --- | --- | --- | --- |
| 1 | Problem Focus – Clearly frames a real problem (one that is a priority), avoids “solutionism” | | | | |
| 2 | Credible, High-Leverage, Evidence-Based Approach – Solutions are high-leverage and grounded in credible research | | | | |
| 3 | Domain Expertise – Proposal has active involvement from recognized experts | | | | |
| 4 | Co-Funding – Has financial backing beyond just Gitcoin | | | | |
| 5 | Fit-for-Purpose Capital Allocation Method – Methodology matches the epistemology of the domain | | | | |
| 6 | Clarity (TL;DR) – Includes a concise summary at the top | | | | |
| 7 | Execution Readiness – Can deliver meaningful results by October | | | | |
| 8 | Other – General vibe check and other stuff I may have missed above… | | | | |
Total Score: ___ / 16
Scoring Guide:
- 0 points – Not addressed or significantly weak.
- 1 point – Partially addressed, needs improvement.
- 2 points – Fully addressed and compelling.
I invite other top stewards (@ccerv1, @MathildaDV, @kbw, and so on), or really anyone who plans to vote on proposals, to post their scorecards in the thread below!
I invite anyone who is submitting a domain to critique my scorecard. Should I weigh some pieces higher than others? Should I be using other criteria? Let me know below!
i have finished my first round of votes on proposals, and this is what it looks like.
note that i’m considering these “draft” scorecards. i may still update them in the future if my feedback is considered, or if, through dialogue with the teams, i change my mind.
looking forward to seeing how other stewards rate proposals.
Thanks for sharing these draft scorecards! super super helpful to see them laid out this way, and very transparent + useful for teams.
quick question: are you also considering giving a short window (e.g., one week or so) where teams can make changes based on feedback, raise additional funds, or follow up with you to see if anything might shift your thinking? Basically, what’s the path for these “drafts” to move into a final stage?
Also curious to know what criteria are the other stewards using for their evaluations!
And is there a set process for how these scores will actually translate into how much matching the domains get? (sorry if I missed that somewhere!)
Overall I loved the process - super interesting and def kept us on our toes!
happy to see how things evolve over the next week or two and revisit my scores then
yeah, we’re voting on how the funds will be distributed in a couple of weeks, i think. i plan to vote pretty close to my scores (but am not committing to 100% fidelity, at least not for this round).
This is the scorecard I’ll be using to evaluate GG24 domain proposals.
It combines two layers:
1. A basic submission compliance check to ensure proposals follow the required format.
2. A strategic evaluation rubric that scores clarity, execution readiness, and long-term value.
I’ve kept the total score at 16 points to align with @owocki’s format, but the criteria map directly to the official GG24 template as outlined by @MathildaDV. This makes it easier to compare across stewards without compromising judgment or perspective.
@owocki noted, “different scorecards are a feature, not a bug in a polycentric political economy.” I agree — variation in how we assess these domains is part of what makes the process more robust.
Submission Compliance Check (Pass/Fail)
Proposals should meet the GG24 template requirements, as outlined here, before being scored:
- 800–1,200 words total
- Problem & Impact (400–500 words)
- Sensemaking Analysis (200–400 words)
- Gitcoin Fit & Fundraising (200–400 words)
- Success Metrics & Reflection (200–300 words)
- Domain Info (experts, mechanisms, subrounds if any)
If a proposal misses this structure, I’ll move on. If it passes, it gets a score below.
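For transparency, here is roughly how I think about that gate, expressed as a quick Python sketch. The section keys and the naive whitespace word count are simplifications of the template above, not an official checker; in practice the check is me reading the post.

```python
# Rough sketch of the pass/fail compliance gate described above.
# Section labels mirror the template sections; word counts use a naive split.
SECTION_LIMITS = {
    "Problem & Impact": (400, 500),
    "Sensemaking Analysis": (200, 400),
    "Gitcoin Fit & Fundraising": (200, 400),
    "Success Metrics & Reflection": (200, 300),
}
TOTAL_WORDS = (800, 1200)  # overall word window from the template

def word_count(text: str) -> int:
    return len(text.split())

def passes_compliance(sections: dict) -> bool:
    """True if every required section is present and within its word range,
    and the combined length falls inside the overall word window."""
    for name, (low, high) in SECTION_LIMITS.items():
        body = sections.get(name)
        if body is None or not (low <= word_count(body) <= high):
            return False
    total = sum(word_count(body) for body in sections.values())
    return TOTAL_WORDS[0] <= total <= TOTAL_WORDS[1]
```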
Scoring Rubric (16 points total)
Each category is scored from 0 to 2:
- 0 = Weak or missing
- 1 = Adequate but incomplete or unclear
- 2 = Strong, complete, and well-executed
| # | Criteria | 0 pts | 1 pt | 2 pts |
| --- | --- | --- | --- | --- |
| 1 | Problem Clarity & Relevance | Vague, no urgency or data | Some clarity, limited impact or scope | Specific, urgent, backed by credible signal |
| 2 | Sensemaking Approach | No method or sources | Some tools mentioned, weak synthesis | Clear methodology, good inputs, thoughtful aggregation |
| 3 | Gitcoin Fit & Uniqueness | Unclear why Gitcoin would participate | Partial alignment | Strong case for why Gitcoin is uniquely suited |
| 4 | Fundraising Plan | No plan, no leads | Loose plan, speculative funders | Realistic path to $50K+, named sponsors or traction |
| 5 | Capital Allocation Design | Mechanism doesn’t fit problem | Some alignment, lacks clarity | Well-matched mechanism, good structure, feasible scope |
| 6 | Domain Expertise & Delivery | No team, unclear ownership | Named lead but vague capacity | Strong team, committed lead, ready to execute |
| 7 | Clarity & Completeness | Disorganized, missing key pieces | Meets minimum structure, some confusion | Clean, well-organized, follows full template |
| 8 | Gitcoin Support Required | Heavy lift from Gitcoin to make viable | Shared ownership, but Gitcoin would still need to fill gaps | Proposer has execution covered; Gitcoin’s role is minimal input for success |
Total Score: __ / 16
I’ll post scores for proposals that pass compliance, and leave feedback where I think improvements are actionable. This scorecard helps me stay consistent while reviewing at scale, and highlights which proposals are ready — and which still need work.
Agree with @sejalrekhan’s points above - would be great to see time for submitters to amend proposals, and to work with the community on how the funding translates into domains.
Yes, absolutely! That’s the whole idea behind having 1–2 weeks between the submission deadline and voting - exactly for this!
These scores would likely not directly influence how much matching each domain gets, as these scorecards are more subjective than the eligibility/requirements set out by Gitcoin. Funding for GG24 will be ratified at a later date, closer to the round! The idea is to ratify funding through a structured process that takes domain quality and needs into account, and I would like to see the domains designed in more detail before we assign funding.
I’ll be outlining all of this clearly on the forum within a few days so that we can all remain aligned and on the same page!
what makes my scores subjective but the supposedly official eligibility/requirements not subjective? i see plenty that looks subjective to me in there, too.
what does “set out by gitcoin” mean? do you mean the program or program ops team? not sure what we’re calling it these days, but i think we should be using higher-resolution names to denote which part of the DAO is doing a given action.
to me, if something is ratified by the whole dao/gtc governance structure, it’s gitcoin. if it’s not, it’s the team that did it. i, as a steward, was never given the opportunity to ratify these eligibility requirements. if they are important enough to be canon and centralizing in some way, they should have the blessing of the stewards.
i certainly plan to vote directionally in line with my scorecards, and as far as i understand it, those votes will determine the allocation of $$$ for gg24.
i think we need to take a beat and evaluate the gitcoin 2.0 era impulse to centralize decision making and criteria. we are a polycentric political economy, and that’s a feature, not a bug, in many ways. more on those tradeoffs here. if we want to align to some central or gatekeeping criteria, there is a cost/tradeoff to doing it. trading plurality for efficiency is the tradeoff i think we’re talking about here.
Having stewards ratify eligibility requirements is a great piece of feedback! We’ll definitely adjust this in the future. As we move into the new structure and change a lot, I don’t think we expected it all to be perfect the first round (GG24); rather, we’re taking each step as it comes, evaluating, doing a retrospective, and adjusting where we can do better.
My worry is that if not all stewards submit scorecards (it’s not mandatory for them to), they may not be a good benchmark for ratifying the $$$.
i’d be open to having stewards explore other ways of giving feedback. what’s important to me is that my votes/thinking are transparent and not mercurial, and that proposers get constructive feedback. there’s a cost/benefit tradeoff to writing a whole scorecard, and not everyone has as much time to spend on it as i did. if other top stewards (maybe @kyle and @ccerv1 and you are the ones remaining to express their intents) want to “copy-trade” my votes or rena’s votes, that could be one way to go. another could be that they only vote on the top domains or the ones on the threshold. another would be to delegate to 2-3 researchers whom we hire to do more research/diligence. i’m open here!