Outcome Based Funding for Web3 Popups: GG24 sensemaking report

Written by @nidiyia with help from @JamesFarrell and Devansh Mehta

TL;DR

Build a transparent & impact-driven funding model for Web3 popups & residencies where each program is documented, evaluated & rewarded based on its contribution to Ethereum.

Require popup cities & residencies to publish Hypercerts with clear costs & documented outputs. Calculate an impact score for each program to direct funds in proportion to impact generated.

Problem & Impact

Web3 popup residencies, ephemeral IRL gatherings like Zuzalu, Edge City, and ZuBerlin, have rapidly become catalytic hubs for Ethereum’s ecosystem. They co-locate builders, researchers, and creatives, fostering innovation, cross-pollination, and collaboration across disciplines, which is essential for a remote-first industry.

Yet despite their growing influence, these gatherings face systemic friction: lack of transparent budgeting, diffuse impact measurement, and opaque coordination, issues that directly undermine Ethereum’s core goals of efficient capital deployment, ecosystem resilience, and reputational integrity.

The need to sustainably fund Web3 popups & residencies is urgent because:

  1. Ethereum popup funding is under strain

Community building in Ethereum is critical. Web3 popup cities and builder residencies have emerged as valuable physical spaces, bringing an internet-first, remote industry face-to-face and seeding the next big ideas that move the space forward.

Despite being IRL gatherings, there is no standardized reporting, and funders lack insight into how funds are used at these events, making resource optimization and shared learning difficult. Reports note growing concern over “inconsistent evaluation frameworks” and transparency issues in Web3 grants, leading to inefficiencies in matching funding to outcomes (Francis, 2024).

  2. Evidence of opacity

Take Zuzalu (March–May 2023): a two-month popup city in Montenegro conceptualized by Vitalik Buterin. It brought together ~200 residents and up to 800 visitors, yet budget disclosures remain scarce (Bankless Report, 2024). Similarly, residencies like Edge City in Chiang Mai operated as live testing grounds ahead of Devcon, but detailed cost breakdowns have not been publicly shared (Francis, 2024). Globally, 2024–25 has seen at least 10 to 15 major Web3 popup residencies, including MU Accra (2025) and ETHiopia, but uniform outcome reporting is virtually nonexistent (TechCabal, 2025).

  3. Evaluation of popups is needed

Without transparency, funders may redirect capital away from community-driven innovation and opt for more measurable, traditional paths (SSSG, 2025; Binance). This post discusses the use of Hypercerts, on-chain impact certificates with cost and outcome metadata, to enable accountability and verifiability in popup ecosystem funding (Li, 2024).

Sensemaking Analysis

My entry into the web3 world was through the popup city Edge Esmeralda, where I spent a month in Healdsburg (April–May 2024). Since then I have participated in the Funding The Commons Residency in Chiang Mai (Oct–Nov 2024), where an archipelago of many popups happened simultaneously, and in ZuBerlin (2025), and I was also on the organizing team for the IERR Residency in Iceland that concluded on August 10, 2025.

Overall, I see popups not just as cultural experiments but as informal infrastructures where Ethereum’s global community tests ideas face-to-face. Yet without transparent costs or clear outcome metrics, they risk misallocating capital and eroding legitimacy.

My understanding of the popup landscape draws heavily on these lived experiences as both a participant in and an organizer of these events. Drawing on participant interviews, comparative analysis between popups, and ecosystem research, I have come to the following conclusions:

  • Popups generate disproportionate intangible value (community cohesion, trust, cross-pollination of ideas) but lack mechanisms to prove or measure this.

  • Financial opacity is widespread: most residencies do not publish budgets, leaving funders in the dark.

  • Impact stories circulate informally through whisper networks, but without systematic capture they remain anecdotal, limiting their usefulness for allocation.

The remaining sections tackle these issues, proposing a structure of Hypercerts & benefit-cost scoring of popups that can provide the missing layer of impact legibility. This aligns with Gitcoin’s shift in GG24 toward radically transparent, data-informed funding.

The sensemaking shows that without structured evaluation, Ethereum risks undervaluing some of its most fertile cultural and intellectual spaces, or worse, funding them blindly in ways that invite reputational risk.

Gitcoin’s Unique Role & Fundraising

Gitcoin has a history of operating multiple Zuzalu-related rounds, demonstrating the legitimacy to steward an IRL coordination domain at scale. This domain builds on that history by positioning Gitcoin as the leader in converging standards for funding the popup movement.

Specifically, we propose the following eligibility requirements:

  1. Must have already hosted a popup or builder residency

  2. Must create a Hypercert listing the costs of their past residency with a transparent breakdown. The amount paid by participants must also be included so we can provide matching on top of what is contributed by attendees

  3. Must list outputs of the popup that can then be quantified to obtain its benefit-cost (BC) ratio. An example of how this might look can be seen here

  4. Must have confirmed location and dates for the next popup that funds will go towards supporting

  5. Must have cofunding from other funders for the next popup so that GG24 funding is less than 30% of total outlay
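To illustrate requirements 2 and 3, a popup’s Hypercert might carry metadata along the following lines. This is a hypothetical sketch with illustrative field names and numbers, not a finalized schema from the proposal:

```python
# Illustrative metadata a popup's Hypercert might carry to satisfy
# the cost-transparency and outputs requirements above.
# All field names and figures are hypothetical.
hypercert_metadata = {
    "program": "Example Popup 2025",
    "location": "Lisbon, Portugal",
    "dates": {"start": "2025-03-01", "end": "2025-03-28"},
    "total_cost_usd": 120_000,
    "cost_breakdown_usd": {       # line-item breakdown of the total
        "venue": 45_000,
        "lodging": 40_000,
        "food": 20_000,
        "operations": 15_000,
    },
    "participant_contributions_usd": 35_000,  # basis for matching on top
    "outputs": {                  # quantifiable outputs for BC scoring
        "projects_launched": 6,
        "papers_published": 3,
    },
}

# Sanity check: line items should sum to the stated total cost
assert sum(hypercert_metadata["cost_breakdown_usd"].values()) == \
    hypercert_metadata["total_cost_usd"]
```

A shared structure like this is what would let stewards compare costs and outputs across otherwise very different programs.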

Success Measurement & Reflection

Six-month outcomes:

  1. Transparency baseline: Onboard 8 to 12 popups or residencies who mint a Hypercert of their past program with total cost and line-item breakdowns. Simply having the finances of past popups become transparent is a win in itself.

  2. Impact legibility: for every residency, request a submission of outputs attributable to the program, and use these to compute a standardized Benefit–Cost (BC) ratio

  3. Funding routed by evidence: Channel funding to purchase hypercerts of these popups, with the support predicated upon receiving an impact evaluation score of their past residency and secured funding and dates for their next edition

At the end of 6 months, publish a Gitcoin-style round report card summarizing allocations, evidence, and learning.
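To make the BC ratio concrete, here is a minimal sketch of how such a score might be computed. The benefit weights (dollar values per output) are purely illustrative assumptions, not figures from this proposal:

```python
# Minimal sketch of a Benefit-Cost (BC) ratio for a past popup.
# The weights below are hypothetical; a real scoring rubric would
# need to be set (and challengeable) by the review committee.

BENEFIT_WEIGHTS = {              # assumed dollar value per unit of output
    "projects_launched": 15_000,
    "papers_published": 8_000,
    "collaborations_formed": 5_000,
}

def bc_ratio(outputs: dict[str, int], total_cost: float) -> float:
    """Return total estimated benefit divided by total program cost."""
    benefit = sum(BENEFIT_WEIGHTS.get(k, 0) * v for k, v in outputs.items())
    return round(benefit / total_cost, 2)

# Example: a residency costing $120k that reports these outputs
print(bc_ratio({"projects_launched": 6, "papers_published": 3}, 120_000))
# → 0.95
```

The hard part, as the sensemaking notes, is pricing the intangible outcomes; any real rubric would need a documented, contestable translation of those into the benefit side.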

Domain Information

Name: Outcome Based Funding for Web3 Popups

Purpose: Fund residencies and popups that (1) publish transparent costs, (2) issue a Hypercert capturing outputs/outcomes, and (3) undergo impact evaluation (Benefit-Cost ratio) so future funding routes toward the highest impact per dollar.

Mechanism: We propose a Dedicated Domain Allocation (DDA) track with appointed stewards who calculate impact scores of past popups, based on which funds get allocated.

Past popups with a high impact score that are unable to lock down dates or secure cofunding for their next edition will still be rendered ineligible.

Structure:

  1. Transparency through Hypercerts: each residency or pop-up program issues a Hypercert for their past event. These must include a clear total cost & a line-item breakdown of that total. This ensures financial transparency, which is currently lacking, and creates a verifiable record of funding needs.

  2. Accountability through Impact Measurement: for each residency, we ask them to list the deliverables that resulted from their residency. This is then used, along with other information, to calculate a standardized impact score using a benefit-cost ratio calculator. We document and publish all:

  • Quantifiable outcomes (e.g., number of participants, projects launched, protocols developed).
  • Non-quantifiable outcomes (e.g., community cohesion, knowledge transfer, cultural impact).

This provides a balanced view of both tangible and intangible contributions & will feed into the calculation of benefits in the impact score. Anyone can also challenge these scores and submit a better calculation.

  3. Funding Distribution: funds from the matching pool are distributed programmatically via impact score calculations. Additional cofunding comes from outreach to potential donors who buy units of the Hypercert, as well as contributions by popup participants and local citizens who want more such events held in their city.
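One plausible reading of “distributed programmatically via impact score calculations” is proportional allocation with the 30%-of-outlay eligibility cap applied per program. The sketch below is an assumption about how that could work, not the proposal’s finalized mechanism; names and numbers are illustrative:

```python
# Hypothetical matching-pool distribution: split the pool in
# proportion to each popup's impact (BC) score, capping each grant
# at 30% of that popup's total outlay per the eligibility rule.

def allocate_matching_pool(pool: float, popups: list[dict]) -> dict[str, float]:
    total_score = sum(p["impact_score"] for p in popups)
    allocations = {}
    for p in popups:
        share = pool * p["impact_score"] / total_score
        cap = 0.30 * p["total_outlay"]  # GG24 must stay under 30% of outlay
        allocations[p["name"]] = round(min(share, cap), 2)
    return allocations

popups = [
    {"name": "Popup A", "impact_score": 3.0, "total_outlay": 100_000},
    {"name": "Popup B", "impact_score": 1.0, "total_outlay": 40_000},
]
print(allocate_matching_pool(50_000, popups))
# → {'Popup A': 30000.0, 'Popup B': 12000.0}
```

Note that when caps bind, part of the pool is left unallocated in this sketch; stewards would need to decide whether to redistribute the remainder or roll it forward.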

Timeline:

  • Domain vote: Aug 22–29 (Snapshot).

  • Domains announced: Sept 1.

  • GG24 execution: September to January (hypercert onboarding, schema finalization, review committee, secured cofunding and dates for another popup by the applicant).


have we reached out to popup organizers and confirmed they are willing to try this experiment? is their participation contingent on a certain funding amount $$? what is their feedback on the proposal, if any?


I really like this approach. Will share with some of the pop-up builders I’m close with to gauge interest, though I am pretty sure they will be happy to participate.


Thank you for your inputs :pray: I am in the process of reaching out to popup organizers & will get back with collated feedback this week.

<is their participation contingent on a certain funding amount $$?>
No, the only requirements would be for the next popup dates to be locked in & for there to be some co-funding commitment. This is the current direction we are thinking in; open to feedback.


That’ll be very helpful @Donny_Jerri :pray: thanks! would be nice to stay connected as we piece together the feedback, what’s the best way to stay in touch?


Draft Scorecard

2025/08/18 - Version 0.1.1

By Owocki

Prepared for nidiyia re: “Outcome Based Funding for Web3 Popups”

(vibe-researched-and-written by an LLM using this prompt, iterated on, + edited for accuracy, quality, and legibility by owocki himself.)

Proposal Comprehension

Title: Outcome Based Funding for Web3 Popups: GG24 sensemaking report
Author: nidiyia, with help from James Farrell and Devansh Mehta
Link: https://gov.gitcoin.co/t/outcome-based-funding-for-web3-popups-gg24-sensemaking-report/23054

TLDR

Create a transparent, impact-driven funding model for popup cities and residencies. Require applicants to publish hypercerts with clear cost breakdowns and documented outputs, compute a standardized impact score, and route funding in proportion to verified impact.

Proposers

Primary: nidiyia
Collaborators: James Farrell, Devansh Mehta
Context: author has participant and organizer experience across multiple popups and residencies.

Domain Experts

Not explicitly listed beyond the proposers’ lived experience in Edge Esmeralda, Funding the Commons residency, ZuBerlin, and IERR Residency organizing.

Proposal might benefit from having some popup organizers onboard.

Problem

IRL popup cities and residencies generate real value for Ethereum but suffer from budget opacity, inconsistent outcome reporting, and weak impact legibility. This causes allocation inefficiency and reputational risk.
Background context on popup cities like Zuzalu supports the importance and scale of these gatherings, while public budgets and systematic evaluations remain sparse.

Solution

Stand up a GG24 domain. Eligibility requires: prior popup held, hypercert with total and line-item costs plus attendee contributions, documented outputs to compute a benefit-cost style impact score, confirmed next-edition dates and location, and co-funding so GG24 is under 30 percent of total outlay. Allocate matching by impact scores and publish a round report card at 6 months.

Hypercerts provide the shared data layer for claims, verification, and funding.

Risks

  1. Measurement burden and gaming. Applicants may optimize for the metric rather than durable impact.
  2. Software: who is going to input all of the hypercert metadata? Will there be good enough UX/network effects to make it worthwhile?
  3. Attribution and counterfactuals. Intangible benefits like trust and network formation are hard to apportion across overlapping IRL events.
  4. Data quality variance. Cost line items, participant counts, and output claims may be incomplete or non-standard.
  5. Coordinator capacity. Appointed stewards must stand up schema, review process, and dispute resolution quickly.
  6. Ecosystem uptake risk. Some popup organizers may decline transparency or hypercerts, limiting sample size.
  7. Timeline risk. Achieving meaningful outcomes by October is tight given Sept-Jan execution window.
  8. Buy pressure for the hypercerts: where will it come from?

Outside Funding

Not yet specified. Proposal sets a requirement that applicants secure co-funding so GG24 contributes less than 30 percent of total outlay. We will need confirmation that leading popup organizers have expressed willingness to meet this bar.

Why Gitcoin?

Gitcoin has prior rounds tied to Zuzalu and report-card infrastructure, which gives it the legitimacy to steward standards for IRL coordination domains and to publish transparent post-round analyses.

Owocki’s scorecard

| # | Criterion | Score (0–2) | Notes |
|---|---|---|---|
| 1 | Problem Focus – Clearly frames a real problem, avoids solutionism | 2 | Sharp articulation of opacity and impact-legibility gaps in popups that matter to Ethereum. |
| 2 | Credible, High-leverage, Evidence-Based Approach | 1 | Hypercerts plus benefit-cost scoring is credible and standardizable, but evidence of reliability across popups is early. Add evaluator independence and audits. |
| 3 | Domain Expertise – Recognized experts involved | 1 | Strong lived experience from proposers. Would like to see named independent evaluators or advisors with measurement chops and real popup experience. |
| 4 | Co-Funding – Backing beyond Gitcoin | 1 | Co-funding is a requirement, but no commitments listed yet. |
| 5 | Fit-for-Purpose Capital Allocation Method | 2 | Hypercerts with transparent eligibility, impact scoring, and report cards fit the domain’s epistemology and Gitcoin’s capabilities. |
| 6 | Execution Readiness – Results by October | 1 | Sept–Jan window is workable, but October impact will likely be early outputs: schema, onboarding, initial hypercerts. Delivery risk remains. |
| 7 | Other – Vibe and blind spots | 1 | Positive alignment and standards focus. Risks: metric gaming, inconsistent participation, and attribution noise. Plan risk-mitigation explicitly. |

Score

Total Score: 9 / 14
Confidence in score: 70%

Feedback:

Major

  • Get explicit commitments from cofunders.
  • Secure written expressions of interest from 6 to 10 target popup organizers agreeing to publish budgets and mint hypercerts, subject to a clear privacy policy and data schema. Include named targets and status.
  • Specify the evaluation pipeline: who computes scores, independence of evaluators, challenge process, and how intangible benefits are translated into the benefit side of the ratio.

Minor

  • Publish a draft hypercert schema with example field values and two worked examples from past popups.
  • Pre-commit to a public dashboard listing applicants, data completeness, preliminary scores, and co-funding status.
  • Outline fraud and gaming safeguards: random audits, document checks, peer challenges, and consequences for misreporting.

Steel man case for/against:

For

This domain operationalizes what the ecosystem says it wants: transparency, comparability, and retroactive funding based on evidence. Hypercerts plus benefit-cost style scoring can converge fragmented popup experiments into a legible market of impact claims, while Gitcoin’s prior Zuzalu rounds and report-card muscle give the right foundation to set standards.

Against

Impact measurement for IRL popups is messy. Budgets are sensitive, outcomes are diffuse, and attributions overlap. If key organizers refuse transparency or if scores are noisy or gameable, the domain could under-allocate to genuinely catalytic culture and over-allocate to well-documented but lower-leverage events. External narratives about popup opacity persist, and may not be solvable in a single GG cycle.

Rose / Bud / Thorn

Rose
Clear, standards-first approach with eligibility rules that push the ecosystem toward transparency. The plan to publish report cards and route funding by evidence aligns with Gitcoin’s strengths.

Thorn
Evaluator independence, gaming resistance, and data quality are underspecified. Without organizer pre-commitments to share budgets and outcomes, volume may be too thin to benchmark across popups.

Bud
This could become a new way to solve pop-up city funding.

Feedback

Did I miss anything or get anything wrong? Feel free to let me know in the comments.

Research Notes

  • Open questions: which organizers are pre-committed, evaluator names, challenge procedure, how to score intangible outcomes, and data privacy approach for sensitive budget items.
  • Future diligence: obtain letters of intent from target popups, review two example hypercerts with full cost lines, run a dry-run scoring on a past popup, and pressure-test the scoring rubric with independent evaluators.

Love to see the activity on this one…

Evaluated using my steward scorecard — reviewed and iterated manually for consistency, clarity, and alignment with GG24 criteria.

:white_check_mark: Submission Compliance

  • Structured and thorough proposal with problem, sensemaking, success metrics, domain structure
  • Team has direct experience with popups but lacks named reviewers or external anchors
  • No confirmed co-funders or popup organizer commitments listed
  • Mechanism is specific and promising (hypercerts + cost-benefit scoring)
  • Verdict: Compliant, but execution readiness and traction unclear

:bar_chart: Scorecard Evaluation

Total Score: 9 / 14

| Criteria | Score | Notes |
|---|---|---|
| Problem Clarity | 2 | Clear articulation of opacity and coordination issues across popups |
| Evidence-Based Approach | 1 | Hypercert + cost-benefit model is strong, but reliability across popups is untested |
| Domain Expertise | 1 | Strong lived experience; needs external reviewers and evaluators for rigor |
| Co-Funding | 1 | Co-funding required, but not secured yet; no letters of intent shared |
| Capital Allocation Design | 2 | Excellent match of method to domain: eligibility, transparency, scoring, and reports |
| Execution Readiness | 1 | Sept–Jan plan is viable, but October outputs will be minimal unless organizers are pre-committed |
| Other (Vibe, Alignment, Blind Spots) | 1 | Strong ethos and alignment; metric gaming and data quality are key risks |

:pushpin: Feedback for Improvement

Where I agree with Owocki:

  • Securing popup organizer participation is critical — get written pre-commitments.
  • Evaluation needs to be independent and audit-friendly — define challenge/appeal process.
  • Publish example hypercert schema and data completeness dashboard to build early trust.

What I’d add:

  • A dry-run hypercert scoring for 1–2 past popups would be invaluable for stress-testing impact metrics.
  • Clarify whether Gitcoin infra or a third party will host dashboards, dispute resolution, and fraud reviews.
  • Consider anonymized financials or tiered transparency if organizers are budget-shy.

:yellow_circle: Conditional Support

Would support this domain if:

  • 5+ popups pre-commit to the proposed schema (or a version of it)
  • At least one co-funder confirms in writing
  • Evaluation committee and scoring guardrails are made public
  • Gitcoin’s role is clearly bounded (infra + distribution, not sole arbiter)

This proposal moves the ecosystem toward impact accountability in a messy but important area. Let’s give it the scaffolding it needs to work.


Hey folks, thank you so much for starting this conversation; this is awesome. It’s really exciting to see people thinking about funding mechanisms at this level, so props to @nidiyia, @JamesFarrell, and Devansh.

I’m happy to share my perspective from building Edge City. I’ll share a couple of quick thoughts below to start.

Sustainability & transparency
For us at Edge City, sustainability is something that we think about a lot. Our events are run through a nonprofit, and every edition is a big lift. They are budgeted to be breakeven through tickets and sponsorships, which can be a lot of work to align. So if there was a mechanism that could help meet up to 30% of our outlay, that would be a huge help.

At our last few gatherings, we’ve run sessions where we’ve shared the open budget with attendees, and people are always surprised at how much it actually takes to make one of these things happen. That’s why we’d be thrilled to see tools that help make popups sustainable in the long run, not just for us but for everyone else experimenting in this space.

Sustainability for everyone
Edge City is increasingly evolving into a network of residencies which happen alongside each other during the months of our popups (but also often have their own communities and other activations through the year). Each one brings its own theme, community, and in some cases, its own funding model. At Edge City Patagonia this fall, we’ll have 10–12 different residencies happening concurrently during the month before Devconnect in a beautiful mountain town in Patagonia; some sponsored by organizations, others completely community-run.

A few are even crowdfunding ahead of time using tools like ante.xyz, which is a great first step towards sustainability and helps solve the collective action problem of everyone feeling “yeah I would totally join that residency if it happens, but I’m not willing to be the first mover”.

It would be amazing if outcome-based funding could support these residencies as well, because if they’re able to be sustainable and drive impact, that would be great for the whole ecosystem.

Impact & evaluation
We couldn’t agree more that measuring impact is key, but for us, it is also one of the trickiest parts. We can definitely track how many startups, projects, academic papers, etc. get created during the popups (at this point there have been many), but part of the reason why it’s hard to track impact for popups is that often the most meaningful impact shows up months later.

Just the other day someone mentioned to me they’d raised funding for their startup and casually added, “Oh, I actually met my cofounder at Edge Esmeralda 2024.” We would have had no idea, and I’m certain there are many more anecdotal cases like this where so much value has been driven through the community work that people are doing in the space, but it’s hard to measure those ripple effects, especially in a lightweight way that doesn’t add more administrative burden to an already stretched small team.

We’re huge fans of mechanisms like Hypercerts, so if there’s a way to plug into a lightweight, standardized approach that helps prove impact without creating a ton of extra work, we’d love to be part of it.

Really grateful to you all for starting this conversation. We’re locked in and will continue to execute, but we see a lot of our peers burning out from the sheer amount of work it takes to do these things well, and it’s a pity because they do drive so much value to the broader ecosystem. I see these gatherings as necessary building blocks for any kind of future network-society, because people need surface area to meet and connect and develop affinity for each other.

Finding ways to make them sustainable feels like something the whole ecosystem will benefit from.

With love,
Timour


thank you @owocki & @deltajuliet for the detailed scorecards. These are all valid points & we are speaking with popup organizers, potential cofunders & working on the rest of your feedback. We will get back with an update soon.

These scorecards themselves are good examples of how we could assess a popup. The idea is to build an LLM-assisted impact evaluation (IE) system that could generate a similar assessment with a score (BC ratio). I really liked the addition of the “Confidence in the score”, something I would love to add to our own thinking on the impact evaluation process, & I’m curious to know how it was calculated @owocki

Also, as an example schema, here is a hypercert that was created for the Impact Evaluator’s Research Retreat that concluded on Aug 10: Impact Evaluator Research Retreat 2025 | VoiceDeck.

1 Like