Revised Proposal: Outcome-Driven Funding for Web3 Popups & Residencies by @nidiyia with help from Dipanshu Singh and Devansh Mehta
TLDR
This is an update to the proposal published on August 16, 2025.
It incorporates the feedback received from @owocki (scorecard) & @deltajuliet (scorecard); and reflects a revised plan of action after conversations with six Web3 popups & residencies.
Please consider this revised proposal for "Outcome-Driven Funding for Web3 Popups & Residencies" when casting your votes for GG24 domains on September 11, 2025.
Goal: build a funding model for Web3 popups and residencies that encourages a culture of robust impact documentation and rewards each popup for verified benefits generated for the broader Ethereum community, while taking into consideration the cost of running a popup.
Mechanism: participating popups submit detailed impact assessments of their past or recently concluded programs. Impact reports are fed to various LLM-based Impact Quantifiers that assign a $ value to the outcomes submitted by each popup. Each popup applies with a hypercert that represents its funding target in GG24. Funds are allotted algorithmically, in proportion to the benefits from each popup divided by the fundraising target listed in its hypercert, with an appeal period for unfair calculations and a maximum cap.
Revision Highlights:
1. Focus on benefits and fundraising targets
Being mindful of budget-privacy concerns & learnings from the earlier Gitcoin Zuzalu round, the revised mechanism focuses on calculating the benefit (in $) accruing from each popup. Sharing cost data for past hypercerts is optional & not required. The project applicants are popup city hypercerts with fundraising targets for GG24 and GG25. The focus is on funding popups based on the verified benefits they have generated in the past divided by the amount they wish to raise for the future, rather than only reimbursing past costs incurred to host them.
2. Popups are responsible for their own impact documentation and hypercert creation
Conversations with six web3 popups/residencies revealed that a one-size-fits-all impact framework is unviable given the unique mission & nature of every popup. Each participating popup is responsible for documenting its own impact based on its own defined metrics and submitting a true & detailed report to domain operators.
Along with this, all participating teams fill out an application form eliciting program-related information common to all popups, and mint a hypercert clearly stating the amount they want to raise in Gitcoin rounds.
3. LLM-based Impact Quantifier
Existing impact models often output similar scores across projects when they shouldn't. We have overcome this issue by feeding the model a formal evaluation schema: the Relentless Monetization technique (Weinstein & Bradburd, 2013). This structured process delivers differentiated and higher-quality results. We tested this quantifier with preliminary documentation from four popups, with their permission. Results are published anonymously in Section C.
The revised proposal begins by describing the mechanism (Section A), goes on to highlight important domain details like popups onboarded, co-funding, evaluation process, public dashboard, privacy policy, risks & timeline (Section B), & wraps up with test results from the LLM Impact Quantifier, based on trial runs using impact reports shared confidentially by four Web3 popups (Section C).
Section A: How it Works
Step-1: Popups submit their own records of impact
- To enter the round, each popup fills out an application form that elicits general program-related information like location, number of attendees, number & formats of events held, specific outputs/outcomes arising from each event (if any), & the clear impact metrics they are tracking.
- They also submit their own impact evaluation reports, which may already have been prepared for other purposes. Impact evaluations can be submitted in varied formats (grant applications, detailed reports, narratives, spreadsheets etc.). Domain operators provide a suggested framework but do not interfere in each popup's evaluation and documentation of impact.
- Any popup that has taken place and recorded its impact on or after February 15, 2024 is eligible for submission and funding in the proposed GG24 domain round. The cutoff date of February 15, 2024 is chosen because it marks the conclusion of the last web3 popup funding round, i.e., the Zuzalu QF Grants Program on Grants Stack.
- Submitted impact metrics & reports are transparent & made publicly accessible. If popups do not want to disclose specific information, they must indicate this in the application form. However, private information will not be considered by the Impact Quantifier in its calculations.
- Popups also mint a hypercert representing these past impacts, as part of which they list a fundraising cap (price per unit of the hypercert). This is considered for funding in two ways: popups cannot raise more funds than what is possible via purchase of their hypercert, and their overall score determining allocations between popups is the quantified benefit of past residencies divided by the cost listed in the hypercert.
Step-2: LLM-based Impact Quantifier calculates a $ value for impact per popup
- All submitted reports are processed through an LLM-assisted Impact Quantifier.
- The system uses 3** independent LLMs to generate a net-benefit (in $) value for each popup. The mean or median value is published, with an end result reading, e.g., "$20k in benefits generated by Popup X".
**Note: for the test-run results (Section C), the following models were used: Chat GPT-5, Gemini & Claude 4/Grok 4.
- If additional data is needed for a more thorough calculation, popups will be asked to provide supplemental information. If they do not have this additional data, the calculation will be produced without it.
- During GG24, their community can purchase units of their hypercert and leave a comment describing the impact of the popup or residency on them. Both the dollar value purchased by their community and the comments left behind will be considered by the quantifier, example here
- The main calculation is done by the LLM Impact Quantifier (see Section B); a rough sketch of how the per-model values are aggregated appears after this list. A technical lead fixes any bugs & ensures the smooth functioning of the LLM. An evaluations lead reviews the LLM calculations for general validity of method & checks data sources submitted by popups for factual accuracy (see Section B).
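As a rough sketch of the aggregation step (an illustration, not the production pipeline): the model names below are placeholders, and representing a "needs more data" response as None is our assumption about how incomplete runs would be handled.

```python
from statistics import median

def aggregate_benefit(model_values: dict[str, float | None]) -> float | None:
    """Median of the $-benefit values produced by the independent LLM runs.

    Models that could not produce a figure ("needs more data") are
    represented as None and excluded from the median; that exclusion
    rule is our assumption.
    """
    usable = [v for v in model_values.values() if v is not None]
    return median(usable) if usable else None

# Illustrative only: three hypothetical per-model estimates for one popup.
print(aggregate_benefit({"gpt5": 651_000, "gemini": 450_000, "claude4": 1_130_000}))
# -> 651000
```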
Step-3: Publication of results & appeals
- The impact reports submitted by each popup are made public, along with the LLM's quantification of these reports.
- If popups do not want disclosed information made public, they can indicate this in the application form. But private information will NOT be considered by the quantifier.
- If popups feel the quantification of their impact report is unfair, they can lodge an appeal within a specified time period. Their responses will be fed back into the LLM after review by the evaluations lead, with updates to the quantification process made accordingly. Appeals will also be transparently documented on the forum.
- The final quantified benefit will be divided by the cost listed in the hypercert to obtain the score according to which allocations are made. This step is a moderating feature: popups do not request excess funds, since that reduces their score, and they do not list too low an amount, since that caps the aggregate funding their past impact can attract.
Step-4: Allocation mechanism
- The final output will be an Excel/.csv file with the names of participating popups, links to their self-conducted impact evaluations, & the median net-benefit value (in $) accruing from each, as calculated by 3 LLM models. It will also link to the hypercert created by each popup to apply in the round; the median net benefit will be divided by the cost listed in that hypercert to obtain their final score.
- This file is programmed into a wallet with a formula that distributes the common pool of funds among participating popups in proportion to their benefit:cost ratio (a sketch of this allocation appears after this list).
- The maximum amount of funding per popup in GG24 will be capped at 20% of the funding pool.
- We anticipate continuing the popup funding round for GG25 with the same hypercerts (and new applicants that didn't participate, or those that created more impact in the intervening six months), so participants are encouraged to think long term.
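A minimal sketch of this distribution, assuming the excess above the 20% cap is re-shared proportionally among the remaining popups (the proposal fixes the cap but not a redistribution rule, so that part is our assumption):

```python
def allocate(pool: float, scores: dict[str, float], cap_pct: float = 0.20) -> dict[str, float]:
    """Split `pool` in proportion to each popup's benefit:cost score,
    capping any single popup at cap_pct of the pool. Excess above a cap
    is re-shared among uncapped popups (our assumed redistribution rule)."""
    cap = cap_pct * pool
    alloc = {name: 0.0 for name in scores}
    active = dict(scores)                      # popups not yet at the cap
    remaining = pool
    while active and remaining > 1e-9:
        total = sum(active.values())
        share = {n: remaining * s / total for n, s in active.items()}
        over = [n for n, amt in share.items() if alloc[n] + amt > cap]
        if not over:                           # nobody exceeds the cap: done
            for n, amt in share.items():
                alloc[n] += amt
            break
        for n in over:                         # pin capped popups at the cap
            remaining -= cap - alloc[n]
            alloc[n] = cap
            del active[n]
    return alloc

# Illustrative run using the indicative weights from Section C (Popups A-D).
print(allocate(100_000, {"A": 0.16, "B": 0.49, "C": 0.07, "D": 0.28}))
```

Note that with only four participants a hard 20% cap can distribute at most 80% of the pool, so in the example above part of the pool remains unallocated; how leftover funds are handled would need to be specified by the round.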
Section B: Domain Details
Popups signaled for participation
We have had conversations with six popup organizers since August 16, 2025. The following popups have given in-principle commitments to participate in the proposed outcome-driven funding round for Web3 popups & residencies in the upcoming GG24 cycle:
Funding
Based on consultations with popups, an average of $20k-50k per popup applicant is considered viable to incentivize their participation in the round.
One level of cofunding is from popups asking their participants to purchase units of their hypercert in GG24 and leave a comment listing the impact it has had on them, so that it can be considered by the quantifier when aggregating total benefit.
We have yet to confirm cofunding for the matching pool; our time this month was spent speaking with popups and refining this proposal.
Evaluation
A human evaluator/domain operator feeds the impact reports submitted by popups to an LLM-assisted Impact Quantifier.
The Impact Quantifier runs the popup submissions on 3 different LLMs & takes a median value to generate a net-benefit (in $) value per popup.
The Quantifier is built on the Relentless Monetization (RM) method (Weinstein & Bradburd, 2013) & follows five steps to monetize outcomes per popup. See Section C for the detailed steps that go into the system prompt. The system prompt containing the evaluation mechanism is open source & accessible to all.
All calculations are ultimately based on the reports & data submitted by popups. It is the responsibility of popups to create the hypercert, define their own metrics, record outcomes unique to their own mission, and list the total amount they want to raise. No standardized, one-size-fits-all metrics are imposed across popups by the LLM or the human evaluators in the domain team (albeit some general program details will be collected via the application form).
The LLM Impact Quantifier makes the final calculations of the $ value generated per popup, drawing on the RM method. Humans in the loop, in the form of a technical & evaluations lead, ensure that the calculations are free of technical bugs, based on factually correct data & follow the method correctly. They also manually divide the $ value generated per popup by the cost listed in its hypercert to obtain the final score upon which allocations get made.
Evaluation Team
| Team Member | Role |
| --- | --- |
| LLM Impact Quantifier | Calculates a $ value for outcomes generated by each popup |
| Technical Lead: Dipanshu Singh | Built the Quantifier; will oversee all technical matters pertaining to the Quantifier to ensure its sound functioning |
| Human evaluator: Nidhi Harihar | Well versed in Relentless Monetization & web3 popups; will 1. check the validity of sources and references cited by the LLM in making its calculations, to ensure the data sources are relevant, sound & valid, 2. cross-check data & ensure sound methodology, 3. liaise with popup POCs to procure relevant additional data that may be needed for fairer & sounder calculations |
Privacy & data sharing policy
Popups submit their own impact metrics & evaluations in a format tailored uniquely to their program. The data they choose to submit (or not) is at their own discretion.
By default, all impact metrics per popup will be made public & published openly. This is an important step to making popup funding transparent & outcomes driven.
Impact evaluation reports submitted by each popup will also be published openly on the public dashboard. The LLM Quantifier reports, with calculations of the $ value generated & the detailed methodology followed to arrive at that value, will be published publicly by default.
If popups want to keep certain sections of this report anonymous/locked, they must indicate this in the application form or simply not share it in their submissions. Any private impact data will NOT be considered by the quantifier.
Risks & sensitivities
The proposed mechanism comes with the following risks:
- Popups may misrepresent or overstate their outcomes, resulting in the calculator assigning dollar values to impacts that did not actually occur.
- Given that both the prompt and calculator are public, popups may strategically adjust the way they present information, without explicitly falsifying data, in order to maximize the benefits calculated.
- To preserve budget privacy, cost data is kept optional. But a fundraising goal has to be specified in the application form, based on which a benefit cost ratio is calculated for allocation.
- Intangible benefits are inherently hard to price. The Relentless Monetization technique, widely used to calculate social-impact benefits, is built to measure & responsibly monetize them.
Public dashboard
The round will be hosted on VoiceDeck: app.voicedeck.org, a platform to fund impact. VoiceDeck was launched in November 2024 to fund concrete outcomes resulting from journalism investigations. Outcome-driven funding for popups & residencies extends its mission to the web3 domain. The dashboard will be open source & publicly accessible. It will serve as a repository to track key program details & specific outcomes resulting from web3 popups & residencies participating in the round, with the following details:
Details of popups published
- Popup Name
- Year/s for which impact evaluation has been conducted
- Recurrence/frequency of event
- Impact evaluations submitted by popups
- Impact metrics tracked (all tangible & intangible metrics)
- Costs (optional), fundraising goal for GG24 and GG25 (mandatory)
- Current funding status
Information from domain operators
- System prompt to generate a $ value of outcomes per popup
- Detailed LLM generated report complete with all calculations, assumptions & inferred data points
- Funding allocation mechanism (formula, codes etc.)
- Benefit cost ratio
Section C: LLM-based Impact Quantifier & Test Results
LLM-assisted Impact Quantifier
Our LLM-assisted Impact Quantifier is built to assign a $ value to outcomes listed in impact reports.
The Quantifier uses Relentless Monetization (Robin Hood Rules for Smart Giving), pioneered by the Robin Hood Foundation (Weinstein & Bradburd, 2013, chapters 3, 4, 5 & 8), to derive a dollar value for each impact, with much less overhead.
It gives a tangible representation of impact by saying, for every $1 given to this project, $X in value was created.
The Impact Quantifier can be run using:
Relentless Monetization: five steps to arrive at a $ value of benefits per popup (plus two steps we add for cost & ratio)
A key risk of using LLMs (without a structured method) to score projects is that their outputs lack variance or differentiation: they usually generate similar values across projects, even though the projects may differ significantly in the impact produced.
To address this issue, we embedded a full evaluation schema, centered on the Relentless Monetization technique, into the LLM.
This guides our model through five distinct steps (widely recognized in social-impact evaluations) to calculate a $ value in benefits unique to the outcomes listed in the reports submitted by each popup.
Outputs generated by our LLM calculator are thus better differentiated, with significant variance between popups based on the quality of documentation they provide.
| Step | Description | Formula |
| --- | --- | --- |
| 1. Define outcomes | Clear outcomes of residencies & popups are defined, listing both tangible outcomes (e.g., follow-on funding secured per project, jobs landed by attendees) & intangible outcomes (say, knowledge transfer & skill development) | The LLM reads through the documents shared by popups & lists the outcome(s) |
| 2. Measure causal effect | Quantify what % of these outcomes can fairly be attributed exclusively to the residency/popup, vis-a-vis other factors/network effects | The LLM assigns a % value, drawing on relevant data points shared by popups & benchmarking against relevant & recent industry studies |
| 3. Calculate gross benefit | For each outcome, the number of beneficiaries reached & a benefit per beneficiary (in $) is calculated | Gross Benefit = Σ (Number of Beneficiaries × Benefit per Beneficiary) per outcome |
| 4. Counterfactual analysis | Calculates the net incremental benefit of the residency/popup by adjusting for the benefits that would have occurred even if the residency had not taken place | Net Benefit = Gross Benefit - Total Counterfactual Benefit |
| 5. Discount future benefits | Finally, the net benefit is adjusted for the decreasing value of a dollar in the years following the residency/popup | Discounted Net Benefit = Net Benefit / (1 + r)^t |
| 6. Calculate cost of the popup | While Relentless Monetization considers past costs, we tweak the method so that cost equals the price of the hypercert created to apply in the round | Cost = Price of the hypercert created to apply in the round |
| 7. Obtain a benefit-cost ratio | Impact is always relative to cost incurred; this intuition is applied here by allocating funds based on how much funding popups solicit | Benefit-Cost Ratio = Step 5 divided by Step 6 |
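To make the arithmetic concrete, here is a worked sketch of the formulas above using made-up numbers for a hypothetical popup; none of these figures come from any submitted report, and applying the attribution percentage multiplicatively before the counterfactual adjustment is one plausible reading of steps 2-4:

```python
# Step 1: one defined outcome, e.g. "attendees who landed jobs" (hypothetical).
beneficiaries = 120                 # number of beneficiaries reached
benefit_per_beneficiary = 5_000     # $ per beneficiary, from a proxy study

# Step 2: causal effect, the share of the outcome attributable to the popup.
attribution = 0.40

# Step 3: gross benefit.
gross = beneficiaries * benefit_per_beneficiary          # $600,000

# Step 4: counterfactual, the share that would have happened anyway.
counterfactual = 0.25
net = gross * attribution * (1 - counterfactual)         # $180,000

# Step 5: discount future benefits (rate r, realized after t years).
r, t = 0.05, 2
discounted = net / (1 + r) ** t                          # ~$163,265

# Steps 6-7: cost = hypercert price; score = benefit-cost ratio.
hypercert_price = 100_000
bcr = discounted / hypercert_price                       # ~1.63
print(f"gross=${gross:,.0f}, discounted net=${discounted:,.0f}, BCR={bcr:.2f}")
```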
Notes:
- The RM technique emphasizes referencing related studies, papers & international reports, citing them as proxy sources when quantifying attribution & benefits. Providing clear evidence for the data/numbers at every step is the most critical aspect of this method, enabled by the LLMs being plugged into the Internet.
- There is a subtle but important difference between steps 2 & 4: causal-effect measurement is about attribution, i.e., isolating the program's outcomes from other influences; counterfactual analysis adjusts the gross benefit to avoid overstating impact by considering whether it would have taken place even without the popup.
- The discounted net benefit/net present value of benefit thus calculated is taken as the outcomes (in $) generated per popup/residency.
- The benefits are NOT to be taken as the impact created by a popup. They should be interpreted as the quantifiable value of the documentation the popup has undertaken.
Test run results of our LLM Impact Quantifier
Case study 1: Quantifying the $ value of Ethereum open source repos
We used our LLM-based Impact Quantifier to generate a gross & net benefit value for 45 open source repos in the deep funding competition.
We ran our system prompt on GPT-5 & Gemini 2.5 Pro, with the question "what is the $ value that each repo generates for Ethereum?". By taking a weighted average of the results from GPT-5 & Gemini 2.5 Pro, with an optimal weighting of 80% & 20% respectively, we obtained results with an error value of 6.8, finishing in 9th place on the leaderboard.
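For reference, the 80/20 blend amounts to a simple weighted average per repo; the sketch below uses placeholder estimates, not actual competition data:

```python
def blended_value(gpt5_usd: float, gemini_usd: float,
                  w_gpt5: float = 0.8, w_gemini: float = 0.2) -> float:
    """Weighted average of the two models' $-value estimates for one repo."""
    return w_gpt5 * gpt5_usd + w_gemini * gemini_usd

print(blended_value(1_200_000, 900_000))   # -> 1140000.0
```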
Case study 2: Quantifying the $ value of Web3 popup cities/residencies
Impact Evaluations shared by popups
We tested our LLM-assisted Impact Quantifier by feeding it impact reports shared by four popups. These impact reports were fed into 4 different LLMs, viz. Chat GPT-5, Claude 4, Grok 4 & Gemini.
The names of these popups and the specific reports shared by them are kept confidential at this stage.
Broadly, the impact reports shared by popups were in the formats of:
- Draft preliminary reports with impact metrics
- Substack articles recapping the program
- Tweets capturing event highlights
- Proceedings/research articles produced at the end of the programs
Indicative results, based only on the quantified benefit, are below. Please note that actual weights would be calculated by dividing the benefit by the cost of the hypercert the popup created when applying to the round.
| Popup | Chat GPT-5 | Gemini 2.5 Pro | Claude 4 / Grok 4 | Median Value | Weights |
| --- | --- | --- | --- | --- | --- |
| A | $651k. See report here | $450k in gross benefits; net benefit calculation needs more data. See report here | $1.13M. See report here | $651,000 | 0.16 |
| B | Gross benefits: $1M (conservative), $2M (upper estimate); net benefit calculation needs more data. See report here | Calculation needs more data. See report here | $30M in gross benefits, with inferred ticket-price data; net benefit calculation needs more data. See report here | $2,000,000 | 0.49 |
| C | $248k. See report here | $292k. See report here | $292k. See report here | $292,000 (listed cost of their hypercert: $150,000; benefit-cost ratio = 1.946) | 0.07 |
| D | $7.07M. See report here | Calculation needs more data. See report here | $1.17M. See report here | $1,170,000 | 0.28 |
General Observations and Notes from the Sensemaking Period with Popups
- The $ values in benefits in the table above are NOT to be taken as a measure of the impact generated by popups, but as a reflection of the documentation submitted by popups.
- Popups generating intangible value (Popup B) are valued higher than popups (e.g., IERR 2025) focused on tangible impact metrics (like projects launched, papers published, follow-on funding raised etc.).
- We spoke with many popups and were struck by how minimal impact documentation has often been, leaving valuable outcomes under-recorded. Recording impact remains an exception rather than the rule, lacking rigor and a systematized process.
- We need to create an incentive for popups to document their outcomes & contributions after their event. The proposed system of funding is a step towards instilling this culture of impact documentation in the popup ecosystem.
- Besides funding, a crucial benefit of popups participating in this round is transparency in impact metrics. The breadth of impact metrics, unique to each popup, will be openly available on a public dashboard and teach other popups how to measure and record their impact. This provides much-needed accountability of outcomes, which is a noted demand from web3 popup funders currently.