IMPORTANT - This is a draft direction for Gitcoin 3.0 => 3.2. Digest it, debate it, fork it, make changes. Nothing is official until ratified by @gitcoin governance. The post is for informational purposes only. Do not make any financial decisions based on this post.
Gitcoin 3.2: Accelerating GTC Value Capture
Building on Gitcoin 3.1’s TEV foundation to create the ultimate systematic alpha generation & capture machine
What is Gitcoin 3.2?
TLDR -
A Bittensor-inspired version of Gitcoin that evolves the best capital allocation tooling possible
Bittensor is a decentralized network that incentivizes development through a market-based system using its TAO token and Proof of Intelligence. Gitcoin could apply a similar model to capital allocation innovation by creating competitive subnets where novel funding mechanisms are tested, evaluated, and rewarded based on performance.
Gitcoin 3.0 creates Tokenizable Exportable Value (TEV), 3.1 enables TEV capture, and 3.2 creates TEV acceleration.
What is Bittensor?
Bittensor was recommended to us by Allo.Capital cofounder Juan Benet in February as one of the most innovative blockchain plays out there. It is a groundbreaking decentralized network that sits at the intersection of blockchain technology and artificial intelligence. It creates a peer-to-peer market for digital services (trading, compute networks) where participants can collaborate, train, share, and monetize intelligence.
The network operates through a unique consensus mechanism called Proof of Intelligence (PoI), which rewards participants based on the value of their contributions to the collective intelligence. Unlike traditional blockchain networks that use Proof of Work or Proof of Stake, Bittensor evaluates the quality and value of machine learning outputs.
At the core of Bittensor’s ecosystem is TAO, its native cryptocurrency, which serves multiple functions:
- Rewarding miners who contribute computational resources and AI models
- Compensating validators who evaluate model quality
- Enabling users to access and extract information from the network
- Facilitating governance through staking
Bittensor’s architecture is organized into subnets: specialized domains where miners contribute computational resources to solve specific tasks (defined by each subnet) while validators evaluate their performance. This structure creates a competitive environment that drives continuous innovation and improvement in capabilities.
By democratizing access to development and creating an incentivized framework for collaboration, Bittensor accelerates the advancement of intelligence-generating technology while ensuring that rewards are distributed based on the value contributed to the network.
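As a rough illustration of the reward-by-contribution idea (deliberately simplified, and not Bittensor’s actual Yuma consensus), validators score miner outputs, the scores are weighted by validator stake, and an epoch’s emission is split pro rata. All names and numbers below are made up.

```python
# Toy illustration of reward-by-contribution, NOT Bittensor's actual
# Yuma consensus. Validators score miners; scores are weighted by
# validator stake; the epoch's emission is split pro rata.

from collections import defaultdict

def split_emission(validator_scores, validator_stake, emission):
    """validator_scores: {validator: {miner: score in [0, 1]}}"""
    weighted = defaultdict(float)
    total_stake = sum(validator_stake.values())
    for v, scores in validator_scores.items():
        w = validator_stake[v] / total_stake
        for miner, s in scores.items():
            weighted[miner] += w * s
    total = sum(weighted.values()) or 1.0
    return {m: emission * s / total for m, s in weighted.items()}

# Hypothetical example: two validators, three miners, 100 TAO emitted this epoch.
rewards = split_emission(
    validator_scores={
        "validator_a": {"miner_1": 0.9, "miner_2": 0.4, "miner_3": 0.1},
        "validator_b": {"miner_1": 0.8, "miner_2": 0.5, "miner_3": 0.2},
    },
    validator_stake={"validator_a": 600, "validator_b": 400},
    emission=100.0,
)
print(rewards)  # miner_1 earns the largest share
```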
How Gitcoin Could Create a Bittensor-style Arena for Capital Allocation Tools
Bittensor has successfully created an ecosystem where AI models compete and collaborate to continuously improve, with rewards distributed based on the value contributed. Gitcoin could apply this model to capital allocation by creating a similar competitive arena for evolving optimal capital allocation tools. Here’s how:
Proposed Implementation:
- Subnet Model for Capital Allocation
- Create specialized subnets focused on different capital allocation strategies (e.g., grant distribution, investment evaluation, risk assessment)
- Each subnet would contain miners developing and deploying novel capital allocation algorithms and validators evaluating their performance
- Proof of Allocation Intelligence (proof of AI)
- Develop a consensus mechanism that evaluates capital allocation strategies based on predefined metrics (Value Flowed, ROI, distribution efficiency, community satisfaction).
- A simple mechanism could be “proof of flow” (see the sketch after this list)
- Alternatively, have each subnet tokenize and judge success by how well that token performs.
- Allocate rewards to strategies that consistently outperform others.
- Allocate more rewards to projects that are outperforming AND agree to do GTC token swaps.
- Synthetic Capital Markets
- Create simulated environments where allocation strategies can be tested using historical data or synthetic scenarios
- Allow real-time competition between strategies to identify optimal approaches for different contexts
- Progressive Learning System
- Enable successful strategies to build upon one another through a knowledge-sharing framework
- Create incentives for continuous improvement and adaptation to changing market conditions
- Identify and reduce friction points for builders, enabling faster evolution.
- Real-world Implementation Track
- Once strategies prove successful in simulated environments, provide pathways to deploy them with actual capital (perhaps via GG)
- Create a feedback loop where real-world performance influences future development
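To make “proof of flow” and the synthetic test-bed concrete, here is a minimal, hypothetical Python sketch: candidate allocation strategies replay frozen historical rounds, each run is scored by how much capital flowed to projects that later succeeded, and strategies are ranked on a leaderboard. Every interface here (`Round`, `Strategy`, the success labels) is an assumption for illustration, not an existing Allo or Gitcoin API.

```python
# Hypothetical arena loop: strategies allocate over frozen snapshots,
# then get scored on "proof of flow" (capital that reached projects
# which later succeeded). None of these interfaces exist today.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Round:
    budget: float
    projects: List[str]
    signals: Dict[str, float]          # e.g. contributor counts at snapshot time
    succeeded: Dict[str, bool]         # outcome revealed only when scoring

# A strategy maps (budget, signals) -> {project: allocation}
Strategy = Callable[[float, Dict[str, float]], Dict[str, float]]

def proof_of_flow(strategy: Strategy, rounds: List[Round]) -> float:
    """Total capital a strategy routed to projects that later succeeded."""
    score = 0.0
    for r in rounds:
        alloc = strategy(r.budget, r.signals)
        assert sum(alloc.values()) <= r.budget + 1e-9, "over-allocated"
        score += sum(amt for p, amt in alloc.items() if r.succeeded.get(p))
    return score

def leaderboard(strategies: Dict[str, Strategy], rounds: List[Round]):
    scores = {name: proof_of_flow(s, rounds) for name, s in strategies.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Two toy strategies over one synthetic round.
def proportional(budget, signals):
    total = sum(signals.values()) or 1.0
    return {p: budget * v / total for p, v in signals.items()}

def winner_take_all(budget, signals):
    best = max(signals, key=signals.get)
    return {best: budget}

rounds = [Round(
    budget=10_000.0,
    projects=["a", "b", "c"],
    signals={"a": 120, "b": 30, "c": 5},
    succeeded={"a": True, "b": False, "c": True},
)]
print(leaderboard({"proportional": proportional, "winner_take_all": winner_take_all}, rounds))
```

Rewards for an epoch could then be split across strategies in proportion to their leaderboard scores, mirroring the stake-weighted emission sketch in the Bittensor section above.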
Benefits:
- Evolutionary Improvement: By creating competitive pressure between different capital allocation mechanisms, the system would naturally evolve toward increasingly effective strategies.
- Context-aware Solutions: Different strategies could emerge for different scenarios (early-stage funding, mature project governance, emergency response).
- Transparency and Trust: All allocation decisions would be traceable and explainable, increasing confidence in the system.
- Community Governance: The community could vote on which metrics should be prioritized in evaluating allocation strategies.
The Strategic Evolution:
Gitcoin 3.0 established the foundation: a Network-First Funding Festival for Ethereum’s biggest problems through diverse allocation mechanisms. This arena creates competitive evolutionary pressure on builders to build the dopest crowdfunding and capital allocation technology out there. Access to this tech is alpha; it’s worth a lot. This is TEV.
Gitcoin 3.1 proved the thesis: Every breakthrough crypto project shows early signals in community funding data before markets recognize their value. We capture this TEV through tokenized products, institutional licensing, and managed funds—transforming public goods funding from cost center to profit center.
Gitcoin 3.2 accelerates the capability: By creating competitive evolutionary pressure on capital allocation mechanisms themselves, we don’t just capture alpha from funding outcomes—we systematically improve the signal quality of our entire TEV generation engine.
In summary, Gitcoin 3.1 established our ability to extract Tokenizable Exportable Value (TEV) from community funding metadata. Gitcoin 3.2 accelerates this capability by creating a Bittensor-inspired competitive arena where capital allocation mechanisms evolve under market pressure, dramatically amplifying our TEV generation while creating the most sophisticated capital allocation intelligence in crypto.
Conclusion
By applying Bittensor’s competitive intelligence model to the challenge of capital allocation, Gitcoin could create an arena where diverse strategies compete, collaborate, and continuously improve, ultimately developing more efficient and effective ways to distribute resources across the Web3 ecosystem.
Appendix A - The GTC Token gets new life
Potential Role of a GTC Token in this World
While Allo Capital currently operates without its own dedicated token, launching an ALLO token could provide specific advantages for a Bittensor-style arena:
- Incentive Alignment: A native token could directly incentivize participants who develop and improve capital allocation strategies, similar to how TAO rewards AI model contributions. (TAO is a top-50 token.)
- Specialized Governance: An ALLO token could enable weighted voting on which capital allocation strategies should receive more resources and which metrics should be prioritized.
- Value Capture: The token could capture value generated by successful allocation strategies, distributing it to developers, validators, and other ecosystem participants.
- Network Effects: A token could help bootstrap the network by attracting early participants through token incentives.
GTC Token vs. TAO: A Comparison
If implemented, a GTC token would differ from TAO in several key ways:
| Feature | GTC Token | TAO (Bittensor) |
|---|---|---|
| Primary Purpose | Incentivize capital allocation innovation | Incentivize AI model contribution |
| Value Metric | Capital efficiency, ROI, distribution fairness | Intelligence contribution quality |
| Economic Model | Could use a different supply model focused on sustainable funding | Fixed supply of 21 million with Bitcoin-like halvings |
| Staking Dynamics | Would likely stake to validate allocation strategies | Stakes to run validators that evaluate AI models |
| Target Participants | DeFi developers, economists, governance experts | AI/ML developers, compute providers |
Recommendation
For Gitcoin to successfully implement a Bittensor-style arena for capital allocation tools, a dedicated token is not strictly necessary but could provide advantages for network growth and alignment.
The most practical approach might be a hybrid model:
- Begin without a token, leveraging existing infrastructure (as described in the 3.1 post)
- Measure adoption and effectiveness of the arena.
- If the system proves valuable and a token would enhance its utility, introduce GTC token utility with careful tokenomics designed specifically for sustainable incentives and value capture.
- Ensure the token has genuine utility beyond speculation, with mechanics that directly tie its value to the effectiveness of the allocation strategies it supports
Appendix B - Feedback on this proposal so far
Summary of all comments to date on a previous version of this proposal:
• Synthetic test-bed first – Several commenters (cerv1, carlb) like starting with “synthetic capital markets”: freeze a historical data snapshot, have agents allocate blindly, then fast-forward and score thousands of randomized runs to build a leaderboard of strategies that generalize. This lowers risk and lets researchers compare mechanisms side-by-side before real money is involved.
• Key design questions – Repeated feedback (Griff, thelastjosh, multiple anonymous posts) stresses that capital allocation is harder to judge than ML models. You must nail:
– Which metrics truly matter (value-flow, fairness, ROI, community satisfaction, long-term impact)
– How to blend objective data with subjective evaluations without Goodharting the metric
– Who produces/validates the evals and how to prevent gaming or validator fatigue.
• Evaluation scope & phasing – Consensus is to begin with a tiny number of subnets (2-4) and a single simple metric such as “total value flowed,” then iteratively add richer metrics once the system runs. Deep-funding style “indirection” (voting on mechanisms, not projects) and AI-assisted info compression are seen as promising ways to scale evaluation.
• Normalization & comparability – Mechanisms have very different input structures (direct grants, QF, retro funding). Commenters suggest normalizing for apples-to-apples comparisons and perhaps using Shapley values or counterfactual analysis to attribute impact (a toy Shapley sketch appears at the end of this appendix).
• Distribution edge for builders – A benefit pointed out by owocki: founders can focus on code while Allo Arenas supplies distribution and user flow, filling a common go-to-market gap.
• Token debate – Views range from “stick with GTC” (Griff) to a future token or even a two-token model. The prevailing advice: start tokenless or hybrid, gather data, then launch a token only if it clearly enhances incentives and governance. Legal and design ramifications of subnet-level tokens were flagged.
• Viability of evaluation – The more short-term and tightly scoped a grant allocation mechanism is, the easier it is to evaluate. For anything long-term it becomes incredibly hard: there is a ton of noise, and it becomes almost impossible to infer any causality.
• Open questions called out – How to create, update, and wind down subnets; where fees come from; current friction points for Allo builders; why deployment would route via Gitcoin Grants versus Allo Capital; concrete incentive examples; and what a knowledge-sharing framework looks like.
• Strategic framing – Private-chat feedback urges focusing on “missing primitives” rather than immediate market demand, keeping the system product-led, and avoiding metric over-fitting by allowing the weight-setting process to evolve as new data and metrics appear.
• Overall tone: strong enthusiasm for an evolutionary “arena” but consistent concern about evaluation methodology, governance overhead, and practical implementation details. Most contributors urge starting narrow, instrumenting heavily, and expanding only once simple pilots prove robust.
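To illustrate the Shapley-value attribution idea mentioned in the normalization bullet above, here is a toy exact computation: given a value function v(S) for the joint impact of running a subset S of mechanisms together, each mechanism’s Shapley value is its average marginal contribution over all orderings of mechanisms. The mechanisms and v(S) numbers below are invented purely for illustration.

```python
# Toy exact Shapley attribution across funding mechanisms. The value
# function v(S) (impact of running a subset of mechanisms together)
# and its numbers are invented for illustration only.

from itertools import permutations

MECHANISMS = ["direct_grants", "quadratic_funding", "retro_funding"]

def v(subset: frozenset) -> float:
    """Hypothetical joint impact of running these mechanisms together."""
    table = {
        frozenset(): 0.0,
        frozenset({"direct_grants"}): 4.0,
        frozenset({"quadratic_funding"}): 6.0,
        frozenset({"retro_funding"}): 3.0,
        frozenset({"direct_grants", "quadratic_funding"}): 9.0,
        frozenset({"direct_grants", "retro_funding"}): 6.0,
        frozenset({"quadratic_funding", "retro_funding"}): 8.0,
        frozenset(MECHANISMS): 11.0,
    }
    return table[subset]

def shapley(players, value):
    """Average marginal contribution of each player over all orderings."""
    phi = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        seen = frozenset()
        for p in order:
            phi[p] += value(seen | {p}) - value(seen)
            seen = seen | {p}
    return {p: x / len(orderings) for p, x in phi.items()}

print(shapley(MECHANISMS, v))  # contributions sum to v(all mechanisms) = 11.0
```

Exact Shapley computation is only feasible for a handful of mechanisms; with more, sampled orderings or counterfactual analysis (also suggested above) would be the practical route.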