Short Term - Season 15
Perhaps we could put a small amount of our sybil-hunting budget towards “retroactive rewards” for top energy groups or players.
Long Term - Project to complete for the full launch of Grants 2
This incentive model outlined by @octopus & @j-cook could be a good starting point, but it would need to be specifically optimized to generate trusted outcomes for Aura participation. It could then be used to reward sybil hunters.
A “Review Protocol” could allow any community to reward reviewers by paying a minimal amount per epoch, then simply “turn the knob” up or down to fine-tune the system to the level of decentralization needed to maintain credible neutrality.
How might we define “collaborative fact checking”?
Would these be different energies or just different games?
Another way to phrase this might be: “Would Aura be a review tool that gives users an interface to review a specific question, with the system providing a trusted outcome?”
OR
“Does Aura have any mathematical proofs giving reasonable probabilistic guarantees of a trusted outcome under minimal reasonable assumptions?” (If not, could we define some that would get us there?)
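As one illustration of what such a probabilistic guarantee might look like (an assumption for discussion, not an existing Aura proof): in the classic Condorcet jury setup, if each reviewer is independently correct with probability p > 0.5, the chance a majority vote reaches the trusted outcome grows as reviewers are added.

```python
from math import comb

def majority_correct_prob(n: int, p: float) -> float:
    """Probability that a strict majority of n independent reviewers,
    each correct with probability p, reaches the right outcome.
    (Condorcet-style sketch; n is assumed odd.)"""
    k = n // 2 + 1  # votes needed for a strict majority
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# With 70%-accurate reviewers, 5 reviewers already give ~0.837;
# scaling to 21 reviewers pushes the guarantee higher still.
print(majority_correct_prob(5, 0.7))   # → 0.83692
print(majority_correct_prob(21, 0.7))
```

The caveat, of course, is the independence assumption, which is exactly what sybil attacks break; that is where Aura's trust graph would have to do the real work.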
Do you have a backlog of the tools which need to be built?
Here are some quick win starting points I see:
I think there is a lot of value in combining the Aura game mechanics with the reward modeling work we have done.
There are a few things we could do this season. Would it be better to set deliverables for the BrightID/Aura team to build a grants graph (maybe in collaboration), or should FDD build it?
What data could we look at to get an understanding of how well the system is working now?
I’d really love some quick feedback here from @erich @brent @lthrift @kevin.olsen @Sirlupinwatson @kishoraditya @omnianalytics @ccerv1 in addition to those mentioned above
A Review Protocol
Allows a community to retroactively reward the reviewers who participated in a review round if they exceed a certain accuracy/consensus threshold. Scoring may include “gold stars” & “poison pills”. (Rewards Modeling)
Gamifies participation so that the optimal incentive structure shifts: attempting to attack any one round becomes less advantageous than continuing to be invited back to play. (Aura)
Allows anyone to create a scoring system for how trusted a reviewer is, similar to Passport and Reader. (FDD can be a first “vendor” of a scoring mechanism)
Another use case could be grant reviews. Our collaboration with the Ethelo tool provided a probabilistic score as an output. This could be used to predetermine eligibility or to provide community reviews of whether grantees are effective with the grants they have received. Grant round owners could elect to show these scores (or others!) and/or use them to programmatically filter grants for eligibility.
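The programmatic eligibility filter could be as simple as a cutoff over those probabilistic scores. The cutoff value and the score scale here are hypothetical; a round owner would tune them.

```python
def eligible_grants(scores: dict[str, float], cutoff: float = 0.6) -> list[str]:
    """Filter grants by a probabilistic review score (e.g. an Ethelo-style
    output in [0, 1]); `cutoff` is a hypothetical knob a round owner tunes."""
    return sorted(g for g, s in scores.items() if s >= cutoff)

print(eligible_grants({"grant-1": 0.9, "grant-2": 0.4, "grant-3": 0.7}))
# → ['grant-1', 'grant-3']
```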
A continued partnership with Ethelo could use this gamification and trust model to scale and decentralize the execution of grant application reviews for any community-curated grant round*, but we would have to build out the ability to dynamically change the weighting applied to each reviewer based on a stamp, NFT, or token they hold.
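The dynamic weighting piece might look like the sketch below: each reviewer's vote is scaled by a weight derived off-chain from a credential they hold. The weight sources and the default weight of 1.0 are assumptions for illustration, not Ethelo's or Aura's actual design.

```python
def weighted_approval(votes: dict[str, bool], weight_of: dict[str, float]) -> float:
    """Share of weighted reviewer votes approving a grant application.
    A reviewer's weight could be derived from a stamp, NFT, or token they
    hold (hypothetical sources); unknown reviewers default to weight 1.0."""
    total = sum(weight_of.get(r, 1.0) for r in votes)
    approve = sum(weight_of.get(r, 1.0) for r, v in votes.items() if v)
    return approve / total if total else 0.0

# A credentialed reviewer's approval outweighs an unweighted objection.
print(weighted_approval({"alice": True, "bob": False}, {"alice": 3.0}))  # → 0.75
```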