Of course the information in the checker is public. That was never the concern; more precisely, it was never the “original” concern (note the added qualifying word).
When I wrote “after the application though”, I meant my personal use case: dealing with the appeal process. The checker wasn’t part of my flow during grant editing or the round application, only after the application was final.
I then explained in the human-intelligence prompt what type of feedback I meant.
I think it is worth highlighting the difference:
- human intelligence feedback
- AI checker feedback based on the criteria
Dropping some alpha. Meta-rule: changing the rules.
Something I observed during GG22:
It made me reflect on the game theory of changing rules: it only works once. After that the element of surprise is gone, and by default you assume the rules will change again.
GG22 should be for active projects, not zombies that just updated their readme.md.
That should be a baseline acceptance criterion.
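For illustration, here is a minimal sketch of how a “more than a readme.md update” check could work against the public GitHub REST API. The helper name, the 90-day window, and the page cap are my own assumptions, not anything the checker actually implements:

```python
import requests
from datetime import datetime, timedelta, timezone

API = "https://api.github.com"

def has_non_readme_activity(owner: str, repo: str, days: int = 90) -> bool:
    """Heuristic: does the repo have a recent commit touching anything
    besides the README? The 90-day window is an assumption."""
    since = (datetime.now(timezone.utc) - timedelta(days=days)).isoformat()
    resp = requests.get(
        f"{API}/repos/{owner}/{repo}/commits",
        params={"since": since, "per_page": 30},
    )
    resp.raise_for_status()
    for commit in resp.json():
        # The list endpoint omits file data; fetch each commit for its file list.
        detail = requests.get(f"{API}/repos/{owner}/{repo}/commits/{commit['sha']}").json()
        files = [f["filename"].lower() for f in detail.get("files", [])]
        if any(not name.startswith("readme") for name in files):
            return True
    return False
```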
Another criterion that is not covered: the number of issues opened, and the number of distinct GitHub users who opened them.
(not the only metric, of course; just one part of a comprehensive suite of metrics to weed out zombie projects; a sketch follows below)
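A similarly hedged sketch of the distinct-issue-authors metric (unauthenticated GitHub API calls, so rate limits apply; the page cap is arbitrary):

```python
import requests

API = "https://api.github.com"

def distinct_issue_authors(owner: str, repo: str, max_pages: int = 5) -> int:
    """Count distinct GitHub users who opened issues on the repo."""
    authors = set()
    for page in range(1, max_pages + 1):
        issues = requests.get(
            f"{API}/repos/{owner}/{repo}/issues",
            params={"state": "all", "per_page": 100, "page": page},
        ).json()
        if not issues:
            break
        for issue in issues:
            if "pull_request" in issue:
                continue  # the issues endpoint also returns PRs; skip them
            authors.add(issue["user"]["login"])
    return len(authors)
```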
Community size and activity: the number of Twitter reactions and Telegram messages.
(analytics are unlikely to be public, since Web3 projects are privacy-oriented, but traffic on Twitter / Telegram / Discord / Reddit is a decent proxy for the user base)
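To make the “comprehensive suite of metrics” concrete, here is one hypothetical way the signals above could be folded into a single zombie score. The weights and thresholds are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class ProjectSignals:
    # Illustrative inputs, gathered however the round operator prefers.
    has_non_readme_activity: bool   # recent commits beyond readme.md
    distinct_issue_authors: int     # diversity of issue reporters
    weekly_social_messages: int     # Twitter/Telegram/Discord/Reddit proxy

def zombie_score(s: ProjectSignals) -> float:
    """0.0 = clearly active, 1.0 = clearly a zombie. Weights are made up."""
    score = 0.0
    if not s.has_non_readme_activity:
        score += 0.5
    if s.distinct_issue_authors < 3:
        score += 0.3
    if s.weekly_social_messages < 10:
        score += 0.2
    return score

# Example: README-only commits, one issue author, quiet socials -> 1.0
print(zombie_score(ProjectSignals(False, 1, 4)))
```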
