GR14 Governance Brief

Yes. 16,073 contributors were deemed sybils.

What does this mean and how is it relevant?

  1. Sybil attacks were up, but a very large portion was airdrop farming. An analysis identifying the specific impact of airdrop farming is on its way!

  2. While we can’t infer the reason a new user with a brand-new GitHub and Twitter account is donating, we can detect patterns suggesting that they most likely do not intend to direct matching pool funding, but instead intend to farm airdrops.

  3. Yes, there is a big difference between an airdrop farming incident and a user creating a bunch of bot accounts to donate to their own grant. A sybil ring might combine both motivations, not only to optimize its strategy for potential return, but also to obfuscate its donation pattern and avoid obvious detection via standard clustering & classification.

  4. Yes, some airdrop farmers are simply new users. They are degens not yet converted to regens. This is why our heuristic allows any squelched user to be automatically unsquelched by the algorithm as they build a richer and more “human” history of actions (see the first sketch after this list).

  5. The algorithm was fairly aggressive this round. It took the learnings from the heuristics and overreacted because it didn’t have enough new data (human evaluations) to balance them. It is important to understand sensitivity vs. specificity here (see the second sketch after this list): being more aggressive means that more of the squelched users belong to the part of the distribution where the model is “guessing” rather than certain. Either way, we maintain a strong preference for avoiding false positives, because our approach to sybil attacks is to mitigate the largest ones first, then move to the smaller and less consequential ones. The distribution is not unprecedented either; see GR12.

  6. The only way for us to become more accurate is to find more ways to clearly identify intentions and their associated behavior patterns, and then train the human evaluators to better recognize each situation.

  7. Once a behavior pattern moves from a subjective guess by a human (ideally an expert trained to spot the patterns & motivations) to something more objective, like matching multiple specific characteristics which combined give a high level of certainty, the pattern can be machine coded to reduce the need for manual labor in reviews (see the third sketch after this list).

  8. More human reviews are better until we are reasonably sure we have identified and accurately machine coded most behavior patterns. Even then, the nature of a red team / blue team exercise is that the red team continually finds new tactics. This is a pattern we have seen play out as expected.

  9. We hope that Passport will open up a variety of ways for people to share the value they have brought to a community; value demonstrated through human tasks, in particular, can be a very strong signal. Until we are certain about the effects of this, we will need to continue detection & mitigation to establish a “best known true class”. We can then use this as the “assumed truth” against which a/b tests can be conducted, examining a variety of stamp combinations and alternate weightings to find the one that provides the optimal prevention model (see the final sketch below).
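To make point 4 concrete, here is a minimal sketch of how automatic unsquelching could work: a squelch is re-evaluated each round and lifts once the user’s accumulated “human” history crosses a trust threshold. The signals, weights, and threshold below are illustrative assumptions, not the algorithm’s actual inputs.

```python
# Hypothetical re-evaluation of a squelched user as their history grows.
# Signal names, weights, and the threshold are illustrative only.
HUMAN_SIGNALS = {
    "verified_with_brightid": 3.0,
    "months_of_github_activity": 0.5,   # per month of activity
    "forum_posts": 0.2,                 # per post
}
TRUST_THRESHOLD = 4.0

def trust_score(history: dict) -> float:
    """Weighted sum of a user's accumulated 'human' actions."""
    return sum(HUMAN_SIGNALS[k] * v for k, v in history.items())

def still_squelched(history: dict) -> bool:
    """A squelched user is automatically unsquelched once their
    history of actions crosses the trust threshold."""
    return trust_score(history) < TRUST_THRESHOLD

# A brand-new account stays squelched...
print(still_squelched({"months_of_github_activity": 1, "forum_posts": 2}))   # True
# ...but the same user is unsquelched after verifying and staying active.
print(still_squelched({"verified_with_brightid": 1,
                       "months_of_github_activity": 4, "forum_posts": 3}))   # False
```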
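The sensitivity-vs-specificity trade-off in point 5 can be shown with a small worked example. The confusion-matrix counts below are hypothetical, not GR14 figures; the sketch simply shows how an aggressive squelch threshold raises sensitivity (catching sybils) at the cost of specificity (sparing honest users).

```python
# Hypothetical confusion-matrix counts for a squelching model; none of
# these are real GR14 figures. "Positive" means flagged as sybil.
def sensitivity(tp: int, fn: int) -> float:
    """Share of true sybils the model catches (true positive rate)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Share of honest users the model leaves alone (true negative rate)."""
    return tn / (tn + fp)

# An aggressive threshold catches more sybils but squelches more honest
# users; a conservative one spares honest users but lets more sybils through.
thresholds = {
    "aggressive":   dict(tp=15_000, fn=2_000, tn=380_000, fp=1_200),
    "conservative": dict(tp=11_000, fn=6_000, tn=380_850, fp=150),
}

for name, c in thresholds.items():
    print(f"{name}: sensitivity={sensitivity(c['tp'], c['fn']):.3f}, "
          f"specificity={specificity(c['tn'], c['fp']):.3f}")
```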
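Point 7 (and the pattern detection in point 2) describes promoting a pattern from a subjective human judgment to an objective, machine-coded rule. Here is a minimal sketch of what that could look like; the specific characteristics (account ages, donation count, shared funding source) and thresholds are assumptions for illustration, not the production heuristics.

```python
from dataclasses import dataclass

@dataclass
class Contributor:
    github_age_days: int           # age of the linked GitHub account
    twitter_age_days: int          # age of the linked Twitter account
    donation_count: int            # number of donations this round
    funded_by_shared_wallet: bool  # wallet funded from an address shared with many others

def matches_farming_pattern(c: Contributor) -> bool:
    """Hypothetical machine-coded rule: several weak signals must combine
    before a contributor is flagged, so no single characteristic alone
    triggers a squelch."""
    signals = [
        c.github_age_days < 30,
        c.twitter_age_days < 30,
        c.donation_count <= 2,
        c.funded_by_shared_wallet,
    ]
    # Require at least 3 of 4 characteristics for a high level of certainty
    # before the review is taken out of human evaluators' hands.
    return sum(signals) >= 3

print(matches_farming_pattern(
    Contributor(github_age_days=5, twitter_age_days=7,
                donation_count=1, funded_by_shared_wallet=True)))  # True
```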
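Finally, point 9’s a/b testing idea can be sketched as scoring alternate Passport stamp weightings against the “best known true class” labels produced by detection. The stamp names, weights, acceptance threshold, and tiny label set below are all hypothetical.

```python
# Score alternate stamp weightings against "assumed truth" labels.
# Each user is (stamps_held, is_sybil_per_best_known_true_class).
users = [
    ({"google", "twitter"}, True),
    ({"google", "twitter", "github", "brightid"}, False),
    ({"github"}, True),
    ({"brightid", "poh", "github"}, False),
]

# Two candidate weightings to compare; values are illustrative.
weightings = {
    "flat":        {"google": 1, "twitter": 1, "github": 1, "brightid": 1, "poh": 1},
    "human-heavy": {"google": 0.5, "twitter": 0.5, "github": 1, "brightid": 2, "poh": 2},
}

THRESHOLD = 2.0  # hypothetical score a user needs to be trusted

for name, weights in weightings.items():
    correct = 0
    for stamps, is_sybil in users:
        score = sum(weights.get(s, 0) for s in stamps)
        predicted_sybil = score < THRESHOLD
        correct += predicted_sybil == is_sybil
    print(f"{name}: {correct}/{len(users)} agree with the assumed truth")
```

The weighting that agrees most often with the assumed truth is the candidate for the prevention model; in practice this would run over the full labeled population rather than a toy list.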

Perhaps we will get to the point in the next few rounds where we can empirically say that we no longer need squelching… that prevention is now good enough, and that combining it with continual updates based on our detection methods will serve the community as well or better.
