Minimum Shippable Increments

One thing I find interesting about web3 is how it’s changing the way work gets done. In closed-source organizations, we’d keep a product vision under wraps until the day the Minimum Viable Product (MVP) launched. Not so in web3, where open source is the norm + we work in public. We are surrounded by tens or hundreds of contributors in Discord who work on a DAO’s mission (as opposed to for a company), which blurs the lines between internal and external.

I want to talk about an evolution I’m observing in myself and in others as a result of this change in working norms.

As a web3 or DAO contributor, you may be working on a team that is aiming to ship a Minimum Viable Product of some external-facing deliverable.

When you’re working in public, the guts of the work you do to ship that MVP are already public (or at least semi-public, depending on what your DAO’s communication structures look like).

So what does the work you ship to your coworkers or followers (the people who follow your work-in-public career) on the way to that externally released MVP look like?

I think it’s a Minimum Shippable Increment (MSI). I’ve been thinking about the idea of MSIs a lot recently, because Minimum Shippable Increments are what you get when you cross Minimum Viable Products with working in public.

An MSI has three criteria:

  1. Minimal - scoped down as far as possible.
  2. Shippable - comprehensible to stakeholders (the people following your work in public).
  3. Increment - delivers value toward the MVP that will ship publicly.

Why do MSIs matter?

All of the work that contributors do in a DAO is the accumulation of their MSIs.

I think the DAOs that win will be the ones that successfully design communication structures allowing contributors to (1) find/define their next MSI, (2) execute their MSI, (3) learn from their MSI, and (4) repeat, rapidly & effectively.

The MSI Loop

I think we should aggressively lower the bar for starting our next MSIs to “safe to try” rather than “do we all agree?”. Permissionless experimentation makes bottom-up innovation part of our culture. It’s how we’ll win.

I also think we should be comfortable with experiments failing. A minimum shippable increment that fails to validate its hypothesis is not an embarrassment - it is an opportunity to learn (provided no harm is caused, of course).

I think each person should understand why they’re working on their next MSI, and should understand the learnings that come from it. I’ve seen a lot of projects go awry because the person doing the work didn’t understand the “why” and so couldn’t make tradeoffs, or wasn’t connected to the learnings from it.

I think a relentless focus on MSIs, paired with a sense of urgency and a bias toward action, is an extremely powerful combination: it lets DAO contributors rapidly learn, adapt, & create outcomes - a competitive advantage in a rapidly evolving industry.


+1 on framing the question like this!
Treating it as a discovery / experimentation process makes it easier to test out hypotheses / architecture decisions → and then iterate if the MSI output shows promise, as opposed to spending time negotiating / re-negotiating


Definitely share @thelostone-mc’s enthusiasm for this! The “safe to try” framing is super useful for orienting discussions on ideas. I think it pairs well with some of the writing on MVPs vs Minimum Viable Experiments (MVEs), especially this line from a YC article:

An MVP is a process that you repeat over and over again: Identify your riskiest assumption, find the smallest possible experiment to test that assumption, and use the results of the experiment to course correct.

Said another way, I have a hunch that building a successful MSI muscle will also require honing an organizational ability to identify those riskiest assumptions and make sure your MSI is confronting them. Are we doing any training on this type of thinking in our onboarding? If not, would it be useful to do? I’d love to help stand up a workshop on something like this.


We often debate this, but I feel there is a scale at which this approach is appropriate, and a scale at which it is not… Something like Uniswap’s v3 product is likely not an appropriate place to allow bottom-up experiments (which could confuse users, add to support woes, break the system, cause outages, etc.)… but in a product that is still finding market fit and gathering a lot of feedback (tip.party, perhaps), this is a very viable strategy.

The same way teams shift from generalists to specialists as they scale, to make sure they’re taking full advantage of expertise, so too should products.

As we build Grants v2, I love the idea of a consent-based approach where we move quickly and iterate based on feedback (shoot → aim → shoot → aim → shoot). Doing that on a product that serves tens of thousands of users during a grants round, however, is likely less wise. This is why we would staff more people to explore these ideas (ready → aim → shoot).

Very much agree with scoping experiments in a way that allows for learning. That can often be done without shipping code to all users (ideally in some A/B testing fashion, or simply via Figma mockups for feedback).
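To make the A/B idea concrete, here’s a minimal sketch of gating an experimental increment to a small slice of users before a full rollout. The helper names and the 10% rollout fraction are illustrative assumptions, not code from any actual product:

```typescript
import { createHash } from "crypto";

// Deterministically map a user id to [0, 1) so a user's assignment
// is stable across sessions.
function bucket(userId: string): number {
  const digest = createHash("sha256").update(userId).digest();
  return digest.readUInt32BE(0) / 0x100000000;
}

// Only users in the first 10% of buckets see the experimental increment
// (the default rollout fraction here is an illustrative assumption).
function inExperiment(userId: string, rollout = 0.1): boolean {
  return bucket(userId) < rollout;
}

// Branch on the flag and record the outcome either way, so the MSI
// produces a learning whether or not the hypothesis holds.
if (inExperiment("user-123")) {
  console.log("render experimental flow");
} else {
  console.log("render existing flow");
}
```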


On the whole, I do like the idea of the MSI as a means to help us focus on shipping faster and learning faster.