Test Ideas, Record Decisions
Our experimentation framework for testing ideas, and how we record decisions
Meetball Principles:
- Build What Matters
- Learn and Adapt
- Ask for Help
- Open Collaboration
Experimentation Principles
Everyone's ideas matter: but to move fast, share them as a clear, actionable pitch focused on what you can own, targeting our Early Adopters (we know who we're building this for), and stating the expected outcome.
Doing everything at once teaches us nothing: focused experiments give better insights. Document the hypothesis and measure the outcome.
We'll revisit and revise often: no attachment; everything can be improved.
Execution beats theory: ideas are good, but action is what matters. When you suggest an idea, does it end with "someone could do this!" or with "I can own this"?
Why We Experiment
Experimentation helps us:
- Make better decisions based on real data, not assumptions
- Learn quickly without committing massive resources to unproven ideas
- Stay user-focused by testing with our Early Adopters who we're building this for
- Include everyone by giving clear ways for contributors to test their ideas
- Build institutional knowledge that persists beyond individual contributors
We're swimming in ideas, but without structure it's chaos, especially for an Open Startup that anyone can contribute to. New people join and bring deep expertise, but lack the context of what's been done so far and why. Being open to new perspectives is what will make our product great.
It takes time to build a culture of experimentation, where ideas can be shared freely without fear of judgment. That's why now, in these early days of shaping Meetball, we have a rare opportunity to lay the foundation right.
This framework introduces how we test ideas, learn fast, and keep momentum while staying true to our mission and values.
How We Run Experiments
Step 1: Form the Experiment Team
- Experiment Champion: Leads the experiment, drives it forward, and owns results
- Contributors: Before pitching, the Champion secures at least one willing contributor from each key team that should be involved (e.g., Marketing, Product, Dev, Ops)
Anyone can suggest an experiment regardless of role or area of focus.
Step 2: Prioritize "Two-Way Door" Experiments
- Two-Way Door: Low-risk, reversible tests, easy to roll back if they don't work
- One-Way Door: Irreversible or high-impact experiments that require deeper deliberation
Step 3: Create Your Experiment Pitch
The pitch should look like this:
I'd like to test [experiment] to see whether [expected outcome].
To do this, I need [budget/resources] and expect to measure success by [KPI] by [date].
If successful, this could scale to [impact].
The relevant teams have been informed and the following risks/opportunities have been identified: [issues].
The Experiment Team: [names].
Short Kickoff Meeting Should Cover:
- Problem – What are we trying to learn or solve?
- Hypothesis – What do we believe might be true?
- Experiment – What are we doing, and why?
- Expected Outcome – What measurable result are we looking for?
- Timeline – 1 day, 2 weeks, or 1 month (max)
- Budget – Any financial or resource requirements?
- Risks – Potential unintended consequences?
- Success Metric – How will we know it worked?
Tracking Experiments
Current Process
We track all experiments in the Experiments project in Plane.
Documentation Format
Each experiment should be documented with the following (a small illustrative sketch follows this list):
- Initial hypothesis and setup (when created on Plane)
- Execution notes (during the experiment - Comments on Plane)
- Results and analysis (when complete - Final comment on Plane)
- Archive status (Positive/Negative/Inconclusive)
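Purely as an illustration, here is a minimal sketch of the shape of one experiment record using these fields. In practice this lives as a Plane issue plus its comments, not as code, and the field names below are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Literal, Optional

@dataclass
class ExperimentRecord:
    """One experiment as we document it (illustrative only; tracked in Plane, not code)."""
    title: str
    champion: str                   # Experiment Champion who owns the result
    hypothesis: str                 # initial hypothesis and setup (issue description)
    execution_notes: list[str] = field(default_factory=list)  # comments added during the run
    results: Optional[str] = None   # results and analysis (final comment)
    archive_status: Optional[Literal["Positive", "Negative", "Inconclusive"]] = None
```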
Evaluating Experiment Outcomes
Compare Results with Hypothesis
- Look for evidence supporting or refuting the original hypothesis
- If results are inconclusive, consider alternative hypotheses that might better explain the data
- Accept it when the hypothesis is wrong (the change had no effect); a quick significance check, like the sketch below, helps keep this honest
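When comparing results with the hypothesis, it helps to check whether a metric movement could simply be noise before drawing conclusions. Below is a minimal sketch, assuming a simple A/B-style conversion comparison with made-up numbers; it is one way to sanity-check a result, not a prescribed method.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Compare conversion rates of control (A) and variant (B).

    Returns the z statistic and the two-sided p-value under the null
    hypothesis that both groups convert at the same rate.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)             # pooled rate under "no effect"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))         # two-sided p-value
    return z, p_value

# Hypothetical numbers: 120/1000 conversions in control vs. 160/1000 in the variant.
z, p = two_proportion_z_test(120, 1000, 160, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests the movement is unlikely to be noise
```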
Possible Outcomes
Depending on experiment results:
- Implement to scale (if successful)
- Investigate further (if promising but unclear)
- Abandon (if clearly unsuccessful)
The Review Process
Experiment Owner Responsibility: The person driving the experiment publishes the evaluation to start the review conversation. This encourages people to own their experiments and results, helping them learn to read experiment outcomes.
Critical Review Lens: Reviewers (preferably not emotionally attached to the experiment) bring diverse perspectives and ask:
- Is the metric movement explainable?
- Are all significant movements being reported, not just positive ones?
- Is the experiment driving the outcome we ultimately want?
- Are we guarding against confirmation bias?
Building Institutional Knowledge
After review, experiment outcomes are archived in Plane in a searchable format for future reference. This builds insight that persists beyond individual experiments and helps:
- Shape future priorities
- Generate ideas for follow-up experiments
- Fine-tune KPIs for better measurement
- Avoid repeating failed experiments
- Onboard new contributors with context
Types of Experiments
- Product Experiments: Feature tests, UX changes, new functionality
- Growth Experiments: Marketing tactics, user acquisition, retention strategies
- Process Experiments: How we work together, communication methods, decision-making
- Partnership Experiments: Collaborations, vendor relationships, community partnerships
Quick Reference
For Experiment Champions:
- Write a clear, short pitch (max one page) using the format above
- Get buy-in from relevant team members
- Create experiment issue in Plane
- Execute with clear timeline and metrics
- Document results and lead the review discussion
- Archive with clear status for future reference
For Contributors:
- Anyone can suggest experiments
- Support active experiments in your area of expertise
- Provide honest feedback during reviews
- Help identify risks and opportunities
For Reviewers:
- Ask hard questions about methodology and bias
- Challenge assumptions and alternative explanations
- Help separate correlation from causation
- Keep focus on ultimate goals, not just metrics
Current Examples in Practice
As we make MVP decisions right now, we're already practicing this mindset:
- AI Feature Debate: Testing whether to keep "Improve with AI" vs. staying fully human and authentic
- Onboarding Optimization: Fixing conversion issues between session start and profile creation
- Community Engagement: Testing different ways to involve contributors in decision-making
Each of these represents an opportunity to practice our experimentation culture early, setting the foundation for how we'll operate as we scale.
Resources and Further Reading
- The work of Lukas Vermeer
- Why building a culture of experimentation is worth the risk - Forbes
- The Experimentation Gap - Towards Data Science
- Democratizing Experimentation - Statsig
- Experimentation Resources - GitHub
- Culture of Experimentation - Statsig
- How to Build a Culture of Experimentation - AWS
- Amazon's Experimentation Approach - Conversion Rate Experts
This framework is a living document. As an open startup, we invite feedback and contributions to improve how we experiment together. The goal is to create a system that encourages bold ideas while maintaining the focus needed to build something our users love.