Assess your hypothesis

Fill in all five parts of your hypothesis. Each field has guidance on what belongs there. When you are ready, click Assess — the result appears below.

The specific conditions you are investigating or the problem you are trying to solve. Must include at least one quantitative or qualitative data point — not just a general observation.

Your interpretation of WHY this is happening. Identify a specific cause — not just a restatement of the observation.

The specific change or intervention you will test. Must be concrete and actionable — not just a direction.

The specific metric that will move if your hypothesis is correct. Should connect directly to the change in part three.

The behavioural or design principle that explains why this change will cause that metric to move. Close the loop to the cause you identified in part two.

Assessing…
Overall

Meta-analysis tags

Use these tags for filtering across your experiment portfolio.

Belief category
Intervention
Principle

Evidence quality

Causal reasoning

Metric alignment

Resolution logic

One question

Would it be useful to generate solution ideas and experiment hypotheses from just your observations — without writing the full hypothesis first?

Thanks. We will be in touch.
Copied to clipboard

Not sure what you are looking at?

The “if we / then we” hypothesis most teams write is not a hypothesis. It is a guess with a metric attached. A complete hypothesis has five parts, and the ones you skip are exactly the ones that protect your programme from post-result rationalisation and ensure you accumulate real learning over time.

What good looks like

Two examples — one strong hypothesis, one weak. See how the assessment works before writing your own.

Strong hypothesis
We have observed: Mobile checkout abandonment is 74% vs 52% on desktop (GA4, last 90 days)
Which we believe: The number of required fields creates cognitive load on small screens, where users cannot see form progress
If we: Reduce the checkout form to show three fields at a time with a visible progress indicator
Then we will see: An increase in mobile checkout completion rate
Because: Reducing visible cognitive load at each step lowers perceived effort, directly addressing the form complexity identified as the cause
Overall: Strong
Evidence quality: ✓ Pass
Causal reasoning: ✓ Pass
Metric alignment: ✓ Pass
Resolution logic: ✓ Pass

Clear data-backed observation, specific causal mechanism, directly connected metric, and a “because” that closes the loop to the identified cause. Ready to take to the bet calculator.

Weak hypothesis
We have observed: Users seem to find the checkout complicated
Which we believe: Because mobile users find it difficult
If we: Improve the checkout experience
Then we will see: Better conversion
Because: Because it will be easier to use
Overall: Weak
Evidence quality: Flag

This is an opinion, not an observation. What data shows that users find the checkout complicated? Cite a specific metric, a session-recording finding, or user research.

Causal reasoning: Flag

This restates the observation rather than explaining the cause. What specifically makes it difficult? Is it the number of fields, the layout, the keyboard type it triggers, or something else?

Metric alignment: Flag

“Better conversion” is too broad to attribute to a checkout change. Specify which conversion metric and at which step — checkout start to completion rate, for example.

Resolution logic: Flag

This does not connect the change to the cause. What behavioural principle explains why an improved experience would fix the specific problem identified?

All four criteria need work. Start with the observation — find the data that shows there is a problem, then identify the specific cause before deciding on a solution.