Experimentation done properly changes how a business thinks, not just how a page converts.

I design and run experimentation programmes built around understanding your customers. When you get that right, the wins follow. That’s the approach behind 250+ experiments and $70 million in incremental revenue.

~10x

Return on Investment

250+

Experiments run

$70M

Incremental revenue

60

People trained


What a good experimentation programme actually looks like

A good programme needs the right infrastructure underneath it: the documents, meetings, tools and shared knowledge that give every test a purpose. Without that, you're producing activity, not learning.

Rigour and process

Every test has a clear hypothesis, a defined success metric and a documented rationale before it goes anywhere near a development queue. That infrastructure is what makes the difference between a test that teaches you something and one that just produces a number nobody acts on.

The right kind of experimentation

Not every question needs an A/B test. Not every website has the traffic to run one reliably. Experimentation is a process for validating ideas — that might mean a split test, a multivariate study, a qualitative session or a staged rollout. The method should fit the question, not the other way around.
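To make the traffic point concrete, here is a minimal sketch of the standard sample-size calculation for a two-proportion test, using the normal approximation. The baseline rate and lift below are invented illustration numbers, not client figures, and a real programme would sanity-check this against a proper power calculator.

```python
from statistics import NormalDist

def required_sample_size(baseline, relative_lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a relative
    lift over a baseline conversion rate (two-sided z-test)."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_beta = NormalDist().inv_cdf(power)            # power threshold
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# A 3% baseline conversion rate and a 10% relative lift:
# each variant needs tens of thousands of visitors before
# a split test can answer the question reliably.
print(required_sample_size(0.03, 0.10))
```

A site without that kind of traffic isn't locked out of experimentation; it simply needs a different method, which is exactly the point above.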

Quality balanced with quantity

A well-run programme finds the right cadence for the business — enough tests to build momentum and learnings, designed carefully enough that every one of them means something. Volume without rigour produces noise, not insight.

Transparent learnings

A test loss is not a failure. It’s evidence. Every result — win, loss or inconclusive — gets documented and feeds the next round of hypotheses. Over time that body of knowledge becomes one of the most valuable things the programme produces.

Wins that become features

A winning test that sits in an experimentation tool forever isn’t a win. Part of running a good programme is making sure results feed into the product roadmap so the business actually captures the value it discovers.

What I do

A programme is only as good as the questions it’s trying to answer. Before any test gets designed, I audit what you already know about your customers — existing research, analytics, previous test results — and identify the gaps. Where gaps exist, I design a tailored research programme to fill them. That research can be delivered by your in-house team or by me depending on capability and need.

From there I build a structured hypothesis framework that connects every test to a real business question. Not a list of ideas. A prioritised roadmap with a clear rationale for every item on it.

  • Customer knowledge audit across existing research, analytics and test history
  • Tailored research programme to fill identified gaps
  • Custom hypothesis framework connecting tests to business outcomes
  • Prioritised experiment roadmap with documented rationale
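To make "documented rationale" and "prioritised roadmap" concrete, here is a minimal sketch of what a hypothesis record might look like. The field names and example values are illustrative assumptions, and the ICE-style score (impact × confidence × ease) stands in for whatever prioritisation model a given programme actually uses.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    question: str     # the business question the test answers
    change: str       # what will be varied
    metric: str       # the single success metric, defined up front
    rationale: str    # the evidence behind the hypothesis
    impact: int       # 1-5: expected effect if it wins
    confidence: int   # 1-5: strength of the supporting evidence
    ease: int         # 1-5: how cheap it is to build and run

    def priority(self) -> float:
        """ICE-style score used to order the roadmap (0-1 scale)."""
        return (self.impact * self.confidence * self.ease) / 125

# Hypothetical example entry:
h = Hypothesis(
    question="Do surprise shipping costs drive checkout abandonment?",
    change="Show delivery cost on the product page",
    metric="checkout completion rate",
    rationale="Exit-survey respondents frequently cite unexpected costs",
    impact=4, confidence=3, ease=5,
)
print(round(h.priority(), 2))  # → 0.48
```

The exact scoring model matters less than the discipline: every test on the roadmap carries its question, metric and rationale with it.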

Why this works as a fractional engagement

The most common alternative to this kind of engagement is outsourcing the programme entirely. That works until the contract ends and everything that was built walks out with whoever held it. Your team is no more capable than when you started, and the programme stalls.

A fractional engagement is built differently. The frameworks, documentation and knowledge stay inside your business. Your team understands why decisions get made, not just what to do next. The programme keeps running after I’m gone because it was designed to.

The other consideration is cost. A full-time experimentation lead would cost $150,000–$180,000 a year before you factor in the time to hire and onboard. A fractional engagement scopes exactly what needs doing, delivers it and builds something that lasts.

Work with Storm

Ready to step-change your approach to business?

Book a free 30-minute conversation. No obligations, just a chance to talk through whether experimentation is the way to get you there.

What an experimentation engagement looks like

Weeks 1–2 Customer knowledge audit — I review everything you already know about your customers. Existing research, analytics, previous tests. I identify the gaps and design a research programme to fill them.
Weeks 2–4 Research — Gap-filling research delivered by your team or by me depending on capability and need. Qualitative sessions, behavioural analysis or both.
Weeks 4–5 Hypothesis framework and roadmap — I build the hypothesis framework and prioritised experiment roadmap. Every test connected to a real business question with a documented rationale.
Week 6+ Test design and delivery — Tests designed, briefed and implemented. I build what I can directly in your experimentation tool. Anything requiring a code release goes to your development team with precise technical instructions. Results analysed and documented as they land. Programme cadence established.

“The experimentation culture she created hasn’t just elevated the overall digital capability, it’s uplifted the entire product ways of working.”

– Josh Carius, Senior Product Manager, Bupa

Who this is for

You’re a founder, executive or head of digital who knows experimentation should be part of how your business makes decisions. You may have tried it already — a few tests here and there, a programme that ran for a while — but you haven’t seen it deliver the kind of consistent, compounding value you know it’s capable of.

Or you’re starting fresh and you want to build it properly from the beginning. You’ve seen what happens when programmes get built on gut feel and guesswork and you’d rather not spend two years learning those lessons yourself.

You need someone who can build the foundations, run the programme and leave your team more capable than when they started. Not a vendor. Not a tool. A practitioner who has done this at scale and can bring that experience to bear on your specific business.

Common questions

How do we get started?

The easiest way is to book a call using the link on this page. Tell me where your programme is at and I'll tell you straight whether I can help. If it's a fit, I'll turn around a plain-language scope within a few days.

How do you work day to day?

Flexibly. I can work remotely or on-site at your office on agreed days, up to two days a week depending on the engagement. We agree the arrangement upfront and it can flex as the work develops.

How do you charge?

I work on a day rate or fixed project price depending on the scope. Payment terms are 14 days.

What makes you different from other specialists?

Most specialists go deep in one area. I work across the intersection of user experience, technology and business strategy. That means I can understand the full shape of a problem and connect the pieces rather than just solving my corner of it. In a world where AI can do a lot of jobs, the person who can bring multiple parts of the problem together is more useful than someone who only knows one piece of the puzzle.

What's your experimentation experience?

Seven years designing and leading experimentation functions across Bupa, Deep Blue Company and RedBalloon. 250+ experiments delivering $70 million in incremental revenue. I've built programmes from scratch, rebuilt ones that weren't working and trained teams to run them independently.

What about analytics?

Ten years implementing and auditing analytics across GA4, GTM, Segment, Adobe Analytics, Amplitude and others. If your data isn't in good shape before we start testing, I'll tell you — and I can fix it.

What's a good experiment win rate?

A reasonable win rate for a healthy, mature experimentation programme is typically between 10% and 30%. New programmes often see higher win rates — sometimes over 50% — when tackling the most obvious problems first. As the programme matures, a 20–30% win rate is common, with 30–40% of tests producing learning moments and 10–20% resulting in errors or inconclusive results. If a mature programme is consistently hitting 50% win rates, it's usually a sign the tests are too safe. The goal is to learn, not to look good on a dashboard.

What are you building next?

I'm developing a tool that uses AI to meta-analyse experiment results across categories and programmes — surfacing patterns in your learnings that are hard to see test by test. It's in development and will be available to clients before it's advertised more broadly. If you're interested, get in touch and we can talk about it.

How do you handle confidentiality?

Everything shared with me is treated as confidential. Standard confidentiality terms are part of every engagement and I can work within your organisation's existing NDA or data handling policies if required. I never share client data and your data is never used to train AI models.

Can you work on-site?

Yes. For engagements where on-site presence adds value I can work from your office on agreed days, up to two days a week. Location and schedule are agreed as part of the scope.

What happens when the engagement ends?

Everything is handed over cleanly. Frameworks, documentation and the experiment knowledge base stay with your business. I walk your team through everything and make sure the programme can run without me. If ongoing support makes sense, I offer a retainer.

How quickly can you start?

Usually within one week of an agreed engagement. Get in touch early if you have something coming up and I'll let you know availability.

How do you use AI in your work?

I use AI to review code, support customer research and generate analytics artefacts. Your data is never shared in a way that trains models. If your organisation works with sensitive data I can work without AI entirely or in line with your existing company policies.

Storm Jarvie Experimentation


Meet Storm Jarvie

I genuinely believe the best decisions start with admitting what you don’t know yet. Not as a methodology. Just as a way of working. It keeps the work honest and it tends to produce better outcomes than starting with the answer and working backwards. That’s the thread running through everything I do.

LinkedIn

Ready to build something that lasts?

Tell me where your programme is at or where you want it to be. I’ll tell you whether I can help and how. No commitment required.

Book a call

Tell me what you’re walking into and I’ll tell you whether I can help.

Pick a time
or drop me a message