Experimentation at Scale: What Transforming a School System Taught Me About Risk, Learning, and User-Centred Design

In 2020, just weeks before the world shut down, I joined Innova Schools as User Experience Lead to help drive their second wave of innovation. What had once been a disruptive, tech-enabled blended model for K–12 education across Latin America was now facing a new reality:

Students were changing, expectations were shifting, and the system wasn’t keeping up.

The mission was clear but massive:

redesign how 62 schools across 3 countries learned, taught, operated, and supported families—while the ground was shifting beneath us.

Before Innova, I had led an outstanding user-centred design team at Scotiabank, where most of our work lived in digital interfaces and product flows. Education was a leap into the unknown. Suddenly, “user experience” meant understanding students, teachers, families, school leaders, organisational structures, learning outcomes, and community belonging, all at once.

And then the pandemic hit.

Remote teaching became the norm overnight. The pressure to adapt was overwhelming, but paradoxically, it created the perfect environment for experimentation.

We didn’t need perfection; we needed evidence.

We didn’t need certainty; we needed learning.

We didn’t need control; we needed small, safe bets.

The question became:

How do you apply the simplicity of user testing to an entire school system?

Why Experimentation Matters—Especially in Uncertainty

Traditional management relies heavily on planning, control, and prediction.

You create the plan → execute the plan → hope reality matches the plan.

But in rapidly changing environments, like education in a global pandemic, this approach doesn’t just fail; it creates risk.

You spend months designing something based on assumptions, only to deploy and discover that those assumptions were wrong.

So instead, inspired by Janice Fraser’s work (yes, including with the US Navy SEALs), we turned to Eric Ries’s Lean Startup cycle:

Build → Measure → Learn → Iterate.

The goal was to replace risk with evidence and assumptions with insight.

How We Did It: Embracing Our Assumptions

Every transformation begins with a set of assumptions.

Most teams skip straight to solutions. But in experimentation, solutions come after identifying what you need to learn.

We followed a clear process:

1. Clarify the Need (Problem Molecule)

This is where the discipline begins.

We forced ourselves to articulate three things in one sentence each:

  • User – Who is this for?

  • Problem – What challenge are they facing?

  • Solution – What value do we believe we can offer?

This created alignment and eliminated ambiguity.
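
To make that discipline concrete, here is a minimal sketch in Python of how a problem molecule could be captured as a single record. The field names and the example content are invented for illustration; in practice we worked with templates and workshops, not code.

```python
from dataclasses import dataclass

@dataclass
class ProblemMolecule:
    """One-sentence answers to the three alignment questions."""
    user: str      # Who is this for?
    problem: str   # What challenge are they facing?
    solution: str  # What value do we believe we can offer?

# Purely illustrative example; this is not a real squad's molecule.
molecule = ProblemMolecule(
    user="Secondary teachers running remote classes for the first time.",
    problem="They lack a simple weekly structure for planning synchronous and asynchronous work.",
    solution="A lightweight weekly planning template reviewed with their academic coach.",
)
```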

2. Identify the Riskiest Assumptions

A risky assumption is a statement that must be true for your idea to work, but that you don’t yet have enough evidence to trust.

Two criteria guided us:

  1. Criticality: If it’s wrong, the entire idea fails.

  2. Uncertainty: We don’t have evidence yet.

We mapped assumptions on a 2x2 grid (criticality on one axis, existing evidence on the other) to prioritise the ones that would make or break our model.
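
A rough sketch of that prioritisation step, assuming the squad scores each assumption from 1 to 5 on both axes; the assumptions and scores below are invented, not taken from our real grids.

```python
# Invented scores for illustration: 1-5 on each axis, agreed by the squad.
assumptions = [
    {"text": "Families can reliably join two live sessions per week.", "criticality": 5, "evidence": 1},
    {"text": "Teachers can adapt lesson plans within their current workload.", "criticality": 4, "evidence": 2},
    {"text": "Students prefer shorter, more frequent feedback cycles.", "criticality": 2, "evidence": 3},
]

# Riskiest first: high criticality and little existing evidence.
riskiest = sorted(assumptions, key=lambda a: (-a["criticality"], a["evidence"]))

for a in riskiest:
    label = "test first" if a["criticality"] >= 4 and a["evidence"] <= 2 else "monitor"
    print(f'[{label}] {a["text"]}')
```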

3. Design the Smallest, Cheapest, Fastest Experiment

We used experimentation templates to turn assumptions into testable hypotheses:

  • What are we trying to learn?

  • How might we test this quickly and cheaply?

  • What learning metric will tell us if the experiment succeeds or fails?

This mindset shift—from proving ourselves right to learning what’s true—was transformational.
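
If it helps to picture the template, here is a minimal sketch of an experiment card as a small data structure. The field names, the example experiment, and the threshold are all invented for illustration; the real templates lived in documents, not code.

```python
from dataclasses import dataclass

@dataclass
class ExperimentCard:
    """One card per risky assumption; all names here are illustrative."""
    learning_goal: str        # What are we trying to learn?
    method: str               # How might we test this quickly and cheaply?
    learning_metric: str      # What signal tells us if it succeeded or failed?
    success_threshold: float  # Agreed before running the experiment.

    def verdict(self, observed: float) -> str:
        # The question is whether the assumption held, not whether the team "won".
        return "assumption supported" if observed >= self.success_threshold else "assumption not supported"

card = ExperimentCard(
    learning_goal="Will families attend optional evening orientation calls?",
    method="Invite families from one campus over two weeks.",
    learning_metric="Share of invited families attending at least one call.",
    success_threshold=0.4,
)
print(card.verdict(observed=0.55))  # -> assumption supported
```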

Building Experimentation Squads

This was the hardest part.

We weren’t launching a new feature for an app; we were redesigning:

  • learning models

  • teaching models

  • organisational structures

  • operational workflows

  • and even parts of the business model

We needed teachers, academic leaders, business teams, and support staff to be excited and aligned.

So we created Experimentation Squads:

multidisciplinary teams responsible for running experiments on specific parts of the new model.

Our support included:

  • frameworks

  • templates

  • coaching

  • working-in-the-open rituals

  • evidence logs

  • decision records

But we also designed a testing pipeline, because you can’t test something once in one school and deploy it across seventy.

The pipeline looked like:

  1. Controlled Test — one campus, limited scope

  2. Pilot — 5–7 campuses in a region

  3. Scale — full network deployment

This allowed us to test complexity gradually: pedagogy, operations, support models, and implementation challenges.
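
As a sketch only, the pipeline can be written down as an ordered set of stages with promotion gates. The gate wording here paraphrases the list above; it is not our formal criteria.

```python
# Stages in order; the gate descriptions are paraphrased, not formal criteria.
PIPELINE = [
    {"stage": "controlled_test", "scope": "1 campus",     "gate": "learning metrics met, no critical operational issues"},
    {"stage": "pilot",           "scope": "5-7 campuses", "gate": "results replicate across regional contexts"},
    {"stage": "scale",           "scope": "full network", "gate": "support and training model proven at pilot scale"},
]

def next_stage(current: str):
    """Return the stage that follows `current`, or None once we reach scale."""
    names = [s["stage"] for s in PIPELINE]
    i = names.index(current)
    return names[i + 1] if i + 1 < len(names) else None

print(next_stage("controlled_test"))  # -> pilot
```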

By the end, we had 11 experimentation squads working simultaneously—each contributing evidence, insight, and direction, not opinions or assumptions.

What We Learned

We learned that even in complex systems:

  • Risk can be reduced.

  • Evidence can replace gut feeling.

  • Teams become braver when failure is cheap.

  • Students and teachers are generous when they understand the purpose.

  • Experimentation isn’t chaos; it’s disciplined learning.

Most importantly:

With every experiment, we brought the system closer to what users truly needed—one insight at a time.

Was it possible?

Absolutely.

And it’s a mindset I carry with me into every transformation I work on today.