
Why evidence-based models fail without measurement

  • Writer: Pivot Professional Learning
  • Mar 3
  • 6 min read

Updated: Mar 4


Schools are implementing evidence-based practices more deliberately than ever. Instructional frameworks are being adopted, professional learning is being invested in, and expectations around consistency of practice are rising. Yet implementation frequently falls short – not because the research base on what to do is weak, but because the quality of implementation itself goes unmeasured. Without tracking how well a practice is being implemented, schools have no reliable way to know whether poor results reflect a problem with the practice or a problem with the implementation.


The solution is not necessarily more data collection. It is a more deliberate use of the data schools already have, focused on a targeted set of implementation outcomes that tell you if your initiative is on track, and give you the chance to respond before it's too late.



What implementation outcomes are – and aren't

Implementation outcomes are not the same as effectiveness outcomes. Effectiveness outcomes measure impact on student learning – the ultimate goal for schools. Implementation outcomes measure something different but equally important: how well the implementation itself is going.


If a new instructional practice isn't producing the results expected, there are two possible explanations: either the practice doesn't work, or it wasn't implemented well enough to give it a fair chance (Fixsen et al., 2005). Without tracking implementation outcomes, it is difficult to distinguish between the two.


Proctor et al. (2011) identified eight implementation outcomes that teams can use to track progress – three of these are particularly well supported by student data.

Fidelity refers to the degree to which a practice is being implemented as intended – whether teachers are adhering to its core elements, whether the dosage is right, and whether the quality of delivery is where it needs to be.


Reach refers to how widely the practice has been integrated across the school: which year levels, which learning areas, which cohorts of students are actually experiencing it.


Sustainability refers to whether the practice becomes part of everyday school routines, or whether it fades away once the early enthusiasm wears off.


These three outcomes shift the conversation from "we're doing the thing" to "here's how well we're doing the thing, and here's what the data tells us."



Why monitoring matters – especially in schools

Any change process benefits from purposeful monitoring and evaluation (Kusek & Rist, 2004). Schools present a particular challenge. There are interdependencies everywhere: between teachers, between year levels, between the formal curriculum and the lived experience of students in classrooms. A practice introduced into that system will rarely produce linear effects – outcomes are shaped by a wide range of contextual factors that no implementation plan fully anticipates (Petrie & Peters, 2018). What works well in one faculty, year level, or cohort may gain no traction in another, for reasons that are only visible if you're looking.


Ongoing monitoring allows schools to respond while there is still time to make a difference (EEF, 2024). It is how you find out that a new instructional model has strong take-up in English but is barely visible in Maths. It is how you identify that a practice is reaching higher-achieving students but not the cohorts who need it most. It is how you catch a promising initiative before it quietly disappears at the end of a school year.

Monitoring implementation outcomes gives your team the information it needs to make adjustments while there's still time to make them.



It doesn't have to be complicated

Schools already collect a significant amount of data. Existing student data – survey responses, attendance, assessment results, classroom observation notes – can often illuminate fidelity, reach, and sustainability without requiring a new system or process. The goal is not to build an elaborate monitoring framework from scratch, but to look at data already being gathered through the lens of implementation.


The principle is straightforward: start with what you already have.

Learning walks, interviews, and structured surveys are other low-burden sources of implementation evidence (Evidence for Learning, 2019). A well-designed student survey can illuminate multiple implementation outcomes at once, providing a picture of whether students are actually experiencing the practice as intended, and whether that experience is consistent across different groups and contexts.


The practical approach is to select a small number of outcomes to monitor, choose data sources that speak to more than one outcome at a time, and add sources incrementally as implementation matures. The goal is responsive decision-making, not data for its own sake.
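To make that idea concrete, here is a minimal sketch of what "reading existing data through the lens of implementation" could look like if your survey platform lets you export responses. The file name, column names (learning_area, question, rating), the 1–5 scale, and the flagging threshold are all illustrative assumptions, not features of any particular tool. The sketch simply compares each learning area's average on one survey question against the school mean – the kind of fidelity and reach check described above.

    # A minimal sketch, assuming a hypothetical CSV export of student survey
    # responses with columns: learning_area, question, rating (1-5 scale).
    import csv
    from collections import defaultdict

    QUESTION = "I know what I am supposed to do in this class"
    FLAG_GAP = 0.5  # flag areas this far below the school mean (illustrative threshold)

    # learning area -> list of ratings for QUESTION
    scores = defaultdict(list)

    with open("student_survey_export.csv", newline="") as f:  # hypothetical file name
        for row in csv.DictReader(f):
            if row["question"] == QUESTION and row["rating"]:
                scores[row["learning_area"]].append(float(row["rating"]))

    all_ratings = [r for ratings in scores.values() for r in ratings]
    school_mean = sum(all_ratings) / len(all_ratings)

    print(f"School mean for '{QUESTION}': {school_mean:.2f}")
    for area, ratings in sorted(scores.items()):
        area_mean = sum(ratings) / len(ratings)
        flag = "  <-- possible fidelity signal" if school_mean - area_mean >= FLAG_GAP else ""
        # The response count per area also gives a rough reach check: areas that
        # barely appear in the export may not be experiencing the practice at all.
        print(f"{area}: {area_mean:.2f} (n={len(ratings)}){flag}")

The same comparison could just as easily be done with a spreadsheet pivot table; the point is the comparison by learning area, not the tooling.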




What this looks like in practice

As a Deputy Principal in a small secondary school, part of my role was to oversee and support multiple learning areas. When we administered student surveys, I was able to see responses across each of those areas. One result stood out: students in the Arts rated the question "I know what I am supposed to do in this class" notably lower than students in other learning areas.


This was a fidelity signal. Our school had an instructional model that included the consistent use of learning intentions and success criteria – and the data was telling us that this wasn't landing consistently in one part of the school. Importantly, this wasn't visible in any achievement data at that point. It was an early indicator, not a trailing one.


The Arts team was small and, unlike English or Maths, didn't have a Head of Learning Area embedded alongside them. When we unpacked the survey findings together, the team recognised the gap and agreed to make it a collective focus. I committed to supporting them through learning walks, classroom observations, and instructional coaching. What emerged from that process confirmed what the survey had flagged: learning intentions and success criteria were being used inconsistently across the team.


Through targeted feedback, modelling, and follow-up observations, teachers developed greater confidence and consistency in applying these strategies. In the following survey cycle, that same question – "I know what I am supposed to do in this class" – became the highest-scoring question for the Arts team.


The survey didn't solve the problem. But it identified where to look, earlier than any other data source would have.



Practical takeaways

Select a small number of implementation outcomes to focus on. Trying to monitor everything risks monitoring nothing well. Choose two or three outcomes most relevant to your current stage of implementation.


Use data you already have. Before adding new data collection processes, audit what you're already gathering. Student surveys, classroom observation data, and assessment results can all provide implementation evidence when interpreted through the right lens.


Treat monitoring as a tool for adaptation, not accountability. The purpose of tracking implementation outcomes is to help your team make better decisions – not to evaluate individual teachers. The framing matters enormously for how staff engage with the process.


Don't wait for achievement data to tell you something is wrong. By the time a lack of impact shows up in student results, significant time and energy may already have been lost. Implementation outcomes provide earlier, more actionable signals.



A final thought

The evidence base for effective teaching has never been stronger. Many systems are now clarifying which practices the research most strongly supports. The central challenge for school leaders remains turning that evidence into consistent, sustained practice across a whole school.


Monitoring implementation outcomes won't solve every challenge. But it gives school leaders something concrete: a way to see clearly what's happening, adapt in real time, and build the kind of evidence base that makes future implementation stronger.

The schools that do this well don't just implement better. They learn faster.



A starting point: Take stock of the data your school is already collecting – surveys, observation notes, assessment results, learning walk records. Ask whether any of it is currently being read through the lens of implementation. In most schools, the raw material for meaningful monitoring already exists. The question is whether it's being used deliberately.


If you're looking for support in building that deliberate approach – or in understanding what your existing student data might already be telling you about implementation quality – we'd love to talk.


Adam Inder


References

