What your school’s data isn’t telling you
- Pivot Professional Learning


Every school collects data. Turning it into meaningful change in classrooms is where the work becomes more complex.
The issue is rarely access to data – it is what happens next. Data gets reviewed, discussed, sometimes even celebrated – but the link between those conversations and day-to-day instructional decisions is rarely made explicit. Over time, this creates a quiet form of fatigue. The data is there, but it stops feeling useful.
Used deliberately, data becomes evidence – something that sharpens decisions and drives improvement. Used poorly, it produces noise. This article is about the difference.
What this looks like in practice
A pattern I see often in schools is that the data raises a question, but doesn’t yet point clearly to action. Left unaddressed, this is where data conversations tend to stall – they stay descriptive, rather than supporting change.
A school I worked with noticed something odd in their wellbeing data. Most domains were strong. But belonging was lower – particularly for Year 6. On its own, that finding didn’t explain much. But it raised a useful question.
When we looked more closely, a small number of survey items stood out – including “I look forward to coming to school.” That pointed us toward something more specific, but still incomplete.
The next step was not more analysis. It was more understanding. Teachers spoke with students. Leaders gathered informal feedback. The goal was to understand what sat behind those responses, moving from reviewing results to investigating meaning. This is the point where data stops being descriptive and starts becoming useful.
Tools like student surveys are most effective when they support this kind of focus. Not when they expand the number of things you feel responsible for tracking, but when they help you stay anchored to what matters most right now. The aim is not comprehensiveness. It is using the right data to answer the questions that matter.
The need to use more than one source
No single data source tells you what is really happening in a classroom. A student survey shows one perspective. A classroom observation shows another. Assessment data shows something else again. The mistake is treating any one of them as sufficient.
Triangulation – using multiple sources together – is what turns data into something you can actually act on. The value is not in having more data; it is in seeing the same question from more than one angle. Without that, you are making decisions on partial information.
For example, students might report low clarity around learning goals, while classroom observations suggest those goals are being communicated. That gap is not something to resolve quickly. It’s a signal to look more closely. Are goals being stated but not understood? Are they visible but not meaningful to students? Without multiple sources, that question never surfaces.
A similar pattern can appear with engagement. A survey might suggest students feel highly engaged, while behavioural data shows increasing off-task behaviour or declining homework completion. Neither is necessarily wrong. They are capturing different aspects of the same experience.
Bringing those sources together sharpens the question. Are students interested in the learning, but struggling to sustain effort beyond the lesson? That distinction matters, because it leads to a different response than if both sources pointed to disengagement.
Effective data triangulation asks the following questions:
What does each source tell me?
What does it not tell me?
And what does the combination suggest?
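For teams that keep their sources in a spreadsheet or script, those three questions can be sketched as a simple convergence check: line up what each source says about the same focus area, and flag where they diverge. Everything below is illustrative – the focus areas, source names, scores, and the divergence threshold are assumptions made for the sketch, not part of this article.

```python
def triangulate(sources, threshold=0.2):
    """Compare paired readings from multiple sources for each focus area.

    sources: dict mapping a focus area to a dict of {source name: score},
             with scores normalised to a 0-1 scale (an assumption here).
    Returns a dict mapping each focus area to "converges" or "diverges",
    depending on whether the spread between sources exceeds the threshold.
    """
    result = {}
    for area, readings in sources.items():
        spread = max(readings.values()) - min(readings.values())
        result[area] = "diverges" if spread > threshold else "converges"
    return result


# Illustrative data only: survey and observation broadly agree on goal
# clarity, but diverge on engagement - the signal to look more closely.
data = {
    "goal_clarity": {"survey": 0.45, "observation": 0.50},
    "engagement": {"survey": 0.80, "behaviour_log": 0.40},
}
print(triangulate(data))
# -> {'goal_clarity': 'converges', 'engagement': 'diverges'}
```

The point of the sketch is not the arithmetic but the habit: divergence is not an error to be resolved, it is the prompt for the closer look described above.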
Different types of information – and why they both matter
Different sources also give you different types of information. Numbers tell you where to look. A low survey score, a drop in attendance, a pattern in assessment data – these signal that something may need attention. But numbers rarely explain themselves. That’s where conversations, observations, and student voice come in. They help you understand what is actually driving the pattern you’re seeing. Both matter: one points you to the issue; the other helps you make sense of it.
Cutting through the noise
What schools choose to pay attention to determines what improves, and what doesn’t. Schools are not short on information; they are surrounded by it. The challenge is deciding what actually matters – and what can be safely ignored. Without that discipline, data collection expands faster than a school’s capacity to interpret it.
The solution is disciplined focus. Data use should be anchored to your current priorities. If your school is working on a specific instructional practice, the question is simple: which data sources will tell us most directly how that practice is landing?
Not every available metric is relevant to that question. Trying to monitor everything usually means nothing gets examined closely enough to change practice. This applies at every level of the school. At a diagnostic stage, a broader view makes sense. But once priorities are set, the work is to narrow the lens deliberately and consistently.
At the teacher level, this is most pronounced. No teacher can meaningfully improve multiple aspects of their practice at once. Progress comes from focusing on one or two areas, using data to inform that focus, and returning to it with a clear question.
At a team level, the same principle holds. The most productive data conversations are not those that review everything, but those that centre on a small number of shared priorities. When a team agrees on what matters, the conversation shifts from reviewing results to investigating practice.
Practical takeaways
Triangulate deliberately, not exhaustively. Use two or three sources that speak directly to your current focus. Look for convergence, but treat divergence as equally informative.
Use numbers to locate the issue. Use conversations and observations to understand it.
Start with your priority. Then decide which data is actually relevant to it. Resist the pull to monitor everything.
Focus at the level of practice. Support teachers to work on one or two areas at a time, using data to guide and refine that focus.
Build shared ownership. The strongest data conversations happen when teams agree on what they are trying to improve, and use data to explore that question together.
A final thought
Schools often already have the data they need. What distinguishes schools that improve is how deliberately that data is used: connected to clear priorities, interpreted through multiple sources, and tied to action. It is about deciding what to pay attention to, creating the conditions for teachers to engage with data meaningfully, and building a culture where data supports professional growth rather than judgement.
When this works well, it is rarely accidental. It is the result of clear priorities and deliberate choices about how data is used. Getting this right does not necessarily require more data – it requires sharper thinking about the data you already have.
A starting point: Identify one current school priority.
Then ask:
What data are we already collecting that speaks directly to this?
Are we drawing on more than one source?
Are we using both patterns and explanations?
Those questions are a more useful starting point than any new data collection process.
If you’re trying to solve a specific instructional challenge and want to understand what your data is actually telling you — or what to do next — that’s the work we support schools with.


