New Tools, Same Endpoint: Why Technology Won’t Fix a System Designed Not to Change
- EDDS
There is a persistent belief in education—and in many sectors—that new technology will finally unlock what older approaches could not. Personalisation. Deeper thinking. Inclusion. Creativity. A more human system. The promise is compelling.
If we sketch the current dominant model of education, something more structural, and more uncomfortable, comes into view: technology is being integrated into a system shaped by a defined logic, driven by a set of pressures, and oriented towards a clearly defined, and notably narrow, endpoint.

If we step back, the current education system is engineered around a set of core principles:
standardisation
pacing and sequencing
performance measurement
accountability through metrics
competition and ranking
These are its operating principles, with endpoints anchored in high-stakes, standardised assessments, whether GCSEs and A levels in the UK, the SAT in the United States, the Gaokao in China, or national board examinations such as India's CBSE, alongside international benchmarks like PISA (the OECD's Programme for International Student Assessment, which compares 15-year-olds' performance in reading, mathematics, and science across countries). It is into this environment that new technologies arrive.
AI tutors, adaptive platforms, analytics dashboards, automated marking systems—each arrives with the language of transformation. Yet once inside the system, they tend either to reinforce its existing dynamics or to be rapidly configured to fit them. Platforms such as Century Tech, for example, offer adaptive pathways that personalise learning sequences, but largely in relation to curriculum coverage and exam performance, which align closely with GCSE and A-level outcomes.
Tools like Arbor Education translate attendance, behaviour, and attainment into dashboards that support accountability and performance tracking, reinforcing the same metrics that structure the system.
Similarly, widely used platforms such as Sparx Learning optimise practice and homework completion through data-driven routines, but do so in ways that are tightly coupled to standardised assessment expectations.
In each case, the technology increases efficiency, precision, and scale, but it ultimately continues to channel activity towards the same endpoints the system has long prioritised.
If personalisation, data, and automation are all pulled toward test performance, targets, and standardised assessments, how can we expect technology to deliver different outcomes when the system itself is fixed on such a narrow endpoint?
The Veneer Problem
This is the core issue: technology often acts as a veneer over deeper structural problems. It creates the appearance of progress while leaving underlying dynamics intact. Worse, it can legitimise those very dynamics.
If a system is designed to rank, sort, and exclude (for example, progression beyond GCSEs in England is structurally constrained by performance at that stage), and new tools make that process more efficient, more scalable, and more data-driven, then we have not disrupted the system; we have intensified it.
So the real question becomes:
If a system is designed to produce a specific end point, how much should we realistically expect any tool to change that outcome?
Not much.
The Harder Truth: It Can Get Worse
There is a tendency to frame this as a failure of technology to live up to its promise. But that is too soft. The reverse is equally, if not more, important: these technologies can actually amplify what the system already does poorly. This is visible in several ways:
Narrow curricula can become narrower through digital optimisation
Teaching to the test can become hyper-targeted, increasingly mediated through continuous, device-based interaction
Surveillance can become ambient and normalised, extending beyond the boundaries of the school
Students can be reduced further to data points, predictions, and risk scores, with long-term negative consequences for their future life chances
Moreover, the feedback loop reinforces itself: the endpoint (GCSEs, A-levels, rankings, etc.) drives the system, and the system justifies the endpoint.
Technology doesn’t break the loop—it only tightens it further.
Scaling What We Already Value
In a system already organised around measurable outputs, efficiency, and scale, AI and edtech products align seamlessly with its underlying logic. Their value is quickly expressed in terms the system already recognises:
More content, faster
More assessment, automated
More feedback, standardised
More signals to feed the system
The issue, then, is less about what AI is capable of, and more about how those capabilities are taken up. When a system privileges output over meaning, replication over originality, and metrics over understanding, AI does not disrupt these priorities—it extends and intensifies them.
This shifts the focus of the conversation. The question is no longer whether a given technology is inherently “good” or “bad,” but whether the system into which it is introduced is equipped to use it in ways that align with broader educational aims.
From here, a more fundamental concern emerges:
Why does the auditing and evaluation of what enters this system remain so limited?
Without such scrutiny, the introduction of new technologies risks becoming an exercise in scaling existing assumptions. Embedded within these tools are particular definitions of learning, specific conceptions of success, preferred forms of measurement, and often unexamined biases in both data and design. As these become operational at scale, they recede from view and shape practice in ways that are increasingly difficult to detect, question, challenge, or refuse.
If different outcomes are the goal, then the focus must turn to the system itself:
What is the system designed to produce?
What counts as success within it?
What pressures shape behaviour at every level?
How do technologies align with—or intensify—those pressures?
Until these questions are addressed, each new tool risks becoming a more efficient means of arriving at the same destination—or accelerating movement towards outcomes that warrant far greater scrutiny. If we are prepared to rigorously evaluate students, teachers, and schools, a final question remains:
Why is the same level of scrutiny not applied to the technologies, data, and assumptions that are now shaping the system itself?