Think of a street where children play, and the speed limit is 20mph.
How do we enforce that limit to try and keep children safe? We could ask every house on the street to constantly check on traffic and hopefully spot an issue at the exact moment it occurs. Or we could use a speed camera to watch over the cars and spot issues proactively.
Not only that, but to help keep everyone safe, we require the cars to adhere to all manner of safety requirements before they are allowed on the road. New innovations can be added to vehicles of course but we ensure they are safe before they are used.
It’s not fool-proof – we need enforcement if cars ignore the speed camera for example – but it’s a system that allows cars to drive on the road, and also allows car manufacturers to innovate and yet hopefully keep people in the street safe.
In EdTech, however, there is no ‘speed camera’ - neither at the design stage, nor out on the ‘street’.
This leaves schools acting as those neighbours – expected to spot cyber security or data protection issues they have little hope of seeing or understanding – while trying to keep children safe.
Unfortunately, according to a government report earlier this year, they are mostly unable to do this: 41% of primary schools and 70% of secondary schools in the UK experienced a cyberattack or breach in the past 12 months.
This is a shocking statistic and underlines why the education sector needs to be aware of the threat posed by cyber criminals.
Much of the reason for this is the proliferation of EdTech in education, accelerated by the pandemic, which means there is a great incentive for cyber criminals to target EdTech platforms that now collate huge amounts of data on children from a young age.
So what can schools – and perhaps most importantly the EdTech providers they rely on – do about this?
To address this question, I recently conducted a study that first reviewed existing research between 2012 and 2022 investigating cyber security and EdTech in education and how debates in this area have been formed.
Strikingly, the main narratives around cyber security in education focused predominantly on the end-user, and far less on what the industry providing the digital products should also be responsible for.
Some have called this ‘liability dumping’ – shifting the responsibility of cybercrime on to end-users. While end-users should build resilience, awareness, and the right skills in cybersecurity, they are not the only stakeholders in a highly digitalised educational ecosystem.
We can see this in much of the language used by the Department for Education’s guidance to schools too.
Take, for example, the recently updated ‘Digital and Technology Standards’, which provide a baseline for schools and colleges to follow when it comes to their use of EdTech tools.
These include requirements to “train all staff with access to school IT networks”, “check security for all applications downloaded”, set up contingency plans, conduct data protection impact assessments and so on – standards that the DfE expects schools either to have already met or to meet “as soon as possible”.
That’s a lot to put on schools, which are grappling with everything from financial pressures and sustainability initiatives to adhering to the new Keeping Children Safe in Education guidance – which also contains non-digital safety requirements – alongside the responsibilities of actually teaching and learning.
What’s more, although the advice above is sound and worth doing if it can be incorporated into workloads, research has shown that 80% of applications in government and educational institutions use old codebases and have a high flaw density (the number of confirmed bugs in a software application).
This is the highest rate measured and far higher than in other sectors such as financial services, retail, and technology, while 23% of these applications contain high-severity flaws.
In short, even if schools could do all the vetting needed, there would almost invariably be products that enter the market without the necessary security standards in place.
This opened up the second strand of my research: the standards and external validation – or lack thereof – governing how EdTech providers ensure their products are secure, and thereby give schools more confidence that the digital tools they buy meet high security standards (among other equally important prerequisites, such as whether the product benefits pedagogical processes).
However, a series of in-depth interviews with EdTech providers from the UK and abroad, which aimed to gauge what cybersecurity standards or frameworks they adhere to, made clear that there is a lack of guidance for them in this space.
They cited the numerous overlapping standards, such as CyberEssentials and the NIST CSF, that make it hard for firms to know what to implement – let alone for schools to know what to look for as a trusted kitemark of cyber security from any provider.
While some firms show due diligence, the lack of strict mandates imposed on EdTech businesses to implement any cybersecurity controls means that for some, especially start-ups, cyber security frameworks tend to be seen as “tedious” and “bureaucratic”.
What’s more, in a fast-paced industry, the costs and resources required to meet cybersecurity standards are typically high, which makes it near impossible for early-stage companies to level up.
Moreover, one could argue that regulatory measures can stifle innovation, which is often cited as a reason to avoid overly dogmatic standards for companies. When it comes to protecting children’s data, however, this seems a lopsided argument.
I spoke to several international cybersecurity standards organisations such as IASME Consortium’s CyberEssentials (in the UK), the National Institute of Standards and Technology Cyber Security Framework and the National Initiative for Cybersecurity in Education (in the US) about this situation too.
They generally acknowledged that their frameworks don’t specifically address K-12 education, which is why they are independently looking to update them.
However, it is anyone’s guess what those updates will look like – and whether key stakeholders such as teachers, students and parents will have any say in their design.
This is all quite gloomy perhaps – but there are signs of hope too.
Many EdTech companies acknowledged it would be a good idea to have a dedicated cybersecurity standard to help bring clarity to the market.
Partly this is driven by economic considerations: having verified cyber security credentials would make it easier to market to schools. It could also help offset the costly impact of any future cyber breach.
It would likely also lead to better products by ensuring new platforms and upgrades were not rushed out but instead developed carefully. Of course the big question is who would oversee these rules?
Many supported the proposal to set up “a national institution or a government-supported private entity” that could create security certification for EdTech products, or a database of such secure-by-design products, to facilitate EdTech procurement.
The other positive is the growing awareness among the education community (teachers, students, and parents) of the risks ensuing from the rapid digitalisation of education.
This is putting more pressure on EdTech vendors to ensure they have the right cyber security standards in place – and will set those that do apart from those that do not.
Ultimately, while we may not see the creation of EdTech ‘speed cameras’ immediately, it is clear there is growing awareness among both schools and vendors that we need greater oversight of the tools and platforms used by millions of children in the UK and worldwide.
As one US Edtech provider interviewed for the study said: “Marking one’s own homework is not ideal”.
This article was first published in The Times Educational Supplement.