
Ethical AI for the 99%: why inclusion can’t be an afterthought

  • Editorial
  • Jun 13
  • 3 min read

In a world where just over 3,000 billionaires exist (Bloomberg 2025) while more than 240 million children with disabilities (UNESCO 2025) live in systems not designed for them, we must ask: who is AI really going to serve? As AI technologies race ahead, they risk embedding and even amplifying existing inequalities — especially in education.


The myth of the minority


'Minority' is a misnomer. When we include children with disabilities, migrant youth, socio-economically disadvantaged learners, and LGBTIQ+ students, we're not talking about a small subset. They are the majority — yet they remain marginalised by design.


  • In Europe, 65% of students with disabilities attend mainstream schools, but support is patchy (ESEP 2022).

  • In Asia, India’s NEET rate (youth Not in Education, Employment, or Training) is 27% — triple the regional average. China has 9 million left-behind children, who face unique educational and emotional challenges (ESCAP 2018).

  • Many Asia–Pacific countries exceed the SDG target of 13.9% (IOM 2022).

  • Globally, LGBTIQ+ students report alarmingly unsafe learning environments, and systemic data gaps obscure the scale and impact of these inequalities (UNESCO n.d.).


Ed-tech and AI: solution or smokescreen?

AI in education (AIED) brings promises like saving teachers time, personalising learning and improving outcomes. Tools like predictive analytics, automated grading, and adaptive learning systems claim to revolutionise classrooms.

But do they deliver?


Evidence suggests otherwise. The OECD’s PISA results (2023) show no significant improvements from digital tools. Beyond weak outcomes in literacy and numeracy, a significant 43% of youth in the EU still have low or no digital skills (Eurostat, 2024).


These systems are not neutral. Many are shaped by corporate interests and embedded in complex infrastructures involving investors and shareholders; even for the smallest start-ups, the drive is often to scale up in the hope of attracting further investment.

As AI weaves deeper into education systems, this illustration exposes the data pipelines feeding corporate interests, raising urgent questions: In a world of vast inequality, who does AI really serve — and who gets left behind? (Illustration: V Hillman)

AI and children’s rights: a critical intersection

Children have rights to privacy, education, protection from exploitation and manipulation, play and rest, and the freedom to develop independent thought and personhood. These rights must be protected from algorithmic bias and digital surveillance — yet AI in classrooms often proceeds without evidence, without understanding of how these systems work, or even without awareness of the potential short- and long-term risks.


Governance isn’t optional — it’s urgent

Ethical AI is not just about better design. It’s about co-governance: shared responsibility between students, educators, policymakers, and communities.

We must ask:

  • What do students need to know about the AI tools they’re exposed to?

  • What should educators disclose about how AI is used in their teaching?

  • How can all stakeholders push for transparent, inclusive, and enforceable AI policies?

Governance isn’t just bureaucracy — it’s a defence of education’s core values: dignity, inclusion, sovereignty, and the public good.


Beyond that, governance doesn’t mean that students and educators should fend for themselves — industry has a massive responsibility and obligation to uphold children’s rights and must be held accountable for the impact of its technologies. Governance must go beyond local or institutional efforts to include meaningful regulation, ethical standards, and enforcement mechanisms for the corporate actors who shape AI systems.


