
Protecting children’s privacy rights in the age of AI
When COVID-19 forced school closures in 2020, educators and parents rushed to adopt EdTech platforms so that students could continue learning from home. Since then, researchers and privacy advocates have uncovered an alarming reality: many education technology companies collect more student data than necessary, track children’s behavior, build detailed profiles, and in some cases sell the information to third parties. What began as an emergency response has hardened into a surveillance infrastructure that violates human rights and is embedded in the everyday educational experiences of an entire generation.
The rapid integration of AI into the classroom has fundamentally changed how education operates. Governments, school systems, and private organizations increasingly position AI as essential preparation for an “AI future” and are directing significant public resources toward these technologies. However, as human rights organizations and independent researchers have documented, the adoption of AI in education is often occurring without adequate safeguards, exposing children and marginalized learners to serious rights violations.
It is important to recognize the opportunities that AI offers in advancing the rights to education and inclusion. AI can support the right to education as recognized by international law and embodied in documents such as the United Nations Convention on the Rights of the Child. When carefully designed, AI systems can tailor instruction to the needs of diverse learners, make adaptive content accessible to students with disabilities, and help teachers identify learning gaps early. For example, learner-centered AI could provide targeted support to students who are struggling with certain concepts, reducing dropout rates and promoting inclusion. Teachers can leverage AI tools to reduce administrative burden and free up more time for meaningful interactions with students. Research and policy frameworks, including the OECD Working Paper, highlight that AI can contribute to equity and inclusion if its implementation is accompanied by thoughtful policies that address access, bias, and transparency.
However, this huge potential for AI in education must be seen within the broader context of three significant human rights implications:
1. Violation of children’s right to privacy through systematic surveillance.
2. Commercial use of student data.
3. Lack of transparency and accountability in how EdTech systems operate.
Privacy, surveillance, and data utilization
As classrooms digitize, EdTech’s promise collides with growing concerns about an unintended byproduct: student surveillance. One of the most well-documented areas of harm is children’s right to privacy. A landmark 2022 study by Human Rights Watch (HRW) found that governments in 49 countries endorsed or required EdTech products that systematically monitor children while they learn online. HRW found that 89% (146 out of 164) of government-recommended online learning tools engaged in data practices that endangered or violated children’s rights. By contrast, HRW also identified more than a dozen EdTech sites operating without any tracking technology, in countries as diverse as France, Germany, Japan, and Argentina. These examples confirm that education platforms can thrive without compromising user privacy; the determining factor is simply whether the organization chooses to prioritize it. HRW’s investigation concluded that governments failed in their obligations to protect children’s rights to privacy, education, and freedom of thought during the rollout of pandemic-era learning platforms, a failure that occurred precisely when children were most vulnerable and most reliant on digital tools for learning amid a global crisis.
Many of these EdTech products track students’ activities even outside of school hours and transfer the resulting data to advertising companies without meaningful consent or disclosure. They monitor, or have the ability to monitor, children covertly and without the knowledge of the children or their parents, collecting personal data such as who children are, where they are, what they do in the classroom, who their family and friends are, and what kinds of devices their families use.
The rush for technical fixes outweighed rights considerations, creating a surveillance infrastructure that persists to this day. From a rights perspective, these practices breach multiple interrelated protections: the fundamental right to privacy, the principle that all decisions affecting children must be guided by the child’s best interests, and the right to an education free from exploitation. Pervasive monitoring during formative years can normalize constant surveillance and shape how young people understand their relationship to privacy, autonomy, and authority in ways that extend far beyond the school walls.
Misuse of student data by commercial actors
In 2022, researchers at Internet Safety Labs found that up to 96% of apps used in U.S. schools shared student information with third parties, and 78% of those shared this data with advertisers and data brokers. Given that children are a vulnerable group, their data, including biometric data, must be treated with the highest level of protection. International human rights law places the primary responsibility for protecting children’s rights on governments, even when the technology is developed and operated by private companies. However, many EdTech products include technology that tracks children’s online behavior in a variety of contexts, collecting detailed information about who children are, where they are, and how they are learning, while regularly sharing this data with third parties within the ad technology ecosystem, often without explicit consent or parental knowledge. This practice undermines children’s rights to privacy, access to information, and freedom of thought, and turns educational environments into sites for commercial data extraction.
Ad trackers built into education platforms send student data to a network of third-party entities, including marketing platforms, analytics companies, and data brokers, where it is compiled into detailed behavioral profiles used for commercial targeting. Children’s learning activities thus generate commodified data streams feeding an advertising ecosystem far removed from any educational purpose. A striking example comes from Brazil, where Estude em Casa, a public online learning platform in the state of Minas Gerais, exemplified this troubling intersection of education and commercial surveillance. HRW documented that the website, used by children across the state, sent student activity data to third-party advertising companies through multiple ad trackers, third-party cookies, and Google Analytics “remarketing audiences.” Children’s learning behaviors were thereby fed directly into the commercial advertising ecosystem, far beyond their intended educational purpose. After HRW publicly documented these privacy violations in reports published in late 2022 and early 2023, the Minas Gerais State Education Secretariat removed all ad tracking from its platforms in March 2023, underscoring the urgent need for stronger safeguards to protect children’s digital privacy rights.
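To make the mechanism concrete, the sketch below shows one simplified way a researcher or school auditor might enumerate the third-party hosts that a learning platform’s page loads resources from and flag known ad-tech domains. It is illustrative only: the URL and the tracker list are placeholders rather than findings from the reports discussed above, and a static scan of this kind misses trackers injected at runtime, which is why real audits typically rely on browser-level instrumentation.

```python
# Illustrative sketch: statically list third-party domains referenced by a
# learning platform's landing page and flag hosts matching known ad-tech suffixes.
# The URL and the tracker list below are placeholders, not audit results.
from urllib.parse import urlparse
from urllib.request import urlopen
from html.parser import HTMLParser

KNOWN_AD_TECH = {
    "doubleclick.net",        # Google ad serving
    "google-analytics.com",   # analytics / remarketing audiences
    "facebook.net",           # Meta pixel
    "adnxs.com",              # Xandr/AppNexus
}

class SrcCollector(HTMLParser):
    """Collect absolute src/href URLs from script, img, iframe, and link tags."""
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe", "link"):
            for name, value in attrs:
                if name in ("src", "href") and value and value.startswith("http"):
                    self.urls.append(value)

def audit(page_url: str) -> None:
    first_party = urlparse(page_url).hostname or ""
    html = urlopen(page_url, timeout=10).read().decode("utf-8", errors="replace")
    parser = SrcCollector()
    parser.feed(html)

    for url in sorted(set(parser.urls)):
        host = urlparse(url).hostname or ""
        if host and not host.endswith(first_party):
            # Mark third-party hosts that match a known ad-tech suffix.
            flag = any(host.endswith(t) for t in KNOWN_AD_TECH)
            print(f"{'[AD-TECH] ' if flag else ''}{host}")

if __name__ == "__main__":
    audit("https://example-learning-platform.org")  # placeholder URL
```

Even a crude listing like this makes the point of the preceding paragraph visible: every flagged host is a destination to which a child’s page visit is being reported, outside the educational relationship.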
Lack of transparency and accountability
AI has moved far beyond a supporting role in education and is now used at every level of the school system. Proponents justify this expansion by appealing to efficiency, safety, and personalized learning. Human rights concerns arise when these systems are mandated, operate without transparency, require extensive data collection, and perform unreliably, especially when applied to young people who cannot give meaningful consent to their use.
High-profile enforcement actions in the United States in December 2025 demonstrate how a lack of transparency and accountability by EdTech companies can deeply violate children’s rights. Federal and state regulators finally took action against education technology provider Illuminate Education after a 2021 cyberattack compromised the personal information of more than 10 million students, including grades, health details, and other sensitive records. The Federal Trade Commission and the attorneys general of California, Connecticut, and New York found that the company had misled school districts about its cybersecurity safeguards, failed to remediate known vulnerabilities, and delayed notifying schools and families about the breach. The resulting settlement required strengthened security measures and the deletion of unnecessary data, and imposed a $5.1 million fine. Yet the settlement provides little meaningful redress for affected students and families. It illustrates how enforcement typically arrives only after harm has already been done, and how commercial actors are allowed to amass vast amounts of student data while externalizing the consequences of their failures onto children, parents, and public authorities.
Moving forward: Building EdTech systems with rights-based AI
In 2026, as the integration of AI into education continues to accelerate, the need for an inclusive governance framework that protects human rights has never been more urgent. AI in education need not conflict with human rights principles, but current practices clearly do.
Aligning the implementation of AI in education with human rights standards requires fundamental reforms in both government and the private sector. International organizations are actively developing guidance for the responsible use of AI. As part of UNICEF’s AI for Children project, the 2025 Guidance on AI and Children sets out 10 requirements for child-centered AI, including regulatory oversight, data privacy, non-discrimination, safety, transparency, accountability and inclusion. These principles aim to ensure that AI systems uphold the rights of children and that technology is designed and managed to protect and benefit learners. These safeguards are essential to meeting the obligations of States and the private sector under international child rights and education law.
A rights-based approach requires reprioritization. Rather than casually experimenting on children by introducing unproven technology into the classroom, we must start from what children need and what protections their rights require. Innovation should be measured not by technological sophistication or promises of efficiency, but by its demonstrated ability to improve the quality of education while respecting the rights and dignity of children. Without this shift, AI risks becoming less a vehicle for educational empowerment and more a mechanism that harms most those children who are already the most vulnerable and marginalized within the education system. For those of us who believe that children’s rights are fundamental, we must boldly challenge claims about the ‘potential’ of AI and demand concrete evidence and strong rights-based regulation to shape how these systems are developed, ensuring they are ethical, effective, and respectful of children’s rights, and to address known and emerging risks.
