The Dangers and Possibilities of AI in Schools
Before we embed AI into our education system, some questions need to be answered.
By Brad Littlejohn, director of programs and education at American Compass; and Jared Hayden, a policy analyst with the Institute for Family Studies’ Family First Technology Initiative.
Across American schools today, many students have stopped doing homework. They have stopped reading, writing, solving trigonometry problems, and memorizing the names of American presidents. They have ChatGPT for all of that. In this epidemic of AI cheating, some students have no scruples about skipping the work. Others tell themselves that these tools are actually helping them learn faster. And still others hesitate at first but, when they see other kids getting ahead through liberal use of LLMs, figure they’d better join the race before being left behind.
Nations are no different. Last month, President Donald Trump signed an executive order, “Advancing Artificial Intelligence Education for American Youth.” Motivated in part by a sense that AI can somehow improve flagging learning outcomes, and in part by the desire to keep up with China, the order seeks to “ensure that every American has the opportunity to learn about AI from the earliest stages of their educational journey through postsecondary education, fostering a culture of innovation and critical thinking that will solidify our Nation’s leadership in the AI-driven future.”
While the goals in the latter half of that sentence are admirable, it is unclear how they relate to the objectives in the first half. Over the past two decades, the deployment of digital technology into every corner of the American classroom has coincided with a shocking decline in test scores and student mental health. Some think AI tools might reverse that trend, but at present we have no idea how, and the necessary top-to-bottom rethinking of our broken public education system will surely take more than the 90 days the EO gives the Department of Education to formulate its plans.
The timing and methodology of this order are particularly curious given that American parents have recently mobilized against top-down efforts by ideologues and technology companies to impose new educational fads upon children in the classroom. Everything from Zoom-based classes to transgender bathrooms was foisted upon children without parental input, resulting in a wave of anger that coalesced into a movement for parents’ rights in education and school choice. It was this movement that helped spur President Trump’s reelection.
At the same time, we have seen a mass movement to get screens out of schools and turn them back into places for friendship, mentoring, reading, and conversation. Numerous states have passed laws banning smartphones from classrooms or school campuses altogether, in recognition that while technology can aid the educational enterprise, it can also profoundly undermine it. This movement has also begun to unearth the extent to which the proliferation of educational technology (EdTech) since 2010 was not the product of thoughtful educational policy but a clever effort by Silicon Valley to line its coffers at the expense of taxpayers and, the data suggests, of children’s mental development. We can only assume that AI companies will be eager to do the same.
Unquestionably, our education system was already broken before OpenAI came along, and it needs a radical rethink. Perhaps the strategic deployment of AI tools as tutors or examiners might be part of that. Certainly, dedicated classes in AI literacy could help upper secondary students enter the world prepared to use these powerful tools wisely and effectively. But as previous administrations have found, when it comes to educational reforms, good intentions and good outcomes rarely go together. If AI is to be effectively integrated into American schools, it can only be done with the input and oversight of those who actually understand their children’s needs: parents.
Any attentive parent reading this executive order would have a barrage of questions for the White House: How are you going to keep my kid from being exploited or exposed to harmful content through these programs? How are you going to make sure these tools actually help my kids learn, rather than melting their brains? And perhaps most importantly, what do we mean by “learn”? What, after all, is the point of education?
Collateral Harms
AI poses no shortage of threats to kids’ well-being. This is hardly surprising when one considers that many of the companies developing AI tools today have been enabled by federal law to prioritize profit over kids’ online safety. Under existing law, these companies have little legal incentive to protect kids online and virtually no legal liability when they don’t. Under the federal Children’s Online Privacy Protection Act, for example, websites are not allowed to collect data from minors under the age of 13 without parental consent. However, the law lacks teeth, letting websites and platforms determine whether a user is a minor based on simple self-attestation, which can easily be falsified. Then there is the sweeping immunity a number of tech companies enjoy under Section 230 of the Communications Decency Act of 1996, which shields websites from liability for what their users post on their platforms.
Such a landscape has incentivized irresponsibility. Without any real pressure from federal law, companies like Meta and Google have dragged their feet on developing necessary safeguards for minors. Consider Google’s Chromebook, the laptop commonly issued by schools, which has long been notorious for poor content filters and complicated parental controls that allow kids to easily access pornography from their school devices. Or consider Meta’s Instagram, whose use tanks the mental health of teenage girls especially. It was only when parents and legislators mobilized in recent years that these Silicon Valley behemoths, under the threat of legal liability, developed more robust child safety features.
The same evasion of responsibility and data harvesting is happening with their AI products. According to a recent Wall Street Journal article, Meta’s AI-powered chatbots, or “digital companions,” will engage in and even escalate sexual fantasy and romantic roleplay—sometimes with AI-generated voice notes based on the voices of famous actors and actresses—for users the company knows are underage. As in the case of pornography, if companies like Meta can get minors hooked on addictive sexual content, they are more likely to get lifelong return customers, cementing profits for years to come. Given that vulnerable populations are more open to AI romance and friendship, this will not be hard to accomplish. It should be no wonder, then, that on the precipice of the AI age, these companies are trying to secure the same immunity and hegemony they have already enjoyed as they race to develop AI tutors for kids under 13 and lobby Congress for a 10-year moratorium on state-level AI regulation.
Even companies that have developed AI specifically for the classroom have failed to create safe and age-appropriate tools. Take the EdTech company KnowUnity, which is committed to “building the #1 global A.I. learning companion.” In a recent Forbes exposé, however, researchers found that its AI tutor is far from safe. When asked how to reach a weight goal in a timeframe that no personal trainer or dietitian would view as realistic, the bot will encourage teens to effectively starve themselves. When prompted for instructions to make fentanyl, the bot offers a recipe with ingredients measured to the tenth of a gram.
The problems extend well beyond the AI tools and their developers. In recent decades, cyberbullying has become a common feature of the school experience. Such bullying now includes the creation and dissemination of AI-generated “deepfake” nudes based on classmates’ social media photos. According to one report, one in eight teens aged 13 to 17 knows someone who has been the victim of deepfake nudes. Schools have struggled to deal with these situations, leaving many victims without help. Unleashing powerful AI tools in the classroom without establishing the appropriate safeguards will leave school administrators flat-footed when dealing with these new problems and empower bullies to harm their peers in similar ways—or worse.
Any incorporation of AI into the classroom must guard against these known risks. Without robust developer requirements to ensure kids’ safety, as well as mechanisms to hold tech companies accountable when their products fail to do so, the deployment of AI in K-12 public schools will only compound what is already a tech-devastated childhood in this country. Parents do not want that. Likewise, the administration should make every effort to keep the power to determine what is best for kids when it comes to AI in the hands of parents, not tech companies, by giving them the freedom to opt out.
Assessment and Learning Outcomes
For over a decade now, tech companies have been convincing public school districts across the country to incorporate their untested, addictive, and extractive products. Slowly but surely, laptops, tablets, and interactive educational software have eroded once long-standing features of education, such as the physical book, in-person peer interaction, and the teacher. In the school of the screen, performance has plummeted. Since the early 2010s, when digital devices became ubiquitous in American society, global test scores in math, science, and reading have been dropping, so much so that in 2022 the scores of the lowest-performing students hit their lowest point in half a century.
Despite $30 billion of annual spending on EdTech in U.S. public schools, research from the Organisation for Economic Co-operation and Development (OECD) has repeatedly found that more technology is not necessarily better for students. As early as 2012, the OECD was finding that, while moderate tech usage in the classroom could assist with learning outcomes, “students who use computers very frequently at school do a lot worse in most learning outcomes.” Another OECD study, from 2024, found that although the use of advanced technologies in the classroom was only set to grow, “[m]ore advanced technologies in educational spaces will not, however, automatically translate to positive outcomes, as several factors shape the relationship between technology and learning.”
Parents and educators have witnessed a similar trend when it comes to kids’ use of their personal devices. Search engines and platforms like YouTube can deliver previously unimaginable amounts of educational information at the tap of a finger, but teens end up spending the majority of their screen time in a daze of distraction as they binge-watch Netflix shows, doomscroll social media, or share the latest memes. For this and other reasons, a number of states and school boards have adopted policies that ban smartphones in school from “bell to bell.” Where these bans have been enacted, cyberbullying has decreased, test scores have risen (especially amongst underachieving students), and mental health has improved. To be sure, the effects of smartphones are distinct from those of school-sanctioned EdTech. But the general principle holds: more or flashier technology does not automatically lead to better educational outcomes.
What is true of the screen and the smartphone is true of AI: its mere incorporation into the classroom will not automatically solidify American students’ “leadership in the AI-driven future.” Though they may increase output and efficiency on existing tasks, generative AI tools (such as ChatGPT or AI tutors) are far more likely to erode the critical thinking skills needed to drive innovation, as dependence on these tools tends to replace rather than reward effort in the classroom. AI is more likely to augment skills when its use is focused on complex problem solving or specialized research projects that, by necessity, require a degree of proficiency in other subjects or trade skills.
As with any classroom technology, if AI is going to lead to better learning outcomes (and that’s a big if), it will need to be incorporated carefully and deliberately. To do this, the administration must determine, concretely, what success looks like and establish standards and benchmarks by which educators and school districts will know they are moving in the right direction. For example, what kinds of skills and proficiencies should students gain from their AI training? How do we measure success, and what are its indicators? Should we expect an increase in SAT and ACT scores? And what benchmarks must be continually met to justify the use of AI in the classroom?
It is vital that the Trump administration seek to answer these and other questions—preferably before it implements AI in classrooms across the country—if its efforts are to produce the desired outcomes and the concerns of parents are to be adequately addressed.
What Is Education For Anyway?
Answers to many of these questions are unlikely to be forthcoming because, quite frankly, we have long since lost sight of what education is for. Our civilization once understood it as the task of forming human persons in skill and virtue, of leading them out of dependency and into independent agency, so that they could be equipped for meaningful work and service within their communities. During the twentieth century, we came to treat education as little more than information transfer, with the primary purpose of infusing our children with technical know-how so that they could enter the industrial economy as productivity-maximizing tool-users who would help our nation achieve global dominance.
True, we had a vague holdover sense that a liberal education is meant to convey a broad array of general knowledge so that graduates would be “well-rounded” and “informed citizens.” But this holdover has not survived the digital revolution. If Google or Wikipedia or ChatGPT has all the answers, why waste time acquiring and then forgetting them? Thus we have been left with little more than the drive to instill “marketable skills,” leaving a vacuum of meaning that bewildered educators have sought to fill with various experiments in ethical reprogramming.
The deep irony of the past 40 years is that the marketable skills for which we have chiefly been educated are those of the knowledge economy, whose work is now rapidly being outsourced to AI. We pursued a “college for all” pipe dream in which the trades, manufacturing, and other blue-collar pathways were denigrated in favor of coding, marketing, and “critical thinking.” The last of these—a favorite term of educator Newspeak—has almost nothing to do with cultivating true humanity through the honing of rational faculties and moral judgment, and everything to do with equipping graduates to be rapid-response problem-solvers within a dynamic information economy.
But, of course, now AI can do that far better than we can. So what exactly are we educating for?
We honestly don’t know, so in the meantime we resort to upbeat placeholder slogans that sound like they were written by ChatGPT. Trump’s EO says:
“By fostering AI competency, we will equip our students with the foundational knowledge and skills necessary to adapt to and thrive in an increasingly digital society. Early learning and exposure to AI concepts not only demystifies this powerful technology but also sparks curiosity and creativity, preparing students to become active and responsible participants in the workforce of the future and nurturing the next generation of American AI innovators to propel our Nation to new heights of scientific and economic achievement.”
Translation: By fostering AI competency, maybe we can beat China.
The difficulty with this view of education as productivity maximization is that it is a fundamentally posthuman vision. The reality is that we have long since passed the point where machines are more productive and efficient than human beings at most tasks. Generative AI is rapidly and radically shrinking the range of tasks in which human persons will have a comparative advantage. If we continue to tell our twelve-year-olds that what we need is for them to produce results, they will turn to ChatGPT to write their assignments for them every time.
If, however, we wish to help them grow into the virtues that are and will remain uniquely human, embedding AI in every classroom may be the last thing we want to do. Indeed, it will be counterproductive even from a purely economic frame: the jobs of the future are going to be the ones that draw on personal, social, and relational capital, which suggests that if we want to prepare our children for such a workplace, we should be de-digitizing the classroom as much as possible.
This need not entail a kind of educational Luddism. Unquestionably, AI literacy needs to be an integral part of our curricula going forward, especially in the high school years. As AI applications become ubiquitous throughout the economy and society, students will need to learn how they work, what they are good for, and where they should not be relied upon. They should be educated about AI’s dangers and warned against inappropriate uses of these technologies (distributing nonconsensual deepfake pornography is now a federal crime). They should be taught the difference between using AI tools to help them do their own work better and passing off AI-generated content as their own.
All of this suggests that, at the very least, we will need to create space in the curriculum for “AI computer lab” sessions in which students, in a carefully supervised environment, are trained in the capabilities of AI and practice using it to solve various problems and brainstorm new ideas. In the early decades of personal computing, school computer labs served a valuable role in equipping students to type, to master a word processor or a spreadsheet, and to try their hands at graphic design. Perhaps most intriguing is the possibility that the enthusiasm around AI could be channeled to revive the largely defunct shop class, with students learning to use their eyes and hands alongside advanced computers and robots to build, create, and repair. Note, however, that this is a fundamentally different approach to the role of technology in education from the one the executive order appears to envision: the integration of AI throughout the curriculum at every grade level. Given the moral and pedagogical disaster that has attended the Chromebook-ification of the American classroom, we should be highly skeptical of that approach.
That said, there are at least possibilities worth exploring here. To the extent that education does require a certain amount of information transfer, and given the economic constraints of our current teacher-student ratios, perhaps it could be effective to outsource the lecture portion of some classes to AI tutors that could tailor their explanations to a student’s specific learning needs. Students who grasp the basic concept quickly could move on to explore deeper nuances, whereas students who struggle could have the same material amplified and repeated.
A single teacher lecturing at the front of the classroom will always end up over-explaining to some students and under-explaining to others. Arguably, such strategic deployment of AI tutors could play a role in facilitating the kind of “flipped classroom” approach that many classical educators have found most effective, in which students take in new content outside of the classroom, and then use screen-free class time to discuss, debate, and deepen that knowledge. Done right, then, it is at least conceivable that the strategic adoption of AI could offer an opportunity to rethink and replace some of the most dehumanizing features of American education and create more space for humane learning.
This, it strikes us, is the most optimistic scenario for advancing American education through artificial intelligence. As things stand, however, it is not the most likely. And there is certainly no chance of it transpiring unless the initial questions above are given long and careful thought. Nor can we expect such a happy outcome without a bottom-up rethinking of the purposes of education, the role of the teacher, and the kinds of human beings we hope to see our students grow into.
Put bluntly, such a rethinking seems beyond the capacities of the Department of Education as it currently exists, or any federal agency. If it is to be achieved at all, it will only be through the creative collaboration and input of parents and front-line teachers—that is, those who actually know and care for each student as a human being, not as an expendable foot soldier in a technological arms race.