It’s 2025, and generative AI is suddenly everywhere. In less than a year, tools like OpenAI’s ChatGPT went from novelty to mainstream, reaching 100 million users just two months after launch – a record adoption speed (reuters.com). Chatbots now draft emails, write code, even help with daily tasks. Unsurprisingly, this AI surge has swept into schools as well. From high schoolers asking ChatGPT to explain a tricky concept to teachers using AI to plan lessons, the education world has been caught in a whirlwind. This global moment demands our attention: we face an urgent need to act, reflect, and shape how AI is used in education before it shapes us.
OpenAI’s ChatGPT interface gained 100 million users within months of launch. Now, such AI tools are rapidly appearing in classrooms worldwide.
The AI Surge Hits the Classroom
Walk into a classroom today and you may find AI quietly at work. Students are already tapping into generative AI for help with assignments – by the end of 2024, about a quarter of U.S. teens said they had used ChatGPT for schoolwork, double the share from the year before (pewresearch.org). Still more have tried it informally, using it to brainstorm ideas or get unstuck on homework. On the other side of the desk, teachers too are experimenting with AI. In fact, a recent poll found more teachers than students were using these tools, as teachers leverage AI to save time on routine tasks (weforum.org). One high school teacher reported that an AI teaching-assistant app enabled him to “plan a year’s worth of lessons” in a single summer – giving him back precious time (weforum.org).
The initial reaction in many schools, however, was anxiety. Early on, some districts blocked access to ChatGPT, fearing it would enable rampant cheating or provide incorrect answers. New York City, the largest U.S. public school system, famously banned ChatGPT in January 2023 – only to reverse course a few months later. Schools Chancellor David Banks admitted the ban had been a “knee-jerk fear” response (businessinsider.com) and wrote that New York City schools would now “encourage and support our educators and students as they learn about and explore this game-changing technology”. Similar turnarounds happened elsewhere as educators realized the genie wasn’t going back in the bottle. Instead of outright prohibitions, the focus is shifting to setting ground rules and guidelines for AI use.
Meanwhile, a wave of new education technology (EdTech) tools has arrived, integrating AI into learning. Adaptive learning apps were already personalizing practice questions for students, but now generative AI can take it further – holding a dialogue with a learner or creating new content on the fly. For example, the non-profit Khan Academy introduced an AI tutor named Khanmigo, which has been piloted with over 55,000 students and teachers across more than 50 school districts (forbes.com). Khanmigo can coach a student through a math problem step-by-step or play the role of a historical figure in a conversation, under a teacher’s supervision. Early literacy is seeing AI help as well: elementary schools are trying AI-powered reading assistants (tools like Amira, EarlyBird, and Bamboo Learning) that provide one-on-one coaching to young readers and pinpoint their struggles (k12dive.com). Even popular apps outside school are jumping in – Duolingo uses OpenAI’s GPT-4 to let language learners practice conversation in real time. In short, nearly every corner of education is exploring how AI might enhance teaching and learning.
This rapid influx of AI brings excitement and possibilities, but also plenty of questions and tensions. Yes, an AI tutor can give a struggling student instant help at 9 PM when the teacher or parent might not be available. Yes, a teacher with an AI assistant can generate differentiated lesson materials for a class of 30 in seconds, a task that used to take hours. These are the hopeful promises – greater efficiency, personalization, and extended learning opportunities. On the flip side, there are real concerns. If a student’s essay is partly written by ChatGPT, what did the student actually learn? If an algorithm is guiding a child’s reading practice, what subtle biases or gaps might it have? Teachers worry about academic honesty, and many have resorted to AI-powered plagiarism checkers to detect AI-written work – even though none of these detectors is reliable, and policing students this way strains the relationships at the heart of education. Parents are left wondering how much “screen time” now includes an AI interlocutor and whether that’s healthy. School leaders must decide how to integrate these tools in a way that aligns with their educational values. All of this is happening at breakneck speed – which is why a proactive, intentional, and thoughtful approach is so urgent. This isn’t just about keeping up with the latest tech fad; it’s about steering a profound shift in how children learn.
More Than Test Scores: What’s at Stake
The conversation about AI in education often starts with practical outcomes like efficiency and test scores. Can an AI make grading faster? Will it boost reading levels or math scores? Those are valid questions, but the stakes go much deeper. At heart, what’s really on the line are the human elements of education – the things that don’t show up on a spreadsheet. Here are a few of the key dimensions at stake:
Human connection and empathy: Education is fundamentally a human endeavor. Children learn through relationships – the encouraging smile of a teacher, the camaraderie of classmates, the feeling of being seen and heard. No AI, however advanced, can replicate the mentorship and emotional support that great teachers provide. Preserving the teacher-student bond and peer-to-peer learning is crucial, even as AI enters the mix.
Critical thinking, creativity, and integrity: One might assume that if AI can supply answers or write essays, students won’t need to think as much. This is a dangerous trade-off. In an age of AI, learning how to learn and how to question becomes even more important. We don’t want a generation that just accepts AI outputs uncritically. Schools must ensure students keep honing their ability to analyze, doubt, and create. UNESCO has warned of the danger that over-reliance on AI could “compromise the development of intellectual skills” in learners. In other words, if students outsource all thinking to a chatbot, their own cognitive muscles may atrophy. We need to teach students with AI, not just have AI think for them. That also means instilling values of academic integrity and original thought. It’s one thing to use AI for brainstorming or research – indeed, 54% of teens say it’s fine to use ChatGPT for researching a new topic (pewresearch.org) – but it’s another to have AI do your assignment wholesale. Education has always been about more than getting the “right answer”; it’s about the process of reasoning and creating. AI must be integrated in a way that fosters critical thinking and creativity, not replaces it.
Equity and inclusion: Will AI in education be the great equalizer or will it widen existing gaps? This could go either way, depending on how we handle it. On one hand, AI tutoring could provide personalized help to students who might not otherwise afford a human tutor, potentially democratizing academic support. On the other hand, advanced AI tools might end up concentrated in well-funded schools or in the hands of students with personal devices and fast internet. There’s a real risk of a new “AI divide” emerging between those who have access to these tools and know how to use them, and those who don’t. Moreover, AI models themselves can carry biases based on the data they were trained on. UNESCO has identified bias as a key issue, urging that we “level the AI playing field” so that generative AI supports all cultural and linguistic backgrounds (weforum.org). If we’re not careful, an AI that hasn’t been tuned for diversity could, for example, misunderstand non-standard English, under-serve students from certain communities, or reflect only a narrow worldview. What’s at stake is making sure every child benefits from AI – not just those in advantaged settings – and that AI content reflects a rich diversity of perspectives. Equity in the age of AI means actively working to include underserved schools in the AI revolution and insisting on fairness and inclusivity in the technology itself.
Long-term societal impact: Education doesn’t just prepare individuals; it shapes societies. The students of today will enter a workforce and civic life where AI is commonplace. How we integrate AI in schools will influence the kind of thinkers and citizens they become. If we get it right, we could empower young people to use AI as a tool for creativity, problem-solving, and innovation in their future careers. If we get it wrong, we might produce graduates who are either blindly dependent on AI or, conversely, fearful and ignorant of it. Beyond jobs, consider the civic implications: discerning truth in an era of AI-generated information, or understanding the ethics of AI in society. The introduction of AI into learning environments raises big-picture questions about trust and agency. We must ask: What happens to a society’s collective knowledge and skills if automation takes over too much? Will we still have experts in critical fields if everyone leans on AI for answers? As one global education report put it, AI tools should never “undermine, conflict with or usurp us” – the transformation of education must be guided by a human-centered approach at every step (weforum.org). In the long run, the legacy of AI in education will be measured not by how many algorithms we deploy in classrooms, but by whether we managed to amplify our human strengths – curiosity, compassion, wisdom – while using those AI tools.
Shaping the Future: A Human-First Approach
So how do we move forward, knowing both the promise and the peril of AI in education? The key is to be proactive and human-first in our approach. Instead of letting technology “just happen” to schools, we – teachers, parents, policymakers, and students – should actively shape how these tools are adopted. Here are some ways we can guarantee AI serves learning goals and human values:
1. Establish clear guidelines and ethics. Many schools are still scrambling without a playbook – in fact, fewer than 10% of schools and universities worldwide currently have formal policies on AI use (weforum.org). This needs to change fast. Districts and education ministries should develop guidelines on what is acceptable use of AI for students (e.g. using it for research or tutoring might be encouraged, but not for cheating on tests), how teachers can responsibly integrate AI into lessons, and how to handle issues like privacy and data security. Here in New Brunswick, we have clear guidelines and recommendations for both our Francophone and Anglophone sectors. Clear academic and ethical expectations – for example, requiring students to disclose if they used AI in an assignment, or forbidding AI use during closed-book exams – can help maintain fairness and transparency. It’s not about stifling innovation, but about setting boundaries so that AI is a help, not a hindrance, to genuine learning.
2. Teach AI and data fluency – to students and teachers. We often talk about educating kids for the future, but here the future has arrived in the form of AI. It’s imperative to start teaching students about AI and data itself: how it works at a basic level, its strengths and limitations, and how to use it critically and ethically. AI fluency might include understanding that a tool like ChatGPT doesn’t “know” truth – it generates text based on patterns, which means it can also confidently produce incorrect information. Students should learn to fact-check AI and not treat it as an infallible oracle. They also need guidance on using AI as a support for learning (like brainstorming ideas or getting feedback on writing) rather than a shortcut to avoid learning. Lastly, this use needs to be aligned with existing policies and guidelines; fluency also means knowing how to navigate those rules while still using the available tools with confidence. At the same time, teachers need training to harness AI effectively. Right now, teacher professional development is lagging; UNESCO noted that as of late 2023, only seven countries had ongoing training programs to help teachers use AI in the classroom (weforum.org). That’s a startlingly low number. We should be investing in upskilling teachers, so they feel confident with these tools – whether it’s using an AI to generate adaptive quiz questions or to help them identify which students need extra help. A global survey of university students found 73% believe their institutions should provide AI training for faculty, and a similar share want courses on AI for students (campustechnology.com). The demand for AI education is there; it’s on our education systems to supply it. When teachers and learners both have AI fluency, the power dynamic shifts – AI is no longer a mysterious black box but a familiar tool, like a calculator or a textbook, to be used wisely.
3. Prioritize human roles and relationships. A human-first approach means we continually ask: “How does this technology enhance the human elements of education?” Rather than aim to replace teachers, the goal should be to empower teachers. For instance, if an AI system can handle administrative tasks, grading, or generate draft lesson plans, that should free teachers to spend more time engaging one-on-one with students – mentoring, inspiring, listening. We should measure AI’s success in education not just by test scores, but by outcomes like student engagement, confidence, and the strength of the classroom community. Technology should serve as an assistant, while teachers remain the architects of the learning experience. Similarly, keep parents in the loop. AI shouldn’t create a wall between school and home. If a student is using an AI homework helper, parents should know and have conversations with their child about it – “What did the AI suggest? Do you think it was a good idea? How else could you have solved that problem?” This keeps the focus on learning growth and helps children reflect on their thinking process. In short, maintaining strong human relationships – teacher to student, parent to child, student to student – is non-negotiable.
4. Strive for equity and inclusion in implementation. We must be intentional to ensure AI doesn’t just become a new privilege for the few. That means policymakers and communities investing in access: devices, internet connectivity, and appropriate software for all students, not just those in wealthier districts. It also means choosing AI platforms that are accessible to learners with disabilities (e.g. tools compatible with screen readers for visually impaired students, or that offer language support for English language learners). School districts should ask tough questions of vendors: “How does your AI handle diverse dialects or cultural content? What data was it trained on? How do you mitigate bias?” If an AI reading tutor struggles with a student’s accent or an AI history quiz omits non-Western perspectives, those are problems we need to catch and correct. Inclusion also involves considering the range of student needs – AI could be a boon for students who need extra practice or those who want acceleration beyond the standard curriculum, but we should deploy it in a way that every student benefits according to their situation. An example of equity-focused use might be using AI translation tools to help non-English-speaking parents communicate with schools, or providing an AI math tutor for students who fell behind during the pandemic. Done right, AI could help bridge gaps by offering personalized support to those who need it most. But without conscious effort, it could just as easily widen those gaps. Our guiding principle should be: no student gets left behind in the AI era.
5. Involve the whole community in shaping AI’s role. The challenge of AI in education is too large and complex for any one group to handle alone. It’s not just a tech issue or just an education issue – it’s a societal issue. We need broad collaboration. Teachers, school leaders, tech developers, researchers, parents, and the students themselves all have perspectives that matter. Open dialogues can surface hopes and concerns. Some of the best ideas for using AI creatively might come from a teacher who finds a novel way to engage struggling readers with a chatbot, or from students who come up with an honor code for AI use. Likewise, potential pitfalls (say, privacy worries or mental health effects) might be raised first by a vigilant parent or counselor. A recent UNESCO report calls for exactly this kind of multi-stakeholder approach, urging that educators, students, parents, and AI providers work together on “system-wide adjustments” to education in the AI age (weforum.org). Such collaboration ensures that technical possibilities are weighed against human values at every step. It also helps build consensus and understanding – instead of a top-down mandate that people might resist, you get buy-in from those who helped craft the approach. In practical terms, schools could set up AI committees or task forces that include representatives from all these groups to pilot new tools and develop guidelines. By involving the whole community, we increase the chances that AI is integrated in a way that is culturally sensitive, ethical, and aligned with what we collectively want for our children.
Finally, we should remember that technology is a tool, not a destination. The excitement around AI’s capabilities should never eclipse our core educational mission: to raise well-rounded, thoughtful, capable humans. As we innovate, we must also pause and ask the deeper questions. Is this tool truly helping students learn and grow? Does it uphold our values of equity, empathy, and curiosity? Are we in control of it, or is it in control of us? It’s up to us to make deliberate choices. If we do nothing, AI will still seep into education – but in haphazard ways, driven by commercial interests or ad hoc decisions. If instead we engage with it proactively, we can harness AI to support educators and learners, while safeguarding what is most precious about education.
Conclusion
The age of AI in education is here, ready or not. This moment is akin to the arrival of the internet: a powerful technology that can transform learning, depending on how we embrace it. What AI in education is really about is not flashy software or higher test scores; it’s about reaffirming our human priorities in the face of rapid change. It’s about making sure that humanity – our connections, our creativity, our sense of right and wrong – remains at the center of schooling, even as we welcome new digital helpers into the classroom and navigate AI’s impact on the learning environment. We have a brief window now, while this technology is still young, to set the norms and habits that will last for decades. Let’s seize this chance. By acting with urgency and thoughtfulness, we can shape AI to serve education’s highest goals, rather than the other way around. In doing so, we’ll help our children grow up in a world of smart machines while keeping their hearts and minds wiser still. The future of education, powered by AI but guided by humans, is ours to create – and the work begins now.
Sources:
Pew Research Center – About a quarter of U.S. teens have used ChatGPT for schoolwork – double the share in 2023
K-12 Dive – Double the teens using ChatGPT for schoolwork
Business Insider – NYC public schools reverse ban on ChatGPT
Digital Education Council – 2024 Global AI Student Survey
World Economic Forum – UNESCO guidance on Generative AI in education
World Economic Forum – UNESCO: 8 guidelines for AI in education
Forbes – Khan Academy’s Khanmigo AI tutor pilot
OpenAI – Duolingo uses GPT-4 for AI conversation partner