Modeling Responsible Artificial Intelligence (AI) in the Classroom
Artificial intelligence (AI) in education has evolved so rapidly that many teachers can’t keep up, creating a global AI readiness crisis.
A majority (76%) of teachers in the U.S. and the U.K., for example, feel unprepared to navigate AI, even though roughly 60% already use the technology. Higher education faces similar challenges, with 40% of faculty across 52 institutions describing themselves as just beginning to build their AI literacy.
Students, on the other hand, are outpacing their teachers with AI, particularly generative AI, the technology behind video, image and chat generators. Seven in 10 teens report using at least one generative AI tool, citing boredom, homework help and language translation as their top reasons for turning to AI.
Only 41% of teens who used AI for homework did so with their teacher’s permission. That means more than half of the students using AI for homework are doing so without their teacher’s knowledge, a new challenge for educators and administrators.
The AI battle in schools has been well documented. The New York Times referred to AI as “a new headache for honest students,” and 68% of middle and high school teachers report using AI detection tools. Confrontations over student AI use are also rising: 48% of teachers confronted a student about AI use during the 2022-2023 school year, a figure that climbed 15 points (to 63%) the following year.
What if a battle isn’t necessary?
Teachers, students and AI can coexist—and even be effective together—with education, transparency, ethical boundaries and purpose. Here’s how.
Why is responsible AI use necessary?
Ignorance fuels AI misuse, which is why becoming knowledgeable about the technology is the best way to navigate it responsibly.
What does that look like for teachers? In the simplest terms, it means using discretion and knowing the reason behind that discretion. The following concerns highlight why being mindful about AI use is necessary.
Bias
AI systems learn patterns from data, and it’s humans who provide that data. The problem with that model is that humans all carry implicit bias, which seeps into the technology and affects how various groups interact with it.
In 2024, Stanford researchers studied large language models (LLMs), AI models that learn from vast amounts of pre-existing text. This large batch of text is called the ‘pretraining corpus,’ and the researchers state that a “growing body of evidence indicates that language models pick up prejudices present in the pretraining corpus…” With the internet serving as the pretraining corpus for most AI applications, researchers believe online human bias has slipped into AI learning.
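For teachers who want to see this concretely, the short sketch below shows one way learned associations can surface. It is a toy illustration only, not the Stanford team’s method; it assumes the Hugging Face transformers library in Python with its default English sentiment model, and the sentence template is hypothetical. If the model’s score shifts when only one demographic word changes, the model has picked up an association from its training text.

```python
# Toy bias probe: compare a sentiment model's output on sentences that
# differ by a single word. A score that shifts with the demographic term
# suggests an association learned from the training corpus.
# Assumes: pip install transformers torch (classroom demo only, not the
# Stanford study's methodology).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # default English model

template = "The {group} applicant was interviewed for the job."
for group in ["young", "elderly", "immigrant", "local"]:
    sentence = template.format(group=group)
    result = classifier(sentence)[0]
    print(f"{group:>10}: {result['label']} ({result['score']:.3f})")
```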
Climate concerns
It can be challenging for someone without AI knowledge to connect the technology to climate. The reality is that regular AI use draws heavily on natural resources, particularly water.
AI carries out its duties thanks to large data centers that house the servers needed to run AI applications. These servers rely on water chillers or a water-based coolant mixture to prevent overheating. At scale, this strains the environment: the biggest data centers can consume up to 5 million gallons of water per day. At a typical 100 gallons per person per day, that is enough to supply a town of roughly 50,000 people.
Data centers’ energy consumption has also doubled within the past two years, and McKinsey estimates that global demand for data center capacity could more than triple by 2030.
Critical thinking
The more a brain depends on technology to solve problems, the fewer critical thinking skills it develops. Relying on AI to do the thinking, rather than using it as a tool, erodes the very skills that make independent problem-solving possible.
MIT researchers found different neural connectivity patterns between participants who completed writing tasks with LLMs and participants who worked unassisted. According to the study’s conclusion, “Regarding ethical considerations, participants who were in the brain-only group reported higher satisfaction and demonstrated higher brain connectivity, compared to other groups…”
These researchers determined LLMs’ impact on cognitive development, critical thinking and intellectual independence “demands a very careful consideration and continued research.”
Determining fact from fiction
AI needs fact-checking.
Bias is already a known concern in AI, which is why the information it provides should never be taken as 100% factual. Supporting this is a 2024 Indiana University study that found people are more likely to fall for fake headlines when relying on AI to fact-check information.
How does this happen? Ella Atkins at Virginia Tech points to how AI applications communicate.
“AI, like people, can now speak with authority, so much that we are comforted in its accuracy,” Atkins said. “How do we as a species distinguish fact from fiction when both are presented so similarly?”
There’s a name for this generative misinformation: hallucination. And as AI advances, its hallucinations rise. In 2023, The New York Times reported that chatbots made up information anywhere from 3% to 27% of the time. Two years later, the numbers had climbed.
“On one test, the hallucination rates of newer A.I. systems were as high as 79 percent,” the publication reported in 2025. Other hallucination rates reported in the story were 33%, 44%, 48% and 51%.
“The way these systems are trained, they will start focusing on one task — and start forgetting about others,” said Laura Perez-Beltrachini, a researcher at the University of Edinburgh who spoke to the NYT.
When is using AI acceptable?
Teachers have an audience ready to handle AI responsibly: most teens (57%) say it is not acceptable to use AI when writing essays. Teens also strongly support visible warnings about content’s accuracy and potential bias, with 73% wanting a label or watermark to show sources. These statistics show that youth, particularly high school students, take AI seriously.
This presents an opportunity for educators to leverage that awareness and build good habits. According to Harvard lecturer Houman Harouni, educators must “help the next generation face the reality of the world and develop instruments and ways of navigating this reality with integrity.”
That being said, using AI in the classroom is acceptable if it helps students make connections, prepare for a career and navigate day-to-day technologies.
Consider using AI with students to:
Brainstorm
Brain fog and writer’s block can disrupt the creative process and cost time. Turning to an AI tool to get unstuck can save time and spark creativity: students can explore themes and scenarios by asking questions related to their work, and the answers give them a springboard for more original ideas.
Educational psychology professor Andrew Martin, PhD, cautions against relying on AI for all brainstorming purposes, however.
“The more you rely on generative AI to help you with your schoolwork, the less you might be inclined to meet up with friends in person or online after school to brainstorm around an essay.”
Connect the tech to real-life uses
AI is now an everyday experience, whether it’s the results at the top of a web search or a chatbot on a business’s website. The technology has permeated society to the point where students need to understand it to inform their choices as they become adults.
High school students, for example, can use AI to practice interviews, whether for a job or college admissions. The technology can review the exchange and provide feedback on how to improve, helping students build important communication skills.
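As a sketch of how that might work, the snippet below wires up a simple mock-interview loop. It assumes the OpenAI Python SDK and an illustrative model name; any chat-style LLM service would follow the same pattern, and the prompts are examples, not a prescribed script.

```python
# Minimal mock-interview loop: the model asks questions, the student
# answers, and the model gives feedback at the end. Assumes the OpenAI
# Python SDK (pip install openai) with OPENAI_API_KEY set; the model
# name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative model name

messages = [{
    "role": "system",
    "content": ("You are a friendly college admissions interviewer. "
                "Ask one question at a time and wait for the answer."),
}]

for _ in range(3):  # three practice questions
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    question = reply.choices[0].message.content
    print(f"\nInterviewer: {question}")
    messages.append({"role": "assistant", "content": question})
    messages.append({"role": "user", "content": input("You: ")})

# Close the session by asking for constructive feedback.
messages.append({"role": "user",
                 "content": "The practice round is over. Please give me "
                            "specific, constructive feedback on my answers."})
feedback = client.chat.completions.create(model=MODEL, messages=messages)
print("\nFeedback:", feedback.choices[0].message.content)
```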
Teachers, on the other hand, can use AI to brainstorm lesson plans and organize their thoughts, making planning more efficient. According to a Gallup poll, some teachers save up to six weeks a year by leaning on AI.
Create personalized learning experiences
Students thrive when presented with personalized learning experiences. However, the nation’s recent teacher shortage has made it challenging for teachers to meet each student’s learning needs.
AI can help close these gaps with practice work based on the areas where the student needs improvement. A teacher can upload the student’s work with a prompt to identify the weak areas and generate a plan to address them.
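For a teacher or district comfortable with a little scripting, that workflow could look something like the sketch below. It assumes the OpenAI Python SDK; the model name, prompt wording and the suggest_practice_plan helper are illustrative assumptions, not a prescribed tool.

```python
# Sketch: ask an LLM to flag weak areas in a student's work and draft
# practice exercises for the teacher to review. Assumes the OpenAI
# Python SDK (pip install openai) with OPENAI_API_KEY set; model name
# and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def suggest_practice_plan(student_work: str, grade_level: str) -> str:
    """Return an LLM-drafted list of weak areas and practice exercises."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": ("You are an instructional aide. Identify the areas "
                         "where this student needs improvement and draft a "
                         "short practice plan for the teacher to review.")},
            {"role": "user",
             "content": f"Grade level: {grade_level}\n\nStudent work:\n{student_work}"},
        ],
    )
    return response.choices[0].message.content

# Example: print(suggest_practice_plan(open("essay.txt").read(), "8th grade"))
```

One caution that echoes the Compliance principle below: strip names and other identifying details from student work before sending it to any third-party AI service.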
When time and resources for individualized instruction are limited, AI can help make learning more personal for both teachers and students.
A global framework for making responsible AI choices
American educators aren’t alone in navigating AI with their students. The technology has grown so rapidly that teachers across the globe are asking the same question: How do I navigate this with my students?
The World Economic Forum recognized this crisis and provided an AI roadmap outlining seven principles for responsible AI use in education.
Follow these to stay on track:
- Purpose: If AI is used, its purpose must be explicitly connected to education goals.
- Compliance: Teachers must understand compliance related to data security, student safety, privacy and data ownership.
- Knowledge: A teacher must have AI literacy before they implement the technology in their classroom.
- Balance: Educators must equally understand AI’s benefits and risks.
- Integrity: Emphasize values like honesty, trust, fairness, respect and responsibility. Address plagiarism risks present when using AI.
- Agency: AI should be viewed as a consultant and not a replacement for human processes.
- Evaluation: Review and update AI guidance to continually uphold the law and meet the district’s evolving needs.
In addition to the World Economic Forum’s guidance, the Washington Office of Superintendent of Public Instruction put together a robust guide, Implementing AI: A Practical Guide for the Classroom.
The guide includes an implementation framework, a downloadable matrix to assist with integrating AI tools into assignments, structured approaches for developing critical thinking around AI, and samples of classroom protocols, as well as a student AI code of conduct.
If you’re considering following your dream of teaching, Rutgers Alternate Route can offer you the support and training you need to succeed. Be sure to follow Rutgers Alternate Route on Twitter and sign up for Alternate Route’s monthly newsletter for more information and stories from the field of education.