AI tools can present opportunities to promote equity in education when they are implemented with thoughtful safeguards and strong oversight, minimizing associated risks. For instance, real-time translation tools can increase engagement for multilingual learners by breaking down language barriers. At the same time, tools like plagiarism detectors can carry unintended biases that may disproportionately affect the same learners. To ensure equitable outcomes, educators and administrators must critically assess the accuracy and inclusivity of AI tools. The extent to which these technologies advance equity—or deepen existing divides—will depend on the quality of leadership, the robustness of AI literacy, and the strength of guidance in place to govern their use.

Education systems should carefully evaluate students' access to AI tools rather than defaulting to general bans. Consideration should be given to age restrictions, data privacy, security concerns, and alignment with teaching and learning goals, the curriculum, and the overall district technology plan. Attempting to enforce broad bans on AI is a futile effort that widens the digital divide between students with independent access to AI on personal devices and students dependent on school or community resources. Closing the digital divide in an age of AI still begins with internet connectivity, device availability, and digital literacy.

 

Principles for AI in Education


Consider the following principles as you develop your AI guidance. Each principle includes questions to discuss and consider, a description, and a real-world example. Visit the Sample Guidance section for an illustrative example of a resource based on these principles.

1. PURPOSE:

Use AI to help all students achieve educational goals. 

Education leaders should clarify the shared values that will guide the use of AI tools, especially those that were not specifically created for educational contexts. AI tools should be applied to serve organizational goals, such as promoting student and staff well-being, enriching student learning experiences, and enhancing administrative functions. At the same time, new or revised goals may also emerge as workforce needs shift.

 

Discussion Questions for Principle 1: Purpose

  • How does our guidance highlight the purposeful use of AI to achieve our shared education vision and goals?
  • How do we reduce the digital divide between students with easy access to AI tools at home and those dependent on school resources?
  • How does our guidance ensure inclusivity, catering to diverse learning needs and linguistic and cultural backgrounds?

Addressing Equity

Using AI tools to promote equity in education requires both access and thoughtful implementation. Equity is also addressed in the other principles in this toolkit, such as promoting AI literacy for all students, realizing the benefits of AI, and addressing the risks.

“Attempting to enforce general bans on AI is a futile effort that serves to widen the digital divide…”

Example

The Lower Merion School District, Pennsylvania, USA, states, “We believe in preparing students for the future. Our students will most certainly be engaging with artificial intelligence in years to come. As such, we view it as partly our responsibility to teach them how to use new technology and tools in ways that are appropriate, responsible, and efficient… Rather than ban this technology, which students would still be able to access off campus or on their personal networks and devices, we are choosing to view this as an opportunity to learn and grow.”

Age Restrictions and Parental Consent

Always review each AI tool’s user agreement for the most up-to-date information on age restrictions, terms of use, and required consents. This resource offers a helpful starting point—but does not substitute for local evaluation and legal review. School systems are responsible for determining appropriate use. This includes:

  • Reviewing user agreements regularly.
  • Aligning AI use with existing school policies on consent, privacy, and acceptable use.
  • Considering whether a tool's use is appropriate for the context.
  • Investing in AI literacy for educators and students.
  • Engaging legal counsel when setting or revising local policies.

Decisions about AI use should be guided by local values, legal frameworks, and a commitment to student safety and well-being, not by vendor defaults alone.

Ethical AI Use

The use of AI to pursue educational goals must be carefully aligned with the core values and ethics of the education system. This means identifying and mitigating the risks of AI in education so that the benefits may be realized (see Principle 4). Furthermore, students should learn about “the impact of AI on our lives, including the ethical issues it raises,” and teachers should be provided training to recognize misinformation.5 AI systems should be deployed in ways that support and maintain human decision-making in the process of teaching and learning.

Example

Peninsula School District, Washington, USA. AI Principles and Beliefs Statement.6 “Our unwavering commitment to Universal Design for Learning (UDL) shapes our belief that our use of AI should align with UDL's three core principles: diversified ways of representation, action/expression, and engagement. AI can facilitate presenting information in diverse formats, aligning with individual learners' needs.”

Information Accuracy: Addressing misinformation, disinformation, and malinformation

What’s the difference?

  • Misinformation is false, but not created or shared with the intention of causing harm.
  • Disinformation is deliberately created to mislead, harm, or manipulate a person, social group, organization, or country.
  • Malinformation is based on fact, but used out of context to mislead, harm, or manipulate.

(The Cybersecurity and Infrastructure Security Agency (CISA). (n.d.). Disinformation stops with you.)

When false information is used to harm: Deepfakes

Deepfake technology poses a growing threat within educational environments, potentially harming not only students but also administrators, teachers, and other staff members. AI-generated images, videos, and audio can be used to manipulate reality, creating false accusations, impersonations, or fabricated media that can have serious consequences for individuals' emotional well-being, reputations, and even legal standing.

Students need to learn how to recognize, resist, and report deepfakes, not only to protect themselves but also to understand the ethical implications of their use. In some cases, students may be the ones wielding deepfakes to exert social pressure, spread misinformation, or attempt to undermine adults or peers in subtle but impactful ways.

This issue cannot be addressed solely through bullying policies. Schools should incorporate structured, ongoing professional development for educators and age-appropriate AI literacy education for students. Understanding how these technologies work, their risks, and the broader implications of digital deception is critical to building safe and respectful learning environments.

Governments are beginning to respond to these challenges with targeted policies. In North Carolina, school systems are encouraged to update their bullying and cyberbullying policies to explicitly address deepfake content, including AI-generated explicit images, and to educate students about the risks of sharing personal media that could be misused. The Welsh government provides structured guidance on safeguarding students from the risks of generative AI, integrating responsible AI use into school policies, and offers specific strategies to protect learners online.

2. COMPLIANCE:

Reaffirm adherence to existing policies.

When implementing AI systems, key areas of technology policy to comply with are privacy, data security, student safety, data transfer and ownership, and child and youth protection. The Council of Great City Schools and the Consortium for School Networking (CoSN), in partnership with Amazon Web Services, have developed the K-12 Generative Artificial Intelligence (Gen AI) Readiness Checklist to help districts in the U.S. prepare for implementing AI technology solutions. The checklist provides a curated list of questions to help district leaders devise implementation strategies across six core focus areas: Executive Leadership, Operations, Data, Technology, Security, and Risk Management.

The Common Sense Media AI Ratings System provides a framework “designed to assess the safety, transparency, ethical use, and impact of AI products.”

Discussion Questions for Principle 2: Compliance

  • What is the plan to conduct an inventory of systems and software to understand the current state of AI use and ensure adherence to existing security and privacy regulations?
  • Does the education system enforce contracts with software providers stipulating that any use of AI within their software or by third-party providers must be clearly disclosed to district staff and approved in advance by district leadership?

Current regulations relevant to the use of AI in education

    United States

    • FERPA - AI systems must protect the privacy of student education records and comply with parental consent requirements.  Data must remain within the direct control of the educational institution.
    • COPPA - AI chatbots, personalized learning platforms, and other technologies collecting personal information and user data on children under 13 must require parental consent.
    • IDEA - AI must not be implemented in a way that denies students with disabilities equal access to education opportunities.
    • CIPA - Schools must ensure AI content filters align with CIPA protections against harmful content.
    • Section 504 - The section of the Rehabilitation Act applies to both physical and digital environments. Schools must ensure that their digital content and technologies are accessible to students with disabilities.


    International

    • EU AI Act (EU) - A comprehensive regulatory framework that categorizes artificial intelligence systems by risk level and sets legal requirements to ensure their safe, ethical, and transparent use across member states.
    • GDPR (EU) - The EU General Data Protection Regulation provides strict data protection and privacy regulations for individuals in the European Union.
    • Data Protection Act (UK) - Governs the use of personal data in the United Kingdom.
    • PIPL (China) - The China Personal Information Protection Law protects student data privacy.
    • DPDP (India) - The Digital Personal Data Protection Act proposes protections for student data.

    Example

    Wayne RESA, Michigan, USA, created an artificial intelligence website and guidance document with ethical, pedagogical, administrative, and policy considerations. “AI systems often need large amounts of data to function effectively. In an educational context, some uses could involve collecting and analyzing sensitive data about students, such as their learning habits, academic performance, and personal information. Therefore, maintaining student privacy is the primary ethical consideration. Even with consent, it is not appropriate to prompt public models with identifiable data because anything shared with a model, even if information is shared in prompt form, may be added to the model for future reference and even shared with other users of the model.”

    Generative AI introduces new challenges to copyright and intellectual property, particularly as it becomes harder to determine clear ownership of content used to train or generated by AI systems. Oregon’s guidance acknowledges this complexity and recommends that schools approach these issues with caution. Educators are encouraged to review licensing frameworks such as Creative Commons and stay informed through resources like the U.S. Copyright Office’s Artificial Intelligence Initiative. While not education-specific, this initiative offers valuable insight into how copyright laws may apply to AI-generated content. Given the legal uncertainties in this evolving area, school systems should engage legal counsel to help monitor developments and inform local policies, ensuring that staff understand their rights and responsibilities when using, sharing, or creating instructional materials with AI tools.

    Once you have aligned your AI responsible use guidelines and policies and updated existing privacy, data security, and student protection policies, your procurement and approval policies may also need a refresh. A coalition of seven leading edtech organizations—1EdTech, CAST, CoSN, Digital Promise, InnovateEDU, ISTE, and SETDA—has introduced Five Edtech Quality Indicators to help schools evaluate AI and edtech products efficiently. The coalition is also developing an EdTech Index to display verified edtech product approvals.

    The 2025 SETDA EdTech Quality Indicators Guide applies the Five EdTech Quality Indicators to support education leaders in evaluating and selecting high-quality educational technology tools. The guide includes adaptable questions for procurement leaders, examples of state leadership in edtech evaluation, and third-party validation resources to support informed decision-making. 

    Example

    Guidance from the U.S. State of Georgia recommends and offers resources for “vetting and adopting district and school level AI tools” and “establishing formal agreements with AI systems and tools” vendors (p. 9). Resources include a sample rubric for “evaluating and adopting AI tools” according to their “educational value, data privacy, usability, cost, scalability, vendor reputation, and age restrictions” (p. 10).

    3. KNOWLEDGE: 

    Promote AI Literacy.

    What is AI Literacy?

    AI literacy refers to the knowledge, skills, and attitudes associated with how artificial intelligence works, including its principles, concepts, and applications, as well as how to use it responsibly, including its limitations, implications, and ethical considerations.

    New Resource: AI Literacy Framework

    The European Commission and the Organization for Economic Cooperation and Development (OECD), with support from Code.org and leading global experts, are developing an AI literacy framework for primary and secondary education.


    The framework defines what students should know and be able to do as AI evolves and shapes society, enabling them to benefit from, as well as lead and shape, the AI transition. It will include competencies that primary and secondary educators can integrate across subjects, so that AI literacy becomes a part of everyday classrooms.


    A draft of the AI literacy framework will be released in May 2025. We invite feedback from educators and stakeholders worldwide. The final version—shaped by global feedback—will be released in early 2026 and will include practical, high-quality examples of AI literacy. Visit teachai.org/ailiteracy to learn more about this work.

    Discussion Questions for Principle 3: Knowledge

    • How does the education system support staff and students in understanding how to use AI and how AI works?
    • What is the strategy for incorporating AI concepts into core academic classes, such as computer science?
    • How is system-wide participation in AI education and professional development being encouraged and measured?

    Foundational concepts of AI literacy include elements of computer science, as well as ethics, psychology, data science, engineering, statistics, and other areas beyond STEM. AI literacy equips individuals to engage productively and responsibly with AI technologies in society, the economy, and their personal lives. 

    Schools can create opportunities for educators to collaborate and consolidate lessons learned to promote AI literacy across disciplines. In April 2025, the Center for Reinventing Public Education (CRPE) issued “an urgent call” for widespread support of AI literacy for educators and education leaders, arguing that “Adult AI literacy is the foundation to advancing scalable, sustainable AI strategies.”7

     

    AI4K12’s Five Big Ideas in AI provide K-12 guidelines for how AI works.

    AI literacy has benefits for a wide range of stakeholders and a variety of purposes. For example:

    • Policymakers will be better able to assess and mitigate risks, from data privacy threats to overreliance on automation.
    • Teachers will be more prepared to lead discussions on AI's ethical and societal impacts, including bias, privacy, and fairness, and promote its responsible use.
    • Students will be more likely to critically assess AI-generated content and discern between reliable outputs and potential misinformation.

    AI Literacy frameworks are beginning to emerge, and many guidance documents offer significant content shaping these frameworks. The AI Readiness Framework, developed by aiEDU, highlights “What students, educators, and district leaders need to know.” This free resource is offered for students and educators to use as they “develop AI Literacy and build AI Readiness.” For districts, the resource includes a rubric to use as they prepare schools for AI integration.


    The Digital Promise AI Literacy Framework is designed to provide educational leaders with a concise and comprehensive understanding of what constitutes AI literacy. The framework centers around three interconnected "Modes of Engagement: Understand, Evaluate, and Use". Underpinning these modes are core values emphasizing human judgment and centering justice. These values are operationalized through "AI Literacy Practices", which are actionable skills that demonstrate understanding and evaluation. The framework also identifies three types of use: interact, create, and problem solve. This interconnected approach highlights that robust AI literacy involves a concurrent and integrated engagement with understanding, evaluating, and using AI, guided by core values and demonstrated through specific practices.

    The European Artificial Intelligence Office maintains a Living Repository of AI Literacy Practices: A dynamic collection of real-world AI literacy initiatives from European organizations, compiled to support implementation of Article 4 of the EU AI Act. This repository showcases practices across sectors and organizational sizes, offering concrete examples of how to tailor AI literacy training by role, technical expertise, and contextual use. It’s designed to inspire cross-sector learning, highlight inclusive and risk-aware strategies, and ensure AI tools are used ethically, effectively, and in alignment with regulatory expectations.

    Example: In 2019, Gwinnett County Public Schools launched a K-12 AI literacy initiative that includes both discrete and embedded learning experiences across content areas through the lens of their AI Learning Framework. High school students have the option to participate in the discrete three-course AI pathway, which dives beyond literacy to rigorous technical learning for those students interested in an AI career.


    Image: Gwinnett County Public Schools AI Learning Framework

    Example

    The California Department of Education offers information regarding the role of AI in California K-12 education. “Knowing how AI processes data and generates outputs enables students to think critically about the results AI systems provide. They can question and evaluate the information they receive and make informed decisions. This is of particular significance as students utilize AI in the classroom, to maintain academic integrity and promote ethical use of AI.” In addition, California’s legislature passed Assembly Bill No. 2876 in 2024, which requires “... the commission to consider incorporating Artificial Intelligence (AI) literacy content into the mathematics, science, and history-social science curriculum frameworks when those frameworks are next revised after January 1, 2025, and would require the commission to consider including AI literacy in its criteria for evaluating instructional materials when the state board next adopts mathematics, science, and history-social science instructional materials, as provided.”

    4. BALANCE: 

    Realize the benefits of AI and address the risks.

    Navigating AI in education requires a balanced approach—one that embraces innovation while maintaining a clear-eyed awareness of the risks. Rather than viewing AI as inherently good or bad, educational leaders can work toward practical strategies that shape how AI is used in ways that reflect our values, advance student learning, and preserve human judgment. This principle recognizes that realizing the benefits of AI depends on how thoughtfully it is integrated, governed, and supported.

    Discussion Question for Principle 4: Balance

    • Does our guidance describe and support opportunities associated with using AI and proactively mitigate the risks?

    One resource that wrestles with what a balanced approach to AI in the classroom entails is How to Address Artificial Intelligence in the Classroom (School of Education at the University of San Andrés in Argentina). The guide (available in Spanish and English) provides educators, policymakers, and school leaders with insights and practical strategies for integrating AI into classrooms responsibly, including using AI-generated texts for analysis, incorporating AI literacy into lessons, and adapting assessments to ensure genuine student learning. 

    The tables below outline areas of opportunity, related risks, and strategies to mitigate these risks in student learning, teacher support, and management and operations.

    Tables: Opportunities, Risks, and Guardrails for Balanced AI Integration in Education

    Student Learning

    Opportunities

    • Personalized Content and Review: AI can help generate personalized study materials, summaries, quizzes, and visual aids; help students (including those with disabilities) access and develop tailored resources to meet their specific needs; and help students organize thoughts and review content.

    • Greater Content Accessibility for Students with Disabilities: Assistive technologies like text-to-speech software, speech recognition systems, and AI-integrated augmentative and alternative communication (AAC) tools hold potential to improve learning experiences and accessibility for students with diverse needs.8

    • Aiding Creativity: Students can use generative AI as a tool to spark creativity across diverse subjects, including writing, visual arts, and music composition. AI can suggest novel concepts or generate artwork or musical sequences to build upon.

    • Tutoring: AI technologies have the potential to democratize one-to-one tutoring and support, especially for students with financial or geographic constraints. Virtual teaching assistants powered by AI can provide round-the-clock support, help with homework, and supplement classroom instruction.

    • Critical Thinking and Future Skills: Students who learn how AI works are better prepared for future careers in a wide range of industries. They develop computational thinking skills to break down complex problems, analyze data critically, and evaluate the effectiveness of solutions.

    Risks

    • Plagiarism and cheating can occur when students copy from generative AI tools without approval or adequate documentation and submit AI-generated work as their original work.

    • Misinformation can be produced by generative AI tools and disseminated at scale, leading to widespread misconceptions.

    • Bullying and harassment, such as using AI tools to manipulate media in order to impersonate others, can have severe consequences for students' well-being.

    • Overreliance on potentially biased AI models can lead to abandoning human discretion and oversight, allowing important nuances and context to be overlooked.9 People may overly trust AI outputs, especially when AI is seen as having human-like characteristics (i.e., anthropomorphization).

    • Unequal access to AI tools worsens the digital divide between students with independent, readily available access at home or on personal devices and students dependent on school or community resources.

    Guardrails

    • In addition to being clear about when and how AI tools may be used to complete assignments, teachers can restructure assignments to reduce opportunities for plagiarism. This may include evaluating the artifact development process rather than just the final artifact and requiring personal context, original arguments, or original data collection.

    • Students should learn how to critically evaluate all content for misinformation or manipulation and be taught about the responsible development and sharing of AI-generated content.

    • Staff and students should be taught how to properly cite and acknowledge the use of AI where applicable.

    • If an assignment permits the use of AI tools, the tools must be made available to all students, considering that some may already have access to such resources outside of school.

    See Principle 1. Purpose and Principle 5. Integrity for more information.

    Teacher Support

    Opportunities

    • Content Development, Enhancement, and Differentiation: AI can assist educators by differentiating resources, drafting initial lesson plans, generating diagrams and charts, and creating customized worksheets based on student needs and proficiency levels.

    • Assessment Design and Analysis: In addition to enhancing assessments by automating question creation, providing standardized feedback on common mistakes, and designing adaptive tests based on real-time student performance, AI can conduct diagnostic assessments to identify gaps in knowledge or skills and enable rich performance assessments. Teachers should ultimately be responsible for evaluation, feedback, and grading, as well as determining and assessing the usefulness of AI in supporting their grading work. AI should never be solely responsible for grading.

    • Continuous Professional Development: AI can guide educators by recommending teaching and learning strategies based on student needs, personalizing professional development to teachers’ needs, suggesting collaborative projects between subjects or teachers, and offering simulation-based training scenarios such as teaching a lesson or managing a parent/teacher conference.

    • Ethical Decisions: Understanding how AI works, including its ethical implications, can help teachers make critical decisions about the use of AI technologies and help them support ethical decision-making skills among students.

    Risks

    • Societal bias is often due to human biases reflected in the data used to train an AI model. Risks include reinforcing stereotypes, recommending inappropriate educational interventions, or making discriminatory evaluations, such as falsely reporting plagiarism by multilingual learners.

    • Diminishing student and teacher agency and accountability is possible when AI technologies deprioritize the role of human educators in making educational decisions. While generative AI can provide useful assistance in amplifying teachers' capabilities and reducing teacher workload, these technologies should be a supporting tool that augments human judgment, not a replacement for it.

    • Privacy concerns arise if AI is used to monitor classrooms for accountability purposes, such as analyzing teacher-student interactions or tracking teacher movements, which can infringe on teachers' privacy rights and create a culture of surveillance.10

    Guardrails

    • Select AI tools that provide an appropriate level of transparency in how they create their output so that bias can be identified and addressed. Include human evaluation before any decisions informed by AI are made, shared, or acted upon.

    • Educate users on the potential for bias in AI systems so they can select and use these tools more thoughtfully.

    • All AI-generated content and suggestions should be reviewed and critically reflected upon by students and staff, thereby keeping “humans in the loop” in areas such as student feedback, grading, and AI-recommended learning interventions.11

    • When AI tools generate instructional content, it is vital for teachers to verify that the content is accurate and aligns with curriculum standards and learning objectives.

    See Principle 3. Knowledge and Principle 6. Agency for more information.

    A National Level Guide for Applying Generative AI

    In April 2023, the United Arab Emirates Office of AI, Digital Economy, and Remote Work released 100 Practical Applications and Use Cases of Generative AI, a guide that includes detailed use cases for students, such as outlining an essay and simplifying difficult concepts.


    “The potential for AI is obvious, and educating our future generation is just the beginning.”


    – H.E. Omar Sultan Al Olama

    Management and Operations

    Opportunities

    • Operational Efficiency: Staff can use tools to support school operations, including helping with scheduling, automating inventory management, increasing energy savings, conducting risk assessments, and generating performance reports.

    • Data Analysis: AI can extract meaningful insights from vast amounts of educational data by identifying trends in performance, attendance, and engagement to better personalize instruction.

    • Communications: AI tools can help draft and refine communications within the school community, deploy chatbots for routine inquiries, and provide instant language translation.

    • Professional Development: AI can assist in talent recruitment by sifting through job applications to find the best matches and tailor professional development programs based on staff interests and career stages.

    Risks

    • Compromising privacy is a risk when AI systems gather sensitive personal data on staff and students, store personal conversations, or track learning patterns and behaviors. This data could be hacked, leaked, or exploited if not properly secured and anonymized. Surveillance AI raises all of the concerns above, as well as the issue of parental consent, potential harmful biases in the technology, the emotional impact of continuous monitoring, and the potential misuse of collected data.

    • Discrimination is a main concern of AI-driven recruitment due to the potential for reinforcing existing biases. If the AI system is trained on historical hiring data that contains biases (e.g., preferences for candidates from certain universities, gender biases, or age biases), the system might perpetuate those biases in its selections.

    Guardrails

    • Evaluate AI tools for compliance with all relevant policies and regulations, such as privacy laws and ethical principles.

    • AI tools should be required to detail if and how personal information is used, to ensure that personal data remains confidential and is not misused.

    • Use AI as a supplementary tool rather than a replacement for human judgment. For example, AI can be used to filter out clearly unqualified candidates, but final decisions should involve human recruiters.

    See Principle 2. Compliance for more information.

    Guidance from the Allen Institute for AI


    While research on ethical AI is ongoing, some best practices already exist:

    • Developers can train AI models with diverse datasets, evaluate models for harmful bias and toxicity, and provide insight into a model’s intended uses and limitations. 
    • School administrators can train staff on AI, inform families about AI use, and monitor how tools are performing across demographics.

    Example

    The Code of Student Conduct of the Madison City Schools, Alabama, USA, integrates an Artificial Intelligence Acceptable Use Policy into section 4.8.14 of the “Acceptable Use Of Computer Technology And Related Resource” and recognizes specific risks of AI use:

    • “Students should acknowledge that AI is not always factually accurate, nor seen as a credible source, and should be able to provide evidence to support its claims.”
    • “All users must also be aware of the potential for bias and discrimination in AI tools and applications.”

    Looking ahead: As new research emerges on the applications of AI in educational settings, schools should rely on evidence-based methods to guide initiatives (see also Principle 7: Evaluation).

    5. INTEGRITY: 

    Advance academic integrity.

    While it is necessary to address plagiarism and other risks to academic integrity, AI simultaneously offers staff and students an opportunity to emphasize the fundamental values that underpin academic integrity – honesty, trust, fairness, respect, and responsibility.12 AI’s limitations can also showcase the unique value of authentic, personal creation.

    Discussion Questions for Principle 5: Integrity

    • Does our guidance sufficiently cover academic integrity, plagiarism, and proper attribution when using AI technologies? 
    • Do we offer professional development for educators to use commonly available AI technologies to support the modification of assignments and assessments?
    • Do students have clear guidance for citing AI usage, using it properly to bolster learning, and understanding the importance of their voice and perspective in creating original work?

    Existing academic integrity policies should be evaluated and updated to include issues specific to generative AI. Students should be truthful in giving credit to sources and tools and honest in presenting work that is genuinely their own for evaluation and feedback. Students should be instructed about properly citing any instances where generative AI tools were used. 

    Tips to Avoid Plagiarism

    The Oregon Department of Education suggests multiple strategies, including:

    • Rethink assignments and clarify what standards/skills are being addressed.
    • Create more opportunities for students to problem solve, analyze, synthesize, and share their thinking in classroom settings.
    • Embed formative assessment throughout in order to get a deeper sense of students’ writing over time.

    How do you cite generative AI content?

    Use one of the following resources:  

    • MLA Style - Generative AI 
    • APA Style - ChatGPT
    • Chicago Style - Generative AI

    Emerging guidance documents also recommend “rethinking” academic integrity in the context of AI. These approaches double down on the importance of maintaining standards of academic integrity while acknowledging that we will not fully know the extent of AI use within the work that we see. Because of that perpetual uncertainty, maintaining academic integrity will require clarity about what it means. Proactively articulating the values of academic integrity, defining those principles, and operationalizing them in the context of AI is critical to preventing an erosion of integrity in our schools. 

    Rethinking Academic Integrity in the Context of AI: Illustrative Approaches from Colorado and Oklahoma

    Colorado: “Be open to rethinking academic integrity; include clear definitions of plagiarism and academic dishonesty while emphasizing how students can responsibly use AI as a tool, eliminating the need for a separate AI plagiarism policy. Additionally, providing basic AI literacy education is essential, linking it to data literacy and career readiness. It is also crucial to teach students how to verify AI-generated information for truthfulness, ensuring they can critically assess the accuracy of such information” (p. 19).


    Oklahoma: The Oklahoma State Department of Education guidance emphasizes the need to rethink traditional notions of plagiarism in the age of AI and adapt teaching methods and expectations: “As AI becomes an integral part of writing, from scholarly articles to news updates and emails, it is imperative to reconsider our traditional notions of plagiarism and academic integrity. As AI continues to shape education, labeling all AI-assisted work as “cheating” is shortsighted. Instead, educators must adapt their teaching methods and expectations to accommodate this new reality” (p. 7).

    Teachers should be transparent about their own uses of AI and clear about how and when students are expected to use or not use AI. For example, a teacher might allow the limited use of generative AI on specific assignments or parts of assignments and articulate why they do not allow its use in other assignments. 

    Be Clear About When and How to Use AI for Assignments

    Level of AI Use: Restrictive
    Description: AI tools are not permitted for the assignment, and all work must be the student's original thoughts and words.
    Example Instruction: "Do not use AI tools for this assignment. All content must be original, and any use of AI will be treated as plagiarism."

    Level of AI Use: Moderate
    Description: Students can use AI tools for specific parts of their assignments, such as brainstorming or initial research, but the core content and conclusions should be original.
    Example Instruction: "You can employ AI tools for initial research and data analysis, but the main content, arguments, and conclusions should be your own."

    Level of AI Use: Permissive
    Description: Students are allowed to utilize AI tools freely to assist in their assignments, such as generating ideas, proofreading, or organizing content.
    Example Instruction: "You may use AI tools as you see fit to enhance your assignment and demonstrate your understanding of the topic."

    An AI Assessment Scale is a tool that can help communicate clearly to students the specific uses of AI allowed on different assessments ranging from no AI to Full AI use.14 The authors recently updated the AI Assessment Scale. The new scale maintains the five categories of AI use, extending to creative uses of AI in exploration.

    The AI Assessment Scale

    Level 1: No AI
    The assessment is completed entirely without AI assistance in a controlled environment, ensuring that students rely solely on their existing knowledge, understanding, and skills.
    "You must not use AI at any point during the assessment. You must demonstrate your core skills and knowledge."

    Level 2: AI Planning
    AI may be used for pre-task activities such as brainstorming, outlining, and initial research. This level focuses on the effective use of AI for planning, synthesis, and ideation, but assessments should emphasise the ability to develop and refine these ideas independently.
    "You may use AI for planning, idea development, and research. Your final submission should show how you have developed and refined these ideas."

    Level 3: AI Collaboration
    AI may be used to complete any elements of the task, with students directing AI to achieve the assessment goals. Assessments at this level may also require engagement with AI to achieve goals and solve problems.
    "You may use AI extensively throughout your work either as you wish, or as specifically directed in your assessment. Focus on directing AI to achieve your goals while demonstrating your critical thinking."

    Level 4: Full AI
    AI may be used to help complete the task, including idea generation, drafting, feedback, and refinement. Students should critically evaluate and modify the AI-suggested outputs, demonstrating their understanding.
    "You may use AI to assist with specific tasks such as drafting text, refining and evaluating your work. You must critically evaluate and modify any AI-generated content you use."

    Level 5: AI Exploration
    AI is used creatively to enhance problem-solving, generate novel insights, or develop innovative solutions. Students and educators co-design assessments to explore unique AI applications within the field of study.
    "You should use AI creatively to solve the task, potentially co-designing new approaches with your instructor."

    Source: Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment. Journal of University Teaching and Learning Practice, 21(6). https://doi.org/10.53761/q3azde36
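    For districts that publish assignment expectations in a learning management system, the scale could even be encoded so that every assignment carries a consistent, student-facing statement of permitted AI use. The following is a minimal, hypothetical sketch; the `AIAS_LEVELS` structure and `assignment_banner` helper are illustrative, not part of the AIAS framework, and the instruction text paraphrases the scale.

    ```python
    # Hypothetical sketch: encoding the AI Assessment Scale as a lookup
    # that an assignment template system might use. Level names follow the
    # AIAS; instruction text is paraphrased for brevity.
    AIAS_LEVELS = {
        1: ("No AI", "You must not use AI at any point during the assessment."),
        2: ("AI Planning", "You may use AI for planning, idea development, and research."),
        3: ("AI Collaboration", "You may use AI extensively, directing it to achieve the assessment goals."),
        4: ("Full AI", "You may use AI for drafting, refining, and evaluating your work; critically evaluate anything you use."),
        5: ("AI Exploration", "You should use AI creatively, potentially co-designing new approaches with your instructor."),
    }

    def assignment_banner(level: int) -> str:
        """Return a student-facing banner stating the permitted AI use."""
        name, instruction = AIAS_LEVELS[level]
        return f"AI use level {level} ({name}): {instruction}"
    ```

    A teacher or system could then attach `assignment_banner(2)` to a research outline task and `assignment_banner(1)` to an in-class assessment, making expectations explicit at the point of submission.
    
    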

    Teachers should not use technologies that purport to identify the use of generative AI to detect cheating and plagiarism. The accuracy of these technologies is questionable, leading to the risk of false positives and negatives. Their use can promote a culture of policing assignments to maintain the status quo rather than preparing students for a future where AI usage is ubiquitous.15

    “Adapt assignments, assessments, and grading, which could include features like a scaffolded set of tasks; connections to personal and/or recent content for longer out-of-class assignments; in-class presentations to demonstrate content learned, regardless of if or how AI supported that learning; appropriate citations of AI (like any other source) showing what was used and how; and some skill assessments designed to remove the possibility of AI support.”13

    Example

    Laguna Beach USD in California has produced a short video clarifying expectations for AI use on assignments, which you may also find useful.

    Resources Addressing Academic Integrity

    6. AGENCY: 

    Maintain human decision-making when using AI.

    Any decision-making practices supported by AI must enable human intervention and ultimately rely on human approval processes. These decisions include instructional decisions, such as assessment or academic interventions, and operational decisions, such as hiring and resource allocation. AI systems should serve in a consultative and supportive role without replacing the responsibilities of students, teachers, or administrators.

    Any decision-making practices supported by AI must enable human intervention and ultimately rely on human approval processes.

    Discussion Questions for Principle 6: Agency

    • Does our guidance clarify that staff are ultimately responsible for any AI-aided decision and that AI is not solely responsible for any major decision-making or academic practices? 
    • How does our guidance ensure that students retain appropriate agency in their decisions and learning paths when using AI tools?

    Example

    Peninsula School District, Washington, USA. AI Principles and Beliefs Statement: “The promise of Artificial Intelligence (AI) in the Peninsula School District is substantial, not in substituting human instructors but by augmenting and streamlining their endeavors. Our perspective on AI in education is comparable to using a GPS: it serves as a supportive guide while still leaving ultimate control with the user, whether the educator or the student.”

    Early concepts of “humans in the loop” appear in many guidance documents. That same work is beginning to conceptualize the human role as more than a mere participant who checks the information generated by AI tools. Human agency conceives of AI as a tool that humans use to accomplish their goals and meet their needs. This perspective better aligns with the purpose of education systems in the first place: to build the knowledge, skills, and dispositions that students need to play rewarding and productive roles in their communities.

    The European Commission resource, Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for Educators defines human agency as “the capability to become a competent member of society,” and the ability to “determine their life choices and be responsible for their actions. Agency underpins widely used concepts such as autonomy, self-determination, and responsibility” (p. 18). 

    Looking  ahead: In the future, AI policies should expect increased transparency from AI providers on how tools work and include statements like, “Our school system aims to work with AI tools that are transparent in how they operate, that provide explanations for outputs, and that enable human oversight and override. We will require all providers to make it clear when a user is interacting with an AI versus a human.”

    7. EVALUATION: 

    Regularly assess the impacts of AI.

    Module 4: Understanding the Evidence of the US Office of Educational Technology’s Empowering Education Leaders: A toolkit for safe, effective, and equitable AI integration offers a framework for educational leaders to evaluate AI-enabled tools and continuously refine AI education guidance. It outlines the four tiers of evidence defined by the Every Student Succeeds Act (ESSA), ranging from well-supported research to emerging interventions with a rationale for effectiveness. Leaders can use these tiers to assess AI products, ensuring they align with evidence-based practices before adoption. Resources like the What Works Clearinghouse or the EdTech Index, which is in development, can support informed decision-making, while research-practice partnerships offer opportunities for continuous evaluation and improvement of AI policies and procurement strategies (see also Principle 2: Compliance).

    Guidance should be reviewed and updated often to ensure it continues to meet the school’s needs and complies with changes in laws, regulations, and technology. Guidance and policies will benefit from feedback from various stakeholders, including teachers, parents, and students, especially as more is learned about the impact of AI in education. See suggestions for monitoring, ongoing support, and collecting feedback in Digital Promise’s Emerging Technology Adoption Framework.

    Example

    The Utah State Board of Education has a dedicated AI education specialist who supports policy decisions around artificial intelligence in education and helps schools develop their own AI frameworks and policies. The specialist is also charged with supporting internal and external professional development efforts around artificial intelligence across Utah. 

    Practical Resource

    AI & Ethics is a slide deck by Torrey Trust, Ph.D., offering insights on data privacy, AI-related biases, copyright, and the broader ethical implications of integrating AI into learning contexts. It is a practical guide for educators aiming to responsibly navigate AI adoption in their classrooms.

    The EU AI Act, Article 4,  mandates that providers and deployers of AI systems ensure a sufficient level of AI literacy among their staff and others involved in operating AI systems, considering their technical knowledge, experience, education, and the context in which the AI systems are used. This requirement became effective on February 2, 2025.

    Article 26 of Argentina’s Framework for the Regulation of the Development and Use of AI states that “AI training and education will be promoted for professionals, researchers and students, in order to develop the skills and competencies necessary to understand, use and develop AI systems in an ethical and responsible manner.”

    One of the major benefits of learning about AI is developing computational thinking, a way of solving problems and designing systems that draws on concepts fundamental to computer science and applies across disciplines. Learning how AI works is an opportunity to build computational thinking skills, such as:

    • Decomposition: AI often tackles complex problems. Understanding these problems requires breaking them down into smaller, more manageable parts.
    • Pattern Recognition: Machine learning relies on recognizing patterns in data. By understanding machine learning, students practice and develop skills in identifying patterns and trends.
    • Algorithmic Thinking: Learning about AI exposes students to algorithms, step-by-step solutions to a problem, from simple decision trees to more complex processes.
    • Reflection: As with any computational task, AI models can sometimes produce unexpected or incorrect results. Understanding and rectifying, or debugging, these issues are central to both AI and computational thinking.
    • Evaluation: AI often requires assessing different solutions to determine the best approach. This mirrors a key aspect of computational thinking, where solutions are tested and refined based on outcomes.
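    The five skills above can be made concrete with a small classroom exercise. The sketch below is purely illustrative (the attendance data and function names are hypothetical): students decompose the question "is attendance declining?", recognize a pattern in day-to-day changes, write a step-by-step rule, and then evaluate and debug that rule against the data.

    ```python
    # Illustrative classroom sketch (data and names are hypothetical):
    # applying computational-thinking skills to a tiny attendance question.

    attendance = [28, 27, 29, 25, 24, 22]  # students present each day

    # Decomposition: break the question into a smaller step.
    def daily_changes(counts):
        # Pattern recognition: look at day-to-day differences.
        return [b - a for a, b in zip(counts, counts[1:])]

    # Algorithmic thinking: a step-by-step rule for labeling the trend.
    def trend(counts):
        changes = daily_changes(counts)
        falling = sum(1 for c in changes if c < 0)
        # Evaluation: test the rule's threshold against the outcomes.
        return "declining" if falling > len(changes) / 2 else "stable"

    # Reflection: check the result against expectations and debug if needed.
    print(trend(attendance))  # → declining
    ```

    Students can then probe the rule with edge cases (a flat week, a single spike) and refine the threshold, mirroring how AI models are tested and revised when they produce unexpected results.
    
    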


    As an example of a discipline engaging deeply in how AI might influence curriculum and instruction, refer to TeachAI and CSTA’s Guidance on the Future of Computer Science Education in an Age of AI



    Suggested Citation:  TeachAI (2025). AI Guidance for Schools Toolkit. Retrieved from teachai.org/toolkit. [date]. 

    Footnotes

    5. UNESCO. 2023. Guidance for generative AI in education and research. Paris, UNESCO. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000386693 (Accessed 22 September 2023.)

    6. Here is further information on how the Peninsula School District’s “AI Principles and Beliefs Statement” was created

    7. Center on Reinventing Public Education. (2025, April 1). Calming the Noise: How AI Literacy Efforts Foster Responsible Adoption for Educators. https://crpe.org/calming-the-noise-how-ai-literacy-efforts-foster-responsible-adoption-for-educators/

    8. Pérez Perez, F. (2024) AI & Accessibility in Education. Consortium for School Networking & CAST.  https://www.cosn.org/2024-blaschke-executive-summary/

    9. Dede, C. (2023). What is Academic Integrity in the Era of Generative Artificial Intelligence? (Blog) https://silverliningforlearning.org/what-is-academic-integrity-in-the-era-of-generative-artificial-intelligence/ 

    10. The White House: OSTP, Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, The White House, https://www.whitehouse.gov/ostp/ai-bill-of-rights/ 

    11.  U.S. Department of Education, Office of Educational Technology, Artificial Intelligence and Future of Teaching and Learning: Insights and Recommendations, Washington, DC, 2023.

    12. International Center for Academic Integrity [ICAI]. (2021). The Fundamental Values of Academic Integrity. (3rd ed). www.academicintegrity.org/the-fundamental-values-of-academic-integrity

    13. Gallagher, H. A., Yongpradit, P., & Kleiman, G. (August, 2023). From reactive to proactive: Putting districts in the AI driver's seat [Commentary]. Policy Analysis for California Education. https://edpolicyinca.org/newsroom/reactive-proactive

    14. Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment. Journal of University Teaching and Learning Practice, 21(6). https://doi.org/10.53761/q3azde36

    15.  Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., and Zou, J. (2023). GPT Detectors Are Biased against Non-native English Writers. Patterns 4,100779.

    The United States White House issued an executive order in April 2025, Advancing Artificial Intelligence Education for American Youth, that underscores the importance of AI literacy.

    Example

    Gwinnett County Schools in Georgia, USA, created an office dedicated to computer science and AI.  “The Office of Artificial Intelligence and Computer Science provides supports for the district’s K-12 Computer Science for All (CS4All) program as well as K-12 competition robotics. The office also supports the AI and Future-Readiness initiative in collaboration with other departments. Future-Readiness emphasizes the learning needed for students to be ready for their futures. As advanced technologies continue to impact the workplace and society, GCPS students will be future-ready as informed users and developers of those technologies.”

    © 2025 TeachAI
