It can also inform national and state/regional guidance and is designed to be downloaded as a reference during guidance development and customized to specific contexts. See Sample Considerations for Existing Policies for language that can be added to existing responsible use, privacy, and academic integrity policies.
This is an example of a resource an education system might provide schools.
This document guides our students, staff, and school communities on the appropriate and responsible use of artificial intelligence (AI), particularly generative AI tools, in classroom instruction, school management, and systemwide operations. Generative AI has potential benefits for education and risks that must be thoughtfully managed.
Artificial intelligence refers to computer systems that are taught to automate tasks normally requiring human intelligence. "Generative AI" refers to tools, such as Bard, Bing Chat, ChatGPT, Midjourney, and DALL-E, that can produce new content, such as text, images, or music, based on patterns they've learned from their training data. This is made possible through "machine learning," a subset of AI where computers learn from data without being explicitly programmed for a specific task. Think of it as teaching a computer to be creative based on examples it has seen! While generative AI tools show great promise and often make useful suggestions, they are designed to predict what is plausible, which is not always what is correct. As a result, their output can be inaccurate, misleading, or incomplete.
This guidance applies to all students, teachers, staff, administrators, and third parties who develop, implement, or interact with AI technologies used in our education system. It covers all AI systems used for education, administration, and operations, including, but not limited to, generative AI models, intelligent tutoring systems, conversational agents, automation software, and analytics tools. This guidance complements existing policies on technology use, data protection, academic integrity, and student support.
The following principles guide the appropriate and safe use of AI and address current and future educational goals, teacher and student agency, academic integrity, and security. We commit to adopting internal procedures to operationalize each principle.
We use AI to help all of our students achieve their educational goals. We will use AI to help us reach our community’s goals, including improving student learning, teacher effectiveness, and school operations. We aim to make AI resources universally accessible, focusing especially on bridging the digital divide among students and staff. We are committed to evaluating AI tools for biases and ethical concerns, ensuring they effectively serve our diverse educational community.
We reaffirm adherence to existing policies and regulations. AI is one of many technologies used in our schools, and its use will align with existing regulations to protect student privacy, ensure accessibility to those with disabilities, and protect against harmful content. We will not share personally identifiable information with consumer-based AI systems. We will thoroughly evaluate existing and future technologies and address any gaps in compliance that might arise.
We educate our staff and students about AI. Promoting AI literacy among students and staff is central to addressing the risks of AI use and teaches critical skills for students’ futures. Students and staff will be given support to develop their AI literacy, which includes how to use AI, when to use it, and how it works, including foundational concepts of computer science and other disciplines. We will support teachers in adapting instruction in a context where some or all students have access to generative AI tools.
We explore the opportunities of AI and address the risks. In continuing to guide our community, we will work to realize the benefits of AI in education, address risks associated with using AI, and evaluate if and when to use AI tools, paying special attention to misinformation and bias.
We use AI to advance academic integrity. Honesty, trust, fairness, respect, and responsibility continue to be expectations for both students and teachers. Students should be truthful in giving credit to sources and tools and honest in presenting work that is genuinely their own for evaluation and feedback.
We maintain student and teacher agency when using AI tools. AI tools can provide recommendations or enhance decision-making, but staff and students will serve as “critical consumers” of AI and lead any organizational and academic decisions and changes. People will be responsible and accountable for pedagogical or decision-making processes where AI systems may inform decision-making.
We commit to auditing, monitoring, and evaluating our school’s use of AI. Understanding that AI and technologies are evolving rapidly, we commit to frequent and regular reviews and updates of our policies, procedures, and practices.
Our school system recognizes that responsible uses of AI will vary depending on the context, such as a classroom activity or assignment. Teachers will clarify if, when, and how AI tools will be used, with input from students and families, while the school system will ensure compliance with applicable laws and regulations regarding data security and privacy. Appropriate AI use should be guided by the specific parameters and objectives defined for an activity. Below are some examples of responsible uses that serve educational goals.
✍️You may want to specifically reference your existing technology use, academic integrity, and student support policies here.
✍️You may want to adjust the paragraph above to denote who will be responsible for setting boundaries of acceptable use in classes and assignments based on your schools’ norms.
Tutoring: AI technologies have the potential to democratize one-to-one tutoring and support, making personalized learning more accessible to a broader range of students. AI-powered virtual teaching assistants may provide non-stop support, answer questions, help with homework, and supplement classroom instruction.
School Management and Operations
🧩 Sample language to consider when reviewing your Responsible Use Policy: Always review and critically assess outputs from AI tools before submission or dissemination. Staff and students should never submit or act on AI-generated content without review.
As we work to realize the benefits of AI in education, we also recognize that risks must be addressed. Below are the prohibited uses of AI tools and the measures we will take to mitigate the associated risks.
Bullying/harassment: Using AI tools to manipulate media to impersonate others for bullying, harassment, or any form of intimidation is strictly prohibited. All users are expected to employ these tools solely for educational purposes, upholding values of respect, inclusivity, and academic integrity at all times.
Overreliance: Dependence on AI tools can decrease human discretion and oversight, allowing important nuances and context to be overlooked or errors to be accepted uncritically. Teachers will clarify if, when, and how AI tools should be used in their classrooms, and teachers and students are expected to review outputs generated by AI before use.
Plagiarism and cheating: Students and staff should not copy from any source, including generative AI, without prior approval and adequate documentation. Students should not submit AI-generated work as their original work. Staff and students will be taught how to properly cite or acknowledge the use of AI where applicable. Teachers will be clear about when and how AI tools may be used to complete assignments and restructure assignments to reduce opportunities for plagiarism by requiring personal context, original arguments, or original data collection. Existing procedures related to potential violations of our Academic Integrity Policy will continue to be applied.
Unequal access: If an assignment permits the use of AI tools, the tools will be made available to all students, considering that some may already have access to such resources outside of school.
Societal bias: AI tools trained on human data will inherently reflect societal biases in the data. Risks include reinforcing stereotypes, recommending inappropriate educational interventions, or making discriminatory evaluations, such as falsely reporting plagiarism by non-native English speakers. Staff and students will be taught to understand the origin and implications of societal bias in AI, AI tools will be evaluated for the diversity of their training data and transparency, and humans will review all AI-generated outputs before use.
Diminishing student and teacher agency and accountability: While generative AI presents useful assistance to amplify teachers' capabilities and reduce teacher workload, these technologies will not be used to supplant the role of human educators in instructing and nurturing students. The core practices of teaching, mentoring, assessing, and inspiring learners will remain the teacher's responsibility in the classroom. AI is a tool to augment human judgment, not replace it. Teachers and staff must review and critically reflect on all AI-generated content before use, thereby keeping “humans in the loop.”
Privacy concerns: AI tools will not be used to monitor classrooms for accountability purposes, such as analyzing teacher-student interactions or tracking teacher movements, which can infringe on students’ and teachers' privacy rights and create a surveillance culture.
School Management and Operations
Compromising privacy: The education system will not use AI in ways that compromise teacher or student privacy or lead to unauthorized data collection, as this violates privacy laws and our system's ethical principles. See the Security, Privacy, and Safety section below for more information.
Noncompliance with existing policies: We will evaluate AI tools for compliance with all relevant policies and regulations, such as privacy laws and ethical principles. AI tools will be required to detail if/how personal information is used to ensure that personal data remains confidential and isn't misused.
✍️ You may want to reference your academic integrity policies here.
🧩 Sample language to consider when reviewing your Academic Integrity Policy: AI tools may be used for brainstorming or preliminary research, but using AI to generate answers or complete assignments without proper citation or submitting AI-generated content as one’s own is considered plagiarism.
💡 For more resources on adjusting teaching and learning to uphold academic integrity:
While it is necessary to address plagiarism and other risks to academic integrity, we will use AI to advance the fundamental values of academic integrity: honesty, trust, fairness, respect, and responsibility.
Additional Recommendations for Advancing Academic Integrity
We will implement reasonable security measures to secure AI technologies against unauthorized access and misuse. All AI systems deployed within the school will be evaluated for compliance with relevant laws and regulations, including those related to data protection, privacy, and students' online safety. For example, providers must make it clear when a user is interacting with an AI system rather than a human.
Staff and students are prohibited from entering confidential or personally identifiable information into unauthorized AI tools, such as those without approved data privacy agreements. Sharing confidential or personal data with an AI system could violate privacy if not properly disclosed and consented to.
💡 For more information to inform ethical AI procurement:
✍️ You may want to reference relevant data privacy and security regulations and policies.
This guidance will be reviewed at least annually, or sooner as needed, to ensure it continues to meet the school's needs and complies with changes in laws, regulations, and technology. We welcome feedback on this policy and its effectiveness as AI usage evolves.
[Last updated: MM/DD/YY]
End of Example Resource
AI Guidance for Schools Toolkit © 2023 by Code.org, CoSN, Digital Promise, European EdTech Alliance, and PACE is licensed under CC BY-NC-SA 4.0