More than two years into the advent of generative artificial intelligence (AI) in K-12 schools, many state departments of education are issuing guidance or policies for responsible school and student use of AI. A helpful map from AI for Education shows that half of U.S. state departments of education have issued guidance on the use of generative AI in K-12 schools (and some districts have issued guidance of their own). The states whose departments of education have issued guidance include: Alabama, Arizona, California, Colorado, Connecticut, Delaware, Georgia, Hawai’i, Indiana, Kentucky, Louisiana, Minnesota, Mississippi, New Jersey, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, Utah, Virginia, Washington, West Virginia, Wisconsin, and Wyoming.
A Good Start: What Recent Guidance Says About Data Privacy
All twenty-five states mentioned (or provided resources that included mention of) data privacy or data privacy principles in their guidance. For a detailed analysis, see FPF’s resource, Summary of State AI Guidance for Schools, which lists the language used by each state. The data privacy content of the guidance typically falls into the following areas:
- Compliance with Federal and/or State Laws: about 20 states reference regulations such as FERPA (Family Educational Rights and Privacy Act), COPPA (Children’s Online Privacy Protection Act), CIPA (Children’s Internet Protection Act), IDEA (Individuals with Disabilities Education Act), and/or other local laws as the baseline for acceptable data handling and privacy practices.
- Data Minimization Principles: about 12 states stress the importance of avoiding inputting PII (Personally Identifiable Information) into AI systems.
- Data Collection and Retention: about 16 states mention or address data collection, use, sharing, and/or storage practices, with an emphasis on limiting data retention and ensuring data is only collected for specific educational purposes.
- Data Security: about 21 states list data security concerns as a focus, with some calling for AI systems to adhere to security best practices, including encryption, authentication, and authorization to prevent unauthorized access.
- Transparency and Parental Consent: about 10 states mention the need for transparency surrounding AI policies: both vendor transparency and school administrators’ transparency with parents and students in how AI tools used at school collect and use data.
- Vendor Contracts and Third-Party Tools: about 9 states stress the importance of vetting AI vendors and ensuring that contracts with third-party AI providers are aligned with data privacy standards, with some including model language.
- AI-Specific Bias Risks and/or Ethical Considerations: about 13 states mention ethical concerns associated with data privacy and AI, particularly around the potential misuse of data and the creation of biased algorithms.
- Professional Development and Guidance: about 8 states highlight the need for (or provide resources for) professional development, support, or training for educators on the responsible use of AI tools, including protecting student data privacy.
- Accountability and Regular Review: about 3 states emphasize the importance of ongoing reviews of policies and agreements given the evolving nature of AI.
Next Steps: Tips for Policymakers for Increasing Guidance Effectiveness
The data privacy principles listed above are integral to responsible, safe, and ethical data privacy practices, and state education departments’ inclusion of them in their guidance on the use of generative AI in K-12 schools is an encouraging start. Even more can and should be done to increase the effectiveness of state guidance when it comes to data privacy considerations. Whether bolstering existing guidance or shaping new guidance, policymakers can provide school leaders more helpful and substantive direction by keeping in mind that the best guidance is:
- Specific. The most effective guidance is clear and straightforward for school leaders to understand and implement. The overwhelming majority of existing guidance on data privacy related to AI use in K-12 schools is superficial, with many states offering little more than perfunctory statements about the importance or risks of data privacy associated with AI and/or the necessity of following existing privacy laws. If state guidance highlights, for example, the need for “establishing strong safeguards” or “keeping student privacy as a primary consideration,” then detailing what those safeguards should be, or how to uphold student data privacy as a primary consideration, would dramatically increase the guidance’s utility for school leaders. States that provided slightly stronger guidance included more specific directives to help schools take the next step, such as language for contractual requirements with AI vendors, data handling protocols, training programs, and clear policies on data collection, retention, and security. Even further specificity would be more beneficial to schools and districts.
- Actionable. School leaders need actionable guidance that gives a concrete roadmap for the use of generative AI. While reviewing or drafting guidance, policymakers should ask: what would it mean in practice if school administrators were to do as the guidance suggests? For example, imagine guidance stating that “student personally identifiable information should be protected when using generative AI tools.” To implement this in their schools, school leaders would need to know how to protect that information, establish a policy on it, train and educate staff and students on that policy, ensure staff and students adhere to it, and enforce it. Actionable guidance that lays out a roadmap or implementation instructions minimizes guesswork and gives school leaders clear steps they can take.
- In Context. The most effective guidance will provide direction in the context of generative AI. Many aspects of student data privacy have been considered for over a decade with the use of education technology (“edtech”) products in schools, and many state and federal laws already regulate the use of student data in the age of edtech and the internet. Guidance that is the most helpful to school leaders will go beyond repeating data privacy principles that have already been stressed in the context of edtech and will provide meaningful direction in the context of AI.
Including student data privacy considerations in existing state guidance is an encouraging first step towards safeguarding student data privacy in the age of generative AI. By creating specific, actionable directives in the context of AI, policymakers can strengthen the effectiveness, utility, and helpfulness of their guidance on data privacy for generative AI use in K-12 schools. In doing so, they can make navigating the new and evolving reality of generative AI in schools less intimidating and more straightforward for school leaders.
Endnotes:
1 (AL AZ CA CO CT DE IN KY NC ND MN MI OH UT WA WV WY)
2 (AZ CA DE GA HI IN NC NJ OR UT WA WV)
3 (AL CA CO CT DE HI LA MN NC NJ OH OK UT WA WV WY)
4 (AL AZ CA CO DE GA IN KY LA NC ND NJ MN OH OK UT VA WA WV WY)
5 (AL CO DE GA LA NC OH OK WV WA)
6 (AL CO DE GA LA NC UT WA WY)
7 Take a look at FPF’s resource for Vetting Generative AI Tools for Use in Schools, including the checklist and accompanying policy brief.
8 (AL CO DE GA LA MN NJ OH UT VA WV WI WA)
9 (AL DE GA LA MI NJ OH WV)
10 (CO GA LA)
11 e.g. “Data privacy, security and content appropriateness should be primary considerations when adopting new technology.” Minnesota guidance.
12 e.g. “All AI application usage should adhere to state and federal privacy laws.” Kentucky guidance.