Examples of country strategies and frameworks on generative AI in education.
Since the public release of generative artificial intelligence (GenAI) tools in late 2022, education systems across OECD countries have expanded or updated their earlier strategies addressing artificial intelligence (AI) in general, and some recent policy documents or initiatives focus specifically on GenAI. Across countries, policy responses converge around three dimensions: the development or update of long-term strategies and of practical guidance and guardrails; initiatives that address perceived GenAI challenges; and curriculum integration, literacy, and professional development. Examples of the latter two are included in Boxes 1.1 and 1.2. This Annex focuses on countries’ strategies and guidance.

Most OECD countries entered the GenAI debate with pre-existing national AI or digital strategies. Since 2023, many have updated these frameworks or issued education-specific documents that explicitly address generative tools. The most common immediate policy response to GenAI has been the development of national or system-level guidance. These documents typically focus on ethical and responsible use, academic integrity, data protection, and the roles and responsibilities of teachers and students. Of the 23 countries that responded to a European survey in 2025, 10 OECD and accession countries (Belgium (Flemish Community), Croatia, Czechia, Finland, France, Hungary, Ireland, Italy, Norway and Türkiye) reported that generative artificial intelligence was formally addressed in their system’s existing or planned strategies, and 9 were developing or planning to develop guidance or policies on the use of generative artificial intelligence in education (Greece, Latvia, Lithuania, Luxembourg, the Netherlands, Portugal, Slovakia, Slovenia and Switzerland). The report notes that, in the European Union, national strategies are increasingly aligned with the EU AI Act, even though its educational implications are still under review.
Most countries have clarified accountability, human oversight and transparency requirements for GenAI use in education. Across systems, restrictions on GenAI tend to be targeted rather than generic. Some countries regulate specific tools or contexts (e.g. assessment), while others rely on device-use policies that indirectly limit GenAI access. Overall, guidance documents emphasise enabling informed professional judgement rather than imposing blanket bans.
Australia: the Framework for Generative Artificial Intelligence in Schools highlights six principles, including diversity of perspectives, non-discrimination, privacy and data protection, and human oversight. Unlike earlier AI strategies, it addresses classroom uses of text- and image-generating systems.
Japan: the government provides guidance on generative AI in schools that explicitly warns against inputting personal or sensitive data into generative AI systems, reflecting early recognition of LLM-specific data reuse and retraining risks in education.
United Kingdom (Wales): the document on Generative artificial intelligence in education: opportunities and considerations is explicitly focused on the use of generative AI, complemented by safeguarding guidance addressing AI-specific risks such as deepfakes and synthetic media.
