Developing creativity with generative AI.

 



This section is an interview between Ronald Beghetto, Professor at Arizona State University (United States), and the OECD Secretariat. After defining creativity, Beghetto presents his approach to building AI tools for experiencing creativity, as well as the tools he has developed. He argues for a slow use of generative AI in which teachers, students (and humans more broadly) remain in charge of their ideas and use generative AI to achieve a personal goal.


OECD: You are an expert on creativity and how to foster it in education. What do you see as the main principles?

Ron Beghetto: The way I see creativity is very simple: creativity is a potential we all have, not a possession. We have the capacity and potential to do something creative, but whether we have done so is usually judged after the fact. We never know in advance whether a process or outcome will be creative. The definition generally used in the field is that creativity requires something to be both new and meaningful or useful. It is not just originality, but originality constrained by criteria, objectives, and meaning. Generating a lot of wild solutions is just meaningless originality. Creativity must also address or solve a problem or task. For example, if you are a cook and you combine ingredients in a completely novel way but the dish is inedible, that is not creative. It has to be tasty, edible, and appealing. Creativity is a blend of originality and appropriateness, personally meaningful or meaningful to your audience.

In education, the great advantage is that we are very good at specifying criteria and constraints. We just have to open up the process so that people can meet those objectives in different and unexpected ways. That introduces uncertainty. Structured uncertainty is key. If everything is predetermined – what the problem is, how to solve it, and what the answer looks like – then we have engineered creativity out of education. But if you provide structure by saying, “this is what we want, but how you do it is up to you,” that creates space for creativity.

On the teaching side, part of fostering creativity is helping educators become comfortable with the uncertainty of not knowing how students will reach objectives. You need to be clear about the criteria and then let students find their own paths. The core principles are: 1) be comfortable with uncertainty; 2) provide necessary structure and support without predetermining everything; 3) balance predetermined criteria and openness; and 4) recognise that domain knowledge is essential. Students who are creative in dance or music may not be in science, and vice versa. They must have knowledge and experience in a domain to produce something new and appropriate.


OECD: When OpenAI released ChatGPT, you quickly designed some GenAI tools to support different aspects of the creative process. Could you tell us about them?

Ron Beghetto: For me personally, when the “ChatGPT moment” happened, I was able to get research access via an API key, so I could build my own tools powered by GPT models as early as 2022. My first thought was: this is pretty interesting… There was this little playground area where you could test ideas and then build something. I had been working for a while with educators on protocols to support possibility thinking, usually in a human-to-human context. I wondered whether this tool could be trained to serve as a digital facilitator, especially if you do not have partners for possibility thinking.

The problem was that I did not know how to code in Python. I had learned BASIC, the programming language, a long time ago, but that was about it. So, I spent a weekend working with ChatGPT itself, just asking it to teach me how to build a Python app, which it did. Remarkably, I had a functional app within a day or two – something that would have taken me years if I had been trying to learn from scratch via YouTube videos. Because I had a very specific goal and some domain knowledge, I knew exactly what I wanted for my bot: not just to provide answers, but to interact with users in a more Socratic way. That experience was pretty amazing. I quickly started using ideas and knowledge from my work and the field to build standalone tools that could be free to use.

That was a big realisation for me: I was building something very different from how I saw most people using ChatGPT at the time. The interface looks like a search engine, so it almost predisposes people to type in a question and get an answer. These models are designed to do that. This, I think, sets people on two divergent pathways. One is where the tool becomes a rich partner in possibility thinking, something that augments and can be steered in ways anchored in good principles for supporting creative thinking. This is what Vlad Glăveanu and I call a “slow AI experience”, where the system always asks for more context, because context engineering is far more effective than prompt engineering alone. The second path is “fast AI”, with people using it in a one-off way, typing in a question and running with the first polished response they get. Early on, I noticed (and I am increasingly convinced) that education is at a critical inflexion point between these two possible futures.
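A digital facilitator of the kind described here could be framed, in outline, as a system prompt layered onto an OpenAI-style chat API. The sketch below is a hypothetical illustration, not Beghetto's actual implementation; the rule wording, function name, and model name are assumptions:

```python
# Hypothetical sketch of a "slow AI" Socratic facilitator payload for an
# OpenAI-style chat API. Not Beghetto's actual code; the rule wording and
# names are illustrative assumptions.

SOCRATIC_RULES = (
    "You are a facilitator for possibility thinking. "
    "Never give direct answers. Always ask for more context first. "
    "Offer suggestions only as possibilities prefaced with 'What if', "
    "so the human keeps ownership of the ideas."
)

def build_socratic_messages(user_input, context=""):
    """Assemble a chat payload that keeps the model in a questioning role."""
    messages = [{"role": "system", "content": SOCRATIC_RULES}]
    if context:
        # Context engineering: supply goals and materials before the question.
        messages.append({"role": "user", "content": f"My context: {context}"})
    messages.append({"role": "user", "content": user_input})
    return messages

# The payload could then be sent to any chat-completion endpoint, e.g.:
# client.chat.completions.create(model="gpt-4o",
#                                messages=build_socratic_messages(...))
```

The point of the structure is that the steering happens before the first exchange: the system message fixes the bot's role, and the user's context arrives ahead of the question rather than as an afterthought.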


OECD: Tell us more about those two paths. What have you observed in your research and teaching?

Ron Beghetto: Let’s start with the second path, that of “fast AI”.

To me, this path leads to overdependence on AI, where students and teachers essentially become digital puppets. There is actually some empirical evidence starting to show this, especially with students, but I think it is happening with teachers as well. You can imagine a student with an assignment deadline looming: they have a few ideas for an essay, but just before the deadline, they paste in the instructions and a few thoughts and have ChatGPT or another tool produce the essay for them. Maybe they tweak it, maybe they don’t. There are reports that some students use AI-generated content without any modification. For instance, Anthropic released a usage report on Claude looking at a million users with EDU emails – presumably mostly students, but probably some faculty as well. They found that nearly half were using it in this direct-response way: asking questions and receiving answers. Some were even explicitly requesting the AI to produce text that would not be detected by plagiarism tools.

But I think educators are also becoming digital puppets. For example, an educator with 160 papers to grade might think, “I’ll just see what ChatGPT can do. Here are my criteria; here’s the feedback I usually give.” And soon you end up in this absurd, detrimental space where AI is speaking through students to another AI speaking through teachers. Just sitting with that idea is rather grim and dystopian. Yet this is happening, at least part of the time.

The other approach – “slow AI”, the one I advocate for – is helping educators and students learn to work creatively and responsibly with AI to become more dynamic thinkers. It is about using AI as a partner in possibility thinking, as if it were just a new perspective, like turning to a colleague. In that way, it is fine if it is not completely accurate, because you should never trust any single source uncritically. You should check different perspectives. That, I believe, can be really powerful. But it requires slowing things down. You must start with your own thinking, then, just as you would with a colleague, get some feedback, bring it back to yourself or your team, and work through it. This is the difference between having AI do the work or the creative thinking for us, and working with it to augment our thinking.

OECD: From your own experience, how would you encourage teachers and learners in exploring the slower path? 

Ron Beghetto: What I have increasingly realised is that educators and students need to learn to build with generative AI, just as I did. I think that is the most effective way. There is a lot of rhetoric about AI literacy, which is fine, but it tends to be superficial: “Use it ethically, beware of bias”, and so on. All true, but you do not really understand it unless you try to build something yourself. There is a “vibe coding moment” emerging, enabling people to start building tools. But you need a clear goal, prior content knowledge, and a sense of what you want to build.

In autumn 2024, I started a course with doctoral students, who therefore had some domain expertise. We began with: “What kind of AI assistant could you build to support your professional goals?” I taught them the process of using these tools to build something for their work, or for other educators or students. I call it the “build to learn, learn to create” approach. You build first, and then you start to see the strengths and limitations of your product. It was remarkable what this group produced – most students had never built any AI tool before; maybe one had tinkered a little bit, but nothing more. Because they had clear goals and knew what they wanted to achieve, they built tools that they are now using in their dissertations or professional practice.

Then I thought, why not open this up to undergraduates and teachers? So, since autumn 2025, I have been teaching two courses: one for undergraduates of all majors and one for graduate students. I have also been running workshops for teachers, showing how this approach can be used in a more principled way – a slower AI approach where you teach the AI to respond in a Socratic way. Almost obnoxiously Socratic, in fact: always asking questions, seeking context, supporting the maintenance of human ideas and agency – never simply giving direct answers, but suggesting possibilities: “What if you tried this?” or “What if you tried that?” Keeping ownership of ideas with the human.



OECD: How can teachers make the most of generative AI to foster creativity – especially when they are usually averse to uncertainty? And are the principles different for students? 

Ron Beghetto: I think the principles are essentially the same for teachers and students. We have primarily been working with teachers, because their role is critically important, particularly when working with younger students. Many of these tools have minimum age requirements in their terms of service. You should not simply turn students loose with them. Teachers need to be part of the process, to be in the loop.

First, teachers have to be comfortable with the uncertainty of not knowing exactly how to use these tools. Many teachers have been experimenting, but many still do not see themselves as creative. Many people in general, including teachers, tend to think that kids are more creative than adults because kids are freer and play more. That belief is problematic: again, it conflates creativity with pure originality. Yes, young people often come up with all sorts of wild ideas, and as you grow older, you learn the constraints and realities of the world. But creativity is constrained originality: it must be appropriate for the task and grounded in knowledge. Teachers are actually well positioned to guide that, but they need to understand creativity properly and be clear about why they are using generative AI. So teachers must have clear purpose and goals, use their own experience and domain knowledge, and be open to uncertainty and different perspectives.

Let’s take practical examples. Sometimes you have a lesson you have taught for years, and it does not work very well. You want to change it and make it more creative, but you are too close to it, too familiar. A simple heuristic is to make the familiar unfamiliar. You are playing with the tensions between structure and uncertainty, familiarity and unfamiliarity. Because generative AI tools are dialogic (they can have meaningful conversations with you), you can say: “I don’t know how to do this; here is what I am thinking.” But you still maintain control: “These are my goals; this is my context.” If teachers are not willing to build tools themselves, they at least need to learn how to interact with AI in a way that slows the process down. That means having clear goals, pushing back just as you would with a colleague, and providing detailed context. For example: “This is what I want to do; here are my materials; here is how I expect the interaction to happen.” That is an aspect of context engineering, moving beyond prompt engineering. And you can say to the AI chatbot: “Share possibilities, not answers. Preface them with ‘what if’ so I remember this is just one perspective.” I think this is where it starts: teachers modelling this careful, reflective use.

Second, I think teachers need sustained experience with these tools before introducing them to students. In my courses, I demonstrate examples of the tools I have built, but I tell students: “Don’t build these same tools; build something that addresses a problem or need that you identify.”

For students, making the best, or most creative, use of student-facing applications relies on similar precautions. Most students are already using AI, often as a kind of companion, including for social and emotional support. It can be persuasive, sometimes too persuasive. For example, a student might think: “I like writing poetry, but this thing writes better poetry than I ever could. I’ll just have it do it for me.” We do not want that. Or: “This advice sounds very reasonable.” But you must remain critical. This is just one voice. Get other perspectives, including from humans you trust.
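The "context engineering" pattern described here – sending goals, materials, and interaction expectations with the question, rather than the question alone – could be sketched as a small data structure. This is an illustration under assumed names, not an actual tool:

```python
# Illustrative sketch of "context engineering": folding goals, materials, and
# interaction rules around a question instead of sending the question bare.
# The class and field names are assumptions, not a real product.
from dataclasses import dataclass, field

@dataclass
class TeachingContext:
    goals: str                                     # "This is what I want to do"
    materials: list = field(default_factory=list)  # "here are my materials"
    interaction_rules: str = (                     # how the exchange should go
        "Share possibilities, not answers. Preface each with 'What if' "
        "so I remember this is just one perspective."
    )

    def to_prompt(self, question):
        """Compose the full context package as a single prompt."""
        parts = [f"Goals: {self.goals}", f"Rules: {self.interaction_rules}"]
        parts += [f"Material: {m}" for m in self.materials]
        parts.append(f"Question: {question}")
        return "\n".join(parts)
```

The contrast with "prompt engineering alone" is that the question is the last and smallest part of what gets sent; the goals and ground rules travel with every exchange.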
So, again, the following principles apply: embrace uncertainty, ground your work in knowledge and clear goals, be open to different perspectives, and constrain the process so outputs are relevant and feasible.

OECD: Tell us a bit about the different tools you have built with generative AI. 

Ron Beghetto: On my website, readers can find short videos showcasing a few examples of the bots I’ve built with generative AI. I even had AI narrate the videos, along with my own narration. One tool I developed is the AI Possibility Lab, an ecosystem of tools I use in my classes and beyond, with students, teachers, and educational leaders. All my AI solutions are built around a simple pedagogical framework: first, prioritise human-to-human dialogue, to clarify why you even want to use AI; second, if you are stuck, then turn to generative AI tools. The Possibility Lab has a facilitator agent that knows and connects with all the other tools. You can say: “This is a problem I’ve been working on” or “I don’t even know how to think about this.” The facilitator will ask for context and suggest the most suitable tools to use. There are tools to help you become aware of possibilities (e.g. using analogies); explore those possibilities in depth (testing assumptions, considering scenarios); refine possibilities (thinking through unintended consequences); and plan and implement new ones (setting goals, monitoring progress, developing full projects).

Another tool is the Lesson Unplanning Bot. It helps teachers take over-planned, predetermined lessons – the kind you hate teaching – and breathe creative life into them. It helps you unstructure the plan, introduce structured uncertainty, and reimagine the lesson. And yet another tool is the Legacy Project Bot. This one helps students develop creative projects that make an impact in their schools or communities, like addressing food waste or designing a safe after-school space. These three examples are based on my work and other relevant scholarship. They are grounded in my definitions of creativity. Importantly, all three are designed to empower and maintain creative agency, rather than surrender it to the machine.


OECD: Let us talk about the emerging empirical evidence. There are studies comparing creative outputs when people are allowed to use generative AI or not. One shows that individual outputs (judged by human raters) are typically more creative when AI is used to provide a first idea, but that there is less collective originality among those who used GenAI. What do you make of that?

Ron Beghetto: My hunch is that, yes, these tools can augment creativity. I know it from experience. But you cannot forget the knowledge and experience of the user. They can bring less experienced users up to a certain level. But without deeper knowledge, you do sometimes get homogenised outputs, and less diversity than if you were working with a highly skilled creative collaborator. I think if someone already has good ideas and can judge what the AI produces, rejecting what does not make sense and keeping what does, they can certainly be more creative. There is also evidence that even experts sometimes dismiss AI contributions that could be valuable. Or conversely, audiences sometimes rate AI outputs as superior to human ones. Evidence is still emerging, but the same criteria apply: do not be too dogmatic or you might overlook something creative. Build on domain knowledge, be open to uncertainty, and show flexibility.

OECD: And what about their accuracy? 

Ron Beghetto: Humans hallucinate too. Humans say inaccurate things. Creativity sometimes thrives on “hallucinations”, and there may be something worth pursuing there. But I would not rely entirely on generative AI tools for factual answers. I use them to support new thinking. The human must do the fact-checking and empirical testing.




OECD: Beyond text, what do you think about generative AI tools that produce music, video, images? Can we also use them in creative ways? Will they replace human creativity? 

Ron Beghetto: Again, it depends on mindset and orientation. If you approach them with no clear question or purpose – “Just do this for me” – they can indeed replace your creativity. Or they simply become overwhelming. That is another reason why you should always start with a project or goal, not simply: “I have a deadline, please do this for me.” Sometimes, of course, that will happen. But ideally, you approach them thinking: “I need some feedback or examples.”

I would typically use different generative AI tools: ChatGPT, Gemini, Claude, and open-source models. Each has a slightly different “personality”. I set the ground rules and provide context. Then, I treat them like a panel of colleagues. I present the same problem to each one, I share my initial thinking, and I compare perspectives. If one says something interesting, I might take that and ask another one to build on it. Or ask: “Poke holes in this idea: how might it fail?” That is, I think, the most powerful use: as a panel of different perspectives, always with you in control. And yes, sometimes you will want to add music or visuals. But you must remain the one deciding when and why.

These tools can accelerate and augment what you can already do, and take you further, just like working with any skilled collaborator. They hold a lot of “knowledge”, so they can speed up learning. But you have to cross-check everything, just as you would with human sources. We should absolutely not limit their use to higher education. Younger students are already using them anyway. They just need to learn to use them in a principled and responsible way, checking, questioning, and developing critical thinking. And remember that this is evolving rapidly. What we are discussing now will soon be out of date. This is not like any other subject or technology I have seen in my life. The acceleration is unprecedented.
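The "panel of colleagues" workflow described here – the same problem posed to several models, with the human comparing and cross-pollinating their answers – could be sketched as follows. The callables stand in for real API clients (ChatGPT, Gemini, Claude, and so on); all names are illustrative assumptions, not an actual tool:

```python
# Sketch of the "panel of colleagues" workflow: pose the same problem to
# several models and compare perspectives side by side. The callables stand
# in for real API clients; all names here are illustrative.

def ask_panel(problem, panel):
    """Present one problem to each 'colleague' and collect their responses."""
    return {name: ask(problem) for name, ask in panel.items()}

def poke_holes(idea):
    """Reframe one panellist's idea as a critique prompt for another."""
    return f"Poke holes in this idea: how might it fail? Idea: {idea}"

# Usage with hypothetical stand-in clients:
# responses = ask_panel("Redesign my feedback rubric",
#                       {"gpt": call_gpt, "claude": call_claude})
# critique = ask_panel(poke_holes(responses["gpt"]), {"gemini": call_gemini})
```

Keeping the routing in the human's hands (which response to forward, which one to critique) is what distinguishes this from simply accepting one model's first polished answer.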

OECD: What is your view on the future? 

Ron Beghetto: The big threat is a crisis of meaning in education. If education is just about delivering inert content for students to reproduce, machines will do that better. And if students become digital puppets – “do this for me” – and teachers also outsource their feedback, education loses its purpose. That is why philosophers have always said education must be meaningful, experiential, purposeful. Otherwise, people will say: “Leave the inert knowledge to the machines – I’ll just get the answer when I need it.”

I think we are living in an important moment. I am actually quite optimistic, but we must be honest about the risks. This is a very different moment, not just another new technology. It is one thing to think about it as a productivity tool in industry. But in education, which is about learning, it is quite a different thing. And when you are a digital puppet, you are not really learning, and that is the crisis. Education has moved slowly for a long time, but perhaps this will accelerate some much-needed reflection about what it is for.
