In October’s Lead the Change (LtC) interview, Bernardo Feliciano discusses his work through the AITeach Co-design Lab at UMass Lowell; this work brings educators, researchers, and technologists together to co-create strategies and tools for teaching in this age of AI. The LtC series is produced by Elizabeth Zumpe and colleagues from the Educational Change Special Interest Group of the American Educational Research Association. A PDF of the fully formatted interview will be available on the LtC website.
Lead the Change (LtC): The 2026 AERA Annual Meeting theme is “Unforgetting Histories and Imagining Futures: Constructing a New Vision for Educational Research.” This theme calls us to consider how to leverage our diverse knowledge and experiences to engage in futuring for education and education research, involving looking back to remember our histories so that we can look forward to imagine better futures. What steps are you taking, or do you plan to take, to heed this call?
Bernardo Feliciano (BF): Currently I am working with colleagues to build a co-design lab that brings together educators from very different contexts to develop approaches to teaching and learning in a world where generative AI is a reality. The lab is called the AITeach Co-design Lab @ UMass Lowell. (The hyperlink goes to one of many one-pagers we have been developing for partners representing different disciplines and sectors).
In the AITeach Co-design Lab, as collaborators we aim to create a structured space where we as a diverse group of educators, researchers, and technologists co-develop practical tools, strategies, and prototypes that respond to the reality of generative AI in education. The intention is not only to design usable products but also to study how to structure co-design itself to help schools navigate AI’s challenges and opportunities. In our co-design sessions, educators, researchers, and technologists build spaces where we can address challenges in education and AI that are too complex for any one actor to solve (Snowden & Boone, 2007; Senge, 1990). The Lab functions as a structured environment where we can bring our problems of practice, iterate on small pilots, and use those cycles to build local capacity rather than waiting for top-down policy.
As an adjunct professor, I am also teaching a class on family and community engagement with schools. These roles constantly remind me that people bring distinct personal, professional, and institutional histories into every space. For me, futuring is less about projecting a single vision of “Education with a capital E” and more about the relational, actor-to-actor work of helping people shape their futures from the personal, professional, and institutional histories they inherit. That’s the direction my work is taking me.
The way I approach this is by convening diverse groups around developing tangible projects. The process matters as much as the specific product, whether it’s a research article, a curriculum binder, a chatbot teaching/learning companion prototype, or a strategy for helping parents connect to schools. What is essential is how people communicate their histories, connecting, adapting, negotiating, and reworking them to carry problems of the present into a viable future. The varied personal and institutional histories participants bring are neither external resources to be tapped nor barriers to be overcome, but active materials in our negotiation of effective, situated teaching and learning. Innovation emerges as members work through these histories, adapting them in relation to one another to meet particular needs. I may not care whether my own work is labeled research, practice, or a mix of both, but as co-designers we must respect each other’s perspectives, even as those perspectives shift through negotiation. AI brings this into focus. At its core, AI is an immense bank or reservoir of the past, trained on and providing access to what is already known or has already been done. The future is not contained in the AI itself—nor can it be left to AI to imagine for us. The future comes from how we draw on that past to build something meaningful with and for the people in front of us. We explore generative AI as both a design partner and an object of study. Co-designers prototype tools like tutoring agents or parent communication bots, while also interrogating what it means to teach with, against, or around AI in everyday classrooms.
Of course, I have to use my own history, experience, and learning as a researcher, teacher, administrator, entrepreneur, and non-profit professional to leverage the network of histories that generative AI offers. But more than before, I can inform, contextualize, and connect the convening and teaching I do now with the work of so many more people and peoples (to some extent) who came before.
LtC: What are some key lessons that practitioners and scholars might take from your work to foster better educational systems for all students?
BF: One lesson is that teachers cannot be treated as passive implementers of someone else’s design. Too often, educational change is imagined as developing a curriculum or program in one place and distributing it everywhere. That assumes context does not matter, that it is peripheral rather than integral to learning and teaching. Our relationship to knowledge is always relational and always contextual.
Education has always lived in the complex space where cause and effect are only clear in hindsight (Snowden & Boone, 2007). Simon (1973) describes these as ill-structured domains existing in a state of dynamic heterogeneity in which diverse elements and relationships continually shift, preventing stable equilibrium and requiring ongoing adaptation (Pickett et al., 2017). Ill-structured problems cannot be solved by importing outside solutions but only by negotiation among those struggling with them. I do not believe that educational change—or improvement—comes from a fixed product or process delivered with fidelity. It is an ongoing process of learning through which people shape what they inherit—choosing what to keep, what to adapt, what to reject, and what to forget. It is a process that I have found universally involves dynamics of local alliances, conflicts, and negotiations. The lesson I take from this is that if you want to improve schooling, you have to engage with the people who are doing the teaching and learning.
“We explore generative AI as both a design partner and an object of study . . . If you want to improve schooling, you have to engage with the people who are doing the teaching and learning.”
Working on my dissertation underscored this point. I wrote about using one-on-one meetings in a researcher-practitioner partnership to organize co-designing a computer science (CS) curriculum for middle schools. My experience brought home to me that there is no such thing as “shared understanding.” What emerges is never a single, final agreement but alignment good enough to act together, sustained through negotiation as perspectives shift. For example, teachers and researchers sometimes differed on how much detail a lesson plan should contain. Some wanted highly specified steps, others only broad outlines. Rather than force uniformity, we kept both versions and moved forward. That flexibility allowed the work to continue without pretending the difference had been resolved.
My work with different kinds of organizations has shown me how funding and infrastructure shape what is possible. This point is kind of obvious but still seems to bear repeating. Creativity and goodwill are not enough without sustainable and intentional support. For example, in the CS Pathways partnership, we shifted from MIT App Inventor to Code.org’s App Lab during remote learning. That solved one problem but created new ones around district procurement and accounts, showing how infrastructure shapes outcomes. In our recent Lab kickoff meeting, one participant noted that even when AI-enabled data tools existed, district procurement rules blocked their use — showing how funding and infrastructure filter what is possible.
At the same time, I saw that students’ and teachers’ own histories can be powerful resources for change, if we work out how to support them as they need to be supported. In one part of the CS Pathways project, students framed their app design around civic issues in their community, such as neighborhood safety and access to resources. Their lived experiences pushed the curriculum beyond abstract coding skills into work that mattered locally. This reframed computer science as a civic as well as a technical practice and shaped how we sequenced and supported instruction in those classes.
LtC: Where do you see the field of Educational Change heading, and where do you find hope for this field for the future?
BF: In my experience, the field often moves toward building monoliths: “the system,” “the conceptual framework,” “the workforce,” “education technology.” Instead of these monoliths, we need to work with lesson plans and pacing decisions that make up “the system,” the overlapping frameworks that guide practice rather than a single “conceptual framework,” the varied teacher and student histories that constitute “the workforce,” and the specific tools and artifacts, from binders to chatbots, that become “education technology.” Monoliths can make things easier to talk about but also risk obscuring the negotiations and translations that are inseparable from those very systems. These relational dynamics are not add-ons. They are the system itself, as much as the actors are (Latour, 2005). As in the earlier example of teachers’ differing preferences for lesson plan detail, the system took shape through the negotiation itself, not through a fixed agreement imposed from outside.
“Relational dynamics are not add-ons. They are the system itself.”
I would like to see the field shift toward paying closer attention to the actor-to-actor interactions and dimensions. That is where change takes shape: when people with different histories and contexts negotiate how to carry those histories forward. I see promising work moving in this direction: Playlab.ai’s participatory approach to AI tool-building, Victor Lee’s co-design of AI curricula with teachers, Penuel and Gallagher’s (2017) and Coburn et al.’s (2021) and others’ emphasis on research–practice partnerships, and Bryk et al.’s (2015) improvement science cycles. The Cynefin co-design principles we are enacting in AITeach — probe, sense, respond — are themselves evidence of a field moving toward valuing negotiation and adaptation over fixed models (Snowden & Boone, 2007).
This is also where I find hope. In my dissertation research, I have seen how a small change in the structure of a meeting can reshape how colleagues relate to one another. Having a teacher go first in one-on-one meetings shifted the dynamic, allowing their concerns to frame the negotiation rather than being a response to requirements. I have seen middle school students reframe ideas in ways that exceeded what I could have planned, such as attempting to build an app to help students and teachers share resources more effectively in school. Students translated apps they were familiar with into tools for their own purposes, which required reimagining instruction around their designs rather than trying to make pre-existing apps seem interesting. This approach may cause an instructional headache, but at least it provided an authentic motivation for learning an aspect of coding.
Some might call this interest or work “micro-level,” but I avoid that term because it suggests hierarchies and fixed layers. I prefer to describe it as the translational dimension: the ongoing work of shaping futures from inherited histories by deciding what to keep, what to adapt, and what to let go.