Most institutions are stuck between two positions on generative AI: blanket prohibition or uncritical adoption. Neither serves instructors or students particularly well, and neither is a strategy.
A workable approach has to address academic integrity, pedagogical opportunity, privacy, and practical implementation – grounded in what the technology actually does, not what vendor demos suggest. That’s harder than it sounds, because generative AI changes quickly and the institutional decisions you make now will have consequences that outlast the current hype cycle.
What an engagement looks like
AI strategy development. Working with institutional leadership to develop a coherent approach to AI in teaching and learning – covering policy, pedagogy, professional development, and infrastructure. This is built around your specific context, values, and capacity, and it addresses questions that generic frameworks tend to skip: Indigenous data sovereignty, accessibility implications, the difference between institutional AI tools and consumer AI tools, and what happens when the technology shifts again.
Practical AI prototyping and evaluation. I build working prototypes, not slide decks. If you want to understand what an AI-powered course assistant, a curriculum mapping tool, or an automated feedback system could look like at your institution, I can build and test a functional prototype using current tools. It’s one thing to discuss whether AI tutoring could work – it’s another to put a working prototype in front of instructors and see what they actually do with it.
AI literacy professional development. Workshops and hands-on sessions for faculty and instructional designers. Practical and pedagogically grounded, with honest discussion of limitations and risks – not overviews of what ChatGPT can do. Typically half-day to full-day sessions, standalone or as part of a broader engagement.
I’ve built working AI tools for teaching and learning contexts – including a Brightspace Course Coach application that connects a local LLM to institutional course data via the Brightspace API. I understand these systems at the code level. I also work on questions of Indigenous data sovereignty in AI contexts, which shapes how I think about institutional AI policy in ways that go beyond the standard compliance checklist.
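To make the architecture concrete, here is a minimal sketch of the pattern described above: course data fetched from an LMS API is used to ground a locally hosted LLM, so student questions never leave institutional hardware. This is an illustration, not the Course Coach source; the Brightspace endpoint path, API version, host, and model name are all assumptions, and the local server is assumed to expose an OpenAI-compatible chat endpoint (as tools like Ollama do).

```python
# Illustrative sketch only. Endpoint paths, versions, hostnames, and the
# model name are hypothetical, not the actual Course Coach implementation.
import json
import urllib.request

D2L_HOST = "https://example.brightspace.com"            # hypothetical institution host
LLM_URL = "http://localhost:11434/v1/chat/completions"  # assumed local OpenAI-compatible server

def build_coach_prompt(question: str, course_items: list[dict]) -> list[dict]:
    """Ground the assistant in course data: the LLM sees only what the
    LMS API returned, and the conversation stays on local infrastructure."""
    outline = "\n".join(f"- {item['Title']}" for item in course_items)
    return [
        {"role": "system",
         "content": "You are a course coach. Answer only from this course outline:\n" + outline},
        {"role": "user", "content": question},
    ]

def fetch_course_toc(org_unit_id: int, token: str) -> list[dict]:
    # Hypothetical Brightspace (Valence) call; real paths and versions
    # vary by instance and must be checked against the institution's API.
    req = urllib.request.Request(
        f"{D2L_HOST}/d2l/api/le/1.0/{org_unit_id}/content/toc",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def ask_local_llm(messages: list[dict]) -> str:
    # Send the grounded prompt to the local model; no third-party service involved.
    body = json.dumps({"model": "llama3", "messages": messages}).encode()
    req = urllib.request.Request(
        LLM_URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The design point is the data flow, not the specific calls: institutional data goes to a model the institution controls, which is exactly the distinction between institutional and consumer AI tools raised above.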
The technical questions here are genuinely interesting. The harder questions are about power, privacy, and purpose.