Digital Transformation (Dx) in higher education has largely been framed as an organizational and IT challenge: enterprise systems, cloud migration, data governance, workforce development. That framing makes sense for the people leading those efforts, and the work is genuinely important. But it can leave a gap for those of us whose daily work is about teaching and learning. What does digital transformation look like when the starting point is pedagogy, not infrastructure?
I’ve been running this site on Hugo for several years now, and for most of that time I used other people’s themes and tweaked things around the edges. A few months ago, I decided to build my own theme from scratch with Claude Code. The theme is called Typeset.
Why build a custom theme?
The short answer: I got tired of fighting someone else’s decisions. Every borrowed theme comes with assumptions about what a blog is and what it should look like. Eventually those assumptions start getting in the way.
A mid-sized research university is developing its institutional response to generative AI. There is no single “AI project”; instead, there are dozens of overlapping conversations happening at different levels: individual instructors experimenting with ChatGPT in their courses, departments writing local policies, a provost-level working group drafting institutional guidelines, and national disciplinary organizations publishing position statements. Some faculty are enthusiastic, some are anxious, and most are somewhere in between.
Leading change in a university involves many kinds of participation at once. In any given week, the same person might chair a committee with real decision-making authority, serve in an advisory role on another, pilot something new in their own teaching, and sit in a conference session absorbing ideas they hadn’t considered before. Each of those is a different kind of engagement, and each contributes to change in a different way.
The 2026 Oscars™ happened, and a questionable “AI Production Studio” and “AI Talent Agency’s”1 “AI Actor” tried to use the occasion to convince The Academy™ that replacing human actors with AI is actually good, and that they just need to fully embrace it in order to unlock their full potential. The argument was presented in the form of a soulless, unartistic, AI-generated “music video” that was basically autotune cranked to 3000 or something, belting out lyrics that had the emotional impact of a corporate press release. If this is “AI Art”, real artists have nothing to worry about.
On the plus side, your humble protagonist has finally figured out how to break out of “the only thing he blogs about is how he uses Obsidian.” Unfortunately, it’s because I appear to be firmly in the middle of a bout of “the only thing he blogs about is how he vibecodes some half-baked idea into a usable thing.”
Last night while watching the Olympics highlights, I was playing around with Claude Code to see if I could implement something I’ve been thinking of for quite a while. What if students had an application that connected to the LMS (Brightspace in our case) and pulled all course materials, info, calendar, assignments, discussions, etc. into a local database, and what if a local chatbot was able to interact with that database to guide a student as they learn? A socratic agent, coaching them without giving them answers. Prompting them as they engage with the course.
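To make the idea concrete, here’s a rough sketch of the “local mirror plus Socratic prompt” shape. Everything in it is invented for illustration: the table name, the columns, and the prompt wording are mine, not Brightspace’s API or any real schema.

```python
import sqlite3

# Hypothetical local mirror of LMS course data (table and columns are invented).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE assignments (title TEXT, due_date TEXT, description TEXT)"
)
conn.execute(
    "INSERT INTO assignments VALUES (?, ?, ?)",
    ("Essay 1", "2026-02-01", "Compare two of the assigned readings."),
)

SOCRATIC_SYSTEM = (
    "You are a Socratic tutor. Never give direct answers; "
    "respond with guiding questions grounded in the course materials below.\n"
)

def build_context(conn: sqlite3.Connection) -> str:
    """Assemble a system prompt for the chatbot from the local course database."""
    rows = conn.execute(
        "SELECT title, due_date, description FROM assignments ORDER BY due_date"
    ).fetchall()
    lines = [f"- {title} (due {due}): {desc}" for title, due, desc in rows]
    return SOCRATIC_SYSTEM + "Upcoming assignments:\n" + "\n".join(lines)

prompt = build_context(conn)
print(prompt)
```

The real version would sync from the LMS API and hand this context to a local model; this just shows the database-to-prompt step.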
Nope. There have been enough of those lately. Recent posts about art, intuition, semantic ablation, cognitive debt, cognitive shortcuts and atrophy. They get at lots of the nuance hidden between “AI is literally SATAN” and “I, for one, welcome our new AI overlords!”. Mostly, (generative) AI is kinda useful for some things, is extremely problematic for many reasons, and isn’t going away no matter how much anyone wants it to.
I’ve been using RSS readers for over 20 years. Most of that time has been spent using the excellent NetNewsWire application, but I’ve used others (including Google Reader, Fever˚, etc.).
After reading Terry Godier’s post on RSS readers being stuck in the email metaphor, I wanted to experiment with some ideas for a “non-email” metaphor for a feed reader interface. The most interesting and useful version of this that I’ve used was the “Hot” view from Shaun Inman’s Fever˚ application. What would it look like to integrate something like that into my NetNewsWire database?
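As I understand it, Fever’s “Hot” view boiled down to counting how many recent articles linked to the same URL. Here’s a toy version of that ranking idea; the in-memory article list is a stand-in, since NetNewsWire’s actual SQLite schema isn’t something I’m documenting here.

```python
import re
from collections import Counter

# Simulated recent articles as (title, html) pairs; in practice these would
# be pulled from the feed reader's database.
articles = [
    ("Post A", 'See <a href="https://example.com/cool">this</a>.'),
    ("Post B", 'Also <a href="https://example.com/cool">cool</a> '
               'and <a href="https://other.org/x">x</a>.'),
    ("Post C", '<a href="https://example.com/cool">again</a>'),
]

LINK_RE = re.compile(r'href="([^"]+)"')

def hot_links(articles, top=5):
    """Fever-style 'Hot' ranking: URLs ordered by how many articles link to them."""
    counts = Counter()
    for _title, html in articles:
        # Count each URL once per article, so one link-heavy post can't dominate.
        counts.update(set(LINK_RE.findall(html)))
    return counts.most_common(top)

print(hot_links(articles))
```

The interesting design question is everything this glosses over: recency decay, weighting feeds you trust, and surfacing the articles doing the linking rather than just the URL.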
Thanks to gentle nudges from Alan, I’ve wrapped up another 365photos “daily photo for a year” project for 2025 and started one for 2026. I’d marked this on Mastodon, but hadn’t mentioned anything here on my blog. Maybe I was waiting to see if I’d actually go for another round? 18 days into 2026 and it looks like I’m still doing it, so…
I started back in 2007, mostly as a way to learn to use my then-new, then-fancy Canon Rebel XT. I carried it in my backpack everywhere and got some interesting shots. I learned to use it pretty well and learned to see differently. Apparently, something clicked because years later I was able to get into a PhD program that required an “artist’s portfolio” and I got in on the strength of my photography. Who knew? #365photos was the beginning of that.
Microsoft keeps renaming things to be various versions of “copilot”. A snarky toot pointed out that there were now 4 MS products called Copilot. I thought there were more, so I asked Copilot (MS365 Copilot™). It thinks there may be at least 7 different Copilots? But its LLM may be out of date…
Prompt:
How can I keep track of all of the things that Microsoft now calls “Copilot”? Is there a map or guide?
I built an AI/LLM-powered “related notes” plugin for Obsidian. It seems to work, but who knows? There’s a video tour, and the code is in a GitHub repository.
I’ve been using Obsidian for a few years now, and have always wanted a good “Related Notes” plugin to help me find things that overlap with what I’m working on, based on the content itself rather than metadata or links. I hadn’t had any luck, so I kind of gave up.
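The core “related by content” idea can be sketched without any LLM at all. This toy version ranks notes by bag-of-words cosine similarity; a real plugin would swap `embed()` for an embedding model, and all the function names here are mine, not the plugin’s.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in for an embedding: a bag-of-words term-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def related_notes(current: str, vault: dict[str, str], top: int = 3):
    """Rank vault notes by content similarity to the current note."""
    cur = embed(current)
    scores = [(title, cosine(cur, embed(body))) for title, body in vault.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top]

vault = {
    "gardening": "compost soil seeds tomato",
    "teaching": "students learning course feedback",
}
print(related_notes("course design and student learning", vault))
```

Word overlap misses synonyms and paraphrase, which is exactly why embeddings are worth the extra machinery; the ranking-and-display plumbing stays the same either way.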