2024 week 40


⚙️ Work

I’m trying a new thing at work, actively sharing links to things that I publish here. I’ve always tried to compartmentalize, and have my blog as a separate thing that I just assumed most people at work wouldn’t see. I’ve shared both the NotebookLM post and the Conflicted post with my team and with leadership groups that I work with. So far, it’s been well received. Here’s hoping I don’t write anything that’s too career-limiting…

In the TI Supervisors Network, we’re reading “What Makes a Leader”, by Daniel Goleman. For kicks, I fired up Ollama and had llama3.1:8b generate a summary. Which it was able to do, after only being given the title of the article and the author’s name:

Can you give me an overview of the article “What makes a leader?” by Daniel Goleman?

Less than a minute later, it spat out a surprisingly useful high-level summary. I then asked it to elaborate on a few points, and it did. As a study aid, this would be amazing - assuming the source material is in the LLM somewhere.

I read the whole thing and made my own notes - but the GenAI “notes” provide an effective starting point.
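For anyone who wants to try the same thing locally, here's a minimal sketch in Python, talking to Ollama's default local REST endpoint (`/api/generate`). The model name and prompt match what I used above; treat the rest as illustrative rather than the one true way to do it:

```python
import json
import urllib.request

# Ollama's default local endpoint (requires `ollama serve` to be running).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Assemble the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def summarize(model: str, prompt: str) -> str:
    """Send the prompt to a locally running Ollama server and return its response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (needs the model pulled first, via `ollama pull llama3.1:8b`):
# print(summarize("llama3.1:8b",
#     'Can you give me an overview of the article "What makes a leader?" by Daniel Goleman?'))
```

Setting `"stream": False` returns the whole summary in one response instead of token-by-token chunks, which keeps the client code trivial.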

Teaching & Learning

  • Tyson Kendon @ UCalgary News: Teach me something. Tyson reflects on his experience teaching a systems administration course, “panicking”, and winding up with a highly engaged and impactful learning experience for both him and his students. Authentic and “joyful” learning can offer a way to genuinely engage students, while simultaneously addressing issues of academic integrity. And what an amazing team, with every single team member thinking deeply about teaching and learning.

  • Alex Usher @ HESA Podcast: Centers for Teaching and Learning with Mary C. Wright. A podcast (with 2 actual humans, I assume), with a full transcript. Mary Wright’s book on centers for teaching & learning has a US-centric focus (note the spelling of Center), but the podcast touches on some themes that might apply more broadly. I think we’re already doing almost all of what’s described at the TI, except for the “creation” part, which I assume means “media production” or something.

  • Gardner Institute: High-Impact Online Teaching And Learning Practices Collection (Direct link to the Collection PDF)(via Natasha Kenny & Peter Felten on LinkedIn)

    The High Impact Online Teaching and Learning course (HIOTLP) provided the opportunity to approximately 200 faculty and educational professionals as they transitioned to online learning during the COVID-19 pandemic. Participants in HIOTLP participated in an online course and were provided with relevant resources and promising practices.  After the course was finished participants were asked to summarize their findings and share them with the Gardner Institute. We are here to share those submissions with you all.

AI

  • Kwantlen Polytechnic University: Generative AI - a KPU T&L resource site for Generative AI (via Tom Woodward)

  • York University: YU AURA Agents - a collection of GenAI tools for use by instructors and students, built on top of MS Copilot.

  • George Veletsianos @ The Conversation: 5 questions schools and universities should ask before they purchase AI tech products (from April 15, 2024, but still relevant. via Clint Lalonde)

  • Ethan Mollick @ One Useful Thing: AI in organizations: Some tactics1

    Mollick talks about the tactics of “Crowd” vs. “Lab”. I think the best approach is a combination of both - we need a Lab to act as an incubator, and we need to use that work to enable the Crowd to continue exploring and innovating and informing the work of the Lab. We have the early start of a Lab, with CAIELI (although this is more of a Hub than a research lab). We have a diffuse Crowd, with people like Soroush Sabbaghan and Sarah Eaton (and many others).

    One of the things I’m working on is sketching out how our work in the TI could provide leadership for a Lab (for learning technologies and pedagogies in general, not just AI), building on our community connections.

  • Marc Watkins @ The Chronicle: Make AI Part of the Assignment.

    …focus on the second step: how to introduce a bit of intentional friction into your students’ use of AI and find ways for them to demonstrate their learning when using the technology.

  • Meta Research: How Meta Movie Gen could usher in a new AI-enabled era for content creators. More demos at their Meta Movie Gen page. This. Stuff. Is. BANANAS.

    Meta Movie Gen demo reel. Meta (2024).

    I’m not sure why this shocked me - it’s just a more-visual representation of the LLMs we’ve seen for 2 years now. But there’s something about seeing video this believable being produced by text prompts that breaks my brain. Rising Sun at least made it look like it was difficult to doctor video, requiring a lot of time, effort, and money to do it well. In 1993, anyway. This? Typing a few words into a textbox and waiting for a video to be spat out? That feels different…

  • Josh Nudell @ Noodlings: Sokrates vs the Machine.

    I won’t tell creatives how to do their work, but I abhor all of these ideas for academic writing. If writing is thinking, as I repeat to my students multiple times a semester, then offloading any part of that thinking to a machine is anathema to the process. ChatGPT can include a list of topics that would appear in a generic essay on your topic, but it cannot articulate what should be in that paper because it doesn’t have a clear argument. You might be able to fine-tune the prompts to develop a workable essay from what the machine spits out, but that requires that you already know the material extremely well, and while this would be a neat trick it still misses the point that a good essay is a clear articulation of what a person thinks. So, too, with drafting, outlining, and even using it to kickstart the process. Yes, the blank screen can be a challenge, but this is why outlining and free-writing exist. In other words, developing a strong process both creates a stronger final product and obviates the need for ChatGPT altogether.

    This connects with the bit at the top of this post, where I used llama3.1 to build a starting point of notes for an article. And it connects to what I wrote in the I Remain Conflicted post as Problem 3: Welcoming Our Robot Overlords. Even if (and it’s a big if) GenAI was ethically and effectively able to provide a starting point for creative work, the very act of offloading the blank page to anyone/anything else (GenAI, other people, anything outside of your own head) short circuits the creative process.

I was digging through my archives, and the oldest post tagged with “artificial intelligence” is this one, from 2010. It still rings true, and is why GenAI and LLMs have the potential to be useful. Not in spitting out videos of surfing koalas, but in acting as an interface to software that actually does stuff. Steinberg’s website is offline now - it looks like he was killed in an e-bike accident back in 2020.

The Web We Make

  • Craig Hockenberry @ furbo.org: Slop is Good (via Jeffrey Zeldman)

    The human component of the web won’t change. People will need answers that they can trust. Folks on the web are also resourceful; they always have been.

    Something new will fill the gap and give people what they need and want. And my guess is that the open web, personal reputation, and word of mouth will be key components of that thing.

    This is why I’ve been blogging for 22 years and counting. We make the web (and society) by actually doing it. And old-school web-heads will recognize both of the names in this link as being OG builders of the web.

    And, connecting with Steve Steinberg’s website going offline after he died, the web we make will slowly succumb to entropy as individual nodes on the network get knocked offline. This website will eventually go away - hopefully not for a long, long time, but it’s inevitable. I’ve built it in a way that will let it coast for as long as possible after I eventually stop updating it, but at some point in the hopefully-distant future, the domain registration will lapse, my web hosting renewal will fail, and poof.

Speaking of the web we make, Matt Mullenweg just went “nuclear” in picking a (new?) fight with WPEngine. Automattic demanded the web host pay $32M annually for using the WordPress trademark, after previously saying people were free to use “WordPress” and “WP” for anything they wanted. But “anything” is now defined as “anything that Matt approves of.”2 He then told the entire staff of Automattic to basically love it or leave it. You never go full Elon. Generous severance packages were offered, but basically saying “get on board or get out of the way (and never come back!)”2 isn’t a good look for someone who is the face of the open-source software that powers almost half of the web. It’ll be interesting to see what the 8.4% of Automattic’s staff who walked away will do now… Matt ends his post with the definitely Musk-like:

As the kids say, LFG!

Cringe. He “feels much lighter.” Good for him, now that he’s gotten what boils down to oaths of fealty from any remaining staff. That’s a pretty blatant power move, even for someone who’d baked their first name into the name of the company.

🍿 Watching

★★★☆☆ Rings of Power (season 2). The most expensive show in the history of shows, riddled with tacky ads, incredible special effects, and the occasional “say the line!” bit. On the plus side, unskippable commercials mean snack breaks are a thing again…

🧺 Other

AI Tinkering

I’m trying out Anthropic’s Claude.ai tool. So far, I’m impressed with how it generates code and answers questions. I wanted to see if it could build a simple weather dashboard. Yup, in maybe 15 minutes of tinkering, live data from OpenWeatherMap, and live webcams from the City of Calgary and others.
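The core of what Claude produced boils down to a couple of small pieces. Here's a hedged sketch of just the weather-fetching part, assuming OpenWeatherMap's current-weather endpoint and a placeholder API key (the webcam feeds and the dashboard layout are left out, and the function names are mine, not Claude's):

```python
import json
import urllib.parse
import urllib.request

# OpenWeatherMap's current-weather endpoint.
API_BASE = "https://api.openweathermap.org/data/2.5/weather"

def weather_url(city: str, api_key: str, units: str = "metric") -> str:
    """Build the request URL for current conditions in a city."""
    query = urllib.parse.urlencode({"q": city, "appid": api_key, "units": units})
    return f"{API_BASE}?{query}"

def summarize_conditions(payload: dict) -> str:
    """Reduce an OpenWeatherMap JSON response to a one-line summary."""
    desc = payload["weather"][0]["description"]
    temp = payload["main"]["temp"]
    return f"{payload['name']}: {desc}, {temp}°C"

# Fetching live data (needs a real API key from openweathermap.org):
# with urllib.request.urlopen(weather_url("Calgary", "YOUR_KEY")) as resp:
#     print(summarize_conditions(json.load(resp)))
```

A dashboard page then just calls something like this on a timer and drops the result (plus the webcam images) into the markup.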

Claude doesn’t feel as natural to use as ChatGPT-4o - I don’t know why, but using ChatGPT feels almost like talking with a colleague, and using Claude feels more like talking to a machine? And Claude has usage throttling, even for its “professional plan”, so working with it can take much longer as you’re told to come back in a few hours.

Weekly posting

I have slowly come to appreciate this weekly format a bit more - it forces me to slow down rather than quickly posting things. I’ll often come back and reword a bit after thinking about it. Or I’ll add things or even remove them3. The point of these isn’t “breaking news!”, it’s “this is stuff I’ve been thinking about this week.” I was worried that I’d lose motivation to post actual things outside of these, but that doesn’t seem to have happened yet.

🗓️ Focus for next week

  • Planning session to start the process of redesigning our learning technology support model
  • Meetings
  • Learning Technology Forum
  • AI community session in our faculty of education
  • AI community conversation
  • Academic GenAI working group

  1. Although Substack is problematic for its free-speech absolutism and platforming of Nazis and the like, some people still insist on using the platform. Mollick can and should move to another platform for OUT… ↩︎

  2. I’m paraphrasing this part. ↩︎ ↩︎

  3. I just deleted a couple of links to news about the continued enshittification of Alberta because I just can’t anymore. They win. The UCP are counting on this, but if I’m going to maintain some sense of sanity I have to just let them freewheel until the next election and then hope enough people in this province are paying attention and kick them to the curb. ↩︎

