⚙️ Work
I kind of hit a wall with fatigue around institutional structures and expectations. Oof. That’s all I’ll write about it here. I have some reflecting, thinking, and planning that I need to do.
🔗 Links
Taylor Institute
- Anna Pletnyova & Lorelei Anselmo: Using AI for Accessibility and Inclusion. Another great resource (web page and infographic) from our team at the TI.
- Erin Kaipainen, Lisa Stowe, Christine Martineau, & Elyse Bouvier: Revisioning the Experiential Learning Framework (pdf) - an outstanding document building on work done through our Office of Experiential Learning.
Design
Nick Heer @ Pixel Envy: Heart and Soul and Efficiency
Nick writes about his experience with digital tools not having “character” and how that might inform the urge to do non-digital things like using film cameras. I think one of the challenges with our digital tools isn’t that they’re clean and perfect (or at least striving to be) - it’s that they’re institutional. The interfaces aren’t ours, aren’t human. They’re produced by abstract others, with no connection to us. They aren’t ours.
This goes for the tools we use for teaching and learning - the LMS is a perfect example of this. There’s no character, no wabi-sabi, no sense of connection with the tool. We are users of tools produced by others, which frames their use as a challenge of efficiency.
At D2L’s Fusion conference back in 2015, Welby Altidor (at the time, Creative Director at Cirque du Soleil) gave a keynote. It was a great presentation about his approach to innovation and technology and transdisciplinary teams, but the bit I still think about almost daily was an almost-throwaway comment he made, about having every CdS project build in an element of punk rock. Something that doesn’t necessarily fit with the polished and produced vibe of the project, but that imbues it with character, with life, with humanity. Just for the fun of it. To make it their own.
And then I think, almost daily, about how institutional processes and structures actively work against that kind of thing. And how it can take 11 months and counting for these processes to click a checkbox to enable a tool they’re already paying for. Punk Rock? Character?
Future possibilities
NSA releases copy of internal lecture delivered by computing giant Rear Adm. Grace Hopper (via Jennifer Ouellette @ Ars Technica)
Watch both parts of this 1982 lecture by Rear Admiral Grace Hopper. It’s on the level of Doug Engelbart’s “Mother of All Demos”. And she was 76 years old at the time. Amazing.
42 years later, and we still have the same challenges - but at the scale of TB or PB rather than punchcards.
And a great bit in part 2, on Management vs. Leadership.
AI
Jill Barshay @ The Hechinger Report: An AI tutor helped Harvard students learn more physics in less time (via Stephen Downes)
An intro physics prof used a guided ChatGPT “tutor” to introduce topics to students. Students appeared to be more engaged (and maybe learned more?) when using ChatGPT. Downes points out that this is only “noteworthy” because it’s Harvard. I think the salient bit isn’t Harvard, or even ChatGPT. It’s:
In this experiment, the “at-home” sessions with PS2 Pal were scheduled and proctored over Zoom.
Students were more engaged with a tutor (AI or otherwise) while (because? despite?) they were being proctored - watched by someone over Zoom. This is my shocked face. And any comparisons made to other courses taken by the participating students are meaningless, unless those courses also had Zoom-proctored tutoring. Curiously, the description of the Zoom proctoring isn’t in the paper - it’s only in Barshay’s article about the paper, which also included information learned by interviewing the lead author. I don’t know how they roll at Harvard, but that should have been disclosed in the Methods section and dealt with in the Discussion section of the paper. There’s no mention of “Zoom”, “proctor”, or even “observe/observed” anywhere in the paper. They did disclose who produced the videos used in the lessons, but nothing about the proctoring…
To verify the active learning emphasis of the class, we asked students, at the end of the semester, “Compared to the in-class time in other STEM classes you have taken at Harvard, to what extent does the typical PS2 in-class time use active learning strategies (i.e. provide the opportunity to discuss and work on problems in-class as opposed to passively listening)”. The overwhelming majority of students (89%) indicated that PS2 used more active learning compared to other STEM courses.
AKA “Compared to my other classes, I spent much more time actively working on problems in this course where someone scheduled that for me and then watched me while I did that.” Anyway. The full paper is:
Kestin, G., Miller, K., Klales, A., et al. (2024). AI Tutoring Outperforms Active Learning. Preprint (Version 1), Research Square, 14 May 2024. https://doi.org/10.21203/rs.3.rs-4243877/v1
Ethan Mollick: Scaling: The State of Play in AI. Includes some charts and data showing that LLMs are getting bigger, requiring more processing and more energy to build, with processing requirements doubling every ~6 months. No mention of the carbon impact of that process. Training GPT-3 produced over 500 metric tonnes of CO₂ just to build the LLM, and that’s now an outdated and small model. Add the datacentre costs to run the LLMs after they’re built…
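A quick back-of-the-envelope on that doubling claim (my own arithmetic, not from Mollick’s post) - a ~6-month doubling time compounds brutally fast:

```latex
% Compute needed to build a model, doubling every 6 months (t in years):
C(t) = C_0 \cdot 2^{t/0.5} = C_0 \cdot 4^{t}
% So over five years:
C(5) = 2^{10} \, C_0 = 1024 \, C_0
```

At that rate, the processing (and the energy behind it) grows roughly a thousandfold every five years.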
Vancouver Community College: Guidelines for Generative AI in Teaching and Learning (via Tannis Morgan)
Daniel Kosta @ AI For Education: Developing AI Guidance for Schools and Districts. Guidance aimed at K12 schools and districts, but their suggestions look appropriate for higher education as well.
Edward Zitron @ Where’s Your Ed At: The Subprime AI Crisis (via Bruce Davie)
The prices are not decreasing, the software is not becoming more useful, and the “next generation” model that we’ve been hearing about since last November has turned out to be a dud. These models are also desperate for training data, to the point that almost every Large Language Model has ingested some sort of copyrighted material.
Speaking of ingesting copyrighted material…
Benj Edwards @ Ars Technica: Landmark AI deal sees Hollywood giant Lionsgate provide library for AI training
Runway plans to develop a custom AI model using Lionsgate’s proprietary content portfolio. The model will be exclusive to Lionsgate Studios, allowing filmmakers, directors, and creative staff to augment their work. While specifics remain unclear, the partnership marks the first major collaboration between Runway and a Hollywood studio.
Jason Koebler @ 404 Media: Project Analyzing Human Language Usage Shuts Down Because ‘Generative AI Has Polluted the Data’ (via Erik Likness)
Why wordfreq will not be updated
The developer is stopping their project because text-slurping tools like wordfreq are now mostly used to harvest data to train LLMs, and this is turning into some kind of ouroboros that they want no part of.
SoTL-ish
OTESSA: Call for Papers: Special Issue on Mapping Terminology, Theory, Policy, and Resources Across Key Areas of Impact In Educational Technology (via George Veletsianos)
I’m thinking of submitting something based on the framework that came out of my dissertation - given the 1,000-word limit, likely a brief version of Chapter 8. I think it fits with the theme of “mapping terminology and theory”, given it’s explicitly mapping connections between HCI and SoTL, but who knows? Previous submissions to other journals didn’t go anywhere because it doesn’t fit neatly.
🧺 Other
Bloggity
- I’m thinking of migrating comments from Disqus (hosted comments, some privacy issues) to a self-hosted application via comment-sidecar. It’s just PHP and MySQL (MariaDB?), so it should be a pretty minimal install - and it can import comments from Disqus (a rough sketch of what that involves is below)… I really don’t like having external dependencies - BUT - even though it’s a relatively simple application, having something that hasn’t had an update in 4 years also makes me uncomfortable…
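For my own planning, here’s a minimal sketch of what the Disqus-to-MySQL part of that migration involves, assuming the standard Disqus XML export format. comment-sidecar has its own Disqus import - this isn’t it, and the `comments` table and column names are hypothetical stand-ins, not comment-sidecar’s actual schema:

```python
# Hypothetical sketch: turn a Disqus XML export into SQL INSERTs, for
# feasibility-checking a migration. Not comment-sidecar's importer; the
# table/column names are made up for illustration.
import xml.etree.ElementTree as ET

NS = {"d": "http://disqus.com"}  # namespace used in Disqus XML exports


def esc(s: str) -> str:
    """Naive SQL quoting for illustration only; a real migration would use
    parameterized queries against MySQL/MariaDB."""
    return s.replace("'", "''")


def disqus_posts_to_sql(export_path: str):
    """Yield an INSERT statement for each comment in a Disqus export."""
    tree = ET.parse(export_path)
    for post in tree.getroot().findall("d:post", NS):
        author = post.findtext("d:author/d:name", default="", namespaces=NS)
        created = post.findtext("d:createdAt", default="", namespaces=NS)
        message = post.findtext("d:message", default="", namespaces=NS)
        yield (
            "INSERT INTO comments (author, created_at, body) "
            f"VALUES ('{esc(author)}', '{esc(created)}', '{esc(message)}');"
        )


if __name__ == "__main__":
    for stmt in disqus_posts_to_sql("disqus-export.xml"):
        print(stmt)
```

If the export parses cleanly and the comment count looks right, the actual import (via comment-sidecar’s own tooling) should be low-risk.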
🗓️ Focus for next week
- Meetings - AI strategy, team 1:1s, a couple of projects, and a consultation.