2024 Week 36


Leadership

Experiential Learning

Flanagan, K., Stowe, L., Martineau, C., Kenny, N., & Kaipainen, E. (2024). The Land and the A.I.R.: Revisioning Experiential Learning on a Canadian Campus. Experiential Learning and Teaching in Higher Education, 7(3). Retrieved from https://journals.calstate.edu/elthe/article/view/4149

Colleagues from the Taylor Institute collaborated on this article, describing our renewed Experiential Learning Framework:

Grounded in the principles of holistic pedagogy inherent in Indigenous ways of learning, we propose a renewed definition of experiential learning – learning by doing, being, connecting and reflecting. This paper introduces the A.I.R Framework (Authentic experience, Intentional design, Reflection), which is a flexible model for high-quality, inclusive experiential learning that is adaptable to both curricular and co-curricular contexts. We also provide a visual tool for portraying and describing experiential learning in terms of the primary focus or purpose of the experiential learning and the environment in which the experiential learning occurs.

The “continuum of experiential learning” figure on page 68 is a great overview of the types of EL.

The journal, for some reason, requires you to create a (free) account to access the PDF of the article. Which is bizarre, but whatever. If you need the article and don’t want to give them your details, let me know and I can send you a PDF.

AI and Creativity

  • Ted Chiang @ The New Yorker: Why A.I. Isn’t Going to Make Art (paywall detour) (via James Gleick)

    If an A.I. generates a ten-thousand-word story based on your prompt, it has to fill in for all of the choices that you are not making. There are various ways it can do this. One is to take an average of the choices that other writers have made, as represented by text found on the Internet; that average is equivalent to the least interesting choices possible, which is why A.I.-generated text is often really bland. Another is to instruct the program to engage in style mimicry, emulating the choices made by a specific writer, which produces a highly derivative story. In neither case is it creating interesting art.

  • Meghan Herbst @ Wired: NaNoWriMo Organizers Said It Was Classist and Ableist to Condemn AI. All Hell Broke Loose (archived version) (via Tyson Kendon)

    NaNoWriMo’s organizers published a statement saying that banning AI is ableist and therefore they won’t ban its use. Meanwhile, authors respond with versions of:

    (Thorne) calls out NaNo for ignoring the public sentiment around AI and filling their statement with “politically correct language so that you can’t argue their stance.”

  • Benj Edwards @ Ars Technica: Generative AI backlash hits annual writing event, prompting resignations

    NaNoWriMo, known for its annual challenge where participants write a 50,000-word manuscript in November, argued in its post that condemning AI would ignore issues of class and ability, suggesting the technology could benefit those who might otherwise need to hire human writing assistants or have differing cognitive abilities.

  • Riley MacLeod @ aftermath: You Don’t Need AI To Write A Novel

    There are of course degrees between writing 50,000 shitty words yourself and generating 50,000 shitty words with AI; I’m not going to say that every possible use of AI in the creation of a first draft is equally reprehensible. But NaNo’s statement dresses the latest fad technology up in the language of social justice, desperately trying to give AI capabilities it doesn’t have or trying to solve problems it itself creates.

    and

    Writing, and all art, are deeply human things, even when the art you’ve made doesn’t meet your hopes or is objectively bad. The joy in making art is in making it; it is, incidentally, also one of the ways you get better at it. The AI grifters who argue that the technology is a valuable shortcut to democratizing artmaking misunderstand what art is for, what it means to make it, and why people engage with it.

But at least LLMs are good at summarizing content, right?

  • Kyle Orland @ Ars Technica: Australian government trial finds AI is much worse than humans at summarizing - a study conducted in early 2024 comparing LLM-generated and human-written summaries.

    But they used now-outdated LLMs (Llama2-70B and Mistral-7B) rather than current or frontier models like ChatGPT-4o or Claude or Gemini or or or. So, this could just be a case of “old, self-run LLMs aren’t as good as humans at summarizing”. The study started in January 2024, so they could have had access to more current LLMs. I’m guessing it was something more like “we got ethics approval to run the study, and in our application (submitted several months before the study can begin) we had to describe how we would analyze human-generated data so we went with something we could control and run locally. Also, we didn’t want to use a third-party-hosted LLM because that would complicate this process and delay ethics approval.”

Thankfully, GenAI will lead the way:

I look forward to learning about the ways that chatbots will help solve homelessness in the state of California.

AI and Higher Education

  • U15 Canada: Navigating AI in Teaching and Learning: Values, Principles and Leading Practices (via Sarah Eaton)

    This document aims to provide guidance for institutions as they navigate the integration of generative AI tools. The guidance focuses on the implications for education as a first priority area. As members of U15 Canada, a network of research-intensive universities in Canada, we are committed to aligning within an ethical framework that builds and maintains trust in support of the adoption and application of these tools. This will guide the development of emerging practices that support our shared values. Given the pace of change in this area, this document must be evergreen ensuring it remains relevant in this rapidly evolving field.

  • UNESCO: AI competency framework for students (via Soroush Sabbaghan)

    The framework outlines 12 competencies across four dimensions: Human-centred mindset, Ethics of AI, AI techniques and applications, and AI system design. These competencies span three progression levels: Understand, Apply, and Create. It also details curricular goals and domain-specific pedagogical methodologies.

Fonts

  • Departure Mono (via John Gruber). This font just feels right to my eyes. I know that the old-school bitmapped fonts aren’t great for readability. But I also know that these are the typefaces that are burned into my brain and they just feel right. I’m trying it out as my website’s font for a bit (after previously trying a version of the old Chicago font and removing that after realizing it made text so much harder to read).

🧺 Other

This week’s ChatGPT-4o experimentation included enhancing my photo gallery pages² to include fullscreen display, keyboard navigation, and parsing filenames into human-readable titles. Again, something I could have done with a bunch of searching and trial and error, or just used a massive JavaScript library to automatically handle this, but the interactive/iterative approach to working with an LLM to do this just feels so much more productive.

And I realize now how much I talk about things feeling certain ways - perception and gut responses are important aspects of how we experience stuff, and discounting that for something “objective” doesn’t feel very authentic.

I also messed around with combining tags in Hugo, using code generated by ChatGPT. I now have a YAML file of tag synonyms, and the tags page and individual tag pages now list content from combined tags. So, where there used to be separate tags for something like “northern voice” and “nv06” and “northern voice 2006” (repeat through 2010, when I stopped going to Northern Voice…), now there’s a single “northern voice” tag listing everything in one place. Or photos for my daily photo project are listed all under the 365photos tag page, despite leap-year photos actually using 366photos. This is all done in Hugo templates, so I didn’t have to edit or search-and-replace tags in the 9,272 items in the Hugo content directory. And I can revert to original tags just by swapping out the template files. The hardest part was trying to figure out how to convince it to actually do the tag comparison/combination case-insensitively, but it looks like it’s working now. It takes a little longer to build the site (~30 seconds instead of ~15 seconds), but that’s still quick enough to not be an issue.
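The synonym lookup can be sketched as a Hugo template fragment. This is a minimal illustration of the approach, not my actual template: it assumes a hypothetical `data/tag_synonyms.yaml` mapping variant spellings to a canonical tag, and lowercases everything so the comparison is case-insensitive.

```go-html-template
{{/* Hypothetical data/tag_synonyms.yaml:
     "nv06": "northern voice"
     "northern voice 2006": "northern voice"
     "366photos": "365photos"
*/}}
{{ $synonyms := site.Data.tag_synonyms }}
{{ $canonical := slice }}
{{ range .Params.tags }}
  {{ $tag := lower . }}
  {{/* if the tag has a synonym entry, use the canonical name; else keep it */}}
  {{ with index $synonyms $tag }}
    {{ $canonical = $canonical | append (lower .) }}
  {{ else }}
    {{ $canonical = $canonical | append $tag }}
  {{ end }}
{{ end }}
{{ $canonical = $canonical | uniq }}
```

A tag page can then gather every page whose canonicalized tag list contains its own canonical name, which is what lets the originals stay untouched in the content files.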

I tried doing the same thing in Claude.ai, but it failed pretty miserably after generating a much more complicated and buggy solution.

And I finally (finally) got on the bike again, after another (another) hiatus. It was my indoor bike, to keep the ride short and not pegged at Zone 5, but it was a start - a quick set of intervals at Medicine Lake in Jasper (filmed last fall, before the latest round of fires).

🗓️ Focus for next week

The fall semester started this week, so meeting season is ramping back up. Meetings to plan D2L content retention, learning technology platforms, and microcredentials. 2 community sessions (learning technology forum and AI conversations), and the President’s Reception for long-service employees.


  1. which, yeah, having a group with a name like that feels pretty far from Startup Mode. Which, of course we’re not a startup and never have been / never will be. But the trappings of tiered layers of management are pervasive. ↩︎

  2. which are just simple PHP files that I copy into a directory full of images, which then dynamically generate a page to display whatever’s in the folder. I copy the file into the index.php file of whatever directory I want to gallery-ize, and it’s done. ↩︎
