Hugo is the static site generator and content management system that I use to publish this website. It works really well, and has some deep functionality that I’m not even touching. For instance, it can parse data files while generating the site - including JSON and XML - and use the content of those files to display information on web pages.
I was going to follow some recipes that I found online, but they involved converting the OPML file into JSON to be read by Hugo. I wanted to avoid that if possible. So, time to roll my own solution using built-in functionality in Hugo…
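Here’s a minimal sketch of what that could look like, assuming the OPML file lives in the site’s assets/ directory as blogroll.opml (the filename and the flat outline structure are illustrative assumptions on my part - Hugo’s transform.Unmarshal strips the root element from XML and exposes attributes as hyphen-prefixed keys):

```go-html-template
{{/* Load the OPML file from assets/ and parse the XML into a map. */}}
{{ $opml := resources.Get "blogroll.opml" | transform.Unmarshal }}

{{/* The root <opml> element is stripped during unmarshalling, and XML
     attributes show up as keys prefixed with a hyphen, so use index
     to read them. Assumes a flat list of <outline> elements in <body>. */}}
<ul>
  {{ range $opml.body.outline }}
    <li><a href="{{ index . "-htmlUrl" }}">{{ index . "-text" }}</a></li>
  {{ end }}
</ul>
```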
In recent years, insurance companies have offered incentives to people who install dongles in their cars or download smartphone apps that monitor their driving, including how much they drive, how fast they take corners, how hard they hit the brakes and whether they speed. But “drivers are historically reluctant to participate in these programs,” as Ford Motor put it in a patent application that describes what is happening instead: Car companies are collecting information directly from internet-connected vehicles for use by the insurance industry.
Inspired by Benj Edwards’ post on Ars Technica this morning, I wanted to try out the latest version of DALL-E that’s integrated into ChatGPT 4 Turbo™, to see what it could create to visualize potential future learning spaces.
I started off with a basic “Create an image of a video game depicting students exploring classrooms and informal learning spaces. Cinematic, 8K, studio lighting.”
🤖 - AI-Generated Content via DALL-E
Interesting. Chalkboards, but sure. Doesn’t look very video-game-y, though. “make a version of that image that’s pixellated as if rendered by an Amiga 1000.”
This post was originally going to be part of a series exploring the topic, but if I take that approach I’ll probably never actually make time to write each post in the series, so here’s an omnibus “I need to think about this stuff and writing a blog post is the best way to formalize my thinking” post.
We’ve all been trying to figure out generative AI since ChatGPT was launched almost 1 year ago. (yes, it’s been less than a year since ChatGPT was originally launched on Nov. 30, 2022. yes, it feels so, so much longer than that.) Much of my work over the last year has involved conversations with team members and university leadership about the nature and implications of generative AI tools, most prominently ChatGPT.
I reached out to folks in SAPL to see if they were interested, and got connected with Matthew Parker. We talked about this, and decided it was worth exploring further. He teaches ARCH 700 - a work-integrated learning course for senior architecture grad students to work with client organizations to research and design architectural solutions. We pitched using the Taylor Institute for Teaching & Learning as a “client”, with students working with us to design possible future learning spaces.
My dissertation explored the connections between the design and analysis of video games and our design and understanding of teaching & learning. Much of that work was shaped by a conversation I had with one of my supervisors:
Figure 1.2: One of the conversations that shaped the direction of this dissertation. AI-generated character portrayal provided by Stable Diffusion.
I had a random thought, triggered perhaps by the tinnitus that is constantly eeeeeeee-ing in the background of everything. What if there were a Mastodon bot that just replied to every toot that mentioned it with a string of eeeeeee’s of the same length as the message?
The second semi-random thought was that I had no idea how to even start to build such a thing.
The third obvious-in-2023 thought was that I’d bet ChatGPT could help with this somehow.
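Here’s roughly the shape of what I had in mind: a minimal sketch using the Mastodon.py library. The instance URL and access token are placeholders, and the tag-stripping and length-matching logic are just my first guess at the idea.

```python
import re

from mastodon import Mastodon, StreamListener

# Placeholder credentials - register the bot account and app on your instance first.
mastodon = Mastodon(
    access_token="YOUR_BOT_ACCESS_TOKEN",
    api_base_url="https://example.social",
)


class EeeeListener(StreamListener):
    def on_notification(self, notification):
        # Only respond to mentions, not favourites/boosts/follows.
        if notification["type"] != "mention":
            return
        status = notification["status"]
        # Statuses arrive as HTML; strip the tags to approximate the visible text.
        text = re.sub(r"<[^>]+>", "", status["content"])
        # Reply with a string of e's the same length as the mention.
        reply = "@{} {}".format(status["account"]["acct"], "e" * len(text))
        mastodon.status_post(reply, in_reply_to_id=status["id"])


# Blocks on the bot account's user stream, replying to each mention as it arrives.
mastodon.stream_user(EeeeListener())
```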
I’d intended to quickly write this post to reflect on the session, but it’s stayed in my drafts pile for a couple of weeks now so I’m going to share what I can remember. I’ll likely be misremembering some of the details of the session, but this should hit the highlights at least.
I’ve been seeing a lot of energy online about bringing the old web back, or bringing the humanity back to the web, or just trying to make some art, dammit. So, here’s my part. This blog is my corner of the World Wide Web. Of the non-corporate, non-monetized, non-advertised, non-user-tracked, human-scale online experience. I haven’t been blogging, partially because I’ve been holding back due to Not Having Anything Profound to Share™. But that’s not how the blogosphere works, so I’m going to make an effort to post more. Maybe I’ll start up the weekly recaps or something, too. Who knows? I have been pretty good about the daily photos thing though…
In Obsidian, I use a folder of notes called “Collections”. Inside Collections are various folders that act as buckets of info, in a quasi-Zettelkasten notes-as-personal-wikipedia kind of thing.
My Collections currently include (in alphabetical order):
Articles
Committees
Ideas
Institutions
Organizations
People
Profiles
Topics
Vendors
Some of these have subfolders to organize notes into smaller buckets. For example, Topics is organized by folders for:
General Tech
Higher Education
Information Technologies
Learning Spaces
Learning Technologies
Misc
Pedagogy
Processes
TI Projects
UCalgary
The folders change - I add to them and reorganize as needed. I’ve only been using Obsidian for about 4 months now, so I figure things will continue to settle as I get deeper into it…
Rambling blog post alert: there isn’t a simple, straightforward way to tell the story of how I use Obsidian. This is going to be a bit of a winding post as I start to describe my setup and workflow. And there will be gaps, because a) I don’t have time to write an omnibus description of this, and b) you don’t want to read that anyway. I’ve been meaning to write this for a while, but kept getting stuck by the scale of what was needed. So, forget that, here’s a first and incomplete blog post to get it started…