⚙️ Work

  • Still sick. Not wanting to be “that guy that keeps coughing and blowing his nose in the office”, I took some sick time and worked from home. I’m not contagious anymore, and need to be in the office next week. Hopefully my respiratory system cooperates.
  • While preparing for our conference presentation, we collected links to a couple of videos to support the description of the course. Jana (one of the students in the course) had published one of her video game demos as a video on YouTube - it’s a remarkable example of using video games to foster reflection on architecture, experience, and teaching & learning.
  • The 2024 University of Calgary Conference on Post-secondary Learning & Teaching. It’s always an excellent conference. I had to skip the pre-conference day because I didn’t want to make anyone sick, but the 2 days of the actual conference were online so I was able to participate fully.
  • Our presentation went well, but we got the last slot on the last day of the conference. Friday afternoons are tricky, as everyone starts disappearing. Not a huge turnout as a result, but it was recorded and we’ve already gotten feedback from someone who watched the recording that afternoon. I will try to write up a blog post version of the presentation at some point.

Sonification

  • Hold the Line. The sonification starts at around the 11:00 mark in the podcast episode. That’s a lot of Zippo lighters, and it sounds like the soundtrack from a Christopher Nolan movie combined with something out of HBO’s Chernobyl. (via Jason Kottke and Grant Potter)

    Hold the Line is a piece of sound art generated by data from Canada’s 2023 wildfire season. Every single fire that was reported by the Canadian Interagency Forest Fire Centre between 1 April and 30 November is represented by a click sound, with each real-world day playing out over 2.5 seconds of sound. A bass note drops at the start of each new day.

  • bert hubert: Tracker Beeper - a sonification of every time a browser sends data to Google. Sounds like a Geiger counter gone amok. Not great. Not terrible. Actually, this is terrible. (via Jan Wildeboer and a boost from Alan Levine)

    I’ve installed googerteller via brew.sh to try it out¹. It’s pretty silent on my site, but it’s surprising just how LOUD it gets on websites that I use regularly. Yikes!
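
    For the curious, the setup looks roughly like this - a sketch, with the Homebrew formula name taken from the project; check the googerteller README for the exact invocation on your platform:

      # install via Homebrew
      brew install googerteller

      # capture packet headers (only tcpdump gets sudo) and pipe them to an
      # unprivileged teller process, which beeps on tracker-bound traffic
      sudo tcpdump -n -l | teller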

AIfication

  • Ruffolo et al. (2024). Design of highly functional genome editors by modeling the universe of CRISPR-Cas sequences. They trained an LLM on a huge dataset of CRISPR-Cas sequences, and used it to generate new, functional gene editors. What would a genAI hallucination look like, when instantiated as a living organism? (via Howard Rheingold)

  • Benj Edwards @ Ars Technica: Microsoft’s Phi-3 shows the surprising power of small, locally run AI language models. Microsoft released Phi-3-mini - a 2.3 GB LLM that performs roughly on par with GPT-3.5. The key difference appears to be that it wasn’t trained on the entire cesspool of the internet, but rather on a curated set of texts. Garbage In / Garbage Out. It’s easily installable via Ollama, and launched in Terminal via:

    ➜ ~ ollama run phi3

    Which gets interesting when I can do things like this with an LLM installed and running on my own lightweight M2 MacBook Air, without needing third-party cloud services or million-dollar GPU farms:

    > write a 1-paragraph summary of the web page at https://arstechnica.com/information-technology/2024/04/microsofts-phi-3-shows-the-surprising-power-of-small-locally-run-ai-language-models/

    Microsoft’s Phi 3 represents a significant breakthrough in the field of AI language models, showcasing remarkable performance capabilities despite its compact size. Developed as part of Microsoft’s PluralSight architecture, which intelligently merges transformers with RNNs, Phi 3 is an energy-efficient model that requires minimal computing resources while offering superior natural language processing compared to traditional AI models like Google’s TensorFlow and Facebook’ Writes. This innovation paves the way for deploying advanced AI capabilities directly on edge devices such as smartphones or IoT gadgets, enabling businesses to enhance their user experience without large-scale infrastructure investments. Moreover, it heralds a shift towards decentralized processing with increased privacy and security considerations at the data source.
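
    Ollama also serves a local REST API while it’s running, so the same model can be scripted without any cloud round-trip. A minimal sketch against its generate endpoint (the prompt is just an example):

      # query the locally-running phi3 via Ollama's local API (default port 11434)
      curl http://localhost:11434/api/generate -d '{
        "model": "phi3",
        "prompt": "Explain, in one sentence, why small LLMs matter.",
        "stream": false
      }'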

    And it can access local files without having to upload them who-knows-where for someone to do who-knows-what with them. I’ve done test prompts to have phi-3 summarize a PDF document in my ~/temp directory, and to provide feedback on how to improve that document. (While also knowing that it’s not actually providing feedback, nor actual suggestions for improvement, but rather just smushing together chunks of UTF-8 in plausible ways.) At least it offers a potentially useful way to experiment with a text-based LLM without feeding the beast.
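
    For reference, that kind of local-file prompt can be wired up like this (a sketch - the filename is made up, and it assumes pdftotext from Poppler is installed to turn the PDF into plain text first):

      # pdftotext writes the extracted text to stdout when given "-";
      # the shell splices it into the prompt handed to phi3
      ollama run phi3 "Summarize this document and suggest improvements: $(pdftotext ~/temp/report.pdf -)"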

    It looks like this will be Apple’s approach to implementing on-device LLMs…

  • VentureBeat: Apple releases OpenELM: small, open source AI models designed to run on-device

Albertafication

I think I need to just share ‘Berta stuff without commentary, because it’s all maddening.

🍿 Watching

  • ★★★★★ 1917 (2019). Wow. Brutal. The impossibly long handheld tracking shots² were incredible and made it a much more human-scale, personal, and immersive experience. I didn’t keep track, but for much of the film the camera was following Will in a third-person perspective, à la video games. This felt like a gimmick at first, but it definitely pulled me into the scene - moving along with Will and Tom, rather than watching from carefully blocked fixed cameras.

🧺 Other

Bloggy stuff

I spent some quality time futzing around with my blog while quarantining. Turns out, search had broken - I’d tried to update to a current version of Flexsearch, and that apparently didn’t go too well. Reverting to the working version of the search javascript from the Hugo Book theme seems to have fixed everything. The fun thing about client-side javascript search is that the .js files get cached by the browser, so even after the fix my browser kept serving the stale version. I figure I’m the only regular user of search on my site (or of my site in general), so a forced cache clear did the job.

Also, build errors in Hugo indicated that several YouTube videos embedded over the years were no longer available, so I set those posts as Draft. The old internet is slowly evaporating…
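
For anyone doing a similar audit, something like this lists the posts that use Hugo’s youtube shortcode (a rough sketch - it assumes a standard Hugo layout with posts under content/):

    # find posts that embed YouTube videos, to check for dead links
    grep -rl '{{< youtube' content/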

Tech stuff

In preparation for my presentation this week, I gambled that a new USB condenser mic could be delivered in time - I was trying to avoid the roomy sound and keyboard noise of the built-in microphone. The gamble didn’t pay off - the mic arrived a couple of hours after the presentation had ended. But it was cheap ($35 for a condenser mic!) and I’ve got it handy on my desk at home for next time…

Searchy stuff

I’m still using Kagi. Their Billing report shows I’ve made 138 searches since April 10. That’s fewer than 10 searches per day, which would fit under the Starter plan (I’m currently paying for the Professional plan, with unlimited searches and access to the AI tools). I’m not sure whether I’ll keep paying for Professional, or downgrade to Starter at the end of the billing term.

Paul Pival reminded me about the Kagi + Wolfram partnership. I’d noticed that some searches produced quick answers right at the top, without listing pages of results - that’s the Wolfram|Alpha integration. DuckDuckGo and Wolfram|Alpha have been partners for a few years now, so this isn’t new - but it’s still useful. I think I had overlooked the integration in Kagi search results because I’d gotten used to seeing it in DuckDuckGo results, so it had just become background.

Search results have been good enough. A couple of times, I wasn’t able to find what I was looking for. Not sure what happened there - if Kagi is using Bing and Google indices via their APIs, it should still be able to find everything. But, most of the time, I found what I needed quickly and with less clutter than DuckDuckGo and Google.

Note-taking stuff

I tried using the Rocketbook notebook during the conference keynotes. It was not a great experience. Writing on plasticky “paper”, with a special pen that I don’t like (I prefer a Zebra Sarasa 0.7mm black, and Rocketbook requires a Pilot FriXion). The app worked well enough - I got 2 PDFs of pages (1 for each day of the conference), and they import into Obsidian without issue. But I can’t tag the pages, or link notes, or paste text or URLs or screenshots, or embed any media, or refer to any other notes - so, basically, all of the functionality I’ve come to depend on is missing. And it comes with a pen I don’t like, on paper that feels wrong. Aside from that, how was the play, Mrs. Lincoln? I’ll probably try it a bit more to see if it clicks for me, but I think I’ll primarily keep using my laptop for taking notes.

🗓️ Focus for next week

  • Meetings.
  • D2L reps are coming to campus for a planning session.
  • Tannis & co. are coming to visit the TI!
  • A couple of medical things. No big deal, but this seems to just keep going.

  1. Nothing like giving sudo access to a command-line tool I just heard about. What could go wrong? (Technically, I only gave sudo access to tcpdump and its output was piped to non-sudoed teller, so it should be fine.) ↩︎

  2. Having seen some “making of” videos, I’m even more impressed by these shots - mixes of handheld passing to crane, to drone, and back to handheld, with invisible CGI filling in some gaps. Incredible. ↩︎