Linkblog - 2026-04-26


Week of Apr 20 - Apr 26, 2026

An attack on teaching and learning centers | Bryan Alexander

Colleges and universities should close teaching and learning centers because they threaten good teaching. That’s the argument of a recent Chronicle of Higher Education column, and I wanted t…

Bryan responds to an incredibly dumb take on university TLCs: that they cause harm and prevent instructors and faculties from engaging with pedagogy. It’s a stupid, stupid take, and one that would only be possible from someone with a superficial understanding of what TLCs do, of how instructors and faculty leaders work with us, and of how our primary reason for being is to support/enable/foster/encourage instructors and faculties to engage deeply with pedagogies within their disciplines - not to act as a source of truth or remediation.

Screenshot

Tags: person:Bryan Alexander, higher education, teaching & learning centres

Added: 2026-04-25 10:34


BEWARE SOFTWARE BRAIN | The Verge

The people do not yearn for automation.


Selected text

Even taking the time to consider how much of your life is captured in databases makes people unhappy. No one wants to be surveilled constantly, and especially not in a way that makes tech companies even more powerful. But getting everything in a database so software can see it is a preoccupation of the AI industry. It’s why all the meeting systems have AI note takers in them now.

(how much of “AI will replace universities” is just a variant of Software Brain, thinking that we just need a big enough LMS with enough data and APIs and hey presto AI solves everything…)

Screenshot

Tags: via:John Gruber, person:Nilay Patel, AI, automation

Added: 2026-04-23 15:01


Data Centred: the Shifting Landscape of Canada’s Digital Infrastructure

Using proprietary asset-level data tracking data centres across their full lifecycle—from announcement to construction to activation—we provide the first comprehensive mapping of Canada’s data centre landscape. Canada’s operational base remains modest, but the announced and under-construction pipeline is nearly an order of magnitude larger. This expansion is spatially concentrated and increasingly rural: Alberta accounts for over 90% of planned capacity despite a grid emissions intensity nearly five times the national average, raising questions about emissions trajectories and stranded asset risk.

Carlo, A., & Rolheiser, L. (2026, March 24). Data centred: The shifting landscape of Canada’s digital infrastructure (Schulich School of Business Real Assets Research Paper Series, forthcoming). SSRN. https://doi.org/10.2139/ssrn.6464099

Screenshot

Tags: via:Stephen Childs, AI, datacentre, hosting, infrastructure, article

Added: 2026-04-22 14:14


Zoom and Tools for Humanity advance trust in the age of AI through new integration | Zoom

Today, Zoom announced a partnership with Tools for Humanity to integrate World ID Deep Face into Zoom Meetings, enabling real-time verification that meeting participants are human to strengthen trust in live communications.

Surely, integrating Sam Altman and Crown Prince Bonesaw’s Magic Eyeball Scanning Orb into Zoom will be a good thing for privacy…

Screenshot

(Zoom, a company trying to rebrand themselves as an AI Company™, blocks the service that generates screenshots of web pages because they don’t like LLM bots. I’m leaving the broken screenshot up though…)

Tags: Zoom, privacy, identity, World

Added: 2026-04-22 09:21


The LLM Fallacy: Misattribution in AI-Assisted Cognitive Workflows

The rapid integration of large language models (LLMs) into everyday workflows has transformed how individuals perform cognitive tasks such as writing, programming, analysis, and multilingual communication. While prior research has focused on model reliability, hallucination, and user trust calibration, less attention has been given to how LLM usage reshapes users’ perceptions of their own capabilities. This paper introduces the LLM fallacy, a cognitive attribution error in which individuals misinterpret LLM-assisted outputs as evidence of their own independent competence, producing a systematic divergence between perceived and actual capability. We argue that the opacity, fluency, and low-friction interaction patterns of LLMs obscure the boundary between human and machine contribution, leading users to infer competence from outputs rather than from the processes that generate them. We situate the LLM fallacy within existing literature on automation bias, cognitive offloading, and human–AI collaboration, while distinguishing it as a form of attributional distortion specific to AI-mediated workflows. We propose a conceptual framework of its underlying mechanisms and a typology of manifestations across computational, linguistic, analytical, and creative domains. Finally, we examine implications for education, hiring, and AI literacy, and outline directions for empirical validation. We also provide a transparent account of human–AI collaborative methodology. This work establishes a foundation for understanding how generative AI systems not only augment cognitive performance but also reshape self-perception and perceived expertise.

Screenshot

Tags: via:Doug Holton, AI, article

Added: 2026-04-22 09:19


AI, A Mirror that Amplifies

The replacement critique misses what AI actually does to thinking.


Selected text

Put two people in front of the same model with the same assignment. One types “write me 800 words on rhetorical friction in classical education” and ships whatever comes out. The other spends forty minutes in an argument with the machine, pushing back on a weak claim, asking for the counterargument, rejecting a tidy metaphor because it flattens something important, noticing that the third paragraph is doing work the second one should be doing. Same tool. Different outputs. The difference is not the software. The difference is the person.

Screenshot

Tags: via:Stephen Downes, AI, person:Tim Moon

Added: 2026-04-20 09:47