D'Arcy Norman, PhD

This fall marked 30 years that I’ve been working in edtech. I’d done some edtech projects before then, but Fall 1994 was when I started doing it for a living. 30 years is a long time, simultaneously forever and gone in a flash. Instead of writing a 50,000-word series of posts documenting minutiae, I took some time to reflect on some of the major themes and changes that have defined my career so far. (The minutiae are already documented on my Projects page, in my Ancient archive, and in my CV.)

Themes

At a high level, I have been involved with developing, implementing, and supporting 6 major classes of technologies. They’re interconnected, but to grossly oversimplify:

  1. Media. Digital media. Interactive media. Multimedia. Hypermedia. HyperCard. Macromedia Director. Flash. Video. Images. Audio. Text.
  2. Networks and The Web. Webservers. Online courses. Blogging. Wikis. Taking networked hypermedia to global scale.
  3. Learning Objects and Learning Object Repositories. Databases of metadata about media published on the WWW, trying to make sense of (and naively trying to impose order on) all of the networked content.
  4. Learning Management Systems. Teaching & learning content and interactions. In databases. On The Web.
  5. Videoconferencing. The shift to synchronous online communication.
  6. Generative AI. 🤷

[Figure: Themes over time, 100% stacked area chart]

(Interactive) Media

What if people could access high-quality content produced and curated by experts so they could learn and develop skills? What if the tools gave learners control over the experience rather than following a broadcast model?

When I began developing the series of psychomotor skills CD-ROMs for the Faculty of Nursing in 1994, video was still predominantly analog and linear. Computers were just getting powerful enough to play postage-stamp-sized digital video, and authoring platforms like Macromind Director provided new tools to control video playback. Digitizing video (from tape) required expensive high-end computers and hard drives. With careful optimization, they could capture video at something like 5Mb/second, in chunks approaching 2 gigabytes. Which, in 2024, seems laughable. The laptop I’m writing this on has an SSD that runs almost 1,000 times faster, and Microsoft Outlook - an app for checking email - is 2.5GB.
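As a quick illustration (my own back-of-envelope sketch, assuming that “5Mb/second” figure means roughly 5 megabytes per second of captured video), the math on those capture chunks looked something like this:

```python
# Back-of-envelope sketch: how much video fits in one ~2 GB capture chunk.
# Assumption (mine, not from the post): "5Mb/second" is read as roughly
# 5 megabytes per second of captured video.

capture_rate_mb_per_s = 5        # assumed capture rate, in megabytes/second
chunk_size_mb = 2 * 1024         # ~2 GB ceiling per captured file

seconds_per_chunk = chunk_size_mb / capture_rate_mb_per_s
print(f"~{seconds_per_chunk / 60:.1f} minutes of video per 2 GB chunk")
# prints "~6.8 minutes of video per 2 GB chunk"
```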

Those early multimedia projects required careful planning - every second of video had to justify its existence due to storage limitations. We’d spend hours optimizing video compression settings to find the sweet spot between quality and file size (and what computers could reliably handle). Today, instructors casually record hour-long presentations in 4K without a second thought. But those early constraints taught valuable lessons about intentional design and the importance of breaking content into meaningful chunks - lessons that remain relevant even in an era of unlimited storage.

Networks and The Web

What if content - and people - weren’t siloed? What if we could connect content from different sources, and help people to connect with each other?

The shift from floppy disks to CD-ROMs to web delivery wasn’t just technical - it completely changed the relationship between content creators and their audiences. Suddenly, updating content didn’t require pressing new discs and shipping them to users. The web made continuous improvement possible. But it also introduced new challenges: How do you design for unknown screen sizes? Unknown connection speeds? Unknown user contexts? These questions forced us to think differently about content design and delivery.

I was part of a team that built the first for-credit online course in Western Canada (or so we were told - who knows?) - hand-rolled HTML, custom interfaces, nascent digital media (RealPlayer?)[1]. It was still essentially custom-built - it took a team of 4 people to produce and maintain a course - but new tools like NetObjects Fusion and Dreamweaver were making that more sustainable.

Maybe more importantly, publishing content on the web meant that you might not know who was accessing it, or what they were finding valuable. This meant a shift from producing bespoke experiences to “let’s just put stuff online and see what happens.”

Learning Objects and Learning Object Repositories

But The Internet is chaos! It’s impossible to know what to trust for use in a course! We need tools to make sense of it all! People need help publishing high quality content and describing it with rigorous metadata!

The learning objects movement represented an attempt to bring software engineering principles to educational content. The promise was compelling: create once, reuse many times, just like software libraries. The reality proved more complex. Context matters more in education than we initially assumed. A perfect learning object for one instructor might need significant adaptation for another. We learned that reusability often comes at the cost of effectiveness. And the term “Learning Object” became so genericized that it became meaningless.

I built the first working learning object repository in Canada, one of the first anywhere. It began as a rough prototype of the concept, and grew into something that formed a core of several major repository projects. It then became the foundation of the Pachyderm authoring platform, which was used to produce online exhibits for museums and institutions around the world - learning objects in action.

But the promise of the reusable learning object just didn’t work out (and the death of Flash meant that our bet on that technology was misguided, and all Pachyderm-authored content disappeared when Flash was finally killed in 2020).

Metadata is expensive - in time, resources, processes, friction - and people just don’t think in metadata. Stephen Downes suggested that a repository, and its collection of metadata about content, wasn’t actually necessary - all we needed to do was to publish content on the web and let search engines find it and index it. Heresy! But he was absolutely right. Attention Is All You Need, 20 years ahead of the curve.

Learning Management Systems

Now that we have people and content connected over networks, what if we provide a framework that makes it more useful for online teaching and learning?

When I started, online learning meant hand-coding HTML and FTPing files to webservers. The rise of Learning Management Systems promised to democratize online teaching by providing a structured framework that any instructor could use, regardless of their technical skills. But that promise came with tradeoffs that we’re still grappling with today.

I have built 2 learning management systems - one based on rich multimedia (which then bridged the gap from CD-ROM delivery to online courses hosted within Macromedia Pathware) and one built on top of learning objects - and implemented another that is used daily by 40,000 people.

Early LMS platforms like FirstClass and WebCT showed what was possible - integrated tools for content, discussion, and assessment. Instructors could focus on teaching rather than technology. Students got a consistent experience across courses. Authentication and access provisioning were simple, and there were tools for documenting and sharing grades. It seemed like a win for everyone.

But the real story was more complex. LMSs tend to shape pedagogy in subtle ways. Their design implies a certain model of teaching and learning - one based largely on content delivery and assessment. Tools that come bundled with the LMS get used; other approaches often get left behind. “Click here to add content” is seductive in its simplicity, but it can lead to “shovelware” - taking face-to-face materials and simply dumping them online. “I uploaded a PDF of my lecture PowerPoint - I’m teaching online now!”

Instructors who had been doing interesting things with wikis, blogs and other web tools felt pressure to move everything “inside the system.” The arguments were always reasonable - privacy, security, support, ease of use, standardization. But something was lost in that transition. The messiness and serendipity of the open web gave way to the uniformity of the walled garden.

That said, LMSs solved real problems. They made online teaching accessible to many more faculty. They gave students a more coherent experience. They reduced the technical burden on everyone. The question was not “should we use an LMS?” but rather “how do we balance standardization with innovation?”

Looking back over 30 years, I’ve seen this tension play out repeatedly as we swing between locked-down consistency and open experimentation. The most successful instructors found ways to make the LMS a foundation rather than a cage - providing core services while remaining open to integration with other tools and approaches.

Videoconferencing

And what if these networked connections were synchronous?

Videoconferencing tools have been getting better - and are much more capable than the Connectix QuickCam running over AOL that I had to fly to Delaware to get working for a project back in 1996 or so. Or the iSight camera that Apple had to hand out to developers in 2003 just to get them using videoconferencing.

That Delaware trip to set up a QuickCam perfectly encapsulates early videoconferencing: everything was harder than it should have been, and the quality was pretty crappy. The technical hurdles were enormous, but we explored it because the potential value was clear. Seeing faces and hearing voices adds a human dimension that text alone can’t match. It took a pandemic to finally push videoconferencing into mainstream education, but the groundwork had been laid years before.

We’d been using Elluminate for online synchronous classes - a Java-based online classroom platform developed by a company based in Calgary. It worked well enough, but when the Java security model changed, it suddenly became completely unreliable. We needed to identify an alternative and shift to it ASAP. So, we implemented Adobe Connect (deploying it on a server hosted on campus), and took a semester to start shifting courses over to it. That was the fastest major transition we’d seen at our institution, and it was mostly successful.

Then, almost a decade later, COVID happened. Suddenly, the Connect server’s 500-simultaneous-student capacity wasn’t even close to sufficient, and self-hosting a platform that could scale wasn’t an option. So we implemented Zoom as a new platform - and took 24 hours to go from “we need a campus license” to “here’s the UCalgary Zoom platform, integrated with D2L” - and everyone was using it in their courses. Now, everyone had access to a videoconferencing platform that, whatever my personal opinions of it, would have felt like science fiction to the guy who had to fly to Dover to configure a webcam.

The biggest takeaway for me was that platforms - the edtech stack - had become reliable, scalable, and easy to implement. It had also become incredibly expensive, so the main challenges to successful edtech platforms now boil down to 1) money and 2) pedagogy (but mostly money).

Generative AI

And what if we took all of the other stuff, threw it in a blender, and saw what might happen?

In many ways, generative AI represents the convergence of everything we’ve worked on for the past 30 years. It combines media creation, networked distribution, content management, and synchronous interaction (with a chatbot) in ways we could barely imagine in 1994. The questions we’re wrestling with now echo earlier debates: How do we balance accessibility with quality? How do we maintain pedagogical effectiveness while embracing new capabilities? How do we support faculty in adapting their teaching to leverage new tools?

I have no idea where genAI tech will go, and what our collective responses to it will be. But, looking back at how we’ve adapted to previous themes, I’m guessing there will be a push to impose structure on the chaos, to contextualize it for teaching and learning, and to “democratize” it somehow.

Roles

How I’ve worked has also changed pretty dramatically over these 30 years, in three distinct eras. Initially, I was primarily a programmer - a maker of things. That wrapped up at the end of the CAREO project, when I shifted entirely into providing consultation for instructors. Then, I started to shift more into leadership (of teams and projects) and strategy (helping to figure out what we need to do).

[Figure: Roles over time, 100% stacked area chart]

My career path mirrors the evolution of educational technology itself during these decades. In the ’90s, we had to build everything from scratch - there weren’t existing tools that met our needs. As the field matured, the challenges shifted from technical to pedagogical. How do we effectively use these new tools? How do we support faculty in transforming their teaching? Eventually, the questions became strategic: How do we make sustainable choices about technology? How do we balance innovation with stability? Each role brought new perspectives on the fundamental challenge: using technology to enhance learning.

One thing I’ve noticed is that the sense of creativity, curiosity, exploration, and playfulness that defined the multimedia and early Web eras seems to have been subdued over the last decade or so. Part of that is due to the maturity of software and tools. Part of it is the neoliberal erosion of institutions. Part of it is a product of a shift from a creator mindset to a consumer mindset. Edtech is now commoditized, productized, sold to us. It promises efficiency and scale - so that we can better respond to things like restricted operational budgets - to do more with less.

The role of edtech in higher ed seems to have been reduced to selecting tools, integrating them, and supporting their adoption and use. Which, sure, necessary. But not creative. Not curious. Not explorative. Not playful. I want to focus the next stage of my career on shifting my institution back toward embracing curiosity, creativity, exploration, and play.


  1. and not enough bandwidth to play it reliably, so we developed a hybrid CD-ROM solution and mailed CDs to students ↩︎
