Remembering CAREO

Today is a memorable day. It’s the day that CAREO, the learning object repository we built at The University of Calgary, is being officially decommissioned. Unplugged, mothballed, and put into storage. It’s been a wild rollercoaster ride these past 6 years, but that ship has sailed. Back in 2001, when CAREO was first created, there was a need for a concrete prototype of a repository. Other available software didn’t quite do what we had in mind, and it was relatively easy to just go ahead and build some software to test out some ideas.

I was coming out of the First Dot Com Bubble Burst, having just gone down with the ship at an eLearning company in March 2001. So I had some free time, and wanted to learn something new. I was asked if I could put together a working repository, and naively said “sure”. I’d never built any server software, but wanted to play with WebObjects. This was the perfect opportunity. The project picked up a shiny new PowerBook G4 (400MHz! Holy cow!) for the repository to be built on, and I got to work from my home office. Things went well, and I used EOModeler to model the nearly-final IMS LOM metadata specification as a set of 80-or-so tables with joins all over the place. Oy.
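If it sounds ridiculous that one metadata spec becomes 80 tables, consider how LOM is shaped: nine categories of nested, repeatable elements, with language-tagged strings at the leaves. A faithful relational mapping turns every level of that tree into its own table plus a join. Here’s a toy sketch of one small corner of it in Python – the element names come from the LOM spec, but the modelling is illustrative, not CAREO’s actual schema:

```python
from dataclasses import dataclass, field

# A tiny slice of the IMS/IEEE LOM tree, to show why a faithful
# relational mapping balloons into dozens of tables: almost every
# element can repeat, and the leaves are language-tagged strings,
# so each level of nesting becomes its own table plus a join table.

@dataclass
class LangString:          # -> one table: (id, lang, value)
    lang: str
    value: str

@dataclass
class Taxon:               # -> one table, plus a join to TaxonPath
    id: str
    entry: list[LangString] = field(default_factory=list)

@dataclass
class TaxonPath:           # -> one table, plus a join to Classification
    source: list[LangString] = field(default_factory=list)
    taxons: list[Taxon] = field(default_factory=list)

@dataclass
class Classification:      # -> one table, plus a join to the LOM record
    purpose: str = ""
    taxon_paths: list[TaxonPath] = field(default_factory=list)

# Four tables and three joins for one corner of *one* of LOM's nine
# categories -- multiply that across the whole spec and 80-odd tables
# isn't crazy at all.
```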

The first alpha version went live from my home office, served up over my home cable internet connection, and using DynDNS to make it available. Worked like hot damn. The hardest part of the whole process was in learning to stop thinking so hard and just let WebObjects work its magic.

Soon, CAREO was scaled up, features added, and content contributed. A server was acquired (a shiny new XServe rev 1.0). It was a self-contained, standalone repository. Others started to show some interest, and I had the pleasure of working with Brian Lamb, the Learning Objects Discoordinator at UBC’s OLT. We set up a copy of CAREO on a UBC XServe. Gerry Paille set up a copy of CAREO at Peace River. There was a copy at the University of Ottawa (they actually got significant funding to run their copy – much more than we ever saw to cover the building of the thing in the first place…).

Over the first 3 years of CAREO’s life, there was a flurry of development activity. I added features such as wiki pages and threaded discussions tied to each learning object. A “theme engine” was built so people could customize the look and feel of the repository application interface. A custom “SciQ” K-12 science repository was built, and used in the Alberta science curriculum. I added RSS feeds for the “newest objects” and “top objects”, as well as for user-defined searches. Support for Trackbacks from other software was implemented, letting people add context to the learning objects via weblogs or other trackback-enabled services.
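For anyone who never saw it in action, TrackBack was a beautifully simple protocol: a weblog POSTs a form-encoded ping at a per-object URL, and gets a tiny XML acknowledgement back. A minimal sketch of the receiving end in Python – just the shape of the protocol, nothing like CAREO’s actual WebObjects code:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

PINGS = {}  # object id -> received pings; stand-in for CAREO's real store

class TrackbackHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # A TrackBack ping is just a form-encoded POST (title, url,
        # excerpt, blog_name -- only url is required) aimed at a
        # per-object endpoint like /trackback/42.
        object_id = self.path.rsplit("/", 1)[-1]
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode())
        url = fields.get("url", [""])[0]

        if url:
            PINGS.setdefault(object_id, []).append(
                {k: v[0] for k, v in fields.items()})

        # The spec's reply is a tiny XML document; error 0 means accepted.
        reply = ('<?xml version="1.0" encoding="utf-8"?>'
                 f'<response><error>{0 if url else 1}</error></response>')
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.end_headers()
        self.wfile.write(reply.encode())

HTTPServer(("", 8080), TrackbackHandler).serve_forever()
```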

The custom relational database for storing metadata was replaced with a multifunction generalized XML-in-MySQL store written by Julian Wood, along with adoption of the JUD XML-RPC API – the repository now talked to a separate data store through an API. People could use the ALOHA client application to manage their learning objects – adding metadata, uploading media, etc… – and CAREO would pick it up automatically because it was talking to the same abstracted metadata store through the same API.
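I don’t have the JUD API docs in front of me anymore, so the endpoint and method names below are made up, but the shape of the thing is right: CAREO and ALOHA were just two XML-RPC clients of the same store. Roughly, in Python terms:

```python
import xmlrpc.client

# Hypothetical sketch of talking to a shared metadata store over
# XML-RPC. The endpoint URL and method names are invented for
# illustration -- the point is the architecture: any client that
# speaks the same API sees the same objects.
store = xmlrpc.client.ServerProxy("http://repository.example.org/jud/rpc")

# One client (say, ALOHA) adds an object's metadata record...
record_id = store.metadata.store("""<lom>
  <general><title><string lang="en">Intro to Mitosis</string></title></general>
</lom>""")

# ...and any other client (say, CAREO) can search it back out,
# because both are hitting the same abstracted store.
hits = store.metadata.search("mitosis")
```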

A bridge to the EduSource network of learning object repositories was built, making it possible to search from one repository and find learning objects scattered throughout the network through a custom inter-repository API. That API and network cost a lot of money and time. And didn’t work as well as Google.
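Under the hood, inter-repository search boils down to fan-out-and-merge: fire the query at every node, gather whatever comes back, and don’t let one dead node sink the whole search. A hand-waving sketch, with placeholder endpoints and a canned search call – the real EduSource plumbing was vastly more elaborate, which was rather the problem:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder endpoints -- not the real EduSource nodes.
REPOSITORIES = ["http://careo.example.org", "http://repo2.example.org"]

def search_one(repo_url, query):
    # Stand-in for one node's search call; in real life this was
    # custom XML over HTTP, per the inter-repository API.
    return [f"'{query}' hit from {repo_url}"]

def federated_search(query, timeout=10):
    # Fan the query out in parallel and merge whatever comes back,
    # skipping any node that errors out or times out.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(search_one, r, query) for r in REPOSITORIES]
    results = []
    for f in futures:
        try:
            results.extend(f.result(timeout=timeout))
        except Exception:
            pass  # a dead node shouldn't kill the whole search
    return results

print(federated_search("mitosis"))
```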

I spent a fair amount of time experimenting with native XML databases to store the LOM metadata – BlueStream XStreamDB and eXist have both matured so much over the years. A Sherlock search widget (this is pre-Dashboard) was built to let people search the repository from their desktops. Installers were built to make it easier to get your own copy of CAREO on your own server.
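eXist in particular makes this kind of experimentation easy – it ships with a REST interface that takes an XQuery right in the URL. Assuming a stock localhost install with LOM records loaded into a (hypothetical) /db/lom collection, a query from Python looks something like:

```python
from urllib.parse import quote
from urllib.request import urlopen

# XQuery to pull the titles of LOM records matching a keyword.
# The /db/lom collection is a made-up example; the _query parameter
# on eXist's REST interface is real.
xquery = """
for $lom in collection('/db/lom')//lom
where contains(string($lom//title), 'mitosis')
return $lom//title
"""

url = "http://localhost:8080/exist/rest/db/lom?_query=" + quote(xquery)
print(urlopen(url).read().decode())
```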

Heady times. And most of the work was done with surprisingly little financial support. We were able to do a hell of a lot with what we had, though.

Then, things pretty much stagnated. Development stopped, as we focussed on higher priority (i.e., funded) projects. Other software matured to the point where it was difficult to justify maintaining a custom repository application. If I were to start from scratch now, I could deploy a more fully-featured repository powered by Drupal without having to write any code.

Over the years, I’ve been asked several times by people investigating learning object repository software for implementation at a national level. Each time, I said that although the source code for CAREO is available, it would be a much more effective use of resources to just go ahead and use Drupal. Work with the larger community, and don’t write (or sustain) code that you don’t absolutely have to.

CAREO was important, back in 2001-2004, as a prototype. As a sandbox for trying out some of these concepts. As a place to easily host metadata and content and try the repository model. From that perspective, I think it was a huge success. Without CAREO, I would likely still be saying that we need centralized institutional repositories to tightly manage resources.

But, because of CAREO, I now know that we don’t need repositories at the institutional level. Personal repositories are much more powerful, effective, and manageable. They’re called blogs, maybe you’ve heard of them? And small pieces, loosely joined. Want to manage photos online? Use Flickr. Videos? Use YouTube/GoogleVideo/etc… We don’t need a monolithic institutional repository.

RIP CAREO

And now, it’s Halloween 2007. And we’re about to decommission our CAREO server here at UCalgary after 6 years. The software has been acting up, and it’s just not worth the time and effort to figure out what’s gone crufty. So it’s time to put it out of its misery. Farewell, CAREO. Thanks for the good times. I’ve learned a LOT about software design, information architecture, and metadata. More importantly, I had the pleasure to meet and work with a LOT of awesome people, all working on similar projects because they believe (as I do) in the greater good. Sure, we were naive, but we meant well. And now, hopefully, people will learn from our successes, failures, and mistakes, and not be doomed to repeat them.

Hotels and Price Gouging

We’re working on a project with some folks at the CHR, and they are travelling to a conference to present their courses and talk about the process. Part of that presentation will be a live demo of the Moodle-powered site and some of the cool Breeze content we put together for them.

The hotel (which shall remain unnamed for now) sent them a sheet, asking what technical services they would like for their 1 hour presentation. Included in that sheet was this portion, listing the costs per service:

starwood price gouging

I had to resize it to fit here, so it’s a bit hard to read, but the basics are:

  • Internet connection (wired): $350
  • Internet connection (wireless): $350
  • Telephone: $175

If that’s not the definition of price gouging, I don’t know what is. That’s insane. At these rates, a month of internet access would run $10,500 ($350 a day × 30 days). And that’s in Canadian money, not that wimpy US stuff!

I could almost see how they could justify these rates if the conference was some hodey-dodey high-flying billionaires’ club meeting, or maybe a Web 2.0 pre-bubble-bursting lovefest. But this is a medical education conference.

If I had to pay $350 to have an internet connection during a presentation, I just plain and simple wouldn’t do it. But these folks have committed to giving a live demo, and the only way to do that is to grab some ankle and ask for more.

First thoughts on Leopard

Others will write deeper, more profound posts describing what’s so freaking cool about MacOSX 10.5 Leopard. This post is just my initial gut reaction. Want more meat? Surf over to arstechnica.com.

I’ve played with seeds of 10.5 for what seems like years (but is really only a year?) through our Apple Developer Connection subscription. But all of my previous experience was in carefully isolated cleanroom installations, to prevent any bugs from nuking my production system. I’d never tried an upgrade install. I’d never run it for more than a day or two tops because bugs and instability sent me running back to 10.4. So, this is my first real time in Leopard, without an alternate or backup system running a previous version Just In Case™.

My initial thought after install, which I’m sure is hardly unique, was along the lines of “holy frack. it worked perfectly. it just fracking worked.” Seriously. Every app I use still works. All preferences are retained (even my custom dock-pinned-at-start setting). Trivial upgrade to the new OS. Gotta love that.

After that, I played with some of the new toys. Spaces is absolute brill. I’ve used other virtual desktop apps. I paid for CodeTek Virtual Desktop. I used the Open Source Desktop Manager. I used the other Open Source Space app. I’ve played with virtual desktops in Ubuntu. But Spaces just feels right. Dragging apps between desktops? Very cool. It’s got the best features of the others, without any bloat. Just right.

Time Machine. I plugged in a LaCie 500GB Big Disk Extreme, and 10.5 asked me if I wanted to use it for a Time Machine backup drive. Sure. Why not? I’ll give that a shot. Time Machine sounds pretty cool. So I let it chew (for a couple of hours) to do the initial backup set.

Time Machine initial progress bar

No kidding. 1.4 MILLION files. 124.5 GIGABYTES of data. And I don’t have to think about backing any of it up. Ever again. It’s fully automatic. IIRC, Time Machine keeps the last 24 hours of HOURLY backups, the last month of DAILY backups, and as many WEEKLY backups as your drive allows. That’s so freaking awesome I can’t even put it into words. Knowing that EVERY FILE I USE is backed up already? Priceless.
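In other words, backups get thinned as they age. If you wanted to model the retention policy, it’d look something like this toy sketch – my reading of the policy, definitely not Apple’s actual code:

```python
from datetime import timedelta

def backups_to_keep(snapshots, now):
    """Toy model of Time Machine's thinning: every snapshot from the
    past 24 hours, the newest snapshot per day for the past month,
    and the newest snapshot per week for anything older (until the
    drive fills up, which this sketch ignores)."""
    keep, seen_days, seen_weeks = set(), set(), set()
    for snap in sorted(snapshots, reverse=True):    # newest first
        age = now - snap
        if age <= timedelta(hours=24):
            keep.add(snap)                          # hourly tier
        elif age <= timedelta(days=30):
            if snap.date() not in seen_days:        # daily tier
                seen_days.add(snap.date())
                keep.add(snap)
        elif snap.isocalendar()[:2] not in seen_weeks:
            seen_weeks.add(snap.isocalendar()[:2])  # weekly tier
            keep.add(snap)
    return keep
```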

There is a catch, though.

You don’t necessarily WANT all of your files backed up. That scratch video file of a few gigs of data. That temporary working directory of hundreds or thousands of HTML files, etc… Automatic backups have the potential to archive a helluvalotta crap that you don’t really want to keep (and no, I’m not meaning dwarf-hentai-tentacle-snuff-pr0n, but I guess that would fit as well). So, files that I want to work on without squeezing them into my Time Machine backups go into a folder on my desktop called “NO BACKUP”, which I’ve added to my Time Machine prefs as an exclusion. If I want to use HTTrack to scrape a site to a working directory, it just goes in there. No worries about polluting my backups.

What’s next… Oh, right. Safari Dashboard clippings. Absolutely brilliant. I’d been using a hacked-together widget on 10.4 that was inspired by the 10.5 preview Stevenote. It worked, but it lacked the slick UI for selecting the portion of a web page to display as a Widget. It’s got a visual DOM inspector. You just move the mouse, and it highlights the relevant HTML element and any children. Click it, and tweak the bounding box. Click “Add” and it’s done. A visual DOM inspector with manual override. Fracking brilliant. I’ve added a few web page widgets, including the stats/comments sidebar from my blog’s admin page, and the video feed from Maui.

I’m actually using Safari again as my default browser. The TinyMCE editor that comes with WordPress 2.3.1 works just fine in it. Thank the fracking gods. Now, if only those fixes get pushed into the main TinyMCE product so I don’t have to use Firefox to manage all of my Drupal sites (don’t get me wrong – I love Firefox – but Safari’s text rendering simply blows the crap out of every other browser, except other WebKit-powered flavours).

Update: doh. Safari+TinyMCE aren’t all hot and sweaty after all. Seems like there’s some work to do before it works reliably – Safari stripped out all line spacing when I clicked “Save and Continue Editing”.

I set up Janice’s account to use GMail via IMAP in Mail.app. Mail.app autodiscovered the settings. I only had to provide her address and password. Mail.app DID THE REST. Fracking brilliant, again.
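For the record, what Mail.app figured out on its own amounts to three settings: server imap.gmail.com, port 993, SSL on. Doing it by hand from Python looks like this – the address and password are obviously placeholders:

```python
import imaplib

# The settings Mail.app autodiscovered: Gmail's IMAP server speaks
# SSL on port 993. Address and password here are placeholders.
conn = imaplib.IMAP4_SSL("imap.gmail.com", 993)
conn.login("janice@example.com", "password-here")

conn.select("INBOX", readonly=True)          # don't mark anything as read
status, data = conn.search(None, "UNSEEN")   # IDs of unread messages
print(f"{len(data[0].split())} unread messages")

conn.logout()
```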

The last comment I have after running Leopard for less than a day is about the menu bar. Love it or hate it, apparently. I hate it. It’s shiny, and demos relatively well, but the bling is at the expense of the readability of menu items.

MacOSX 10.5 Menubar Translucency

Sure, the primary menu items lose translucency when you click on them. But that’s just annoying. A text-based Whack-a-Mole™ navigation system. Please, Apple, either lose the translucency outright, or have it pop to full opacity when the mouse moves within the menu bar. No clicking and scrubbing required.

Almost forgot! Tabs in Terminal.app! Sweet. Much cleaner than having to command-` between a dozen terminal windows. And, I’ve even caught myself playing with CoverFlow in the Finder. Not sure how much I’d actually USE that, but it sure is purty… 

LOR Typology: CAREO errata

I just poked through Rory’s A Typology of Learning Object Repositories article, starting with the tables, and found a few errors relating to his description of CAREO. Here are the corrections (I don’t have Rory’s email handy, and there aren’t comments on the DSpace page for the article):

  • CAREO supports hosting content as well as linking to other servers. That was one of the primary goals of the project – to allow people to easily post content without having to know FTP. I don’t have exact stats, but about half of the items in CAREO were uploaded to the CAREO server via the “add object” form.
  • For “maintaining” an object – CAREO lets the owner of the object edit the metadata, including replacing the media with an updated version.
  • CAREO does allow retrieval of metadata – there’s a “metadata” button on every object – which shows up once you are logged in.
  • CAREO requires an account to submit objects, but anyone can create an account.
  • The metadata schema used was IMS LOM (and later IEEE LOM).

But, it’s all a bit moot, as institutional and provincial support for the CAREO repository evaporated long ago, and the application itself is on its last legs. It’s no longer supported, is barely functioning at the moment, and will be decommissioned at the end of the month.