Remembering CAREO

Today is a memorable day. It’s the day that CAREO, the learning object repository we built at The University of Calgary, is being officially decommissioned. Unplugged, mothballed, and put into storage. It’s been a wild rollercoaster ride these 6 years, but the ride is over. Back in 2001, when CAREO was first created, there was a need for a concrete prototype of a repository. Other available software didn’t quite do what we had in mind, and it was relatively easy to just go ahead and build something of our own to test out the ideas.

I was just coming out of the First Dot Com Bubble Burst, having just gone down with the ship at an eLearning company in March 2001. So I had some free time, and wanted to learn something new. I was asked if I could put together a working repository, and naively said “sure”. I’d never built any server software, but wanted to play with WebObjects. This was the perfect opportunity. The project picked up a shiny new PowerBook G4 (400MHz! Holy cow!) for the repository to be built on, and I got to work from my home office. Things went well, and I was using EOModeler to implement the nearly-final IMS LOM metadata specification as a set of 80-or-so tables with joins all over the place. Oy.

The first alpha version went live from my home office, served up over my home cable internet connection, and using DynDNS to make it available. Worked like hot damn. The hardest part of the whole process was in learning to stop thinking so hard and just let WebObjects work its magic.

Soon, CAREO was scaled up, features added, and content contributed. A server was acquired (a shiny new XServe rev 1.0). It was a self-contained, standalone repository. Others started to show some interest, and I had the pleasure of working with Brian Lamb, the Learning Objects Discoordinator at UBC’s OLT. We set up a copy of CAREO on a UBC XServe. Gerry Paille set up a copy of CAREO at Peace River. There was a copy at the University of Ottawa (they actually got significant funding to run their copy – much more than we ever saw to cover the building of the thing in the first place…).

Over the first 3 years of CAREO’s life, there was a flurry of development activity. I added features such as wiki pages and threaded discussions tied to each learning object. A “theme engine” was built so people could customize the look and feel of the repository application interface. A custom “SciQ” K-12 science repository was built, and used in the Alberta science curriculum. I added RSS feeds for the “newest objects” and “top objects” lists, as well as for user-defined searches. Support for Trackbacks from other software was implemented, letting people add context to the learning objects via weblogs or other trackback-enabled services.

The custom relational database for storing metadata was replaced with a multifunction, generalized XML-in-MySQL store written by Julian Wood, along with adoption of the JUD XML-RPC API – a repository using an API to connect to a separate data store. People could use the ALOHA client application to manage their learning objects – adding metadata, uploading media, etc… – and CAREO would pick it up automatically, because both applications were talking to the same abstracted metadata store through the same API.
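To make that pattern concrete, here’s a rough sketch of what that kind of client call looks like. To be clear, this is not the actual JUD API – the endpoint URL and method name below are made up for illustration – it just shows the idea of different front-ends (CAREO, ALOHA, whatever) hitting one shared metadata store over XML-RPC (written here as PHP with the xmlrpc extension):

  <?php
  // Hypothetical sketch only – not the real JUD API. The endpoint and method name are invented.
  $endpoint = 'http://repository.example.org/xmlrpc';  // hypothetical metadata store endpoint
  $request  = xmlrpc_encode_request('metadata.search', array('keywords' => 'photosynthesis'));

  $context = stream_context_create(array('http' => array(
    'method'  => 'POST',
    'header'  => "Content-Type: text/xml\r\n",
    'content' => $request,
  )));

  // Any client can issue the same call against the same store, so metadata added
  // through one tool shows up in the others automatically.
  $response = file_get_contents($endpoint, false, $context);
  print_r(xmlrpc_decode($response));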

A bridge to the EduSource network of learning object repositories was built, making it possible to search from one repository and find learning objects scattered throughout the network through a custom inter-repository API. That API and network cost a lot of money and time. And didn’t work as well as Google.

I spent a fair amount of time experimenting with native XML databases to store the LOM metadata – BlueStream XStreamDB and eXist have both matured so much over the years. A Sherlock search widget (this is pre-Dashboard) was built to let people search the repository from their desktops. Installers were built to make it easier to get your own copy of CAREO on your own server.

Heady times. And most of the work was done with surprisingly little financial support. We were able to do a hell of a lot with what we had, though.

Then, things pretty much stagnated. Development stopped, as we focussed on higher priority (i.e., funded) projects. Other software matured to the point where it was difficult to justify maintaining a custom repository application. If I were to start from scratch now, I could deploy a more fully-featured repository powered by Drupal without having to write any code.

Over the years, I’ve been asked about CAREO several times by people investigating learning object repository software to implement at a national level. Each time, I said that although the source code for CAREO is available, it would be a much more effective use of resources to just go ahead and use Drupal. Work with the larger community, and don’t write (or sustain) code that you don’t absolutely have to.

CAREO was important, back in 2001-2004, as a prototype. As a sandbox for trying out some of these concepts. As a place to easily host metadata and content and try the repository model. From that perspective, I think it was a huge success. Without CAREO, I would likely still be saying that we need centralized institutional repositories to tightly manage resources.

But, because of CAREO, I now know that we don’t need repositories at the institutional level. Personal repositories are much more powerful, effective, and manageable. They’re called blogs, maybe you’ve heard of them? And small pieces, loosely joined. Want to manage photos online? Use Flickr. Videos? Use YouTube/GoogleVideo/etc… We don’t need a monolithic institutional repository.

RIP CAREO

And now, it’s Halloween 2007. And we’re about to decommission our CAREO server here at UCalgary after 6 years. The software has been acting up, and it’s just not worth the time and effort to figure out what’s gone crufty. So it’s time to put it out of its misery. Farewell, CAREO. Thanks for the good times. I’ve learned a LOT about software design, information architecture, and metadata. More importantly, I had the pleasure of meeting and working with a LOT of awesome people, all working on similar projects because they believe (as I do) in the greater good. Sure, we were naive, but we meant well. And now, hopefully, people will learn from our successes, failures, and mistakes, and not be doomed to repeat them.

Hotels and Price Gouging

We’re working on a project with some folks at the CHR, and they are travelling to a conference to present their courses and talk about the process. Part of that presentation will be a live demo of the Moodle-powered site and some of the cool Breeze content we put together for them.

The hotel (which shall remain unnamed for now) sent them a sheet, asking what technical services they would like for their 1 hour presentation. Included in that sheet was this portion, listing the costs per service:

[Image: the hotel’s rate sheet – Starwood price gouging]

I had to resize it to fit here, so it’s a bit hard to read, but the basics are:

  • Internet connection (wired): $350
  • Internet connection (wireless): $350
  • Telephone: $175

If that’s not the definition of price gouging, I don’t know what is. That’s insane. At $350 a day, their internet charges alone would run over $10,500 per month. And that’s in Canadian money, not that wimpy US stuff!

I could almost see how they could justify these rates if the conference was some hodey-dodey high flying billionaire’s club meeting, or maybe a Web 2.0 pre-bubble-bursting lovefest. But this is a medical education conference.

If I had to pay $350 to have an internet connection during a presentation, I just plain and simple wouldn’t do it. But these folks have committed to giving a live demo, and the only way to do that is to grab some ankle and ask for more.

First thoughts on Leopard

Others will write more profound and deeper posts describing what’s so freaking cool about MacOSX 10.5 Leopard. This post is just my initial gut reactions. Want more meat? Surf over to arstechnica.com.

I’ve played with seeds of 10.5 for what seems like years (but is really only a year?) through our Apple Developer Connection subscription. But all of my previous experience was in carefully isolated cleanroom installations, to prevent any bugs from nuking my production system. I’d never tried an upgrade install. I’d never run it for more than a day or two tops because bugs and instability sent me running back to 10.4. So, this is my first real time in Leopard, without an alternate or backup system running a previous version Just In Case™.

My initial thought after install, which I’m sure is hardly unique, was along the lines of “holy frack. it worked perfectly. it just fracking worked.” Seriously. Every app I use still works. All preferences are retained (even my custom dock-pinned-at-start setting). Trivial upgrade to the new OS. Gotta love that.

After that, I played with some of the new toys. Spaces is absolute brill. I’ve used other virtual desktop apps. I paid for CodeTek Virtual Desktop. I used the Open Source Desktop Manager.  I used the other Open Source Space app. I’ve played with virtual desktops in Ubuntu. But Spaces just feels right. Dragging apps between desktops? Very cool. It’s got the best features of the others, without any bloat. Just right.

Time Machine. I plugged in a LaCie 500GB Big Disk Extreme, and 10.5 asked me if I wanted to use it for a Time Machine backup drive. Sure. Why not? I’ll give that a shot. Time Machine sounds pretty cool. So I let it chew (for a couple of hours) to do the initial backup set.

[Image: Time Machine initial progress bar]

No kidding. 1.4 MILLION files. 124.5 GIGABYTES of data. And I don’t have to think about backing any of it up. Ever again. It’s fully automatic. IIRC, Time Machine keeps the last 24 hours of HOURLY backups, the last month of DAILY backups, and as many WEEKLY backups as your drive allows. That’s so freaking awesome I can’t even put it into words. Knowing that EVERY FILE I USE is backed up already? Priceless.

There is a catch, though.

You don’t necessarily WANT all of your files backed up. That scratch video file of a few gigs of data. That temporary working directory of hundreds or thousands of HTML files, etc… Automatic backups have the potential to archive a helluvalotta crap that you don’t really want to keep (and no, I’m not meaning dwarf-hentai-tentacle-snuff-pr0n, but I guess that would fit as well). So, files that I want to work on without squeezing them into my Time Machine backups go into a folder on my desktop called “NO BACKUP”, which I’ve added to my Time Machine prefs as an exclusion. If I want to use HTTrack to scrape a site to a working directory, it just goes in there. No worries about polluting my backups.

What’s next… Oh, right. Safari Dashboard clippings. Absolutely brilliant. I’d been using a hacked-together widget on 10.4 that was inspired by the 10.5 preview Stevenote. It worked, but it lacked the slick UI for selecting the portion of a web page to display as a Widget. It’s got a visual DOM inspector. You just move the mouse, and it highlights the relevant HTML element and any children. Click it, and tweak the bounding box. Click “Add” and it’s done. A visual DOM inspector with manual override. Fracking brilliant. I’ve added a few web page widgets, including the stats/comments sidebar from my blog’s admin page, and the video feed from Maui.

I’m actually using Safari again as my default browser. The TinyMCE editor that comes with WordPress 2.3.1 works just fine in it. Thank the fracking gods. Now, if only those fixes get pushed into the main TinyMCE product so I don’t have to use Firefox to manage all of my Drupal sites (don’t get me wrong – I love Firefox – but Safari’s text rendering simply blows the crap out of every other browser, except other WebKit-powered flavours).

Update: doh. Safari+TinyMCE aren’t all hot and sweaty after all. Seems like there’s some work to do before it works reliably – Safari stripped out all of the line spacing when I clicked “Save and Continue Editing”.

I set up Mail.app in Janice’s account to use GMail via IMAP. Mail.app autodiscovered the settings. I only had to provide her address and password. Mail.app DID THE REST. Fracking brilliant, again.

The last comment I have after running Leopard for less than a day is about the menu bar. Love it or hate it, apparently. I hate it. It’s shiny, and demos relatively well, but the bling is at the expense of the readability of menu items.

[Image: MacOSX 10.5 menu bar translucency]

Sure, the primary menu items lose translucency when you click on them. But that’s just annoying. A text-based Whack-a-Mole™ navigation system. Please, Apple, either lose the translucency outright, or have it pop to full opacity when the mouse moves within the menu bar. No clicking and scrubbing required.

Almost forgot! Tabs in Terminal.app! Sweet. Much cleaner than having to command-` between a dozen terminal windows. And, I’ve even caught myself playing with CoverFlow in the Finder. Not sure how much I’d actually USE that, but it sure is purty… 

LOR Typology: CAREO errata

I just poked through Rory’s A Typology of Learning Object Repositories article, starting with the tables, and found a few errors relating to his description of CAREO. Here are the corrections (I don’t have Rory’s email handy, and there aren’t comments on the DSpace page for the article):

  •  CAREO supports hosting content as well as linking to other servers. That was one of the primary goals of the project – to allow people to easily post content without having to know FTP. I don’t have the stats on this, but about half of the items in CAREO were uploaded to the CAREO server via the “add object” form.
  • For “maintaining” an object – CAREO lets the owner of the object edit the metadata, including replacing the media with an updated version.
  • CAREO does allow retrieval of metadata – there’s a “metadata” button on every object – which shows up once you are logged in.
  • CAREO requires an account to submit objects, but anyone can create an account.
  • The metadata schema used was IMS LOM (and later IEEE LOM).

But, it’s all a bit moot, as institutional and provincial support for the CAREO repository evaporated long ago, and the application itself is on its last legs. It’s no longer supported, is barely functioning at the moment, and will be decommissioned at the end of the month.

Cleaning up the Upcoming Events block in Drupal

We use the Events module to manage workshops here in the Teaching & Learning Centre, and use the “Upcoming Events” block to display the next few workshops on our website. Works great, but the default text leaves a bit to be desired. By default, it shows the event title, and “(2 days)” – which indicates that the event begins in 2 days.

But, it could also mean that the event lasts for 2 days.

So, I made a trivial change to the event.module file, adding the following line of code at line 1847 (on my copy of the file, which was checked out on June 4, 2007):

// prefix the interval so "(2 days)" becomes "(starts in 2 days)"
$timeleft = 'starts in ' . $timeleft;
That changes the text indicator in the “Upcoming Events” block to read:

(starts in 2 days)

Which is much clearer in meaning. Easy peasy. I just have to remember to re-apply the edit after updating the module, if this change doesn’t make it in upstream…
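For context, here’s roughly how that line sits in the block-building code. The surrounding lines are a from-memory approximation of the event module, not a copy-paste from the actual file, so treat this as a sketch:

  // Sketch only – the surrounding lines approximate event.module, they aren't the real source.
  // $event->event_start is the event's starting timestamp.
  $timeleft = format_interval($event->event_start - time(), 1);
  // The added line: prefix the interval so the block reads "(starts in 2 days)".
  $timeleft = 'starts in ' . $timeleft;
  $items[] = l($node->title, 'node/' . $node->nid) . ' (' . $timeleft . ')';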

K12 Online – More than cool tools

I had the chance to work on a presentation for the K12 Online 2007 conference. Alan, Brian and I started by thinking of doing an updated “Small Pieces” piece, and we wound up creating a 53 minute video presentation touching on 9 trends in successful online tools, and how they might be used effectively.

The trends are, in no real order:

  1. embed
  2. connect
  3. socialize
  4. collaborate
  5. share
  6. remix
  7. filter
  8. liberate
  9. disrupt

Here’s the presentation, hosted in chunky Google Video transcoded format. There are links to higher (and lower) res versions on the K12 conference page for the presentation.

There’s a live “fireside chat” Elluminate session scheduled for Saturday, Oct. 20 at 1pm GMT (which is 7am here in Calgary – so much for my day to sleep in…)

I’m thinking of writing up a blog post describing the process we used, which worked out surprisingly well (except for my inability to properly normalize all of the audio – sorry!). Final Cut Pro was used to pull together audio, images, and video from 3 presenters, and spit out the final product. I learned a LOT about using FCP during the process, and think I could do it much quicker (and better) next time around…

on the power of banality

I’ve been thinking about this for some time, but haven’t taken the time to put it into words. Most recently, a post by Jennifer Jones nicely sums up why Twitter is important, and I think it goes even further than that.

Twitter is important because it makes many of the intangible human connections more readily available to people who are separated by distance. I often feel more closely integrated with the people on my Twitter stream than I do with people who work in my department. Why is that? I see those people every day. But – the people on Twitter are constantly reinforcing my connection with them, and vice versa, through the unceasing flow of status updates.

But, why is this important? I think this brings the real, visceral connections that are an essential part of a vibrant community (whether online, offline, or blended) into the forefront. I can tap into my Twitter contacts and ask questions, float ideas, or just shoot the shit. Things that are largely outside the domain of a traditional “online community” resource. The always-on nature of Twitter, and the strong sense of vibrancy and vitality, are what make it so compelling to me. At almost any time of the day or night, my Twitter stream is active, with people posting tidbits on a stunningly broad range of topics.

Sure, many of these are purely banal things like “I’m bored” or “heading out to the pub” – but those are important if only because they help reinforce a connection. I may not care that someone is going to a pub (especially if they’re in another city/country/continent and I can’t tag along), but by seeing their status update, it makes me mindful of them. I think about that person, even if briefly, and the sense of community is strengthened.

So, Twitter is valuable for so much more than simple “nanoblogging” – which is how I initially perceived it. It is important to me because it makes the sense of community and connectedness more tangible. And Twitter isn’t the only tool to help on that front.

One of the reasons I’m a raving, rabid Flickr addict is that I can follow the photos from my contacts. If they do something and post a picture, I see it. I may not have bothered to go hunting to find the picture, but the fact that Flickr streams it to me helps me keep up to date on what dozens of people are doing. I am more mindful of these people, and feel more aware and connected.

Tools like Flickr and Twitter are powerful because they are informal. It’s much quicker and easier to post a simple status update for something that wouldn’t warrant a full blog post. It’s simple to shoot a photo and hurl it up to Flickr – even if it’s not a great photo, it’s an easy way to share what’s going on in a person’s life.

One thing that newcomers to these tools often mention is how simultaneously noisy and empty they seem. Viewing the public Twitter update stream is a confusing and uninteresting activity. It’s not until you find the people that you care about – in real life – that these tools really start to get interesting. It’s not about “contact whoring” or trying to collect the most “followers” – it’s about finding the people you care about and maintaining a state of mindfulness. Something that is surprisingly easy to do with these various banality broadcasting engines.

I’m still thinking through how these tools compare with Facebook. I do know that Facebook has a decidedly different “feel” to it – with the endless flow of zombie-bites, pokes, application requests, and the like. Facebook has become annoying enough that I might check in on it once per week. I usually have Twitter and Flickr open in tabs all the time.  Facebook is evolving into a monolithic environment – the “applications” are so tightly integrated that they might as well be compiled into the kernel of FB. Small Pieces Loosely Joined is basically thrown out the window. Although I can integrate other resources, they become awkwardly sucked into FB, often providing redundant information or functionality (do I post status updates to Twitter, or to Facebook? do I post photos to Flickr or Facebook? etc…). I should be able to do these activities in one place, and one place only, and have the information pulled seamlessly together. Facebook just ain’t it.

pssst. wanna blog?

It’s still not officially released, and I’m still in the early stages of putting together a funding proposal to turn it into a supported service, but if you’re willing to live life on the edge and risk a little beta goodness, UCalgaryBlogs.ca is kinda on the air.

All you need is a valid @ucalgary.ca email address, and you’re off and running. You can create as many blogs as you like, and can select from a bajillion available themes.

Why use the service? Well, it’s more “individual” than the existing weblogs.ucalgary.ca service (which is still running), so it should be less of a communal space. It’s running essentially the same software as WordPress.com, but on a UCalgary server with a UCalgary-ish domain name.

One of the cooler reasons to use UCalgaryBlogs.ca is that you’re not locked into it – wanna take your blog with you? Sure! WordPress can export all of your stuff into a format that can be imported on another server.

Oh, yeah. There are lots of other great reasons to use WordPress to manage a blog, too.

This is not intended to compete with, or replace, the Drupal service offered by IT. Want to manage a large departmental website? That’s the way to go. Want to keep a simple blog or newsletter? This just might be for you…

Just be advised that it’s currently a skunkworks project, on server space I’m sneakily “borrowing”, and I’ll be actively tinkering with the software. And I’m half expecting to get spanked for just going ahead with this. But if you want to come play, please feel free! :-)

Converted to WordPress 2.3 Tags

When I upgraded to 2.3, I left my 500+ categories in place. I used categories as tags, so didn’t see the need to convert them over to the native tag format. I’ve thought about it some more, and just bit the bullet. All of my categories have been converted over to tags (I think), and now I’ll use categories and tags in slightly different ways.

Categories will be used to define the “type” of post – work, personal, fun, etc…

Tags will be used to describe the “content” of the post – wordpress, rant, travel, etc..

It looks like a few Categories weren’t completely converted over to be Tags, so I’ll see if there’s any manual intervention I need to do, but that’s the plan for now.
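If I do end up poking at the database, something like the following would at least show me which terms are still hanging around as categories. It’s just a sketch – it assumes the stock WordPress 2.3 tables and the global $wpdb object, and it only lists terms, it doesn’t convert anything:

  // List term names still registered under the 'category' taxonomy,
  // assuming the standard WP 2.3 schema (wp_terms / wp_term_taxonomy).
  global $wpdb;
  $leftover = $wpdb->get_col("
    SELECT t.name
    FROM {$wpdb->terms} t
    JOIN {$wpdb->term_taxonomy} tt ON tt.term_id = t.term_id
    WHERE tt.taxonomy = 'category'
  ");
  print_r($leftover);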

I do REALLY wish the Tags field had autocomplete, as Drupal’s freetagging input field does. It’s soooo hard to remember the exact spelling/tense of all of the tags (is it folksonomy? folksonomies? etc… autocomplete would have me there at folks.)