On Solving Spam

Spam is the scourge of the internets. It clogs Internet Tubes all over the globe, overloading the trucks that take internets around the world.

And it is directly caused by Google’s PageRank and Adsense systems. They (as well as others, but primarily Google – take a look at any spam farm, and you’ll see prominent Adsense ad blocks) created this mess by enabling individuals to cash in on hijacking innocent websites that have enabled anonymous commenting.

A spammer can sit in his basement, run some scripts to find juicy targets, send out some probes, then unleash hell in the hopes that they will improve the PageRank of their (or their client’s) websites, in an attempt to increase Adsense revenue on those sites.
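To make that incentive concrete, here's a toy PageRank sketch – a deliberately simplified version of the idea, with a made-up graph, not Google's actual implementation – showing how a handful of hijacked comment links inflates a spam farm's score:

```python
# Toy PageRank sketch (hypothetical graph; NOT Google's real algorithm or data):
# shows how comment-spam links from innocent blogs inflate a target's score.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new_rank[t] += share
        rank = new_rank
    return rank

# Three innocent blogs, each hijacked with one comment link to the spam farm:
graph = {
    "blog_a": ["spam_farm"],
    "blog_b": ["spam_farm"],
    "blog_c": ["spam_farm"],
    "spam_farm": [],
}
ranks = pagerank(graph)
# The spam farm ends up with a higher score than any of the blogs it hijacked.
```

The spammer never has to produce anything of value – the hijacked links do all the work, which is exactly why zeroing the payoff matters.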

So, here’s the easy solution. If a website is shown to be associated with spammish activities, the Adsense account is suspended. And their PageRank is reset to 0. Take away the financial incentive, and the rules of the game change.

It’s time for Google to step up and show some corporate responsibility. The whole rel="nofollow" solution is a non-starter, since it only works if we all agree to break the nature of the web in the first place by devaluing all links contributed to a website. It’s not worth throwing the baby out with the bathwater.

Now, how to define “spammish activities” – and, who gets to determine if a spam producer is guilty of that? There could be juries. There could be committees. Heck, it could become a social software tagging exercise, where the intelligence of the hive is harnessed to determine if something is spam or not. spamornot.com? Have an appeals process, to prevent abuse. Have a responsible governance system to ensure effectiveness.
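A sketch of how that hive-mind verdict could work – entirely hypothetical, with made-up thresholds: only act once enough independent votes agree, and stay undecided (appealable) below a minimum quorum:

```python
# Hypothetical "spamornot" hive-mind sketch: flag a site as spam only once
# enough independent votes agree. Thresholds are illustrative, not proposals.

def verdict(votes, threshold=0.8, minimum_votes=20):
    """votes: list of booleans, True meaning 'this is spam'."""
    if len(votes) < minimum_votes:
        return "undecided"          # too few opinions to act on; no penalty yet
    spam_fraction = sum(votes) / len(votes)
    if spam_fraction >= threshold:
        return "spam"               # suspend Adsense, zero the PageRank
    return "not spam"

print(verdict([True] * 18 + [False] * 2))   # → spam
print(verdict([True] * 5))                  # → undecided
```

The quorum plus the high agreement threshold is what keeps a handful of griefers from abusing the system – the same reason an appeals process matters.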

It seems to me that it would be in Google’s best interest to protect the value of PageRank and Adsense. By allowing spam farms to co-opt both systems, they devalue both. By ensuring spammers are removed from the system, we’re left with a more realistic representation of the online advertising ecosystem, with (hopefully) better representation of the actual contributors and participants.

But, this has to stop. Now. It’s only getting worse, and is threatening to smother any semblance of openness left on the web (1.0, 2.0 or beyond).

Maximum Carbon Load

There has been much talk and hype about Peak Oil – the fact that the global production of petroleum is about to reach its maximum level, after which it will start to decline until it eventually becomes a scarce resource and we all have to scavenge in landfills for decades-old plastic to recycle.

It may not be as soon as some think. The Saudis are estimating about 4.5 trillion barrels left. Here in Alberta, we’re sitting on an estimated 1 trillion barrels locked in the Athabasca tarsands.

So, Peak Oil may be years or decades away. Unfortunately, that isn’t necessarily a good thing. As we’re all happily buying Hummers and Escalades to drive through Raunchy Ronald’s Drive Thru™, we’re continuously pumping carbon that had been naturally sequestered deep underground into the atmosphere. The atmosphere can’t hold all of that carbon without leading to the global warming effects we’re observing now. The real, and more immediate, danger isn’t running out of oil. The danger is in not running out soon enough.

The atmosphere will hit maximum carbon load, and then we’ll have to spend insane amounts of energy working to pump all of that carbon back into stable reservoirs. Sequestering carbon underwater makes me just a little bit nervous.

Here’s the paradox. We may manage to delay Peak Oil, at the cost of accelerating global warming. The irony is, if we had hit Peak Oil already, our impact on the environment would already be starting to decelerate (not decrease, just slow down for a while before beginning to reverse).

Unfortunately, I’m not sure we’re (as a species) smart enough to Do The Right Thing any sooner than we absolutely have to, if then. If there’s oil left to burn (even at $300 per barrel) you’d better believe someone will be ready to burn it. Years from now, students will shake their heads in disbelief when they read about what we did with the limited petroleum resource.

Stopping the raging banality

This blog is about 2 posts away from devolving into a bona fide cat diary (and I’m not exactly a fan of cats). I’ll be trying to stop barfing banality into the internet tubes, so as a result I’ll probably be posting much less. Hopefully, as quantity goes down, quality (and relevance) may go up? Or, I might just wind up raising the bar so high that I finally fall out of this whole blogging thing. Either way, meh…

On ePorfolios and Ownership

Patti and I were discussing our ePortfolio project the other day, and we were basically throwing back and forth various versions of "the students won't use it because (a) they don't have to, and (b) it's not theirs."

The "they don't have to" part could be misconstrued as meaning "their profs didn't make them do it." That won't work, either. The students have to feel that they want to do this. That they have to do it themselves to make sense of what they're learning and doing.

And, it needs to be modelled successfully. If they see their profs as not "having to" maintain an ePortfolio, why on earth would the students do it? It's not some contrived evaluation tool, it's an internally driven amplifier and archiver of the learning (and teaching) processes.

Helen Barrett just posted a piece that describes this much more coherently, and in much greater depth. The mental picture of the graduation portfolio bonfire should be a big reminder about what can happen when there isn't a healthy sense of ownership fostered within students (and teachers). I remember burning my notebooks at the end of grade 9 – they weren't MY notes, so it felt awesome to toss them on the bonfire… Stephen's commentary is worth a read, too.

This is all about ownership. But ownership can't be given, it has to be built by each individual. It would be so easy to just say "it's a requirement to complete this course/program. you must maintain an ePortfolio." But that won't work. It will just lead to a lot of busywork, and one helluva bonfire at the end of the course/program.

I think the more effective approach (from a teaching/learning perspective, not a sheer volume/metrics perspective) is to model the ePortfolio as a teacher. "This is how I gather my thoughts together to track what I've done, what I'm doing, and where I'm going in my career as a teacher." If it's not relevant to a professional, why would it be relevant to a student?

Offer ePortfolios as an optional service across the curriculum, to every student on campus. If 1% of them start using them as effective tools, it will spread from there. Not instantly, and maybe not in the same cohort, but it will spread.

If it doesn't spread, it's not an effective tool, so let it die on the vine. The goal is to foster critical thinking about experiences, not to force yet another tool on anyone. 

Wiki vs. Drupal Book

One of the big reasons I had for making the switch to Drupal is the great "Book" content type. It allows structuring of individual pages into a navigation hierarchy, and generates the "table of contents" and inter-page navigation automatically. I wanted to use it for writing longer articles, and wish I'd had it in place to use for the Interface 2006 ePortfolio background information article.

Initially, I wrote up the background article in a wiki, thinking it might be handy if others were able to edit. But, nobody has, and I think the article is less useful/usable as One Long Page Of Stuff. It would make more sense in smaller, bite-sized pieces that could be individually linked. Smaller granularity, allowing for reuse or something equally wishful.
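The mechanics behind a "Book"-style content type are simple enough to sketch: flat page records with parent pointers become a nested table of contents. This is a hypothetical illustration of the idea, not Drupal's actual API or schema:

```python
# Hypothetical sketch of what a "Book" content type does under the hood:
# flat page records with parent pointers become a nested table of contents.
# Function and field names are illustrative, not Drupal's real API.

def build_toc(pages):
    """pages: list of (id, parent_id, title); parent_id is None for the root."""
    children = {}
    for pid, parent, _ in pages:
        children.setdefault(parent, []).append(pid)
    titles = {pid: title for pid, _, title in pages}

    def render(node, depth=0):
        lines = ["  " * depth + titles[node]]
        for child in children.get(node, []):
            lines.extend(render(child, depth + 1))
        return lines

    root = children[None][0]
    return "\n".join(render(root))

# A made-up outline in the spirit of the ePortfolio background article:
pages = [
    (1, None, "ePortfolio Background"),
    (2, 1, "What is an ePortfolio?"),
    (3, 1, "Tools"),
    (4, 3, "Open Source Options"),
]
print(build_toc(pages))
```

Each page being its own node is exactly what makes the bite-sized pieces individually linkable.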

So, to test the waters, I just moved a copy of the Interface 2006 ePortfolio background article into a structured book here on my blog.

What's the difference between the two? The wiki page version is theoretically more "open" – others are able to edit it. The Drupal book version is theoretically more usable as a reference – easier to navigate and link to. It's also got comments enabled, so feedback is still pretty easy. Any thoughts on the two approaches? 

Initial thoughts on Drupal as Primary Blogging Platform

After a couple of hours of running with Drupal as my blogging platform, there are some areas that are definitely behind WordPress as a pure blog-friendly system.

  • Comments. Typical blogs have "name", "url", "email" and "comment" fields. Drupal has an optional "Subject" and a "Comment" field. It works, but makes it harder to follow contributions in a conversation – you have to remember to put your name in the comment each and every time you post. Not friendly. Update: I was a bonehead – there's an option to make this behave as expected, under admin/comments/configure.
  • Subscribing to comments. Email subscriptions to a post's comments is the most powerful and effective way to maintain a conversation on a blog. The "subscriptions" module would work, but it only understands Drupal's users. The vast majority of commenters (i.e., everyone but myself) won't have an account in this copy of Drupal, so Subscriptions.module is useless to them. Close, though. All it needs is Anonymous user support, with a way to provide an email address. Not friendly.
  • CoComment support. Lacking. I'm going to miss that, at least until I figure out how to properly implement it. Not friendly.
  • Flickr Photo Album. There's a Flickr module that claims to do something similar, but it just isn't working for me. So, in the meantime, the "photos" link in the header nav bar points directly to Flickr. Not friendly.
  • Flickr photo posting. There's a FlickrInserter module, modeled after Tantan's excellent Flickr Post Bar plugin for WordPress (which, in turn, is modelled after the awesome Flock Flickr Post bar). For now, I'm copying and pasting HTML directly from Flickr. Not friendly.
  • I miss PodPress. Have to find a comparable solution for Drupal. Not fatal, but it sure was nice.

Of course, it's not all cloud – there is some silver in there. I did decide to switch, after all, and am not regretting it one bit (yet). Things that are good:

  • MUCH better search function. Booleans. Filters. Lots of goodness there. Friendly.
  • Tracking what's new since I (or anyone) visited the site – which comments are new? Friendly.
  • Throttling. If the site gets hammered (yeah, right) I have it set to shut down the bells and whistles to ensure content still gets out. Friendly.
  • Content types – not just blog posts, but forums, surveys, books, etc… Friendly.
  • Unpublished content, and unpromoted content. I can stage stuff without it being public until I decide to make it so. More powerful/flexible than drafts in WP.
  • Stats and logs within the admin interface. I can see what's working (or not) without having to go anywhere else.
  • Blocks and menus. Very flexible ways to add functionality without having to hack a template or theme. Friendly.
  • Lots and lots of other great stuff.

BCEdOnline UnKeynote Debriefing

I’m sitting in the airport in Vancouver (and later on the plane coming home) and wanted to capture some of the thoughts I have about how the keynote went. I’m absolutely exhausted, so I’m not sure how coherent this is going to be, but it’s important to get this down before it’s glossed over and starts to fade away.

Some context – this was my first keynote as presenter (well, co-presenter), so I was a bit intimidated by that. I’ve been part of (and have given) presentations to very large groups, but never as Keynote Presenter™. Our ideas about what the keynote should be about all revolved around topics involving individual autonomy and control of content and learning, of ownership, and of thinking critically about the nature of relationships between students and teachers, as well as with institutions. Education vs. learning. Individual vs. institutional. Some potentially radical and non-traditional keynote topics, which would be completely unsuited to a conventional powerpoint chalk-and-talk presentation.

We had been joking about going into the keynote unprepared – I think mostly to mask nervousness about taking such a big risk with a “keynote” session. The three of us had been tossing around ideas and spit-balling what we’d like to do in the session for a couple of weeks – hoping to generate a level of discomfort and disorientation in the attendees – that this session belongs to them, not us. That learning belongs to the individual, not the institution. That they are in control of what they do, as are their students.

It was easily the scariest and highest-“risk” session I’ve ever been involved in. We all knew going in that there was a real chance of some pretty dramatic “failure” if the people in the audience didn’t engage.

The first 20 minutes of the session were sheer torture (ironically, amplified by the fact that the microphones Just Didn’t Work™). We started by coming off the stage to emphasize that the session wasn’t “ours”. We all had wireless microphones, and were trying to wander, to solicit some form of involvement. We set up a web-based chat room to serve as a back channel, and left that on the Big Screen to help direct the session (I’ll come back to that later).

At first, every single attendee looked freaked out, uncomfortable, and wondering what the hell was going on. Why wasn’t there a powerpoint on the screen? Why are these jokers just wandering around? What’s going on? This is the lamest thing I’ve ever seen! What are they DOING? What a waste of time…

After the initial uncomfortableness wore off a little, people started to get into it. Certainly not everyone. The feeling of discomfort in the room was pretty tangible. I wound up subconsciously moving back closer to the stage to provide a semblance of a traditional keynote, I suppose trying to put people a bit at ease. Or, it might have been to put myself at ease.

This was by far the riskiest thing I’ve ever done professionally. I parachuted into Vancouver, and attempted to lead/herd 500(?) strangers into some form of guided anarchy. I was so far outside of my comfort zone it wasn’t even funny, fighting the urge to just bolt from the room. What the hell were we thinking?

And then it felt like it started to gel, at least for a portion of the audience. Some extremely interesting points were raised, and answered by other attendees. We shifted to more of a Phil Donahue role, running with the microphones to people who wanted to speak up. Not everyone got engaged, but enough to drive the conversation forward.

For the last quarter of the session, we started to get some momentum. Questions and responses started to pile up, and I stopped hogging the microphone as much. If we’d had an extra 15 minutes, I think most people would have reached a level of comfort with what was going on so they would have gotten more out of the session. It didn’t hurt that everyone stayed seated for the iPod door prize draws.

The web chat back channel served an invaluable purpose. People were able to anonymously put “huh?”, or “what are they TALKING about?”, or “talk about GLU!” comments (etc…) up on the big screen, helping to guide the session. I think that open back channel helped to save the session, as it helped us get a better feel for what the Audience was going through. I’ll be keeping an archive of that chat transcript available to serve as reference later.

One thing I realized is that it is extremely hard to read an audience that size. A small group is easy to read. You can make eye contact. You can hear comments, rustling, shifting. You can see attention diverting. But in a room with several hundred people, it is hard to get a feel for what is going on. Even when someone was talking, it was quite hard to spot them in the sea of attendees.

So, what are the lessons learned from this?

  • Open, anonymous back channels are insanely important to helping to keep a finger on the pulse of a Large Audience. The anonymity is important because people don’t have to worry about offending by saying something’s gone off the tracks, or is boring, or just by suggesting a topic without having to be put on the spot with a microphone shoved in their face. Having a working wireless network, and an audience with capable laptops, definitely helped here. But not everyone had a laptop. This works out something like “clickers” on steroids, and could be a useful strategy for other presentations, or in the classroom in general.
  • The audience was too large for this kind of activity. Even half the size would have been better. This was approximately the same activity we’d run at both the Social Software Salon and Edublogger Hootenanny, but those events had participant counts around 12-ish and 50-ish, respectively. I hold those previous events as the best sessions I’ve ever been involved with, and am extremely proud of what we were able to do. That chemistry just didn’t happen during this keynote. Perhaps the audience-is-the-presentation model doesn’t scale to 300-500 people? More thought needed on this…
  • Defining a narrower topic or series of topics is important. We’d set up the wiki page, but failed to fall back on it when the audience wasn’t engaging – we were perhaps overcommitted to drawing the audience out? Back to the Salon and Hootenanny – both had (comparatively) narrow topics well defined ahead of time. We’d tried to do that with the wiki page, but didn’t successfully fall back on it when things didn’t move forward fast enough.
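The back channel itself is a dead-simple tool – the value is entirely in the anonymity and the shared screen. A minimal sketch of the idea (purely illustrative; the actual session used an off-the-shelf web-based chat room):

```python
# Minimal sketch of an anonymous back channel: an in-memory message board
# with no accounts, newest messages first, capped so the Big Screen stays
# readable. Illustrative only; the real session used a web chat room.

import time

class BackChannel:
    def __init__(self, display_limit=10):
        self.messages = []
        self.display_limit = display_limit

    def post(self, text):
        # No name, no login: anonymity lowers the cost of saying "huh?"
        self.messages.append((time.time(), text))

    def screen(self):
        # Show only the most recent messages, newest first.
        recent = self.messages[-self.display_limit:]
        return [text for _, text in reversed(recent)]

channel = BackChannel(display_limit=3)
for msg in ["huh?", "what are they TALKING about?",
            "talk about GLU!", "mics are dead"]:
    channel.post(msg)
print(channel.screen())
# → ['mics are dead', 'talk about GLU!', 'what are they TALKING about?']
```

Everything interesting about the real thing – projection, a wireless network, laptops in the audience – is logistics layered on top of something this small.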

In the end, I felt the session was both a success and a failure. I personally rated it at 5/10. Stephen gave it a 6/10. That’s not great. I’m not used to that. But, I think that it’s actually a good thing. I’d been staying inside my comfort zone way too long. It’s crucial to stretch out and try new things. Failure isn’t necessarily a bad thing. Worst case scenario, we modeled some risk-taking behaviour for the attendees, and survived the experience. Best case scenario, some of the attendees will have walked away with the seeds of some important new ideas waiting to germinate sometime in the future. No way to track that, though.

Am I going to be a little gun-shy about doing a session like this again? Probably. I’ll have to put some thought into how to ensure the session remains useful and interesting for everyone. It’s not acceptable to just push forward, knowing that half the audience is not with you (or, you’re not with them).

After the session, we schlepped our exhausted carcasses across the street to a hole-in-the-wall pub for debriefing. The discussion that Stephen, Brian and myself had there over a few brews was worth the trip and the risk all by itself. I’ve been needing that discussion for a long time, and am feeling a renewed sense of energy that I hope will last for a while. I think I will benefit a lot from learning about Stephen’s walkabout, as well as Brian’s thoughts and feedback. Thanks for that. You are both true friends, in every sense.

Update: Added podcast link to the audio recorded by Stephen.

I’m sitting in the airport in Vancouver (and later on the plane coming home) and wanted to capture some of the thoughts I have about how the keynote went. I’m absolutely exhausted, so I’m not sure how coherent this is going to be, but it’s important to get this down before it’s glossed over and starts to fade away.

Some context – this was my first keynote as presenter (well, co-presenter), so I was a bit intimidated by that. I’ve been part of (and have given) presentations to very large groups, but never as Keynote Presenterâ„¢. Our ideas about what the keynote should be about all revolved around topics involving individual autonomy and control of content and learning, of ownership, and of thinking critically about the nature of relationships between students and teachers, as well as with institutions. Education vs. learning. Individual vs. institutional. Some potentially radical and non-traditional keynote topics, which would be completely unsuited to a conventional powerpoint chalk-and-talk presentation.

We had been joking about going into the keynote unprepared – I think mostly to mask nervousness about taking such a big risk with a “keynote” session. The three of us have been tossing around ideas and spit-balling what we’d like to do in the session for a couple of weeks – hoping to generate a level of discomfort and disorientation in the attendees – that this session belongs to them, not us. That learning belongs to the individual, not the institution. That they are in control of what they do, as are their students.

It was easily the scariest and highest “risk” sessions I’ve ever been involved in. We all knew going in that there was a real chance of some pretty dramatic “failure” if the people in the audience didn’t engage.

The first 20 minutes of the session were sheer torture (ironically, amplified by the fact that the microphones Just Didn’t Workâ„¢). We started by coming off the stage to emphasize that the session wasn’t “ours”. We all had wireless microphones, and were trying to wander, to solicit some form of involvement. We set up a web-based chat room to serve as a back channel, and left that on the Big Screen to help direct the session (I’ll come back to that later).

At first, every single attendee looked freaked out and uncomfortable, wondering what the hell was going on. Why wasn’t there a powerpoint on the screen? Why are these jokers just wandering around? What’s going on? This is the lamest thing I’ve ever seen! What are they DOING? What a waste of time…

After the initial awkwardness wore off a little, people started to get into it. Certainly not everyone – the feeling of discomfort in the room was still pretty tangible. I wound up subconsciously moving back closer to the stage to provide a semblance of a traditional keynote, I suppose trying to put people a bit at ease. Or, it might have been to put myself at ease.

This was by far the riskiest thing I’ve ever done professionally. I parachuted into Vancouver, and attempted to lead/herd 500(?) strangers into some form of guided anarchy. I was so far outside of my comfort zone it wasn’t even funny, fighting the urge to just bolt from the room. What the hell were we thinking?

And then it felt like it started to gel, at least for a portion of the audience. Some extremely interesting points were raised, and answered by responses from other attendees. We shifted to more of a Phil Donahue role, running with the microphones to people who wanted to speak up. Not everyone got engaged, but enough to drive the conversation forward.

For the last quarter of the session, we started to get some momentum. Questions and responses started to pile up, and I stopped hogging the microphone as much. If we’d had an extra 15 minutes, I think most people would have reached a level of comfort with what was going on so they would have gotten more out of the session. It didn’t hurt that everyone stayed seated for the iPod door prize draws.

The web chat back channel served an invaluable purpose. People were able to anonymously put “huh?”, or “what are they TALKING about?”, or “talk about GLU!” comments (etc…) up on the big screen, helping to guide the session. I think that open back channel helped to save the session, as it helped us get a better feel for what the Audience was going through. I’ll be keeping an archive of that chat transcript available to serve as reference later.

One thing I realized is that it is extremely hard to read an audience that size. A small group is easy to read. You can make eye contact. You can hear comments, rustling, shifting. You can see attention diverting. But in a room with several hundred people, it is hard to get a feel for what is going on. Even when someone was talking, it was quite hard to spot them in the sea of attendees.

So, what are the lessons learned from this?

  • Open, anonymous back channels are insanely important to helping to keep a finger on the pulse of a Large Audience. The anonymity is important because people don’t have to worry about offending by saying something’s gone off the tracks, or is boring, or just by suggesting a topic without having to be put on the spot with a microphone shoved in their face. Having a working wireless network, and an audience with capable laptops, definitely helped here. But not everyone had a laptop. This works out something like “clickers” on steroids, and could be a useful strategy for other presentations, or in the classroom in general.
  • The audience was too large for this kind of activity. Even half the size would have been better. This was approximately the same activity we’d run at both the Social Software Salon and Edublogger Hootenanny, but those events had participant counts around 12-ish and 50-ish, respectively. I hold those previous events as the best sessions I’ve ever been involved with, and am extremely proud of what we were able to do. That chemistry just didn’t happen during this keynote. Perhaps the audience-is-the-presentation model doesn’t scale to 300-500 people? More thought needed on this…
  • Defining a narrower topic or series of topics is important. Back to the Salon and Hootenanny – both had (comparatively) narrow topics well defined ahead of time. We’d tried to do that with the wiki page, but failed to fall back on it when the audience wasn’t engaging and things didn’t move forward fast enough – perhaps we were overcommitted to drawing the audience out?

In the end, I felt the session was both a success and a failure. I personally rated it at 5/10. Stephen gave it a 6/10. That’s not great. I’m not used to that. But, I think that it’s actually a good thing. I’d been staying inside my comfort zone way too long. It’s crucial to stretch out and try new things. Failure isn’t necessarily a bad thing. Worst case scenario, we modeled some risk-taking behaviour for the attendees, and survived the experience. Best case scenario, some of the attendees will have walked away with the seeds of some important new ideas waiting to germinate sometime in the future. No way to track that, though.

Am I going to be a little gun-shy about doing a session like this again? Probably. I’ll have to put some thought into how to ensure the session remains useful and interesting for everyone. It’s not acceptable to just push forward, knowing that half the audience is not with you (or, you’re not with them).

After the session, we schlepped our exhausted carcasses across the street to a hole-in-the-wall pub for debriefing. The discussion that Stephen, Brian and myself had there over a few brews was worth the trip and the risk all by itself. I’ve been needing that discussion for a long time, and am feeling a renewed sense of energy that I hope will last for a while. I think I will benefit a lot from learning about Stephen’s walkabout, as well as Brian’s thoughts and feedback. Thanks for that. You are both true friends, in every sense.

Update: Added a link to the audio recorded by Stephen.

University 2.0?

I’ve been thinking about what some of the possible implications of this various “2.0” stuff might be on Universities (or, I guess, on academic institutions in general). Likely nothing too earthshattering here, just some thoughts that were sparked over the weekend while thinking about the upcoming BCEdOnline fireside chat we’re planning.

Disclaimer: This blog entry is written by myself as an individual, not as a representative of the University of Calgary. I’m not advocating for anything here, just thinking out loud about what some of the implications might be if some trends continue for another 5/10/20 years.

If we assume that things like “web 2.0” tools, and concepts like the “PLE” are going to mature and evolve, and that individuals will be able to effectively manage their own online identities and resources, that has some implications for a University.

If a person is able to manage their own information, outside of the IT-mandated technobubble, they have the ability to negate any monopolistic tendencies of an institution. That is to say, if a student (or faculty member) is able to manage their own online identity and published resources, without the need for direct intervention by an Institution, they will be able to operate outside the boundaries of any single University. Extrapolating this, a student who is able to have relationships with more than one University, and who manages their own PLE, will be able to select what kind of relationship they want to have with each University. Perhaps they take their first-year biology courses from University X, chemistry from University Y, physics from MIT, philosophy from Cambridge, etc… Perhaps a professor is able to teach students who have relationships with any number of institutions (and are located anywhere they’re technically able to access the professor and course materials). In which case, to which University do the student or professor “belong”? Does that even make sense any more?

If individuals are in control of their institutional relationships, what is the role of the institution? Previously, it was (at least partially) to provide services that were not available to individuals without institutional support. Things like email, network access, classrooms, registration systems, scheduling systems, access to researchers, and access to publications were all offered by the University to its faculty, students and staff. If individuals are able to access any of these services as effectively (or moreso) on their own, what is left for the University? Perhaps the primary role becomes as a research institution? It’s still hard for individuals to conduct hard research on their own (chemicals, infrastructure, safety and security, protocols, etc…). Maybe Universities will become hubs of research activities, with teaching and learning under the auspices of the individuals that choose to have a relationship with a University?

So, the Institution becomes a place for individuals to come together to conduct research, and perhaps to facilitate discourse. Teaching and learning activities are perhaps supported by the Institution, but managed by individuals in any number of locations. What happens to curriculum? Degrees? Tenure? How different is this from where we are now?

I’m sure Stephen (one, two, three, four, five, six, seven), David Wiley (eg.), and many others have put much more thought into this than I have.

Internalizing

WARNING: Rambling, stream-of-consciousness, thinking-out-loud (hopefully not navel-gazing) ahead! Just trying to start framing some thoughts so I can make sense and move on.

It’s one of the weird paradoxes of the last few years for me – I’m much more involved with external (off campus) groups and online communities than I am with local ones. I’m better known off-campus than on. I’m more linked to individuals spread around the globe than those at my own institution.

The latest example of this was offered up inadvertently by someone returning from a recent trip abroad, which included a stop in Hong Kong. “D’Arcy, they know you in Hong Kong. They were asking if I know you since we’re both from the U of C.” (ps., howdy Nick!) Turns out I did vaguely know this person – he knew me mostly because he walked by my “office” that is newly equipped with a nametag, and I recognized his face but struggled to put a name to it – but the point is: I knew exactly who he was talking about, and could list off some cool stuff that Nick is doing. And I’ve never been to Hong Kong.

It seems like I’ve been more involved with projects in BC than in Alberta. In international projects, rather than local.

I’ve been feeling disconnected from the people who are physically around me, because it is so much easier to connect with likeminded individuals around the world – my global online community of practice. What does that say about the nature of communication and relationships?

I’ve also had to spend most of the last year or so on Big Projects – large multinational/multi-institutional endeavors that steer like oil tankers. External timelines, external demands, external users. What I think is needed is more time on smaller, nimble, adaptable projects that will make more of a difference in the trenches. I’m lucky in that I think I’ve been moved/moving in that direction, spending most of the last few months in Drupal and Moodle, thinking about how to integrate them into communities and workflows, rather than building New Applications Just Because Someone Said They Need It.

Stephen’s hiatus (whatever the cause) struck a nerve. I’ve gotten so wrapped up in this online stuff as part of my identity – my sense of self is being partially defined by what I (and others) are doing online. Is that wrong? Is that the way things are moving? It’s a bit disturbing. Why am I so comfortable just hanging everything out here? Is it as simple as some freaky narcissistic tendencies? I never thought I’d use that word with regard to myself, but this apparent need for external validation raises the question.

I’m going to have to put some thought into how to continue this in a more healthy way. Not even sure what that means, but something just doesn’t feel right. Have to track that down first…

20 Years after Challenger

I can’t believe it’s been 20 years since the Space Shuttle Challenger exploded shortly after takeoff on January 28, 1986. I can still remember seeing that horrific column of smoke and fire from the still-burning boosters. Damned frozen O-rings and dysfunctional communication in NASA.

Christa McAuliffe was to be the first schoolteacher in space, sent up as a payload specialist. She’s the one most remembered, but all seven crewmembers were lost in that tragic accident. I still shudder thinking of the ride they must have had, trapped in the cockpit as it fell to the water…

20 years later, with an additional Shuttle lost, we’re now paralyzed by the need for safe access to space. The atmosphere of pioneering exploration has been replaced by an apparent desire to have the space program operate more like a commercial airline. Mankind would never have made it to space without being willing to take risks.
