Reclaiming Educational Technology: flexible and open

Episode 3 of Reclaiming Educational Technology, looking at the transition from monolithic vendor-provided enterprise solutions to more flexible and adaptive projects. Some of the segments are also used in episodes 1 and 2, but they needed to be re-included here so this works as a standalone piece. When I do a longer supercut version, I’ll remove the duplicate clips.

Reclaiming Educational Technology – episode 3 from UCalgary Taylor Institute on Vimeo.

Reclaiming Educational Technology: the business and politics of edtech

During the Reclaim Hackathon at UMW last week, several of us were talking over food and beverages and realized that we had the opportunity to document the current thinking in the “edtech scene”. It’s something that we hadn’t tried to do explicitly before, but we realized that if we don’t do it ourselves we’ll be left with the narratives pushed by the Big Business of Edtech Venture Capital™. So, David Kernohan and I took it on as a project. We recruited Andy Rush to record a series of impromptu interviews with some of the people who were present at the event1, and off we went.

I took on editing the footage into something that tells the stories, starting with this:

Reclaiming Educational Technology: the business and politics of edtech from UCalgary Taylor Institute on Vimeo.

Thanks so much to Audrey Watters, Kin Lane, and Martha Burtis for agreeing to participate (and to the many other folks who took part – they’ll be making appearances in future episodes – OOH! THE SUSPENSE!).

I’m planning on several additional segments/episodes, exploring the nature of innovation, shifts in culture and technology, and more. I’ll make time to put those together ASAP. When all of the smaller segments are done, I’ll try to work them together into a longer documentary that ties everything together.

  1. I’d have loved to interview everyone, but even these brief interviews produced an hour and 48 minutes of raw footage – we’ll have to plan follow-up sessions later…

reclaiming website search

I’ve been withdrawing from relying on Google wherever possible, for various reasons. One place where I was still stuck in the Googleverse was the embedded site search on my self-hosted static-file photo gallery site. That was one of the few places where I couldn’t find a decent replacement for Google, so it stayed there. And I wasn’t comfortable with that – I don’t think Google needs to be informed every time someone visits a page I host1. I used that embedded search pretty regularly, and cringed every time the page loaded.

There had to be a good search utility that could be self-hosted. I went looking, and tried a few. My requirements were pretty basic – I don’t need multiple administrators, sharded or replicated databases, multiple crawling schedulers, etc. I don’t want to have to install a new application framework or runtime environment just for a search engine. I want a simple install – ideally either a simple CGI script or something that can trivially be dropped onto a standard LAMP server.

Today, I installed a website indexer on a fresh new subdomain. Currently, the only website it indexes is darcynorman.net/gallery, but I can add any site to it, and then index and search on my own terms, without feeding data into or out of Google (or any other third party).

The search tool is powered by Sphider and seems pretty decent. It’s a simple installation process, and uses a MySQL database to store the index. Seems pretty fast – on my single-site index, with one user (me).

The biggest flaw I’ve found with Sphider so far is in how it handles relative links. Say you have a website structure like this:

  • index.html
    • page1.html
    • page2.html

If index.html uses a simple relative link like <a href="page1.html">Page 1</a>, Sphider skips it – unless the index.html page has a <base> element telling Sphider explicitly how to rebuild full URLs for the relative links. Something like this:

<base href="http://photos.darcynorman.net/" />

Sphider can then use that to turn relative links into fully resolved absolute links.
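To see why the <base> element matters, here’s a minimal Python sketch of how relative links get resolved (this isn’t Sphider’s actual code, and the URLs are just examples from my gallery):

from urllib.parse import urljoin

page_url = "http://photos.darcynorman.net/index.html"  # hypothetical gallery page
relative_href = "page1.html"

# Resolving against the page's own URL works without any <base> element --
# this is what I'd expect a crawler to do by default:
print(urljoin(page_url, relative_href))
# -> http://photos.darcynorman.net/page1.html

# When a <base href="..."> is present, links resolve against it instead:
base_href = "http://photos.darcynorman.net/"
print(urljoin(base_href, relative_href))
# -> http://photos.darcynorman.net/page1.html

Either way the result is the same fully resolved URL – Sphider just seems to insist on having the explicit <base> hint before it will follow the link.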

But this is strange – I had two choices:

  1. hack the Sphider code to teach it how to behave properly (and then re-hack the code if there’s an update)
  2. update each gallery menu page to add the <base> head element

I chose #2, because I just didn’t have the energy to fix Sphider, and the HTML fix was simple enough. It definitely feels like a bug – editing every page to add a <base> element shouldn’t be required, but whatever.
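If you have more than a handful of pages, even option #2 can be scripted rather than done by hand. A rough Python sketch (the gallery path is hypothetical, and back up your files before running anything like this) that drops the <base> tag into the <head> of every gallery page:

import pathlib

GALLERY_ROOT = pathlib.Path("/path/to/gallery")   # hypothetical local path to the gallery pages
BASE_TAG = '<base href="http://photos.darcynorman.net/" />'

for page in GALLERY_ROOT.rglob("*.html"):
    html = page.read_text(encoding="utf-8")
    if "<base " in html:
        continue  # already has a <base> element, leave it alone
    # naive string replacement: insert the tag right after the opening <head>
    patched = html.replace("<head>", "<head>\n" + BASE_TAG, 1)
    if patched != html:
        page.write_text(patched, encoding="utf-8")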

Bottom line, Sphider works perfectly for my needs. It’s now powering the site search for my photo gallery site, and works quite well for that. And, it’s going to be available to index any of my other projects if needed.

  1. as would happen whenever the embedded search JavaScript is loaded – that activity data could then be tracked/stored/analyzed by Google to better model what you’re interested in, who you know, etc…

giving up on owncloud (for now)

I’ve really been loving running my own Dropbox clone, using ownCloud on my Hippie Hosting Co-op account. It’s (mostly) seamless and automatic, and (usually) Just Works™. It’s not as polished as Dropbox’s UI, but that’s not critical (although status badges on files and folders would be nice…)

But, over the last week or two, I’ve been noticing that ownCloud on my work computer gets wedged. Digging into the status, I can see the URL has changed from my ownCloud instance to something intercepted by browser-based wifi authentication. Just changing the URL in the configuration doesn’t seem to solve it. I have to nuke my ownCloud settings, add a new config, delete it because it insists on syncing to /clientsync rather than /, re-add it manually, and then delete the /clientsync folder on the server. Annoying. I just need this to work.

So. I’m back to Dropbox for awhile. I don’t have time to fart around with this stuff right now. I need my file sync service to Really Just Work™. I’ll try ownCloud again when I have some downtime to muck about with it.

Reclaim Project: 2 steps forward, 1 step back.

tumblr!

Yahoo! is buying Tumblr for $1.1B US. Cash, not stock paper-shuffling. Why? Marissa Mayer says:

In terms of working together, Tumblr can deploy Yahoo!’s personalization technology and search infrastructure to help its users discover creators, bloggers, and content they’ll love. In turn, Tumblr brings 50 billion blog posts (and 75 million more arriving each day) to Yahoo!’s media network and search experiences. The two companies will also work together to create advertising opportunities that are seamless and enhance user experience.

Gee. That sounds awesome. If only my blog had access to personalization technology and search infrastructure to help users discover creators and content. And, if only my blog had Yahoo’s media network and search experiences. And I was thinking just the other day, that things would be so much better, if only I could create advertising opportunities that are seamless and enhance user experience.

Said no one. Ever.

I’m holding out to cash in on leveraging synergistic paradigms by extending audience reach and engagement in order to drive personalization of advertising placement. That’s where the money is.

reclaim open

Audrey Watters and Jim Groom were at the MIT Media Lab with Philipp Schmidt and others for a hackathon. Sounds like it was a pretty incredible couple of days.

The video below captures some of the discussion. So much goodness in it. We haven’t lost the open web. We can (continue to) choose to build it. Yes, there are silos and commodification and icky corporate stuff that would be easy to rail against, but what if we just let go of that and (continue to) build the web we want and need? Yeah. Let’s (continue to) do that… That’s what Boone’s Project Reclaim is all about. That’s what I do on a tiny, insignificant, human scale. That’s why I publish my own stuff here – I’ve built this site up exactly how I want it, to support my ability to be as open as I choose, without relying on others to enable me (or to decide not to).

It’s not about protesting against silos or corporate activity streams. Freedom means people get to choose how they manage their digital artifacts (including delegation of that responsibility to third parties). It’s about doing what I think is right, and feeling good about that. That’s all I can do.

I’m really looking forward to seeing what UMW does with their Domain of One’s Own project – and hoping to do more of that kind of thing here on our campus. Some pretty amazing things can happen if you enable and encourage individual students and instructors to build their own stuff…

Reclaim Open Learning – Not Anti-MOOC. But pro open. from Jöran und Konsorten on Vimeo.

Anil Dash on The Web We Lost

David Weinberger shared his notes from Anil Dash’s recent talk at Berkman about social media and the (d)evolution thereof. Some really important stuff in there.

on shared values and culture:

There was a time when it was meaningful thing to say that you’re a blogger. It was distinctive. Now being introduced as a blogger “is a little bit like being introduced as an emailer.” “No one’s a Facebooker.” The idea that there was a culture with shared values has been dismantled.

on metadata and intentional sharing:

A decade ago, metadata was all the rage among the geeks. You could tag, geo-tag, or machine-tag Flickr photos. Flickr is from the old community. That’s why you can still do Creative Commons searches at Flickr. But you can’t on Instagram. They don’t care about metadata. From an end-user point of view, RSS is out of favor. The new companies are not investing in creating metadata to make their work discoverable and shareable.

on lock-in and the impact of corporate control over discourse platforms:

We have “given up on standard formats.” “Those of us who cared about this stuff…have lost,” overall. Very few apps support standard formats, with jpg and html as exceptions. Likes and follows, etc., all use undocumented proprietary formats. The most dramatic shift: we’ve lost the expectation that they would be interoperable. The Web was built out of interoperability. “This went away with almost no public discourse about the implications of it.”

on streams, and the algorithmic control of conversation flow:

Our arrogance keeps us thinking that the Web is still about pages. Nope. The percentage of time we spend online looking at streams is rapidly increasing. It is already dominant. This is important because these streams are controlled access. The host controls how we experience the content. “This is part of how they’re controlling the conversation.”

on the lack of historical context:

We count on 23 yr olds to (build websites/apps/tools), but they were in 5th grade when the environment was open.

First. Dang. That makes me feel old. But how can we expect the people building the current and next generations of things to have learned from history, when they weren’t around to experience it – to know how important this is, or how it can be done differently?

I’m not sure that we’ve lost the web. Yes, the open web is marginalized, and the corporate streams are predominant. But it’s not over. Eventually, Facebook will fall – my gut says they’ll do something colossally stupid with the new Facebook Home Android thing and its constant tracking of users, and that may (finally) attract significant attention and oversight. And then, people will likely withdraw. And eventually come back to wanting to control their own content and activities rather than unthinkingly relying on “free” corporate streams…

reclaim your rss feed reader

So Google is killing Reader:

We launched Google Reader in 2005 in an effort to make it easy for people to discover and keep tabs on their favourite websites. While the product has a loyal following, over the years usage has declined. So, on July 1, 2013, we will retire Google Reader. Users and developers interested in RSS alternatives can export their data, including their subscriptions, with Google Takeout over the course of the next four months.

Translation: Thanks for letting us mine your activity and data for a few years. We’ve decided you just don’t make enough money for us, and we’ve decided to stop using your activity to feed into our search algorithm. You are no use to us anymore. We’re killing Reader. End transmission.

Translation 2: Using a web page to read feeds is emasculating.

I’m not at all surprised by this. (remember iGoogle?)

But there is an easy way to reclaim your feed reader, so nobody can take it away from you, or cripple it, or mine your activities and data.

I switched to Fever˚ a couple of years ago, migrating all of my feeds from Google Reader. And haven’t looked back. It’s not free – it costs a whopping $30 for a license. But the licensing fee goes to support a fantastic developer, and means that there are no ads or data mining or anything skanky.

Here’s my current Fever˚ “Hot” dashboard:

[Screenshot: Fever˚ “Hot” dashboard]

Here’s my “★★★★★” folder of must-read feeds:

[Screenshot: five-star folder of must-read feeds]

Here’s my “Photos” folder – mostly from Flickr users, but also people posting photos elsewhere. All in one handy feed display:

[Screenshot: “Photos” folder]

It’s also got a great iOS app, Reeder (which is best on the iPhone – it’s pixel-doubled on the iPad for some reason).

Screenshot of “hot” items in Reeder on my godphone:

[Screenshot: “hot” items in Reeder]

And the five-star feed folder:

[Screenshot: five-star feed folder in Reeder]

You can still “share” items – you can expose an RSS feed for items you star within Fever˚, and – wait for it – anyone can subscribe to that feed, using any reader that hasn’t been “sunsetted” by a giant corporation. I display my “shared” items on a page on my blog, powered by a self-hosted instance of Alan’s awesome Feed2JS tool.
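Feed2JS does the heavy lifting on my page, but the underlying idea is simple enough to sketch. Assuming the starred-items feed is plain RSS at some URL (the URL below is hypothetical), a few lines of Python with the third-party feedparser library can turn it into roughly the kind of HTML list that Feed2JS generates:

import feedparser  # third-party: pip install feedparser

# hypothetical URL for a Fever "starred items" feed
FEED_URL = "http://fever.example.com/?feed&starred"

feed = feedparser.parse(FEED_URL)

# build a simple linked list of the ten most recent starred items
items = []
for entry in feed.entries[:10]:
    items.append(f'<li><a href="{entry.link}">{entry.title}</a></li>')

print("<ul>\n" + "\n".join(items) + "\n</ul>")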

It’s my Fever˚. No company can decide to “sunset” it. Well, I guess Shaun can decide to abandon it, but even if that happens, the software is running on my server, so worst-case scenario, I don’t get updates provided by him (through his fantastic automated software updater, btw).

Anyway. Google kills Reader. Not surprising. If you’re still relying on anything Google provides, now it’s shame on you. Reclaim your stuff.

online content producers timeline

I’ve been thinking about the Posterous shutdown, and about previous large-hosted-service shutdowns, going all the way back to Geocities. I think I’ve been so deep in the host-your-own-stuff world that I haven’t been seeing the larger context. Just because I host my stuff, and just because most of the people I know host some (or most) of their stuff, doesn’t mean that the rest of the online population does the same thing. But how far out of whack is my sense of how common it is for people to manage their own stuff?

I spent a few hours today trying to dig up historical numbers for people using various tools and services to host content. It was surprisingly difficult to find historical data for the number of people with active accounts or downloads of various tools and services. So, I had to settle for cobbling together numbers from various press releases and questionable online reports.

Here’s a rough timeline of active content producers, from 1995 – when Geocities kicked off – to the end of 2012. The figure on the left shows raw numbers of people publishing online, using hosted services (blue) or self-hosted software (red). The figure on the right shows the proportion of active content producers using hosted services (blue) or self-hosted software (red) – note that this panel is zoomed in to the 95–100% range because, not surprisingly, almost everyone uses hosted services.

[Figure: online content producers, 1995–2012 – raw counts (left) and proportion on hosted services (right)]
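For the curious, the figure itself is nothing fancy – something along these lines would rebuild it. The numbers below are placeholder values only, not the actual data I cobbled together:

import matplotlib.pyplot as plt

# Placeholder values only -- the real numbers came from press releases and
# assorted (questionable) online reports.
years = [1995, 2000, 2005, 2010, 2012]
hosted = [1e6, 20e6, 150e6, 1.2e9, 1.9e9]     # accounts on hosted services
self_hosted = [1e4, 5e5, 5e6, 20e6, 30e6]     # self-hosted installs

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Left panel: raw counts of active content producers
ax1.plot(years, hosted, color="blue", label="hosted services")
ax1.plot(years, self_hosted, color="red", label="self-hosted")
ax1.set_title("Active content producers")
ax1.legend()

# Right panel: proportion using hosted services, zoomed to 95-100%
proportion = [h / (h + s) * 100 for h, s in zip(hosted, self_hosted)]
ax2.plot(years, proportion, color="blue")
ax2.set_ylim(95, 100)
ax2.set_title("% using hosted services")

plt.tight_layout()
plt.show()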

Services that I could find numbers for:

  • Blogger
  • Facebook
  • Geocities (now defunct)
  • Livejournal
  • MySpace
  • Posterous (now defunct)
  • Tumblr
  • Twitter
  • WordPress (.org and .com)

I haven’t been able to find numbers for Movable Type or TypePad, or a long list of other tools and services. I’ll keep poking around, and will update the figures when I find more data… This also doesn’t include the ~50 million Flickr users etc… (yet).1

It boils down to this – almost everyone – well over 95% of people – use hosted services to publish their content (if they publish content at all). And, these hosted services have a long history of withering or closing down entirely. With about 2 billion accounts2, and with almost all of that publishing activity occurring in hosted services, we will need to come to terms with what that means for an online culture when these now incredibly popular services do what hosted services do.

Do we step up attempts to archive these services, as the Internet Archive did with the 650GB of data from Geocities? Do we attempt to help distribute content across multiple services or self-hosted websites to try to mitigate the impact of any one of these services disappearing?

Do we even care? Does this stuff even really matter, or am I just overthinking this?

  1. although, compared to the number of people using Facebook etc., the user base of a site like Flickr is basically just a rounding error.
  2. not representing 2 billion people – there will be strong overlap, with people having several accounts each