Starting a new blog/site to document my train of thought through the CMD PhD program. Projects I’m working on, etc.

Update: well, it started as a separate blog (running on the great Known platform), but I realized I don’t want to be managing separate sites for different things. Simpler is gooder. So, I exported it all from the Known site, and imported it here in my main blog, under a “phdnotes” category.

Notes: Porter et al. (2016). A qualitative analysis of institutional drivers and barriers to blended learning adoption in higher education.

Porter, W. W., Graham, C. R., Bodily, R. G., & Sandberg, D. S. (2016). A qualitative analysis of institutional drivers and barriers to blended learning adoption in higher education. The Internet and Higher Education, 28, 17–27.

An article from the future! (it’s not 2016 here yet, but articles from next year are already showing up. Go go, Gibson!)

Interesting paper, tying technology adoption stuff into professional development and support. This leads directly into our Learning Technologies Coaches program. Good timing.

Basically, more courses are going online or blended (LOTS of courses are getting shifted into blended format). Instructors are loosely described in broad categories: innovators, early adopters, early majority, late majority, and laggards. Ugh. I hate the term laggards for those-who-seem-to-resist.

They follow Graham et al.’s (2013)1 framework to describe barriers based on institutional strategy, structure, and support.

The researchers did an online survey of instructors at BYU-I, followed up with interviews of a stratified sample of survey respondents.

They found that instructors don’t really trust the motivations and demands of Administration, but that they do trust their peers because they are “in the trenches.” Man, I hate when people draw on the rhetoric of war when describing how they view teaching. Anyway.

So, instructors like to learn and design with peer instructors. They like one-on-one F2F support while building their blended and online courses, so they can see body language etc. But that seems a bit tone-deaf, when they are talking about building blended and online courses where their students won’t have body language cues. Hey. Whatever. Instructors are fun.

Instructors in the study want broadly defined policies, which they can interpret as needed. Guidance from above, without meddling or direct oversight.

“Standardization in terms of definition, and also you can leave it open in terms of how faculty would approach it.”

Makes sense – context-specific implementation of high-level guidance.

They note that infrastructure is a big factor – if the tech isn’t reliable, they get stuck. “If a student has a bad experience or difficulty with the technology, it can squelch their interest and excitement for the context of the course.” – We see this all the time. Frustration when network funkiness makes people have to wait or try again or wait and try again or give up and try later. Key bit:

“infrastructure is influential because course work and engagement stop when infrastructure fails during class or when students are completing assigned work.”

The researchers identified a few factors that were described by respondents as things that could help them to be successful in implementing blended learning:

  • course load reductions – give me time!
  • financial stipends – not a big factor. they want time more than anything.
  • tenure and promotion – also not a big deal, if they have the time to do things.

So: give instructors one-on-one mentoring or coaching with peers, give them solid technology platforms, and give them the time to do stuff.

  1. Graham, C. R., Woodfield, W., & Harrison, J. B. (2013). A framework for institutional adoption and implementation of blended learning in higher education. The Internet and Higher Education, 18, 4–14.

the one where I finally publish my thesis

So it’s been in progress for a long time. A long, long time. It’s been nearly done for some time as well. I completed (and passed) my oral exam on Nov. 30, and had some additional revisions to make before the thing could be considered officially complete.

Now, it is. I present…

A Case Study Using the Community of Inquiry Framework to Analyze Online Discussions in WordPress and Blackboard in a Graduate Course1

That’s a mouthful. It’s actually a shorter title than I was using originally. What is it?


Online discussions in a graduate level education course were compared using the Community of Inquiry framework and a Classroom Community survey within a mixed methods case study with concurrent triangulation of data sources. Discussion posts were published in two separate software applications: WordPress and Blackboard. Data collected included online discussion metadata, Community of Inquiry coding of online discussion content, survey responses from students, and an interview with the instructor to identify pedagogical decisions made in the design of the course. Content analysis of the discussion archives described differences in posts published to the two platforms, as well as differences in simultaneous indications of Community of Inquiry presences over time. Five new online discussion timeline visualization methods are presented. Key findings include an emphasis on pedagogical design over software selection in facilitating rich online discussions in the context of a graduate level course, although selection of software may provide signals to participants regarding the instructor’s expectations. Recommendations for reproducing similar research, identification of areas for future research, and recommendations for practice are provided.

Yeah. So, what’s that?

Basically, I did a case study of a grad-level online course at the UofC. Online discourse was done in Blackboard and WordPress. I archived the stuff posted by students who consented to participate, and then coded their posts using a template from the Community of Inquiry framework. I then crunched the coded data, mixed with the metadata about the posts themselves, and found some interesting patterns. I had to make up some new ways to visualize the online discussion data in order to describe things the way I wanted.

I went into it thinking “blogging is going to be more awesome than LMS discussions” – I was going to try to provide some data to back that up.

I got the data, but the reason for blogging being more awesome than LMS discussions, in this case study, turned out to have little to do with the technology choice itself. The biggest factor was the pedagogical design of the course – students were given assigned writing and commenting activities in WordPress, whereas Blackboard was more of an info dump for discussions about the course itself. Also, WordPress posts and comments were graded by the instructor. Blackboard discussions, not so much. Guess where students wrote longer, more thoughtful posts.

Yeah. So that happened.

But, the methodology is still interesting. And the case study makes it clear that the design of a course and pedagogical activities are crucial in setting up meaningful online discourse.

I got to do some really interesting analysis of online discussions – combining the raw metadata with coded data about the posts themselves. Lots of cool stuff going on there.

And, I discovered just how powerful Excel pivot tables are. Seriously. Most of the heavy lifting of storing, normalizing, processing and analysing the data, as well as many of the visualizations, was done entirely in Excel. Likely a blog post coming up about how I did that… I used a lot of tools, but Excel was definitely the main one. Go figure. I would not have predicted that…

  1. it’s also linked from the Projects menu on my blog, cryptically under something called “Thesis”.

Notes: Clarke & Kinne (2012). Asynchronous discussions as threaded discussions or blogs

Clarke, L., & Kinne, L. (2012). Asynchronous discussions as threaded discussions or blogs. Journal of Digital Learning in Teacher Education, 29, 4–13.

The article looked at students publishing online discussions using Blackboard and WordPress, and their reported sense of community, etc…

Kinda perfect for use in my thesis.

But the article is embargoed from our library collection, and the ISTE website for the journal locks it behind a broken paywall. I’ve tried several times to buy the article, but can’t get near it.

Open access, people. Don’t lock your awesomeness behind a paywall. This article is perfect for my thesis, but won’t be used because I can’t get to it.

discussion visualization with gephi

I’ve been playing around with gephi today, to see what I could come up with to display the discussion threads from my research data. Lots of manual data entry later, and I’ve got this:

and this:

WordPress sites are shown in red, Blackboard discussion forums in blue. So far, just a pretty picture, but I’ll hopefully be able to coax out a diagram or two that shows the difference in interaction patterns between the two platforms…
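If I end up redoing the data entry, scripting the export would save a lot of typing. Something like this (with made-up author names standing in for the actual thread data) would turn reply records into the kind of Source/Target edge list that Gephi’s spreadsheet importer reads:

```python
import csv
import io

# Hypothetical reply records: (replying author, author replied to, platform).
# These are stand-ins for the manually entered thread data.
replies = [
    ("student_a", "student_b", "WordPress"),
    ("student_b", "student_a", "WordPress"),
    ("student_c", "instructor", "Blackboard"),
]

# Gephi's spreadsheet importer accepts a simple Source,Target edge list;
# the Label column here carries the platform for colour-coding.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Source", "Target", "Label"])
for source, target, platform in replies:
    writer.writerow([source, target, platform])

edges_csv = buf.getvalue()
print(edges_csv)
```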

discussion network visualization

I just put together some quick network maps for the online discussions from my thesis research data. Haven’t done any analysis – just some purty pictures to see any at-a-glance differences:

Both discussion platforms had about the same number of posts and responses, but the pattern of connections is markedly different for some reason…
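One way to move beyond purty pictures would be a simple in-degree count – how many replies each participant receives on each platform. A flat distribution would suggest distributed peer-to-peer interaction; a big spike on one node would suggest hub-and-spoke. A quick sketch with placeholder edges (not the real data):

```python
from collections import Counter

# Hypothetical reply edges per platform: (replier, person replied to).
# Placeholder data, not the study's discussion archive.
edges = {
    "WordPress": [("a", "b"), ("b", "a"), ("c", "a"), ("a", "c")],
    "Blackboard": [("a", "b"), ("c", "b"), ("d", "b"), ("e", "b")],
}

# Count how many replies each participant receives on each platform.
in_degrees = {
    platform: Counter(target for _, target in edge_list)
    for platform, edge_list in edges.items()
}
print(in_degrees)
```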

aggregated metadata for online discussions

here’s a quick look at the aggregated metadata for all of the online discussions I’m using in my thesis:

About the same number of posts in each platform, with a bit more of a time-spread in the WordPress discussions, substantially longer posts in WordPress, about the same (non-)use of images, more links in WordPress posts, and more attachments in Blackboard posts.
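For the record, the per-platform aggregation itself is dead simple – something like this sketch, with made-up numbers standing in for the real metadata:

```python
from statistics import mean

# Hypothetical post metadata records: illustrative stand-ins for the
# archived discussion metadata, not the actual thesis data.
posts = [
    {"platform": "WordPress", "words": 420, "links": 3, "attachments": 0},
    {"platform": "WordPress", "words": 380, "links": 1, "attachments": 0},
    {"platform": "Blackboard", "words": 90, "links": 0, "attachments": 1},
    {"platform": "Blackboard", "words": 120, "links": 0, "attachments": 2},
]

# Aggregate each metric per platform, mirroring the summary above.
summary = {}
for platform in {p["platform"] for p in posts}:
    rows = [p for p in posts if p["platform"] == platform]
    summary[platform] = {
        "posts": len(rows),
        "mean_words": mean(r["words"] for r in rows),
        "links": sum(r["links"] for r in rows),
        "attachments": sum(r["attachments"] for r in rows),
    }
print(summary)
```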

basic metadata analysis

Here’s a quick pass at analyzing the basic metadata for the online discussions.

I plotted a few calculated values (Excel pivot tables fracking ROCK, BTW…) to try to compare activity patterns. What’s interesting in this graph is the average wordcount (green line) – low for the Blackboard discussion board threads (the left 5 items) and markedly higher for the 8 student blogs (the right 8 items).

The number of posts in each discussion (dark blue line) is relatively consistent across all discussions. Slightly lower for the WordPress blog sites, but not dramatically so.

Also interesting is the red line – the standard deviation of the “day of course” for posts. It’s a rough estimate of how rapidly posts occur – a low standard deviation indicates the posts occurred relatively close together on the calendar, while a high value indicates they were spread over a longer run of days. This suggests that Blackboard posts were added in brief, rapid bursts, while the WordPress posts and comments were posted over longer durations. People kept coming back to blog posts long after they were started. Interesting. There could be a number of reasons for this – it’s easier to see Bb discussion boards all in one place, and easier to forget to check various blogs for activity, etc… Or do they just reflect more, and more deeply, on blogs? I’d love to find out the reasons behind the different values…
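The calculation behind the red line is straightforward. With placeholder day-of-course values (not the real data), the burst-vs-spread contrast looks like:

```python
from statistics import pstdev

# Hypothetical "day of course" values for each post in two discussions.
# Placeholders only, not the actual thesis metadata.
blackboard_days = [14, 14, 15, 15, 15, 16]  # a brief, rapid burst
wordpress_days = [10, 14, 21, 28, 35, 44]   # spread across weeks

# A low standard deviation means posts clustered on the calendar;
# a high one means the conversation stayed alive over a longer span.
bb_spread = pstdev(blackboard_days)
wp_spread = pstdev(wordpress_days)
print(f"Blackboard spread: {bb_spread:.1f} days")
print(f"WordPress spread: {wp_spread:.1f} days")
```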

So… The WordPress discussions occurred over longer periods, using slightly fewer posts/responses, but with dramatically longer posts than was seen in the Blackboard discussions…

full online discussion metadata visualization

I’ve finally entered all of the metadata information for the online discussions I’m using in my thesis. This includes the person who posts something, the date, and the size of the post. I worked through my earlier visualization mockup, and wanted to try it with the full set of data. So, here’s the Blackboard discussions (top image) and WordPress blog posts (bottom image):

It’s only the most basic of metadata, but differences in activity patterns are already becoming apparent. Both images use the same time and size scales. The WordPress discussions appear to use significantly longer posts and comments, spread over much more time. Blackboard discussions appear to be shorter posts, over briefer durations.

Next up, I get to code each post for Community of Inquiry model “presences” – as described by indicators for social, cognitive and teaching contributions in the posts. I’ll figure out some way to overlay that information on top of the basic metadata visualization.
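I haven’t settled on the overlay yet, but the underlying tallies are easy to sketch. With hypothetical codes (each post can show any combination of the three presences – these aren’t real coding results), counting presences and their pairwise co-occurrences per post could look like:

```python
from collections import Counter
from itertools import combinations

# Hypothetical Community of Inquiry codes assigned to each post.
# Each post can exhibit any combination of the three presences.
coded_posts = [
    {"social", "cognitive"},
    {"cognitive"},
    {"social", "cognitive", "teaching"},
    {"teaching"},
]

# Tally individual presences, plus pairwise co-occurrences within a post --
# the co-occurrences are what an overlay on the timeline would highlight.
presence_counts = Counter()
cooccurrence = Counter()
for codes in coded_posts:
    presence_counts.update(codes)
    for pair in combinations(sorted(codes), 2):
        cooccurrence[pair] += 1

print(presence_counts)
print(cooccurrence)
```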