on digital posters at an academic conference

Poster sessions are an important part of any academic conference – providing a way for researchers (both faculty and students) to share their work in a format that supports describing methods, results, and discussion, while fostering conversation about the project. Normally, these posters are printed on large-format printers, carefully rolled into tubes for travel, and hung from poster boards or walls in a conference venue. It works, but it requires the posters to be completed days (or weeks) ahead of time to allow for layout and printing (and any revisions to fix typos or omissions). It also requires the content to be static – it’s a printed poster – and the format usually involves a 4′×6′ sheet of paper packed with dense micro-print and footnotes.

When we were planning the 2016 University of Calgary Conference on Postsecondary Learning and Teaching, we knew it would be the first conference held in the new Taylor Institute for Teaching and Learning. It’s a facility designed from the ground up to be pervasively digital, and it felt wrong to be doing printed posters when we have 37 high-quality screens available in the learning studios.

So. We committed to doing the poster session in digital format – something we’d never done before. I’d never designed a digital poster, and there are different affordances and constraints. Although the screens are full 1080p HD resolution, that’s actually much lower resolution and data density than people are used to with traditional large-format printed posters. We had to do some experimentation, but it turns out that a poster file prepared for print displays just fine – as PowerPoint or PDF – on the 50″ displays on our collaboration carts[1].

Another thing we had to plan for – the rooms where the posters would be displayed were being actively used during the conference, so they would need to be combined by raising 2 Skyfold walls and moving all of the tables and chairs out of the way. It took a grand total of 40 minutes from start to finish, and the entire combined Learning Studios ABC was ready for the poster session. Amazing. Many hands make light work, and the technology worked flawlessly[2].

After the poster session (which worked – I was totally not anxious about that. At all.), I did some quick napkin math. We were using 18 of the displays, which works out to 75 feet of digital posters. And we only used half of the Collaboration Carts in the building[3]. Wow.

I drafted a one-page “How to design a digital poster” document for participants to get an idea of what might be involved. It grew to 2 pages. It’s a lame and incomplete guide, but I needed to give something to the people who were being asked to prepare digital posters… I also ran a drop-in session for people to try their poster designs, and my team consulted with several people about how they might prepare their material for the poster session. Best team ever.

We realized that most people would be familiar with creating PowerPoint and PDF files, but our Collaboration Carts[4] don’t have the full Microsoft Office suite, so we needed to find a way to reliably display these standard files through a web interface. I initially thought of using Google Slides (as indicated in the howto document), but quickly discovered that Slides renders many things… non-optimally. But – there is a company that has some experience with Office files, and that provides a cloud-based web renderer for such things.

Digital posters served up via OneDrive shared folder

OneDrive to the rescue. The PowerPoint rendering is flawless, and the more conventional posters with microtext rendered perfectly – and were completely legible on the 50″ HD displays! I set up a shared OneDrive folder so my team had access to upload copies of people’s poster files (most showed up with a PowerPoint file on a USB thumbdrive), so a quick drag-and-drop to the shared OneDrive folder took care of webifying the poster for display during the session. Easy peasy.
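The drag-and-drop workflow could be scripted as something like the sketch below – a minimal, hypothetical helper that copies poster files from a USB drive into a locally synced OneDrive folder (the sync client then handles the upload). The folder names and file types are assumptions for illustration, not part of any OneDrive API:

```python
import shutil
from pathlib import Path

# File types we staged for display (an assumption for this sketch).
POSTER_TYPES = {".pptx", ".ppt", ".pdf"}

def stage_posters(usb_dir: str, onedrive_dir: str) -> list[str]:
    """Copy poster files from a USB drive into a locally synced
    OneDrive folder; the sync client then uploads them, and the
    Office web viewer renders them on the displays."""
    src, dest = Path(usb_dir), Path(onedrive_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in src.iterdir():
        if f.is_file() and f.suffix.lower() in POSTER_TYPES:
            shutil.copy2(f, dest / f.name)   # copy2 keeps timestamps
            copied.append(f.name)
    return sorted(copied)
```

In practice we just dragged files by hand, but the logic is the same: filter for PowerPoint/PDF, drop into the shared folder, let OneDrive do the webifying.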

But not all posters were conventional print-posters-displayed-on-digital-display things. A handful tried something completely new – one team created an amazing graphic poster inspired by Nick Sousanis’ work on Unflattening. Amazing. And it won the “Best Poster” prize!

Poster created by Sloan Dugan and Gillian Ayers. Posted with permission from Sloan Dugan.

I created my poster (in collaboration with / based on work by Brian Lamb) as a self-playing looped video file (authored in Keynote and Procreate) – and really struggled with the format. The pacing is difficult to get right, because the poster runs in the background to foster conversation. With a video poster, people may not be able to stay for long, and wind up missing large chunks of the concept. I’ll work on it some more… perhaps with a layer of navigation and interactivity, so participants can trigger brief video (or other) segments as they explore, rather than sitting and watching something like a museum exhibit…

Quick observations from the session… It was by far the highest-energy academic poster session I’ve ever seen. There wasn’t the usual “I’m here for the wine and food, and will talk to enough people that I won’t feel guilty about leaving early” kind of thing – people were genuinely having great discussions. Fantastic. We actually had to kick people out of the room after the end of the session, because we had to reset the room for conference mode before day 2 of the conference (and we only had our volunteers for so long, so we needed to finish that conversion before they headed out). We had to give 4 announcements, flash the lights, and eventually just throw the switch to turn off the poster displays before people started leaving. Amazing. I’ve never seen such a thing. I don’t know if it was a result of the novelty of the session – nobody had seen digital posters done in that way, at that scale, before – or if it’s yet another indicator that we have an absolutely incredible community of instructors and students at the University of Calgary…

One thing we’ll need to do is facilitate a series of sessions throughout the year to provide opportunities for people from across campus to come together and think creatively about what a digital academic poster could be – how can interactivity be a part of the experience? How can live data or external media be pulled in? What happens when the design constraints of a physical large-format printed poster are let go, and a poster is designed to take advantage of the affordances offered by the medium?

If anyone knows of any really good resources to help support mindful design of digital/interactive academic posters, I’d love to see them!

  • Nancy Chick [wrote a brief essay about creating posters for the scholarship of teaching and learning, and visual representation of concepts](http://sotl.ucalgaryblogs.ca/posters/).
  1. we REALLY need to come up with a better name for these things – 50″ touch-enabled display, embedded Windows PC for Internet connectivity, and a Mersive Solstice Pod, on wheels, times 37. Collaboration Cart doesn’t really work, but it’s all we’ve come up with that isn’t a corny forced acronym… The carts can be moved throughout the learning studios on the main floor of the building, and the actual hardware that runs them is safely tucked into a server room on another floor – all we have on the carts themselves is the display, a converter to receive the video signal sent down from the server room over HDBaseT and into the display via HDMI, and another wire that sends the USB signal for the touch interface over Ethernet up to the server room.
  2. which was a huge concern for me – this was the first full-scale event we’ve run in the building, aside from grand-opening types of things – and also the first time all of the Carts were fired up and running independently (usually they’d been run in presentation mode, mirroring the instructor’s display on the Big Screen, but for this session, each Cart was a separate thing), which put a HUGE load on the network because of all of the video being chucked around to drive the units…
  3. again. Better name?
  4. again. Horrible name.

Notes from InfoComm 2015

I was at InfoComm 2015 this week, touring some vendors recommended by The Sextant Group, our AV consultants for the Taylor Institute construction project. This was my first time at InfoComm, and I was kind of stunned at the sheer size of the trade show – and at how many similar products exist, with variations and overlaps. It’s rare to see a product that is truly unique – and from what I saw, it comes down mostly to the overall experience and how people are able to actually use the tools, rather than the feature-list checkboxes. No surprise there. Sometimes, having the most features is not a good thing. It’s having the right features (and not having the others). Here are my rough-ish notes about some of the vendors and products we visited.

Long(ish) post, more after the break…


CNIE session on campus engagement

I was fortunate to be able to present a session at CNIE 2014, to share some of the campus engagement work we did as part of our long LMS replacement project. I tried to stay away from the technology itself, and focus on the engagement process. The full slide deck is available online, as are the fuller reports describing the engagement and findings, along with the GitHub repository of LMS RFP requirements[1].

Basically, I described the process, which started as a conventional inventory of shiny things. We then realized that we had the opportunity to have a more meaningful discussion as a campus community, and the conversation shifted to more interesting topics such as how people actually teach and learn, and what they actually care about.

I billed this as a hands-on session, and was rewarded with a coveted 90-minute slot. The first activity had participants try building a “fishbone diagram”, based on the research of Jeffrey Nyeboer. It’s a useful way of organizing the description of organizational attributes – things that make up the workflow of an organization – in a way that’s more meaningful than simple word clouds.


(photo by the awesome and talented Irwin DeVries)

It’s a process we used with faculty leadership across our campus, to describe what they mean by “teaching and learning”. We provided them with a simplified template as a null hypothesis, and asked each faculty to correct/complete/adapt/recreate it as needed to describe what they care about. The beauty of this kind of diagram is that it’s pretty inclusive – it’s easy to work on with a group, and when there is disagreement about something on it, or something that has been missed, it is easy to hand people markers to hack away at the diagram until they like it. Used that way, it’s an interesting way to build consensus around what an organization cares about – something that often triggers conflict and defensive postures. The cool thing about the engagement model is that it has led to deeper discussions about things far more interesting than what people need from an LMS – it’s opened the door to ongoing conversations about teaching and learning that would have been difficult, impossible, or unavailable otherwise.

Here’s the simplified fishbone we used as a starting point for each faculty on campus:

Fish Bones

Here’s one of the fishbones that was adapted by one of our faculties[2]:

Fish Bones – Education

And the fishbones that some of the session participants came up with, to describe various contexts:

Evernote snapshots of participant fishbones

In the session, I also talked about how we identified the various types of people/groups that make up our community, which is surprisingly difficult at a complex organization such as a university.


The session went really well, even though it was an “LMS session” at a time when we’re finally getting some movement away from The LMS As All That There Is™ – but this engagement model would work well (and has worked well) for anything. The LMS change on our campus just provided us with the MacGuffin to get the plot moving.

  1. but I would strongly recommend that you don’t use the full set – it was far too much for everyone, and with enough items, things basically cancel each other out. Pick a subset of items that you really care about, and have the respondents tailor their responses to that, rather than the whole shooting match. At a high level, they’re essentially all the same thing anyway…
  2. we provided these as copies of documents in Google Docs, so people could happily add/edit/remove stuff without worrying about access or tools

Desire2Learn Fusion 2013 notes

Since we’re adopting Desire2Learn, the UofC sent a few folks to the annual Desire2Learn Fusion conference – the timing was extremely fortuitous, with the conference starting about a month after we signed the contract. I’d never been to a D2L conference before, so wasn’t sure really what to expect. Looking at the conference schedule ahead of time, it looked pretty interesting – and would have many sessions that promised to cover some of the extremely-rapid-deployment adoption cycle we’re faced with.

The short version of the conference is that I’m quite confident we’ll have the technology in place by the time our users need it[1]. But I’m a little freaked out about our ability to train instructors before they need to start using D2L. We’ve got some plans to mitigate this, but it’s the Achilles’ heel of our implementation plan. If we can’t get our instructors trained… DOOM! etc…

The sessions at the conference were really good. I’d expected a vendor conference to focus on the shiny bits of the vendor’s tools, and on the magical wondrousness of clicking checkboxes and deploying software to make unicorns dance with pandas, etc… But it really wasn’t about the vendor’s product – for many of the sessions (even the rapid-deployment sessions I went to), you could easily remove Desire2Learn’s products from the presentation and it would still have been an interesting and useful session. One of our team members who went to a session on online discussions commented something along the lines of “I was thinking I’d see how to set up the D2L discussion boards in a course – but they didn’t even talk about D2L! It was all pedagogy and learning theory…” The conference as a whole had extremely high production value. Surprisingly high. Whoever was responsible for the great custom-printed-conference-schedule lanyard badge deserves a raise or two.

This was also one of the most exhausting conferences I’ve been to. It’s the first one in a long time where I had to really be present, attend all sessions, and pay attention. We’re under just a little pressure to get this deployment done right, and there’s no time to screw up and backtrack. So, pay attention.

The sessions I chose largely share a theme. We’re looking at migrating from Bb8 to D2L starting now, and wrapping up with 100% adoption by May 2014. So, my session selection and note-taking focus was largely driven by the panic of facing 31,000 FTE students, a few thousand instructors, 14 deans, a Provost, a Vice Provost, and a CIO (when we get a new one) and being able to say “hey. we’re on it. we can do this.”[2]

I’m not going to blab about the social events (which were awesome), or about how nice (but ungodly hot) Boston was (it was very nice). I’ve posted photos[3].

Here are abridged highlights from my session notes:

Administration pre-conference

Kerry O’Brien

Learned a bunch about the Org Unit model in D2L – and something clicked when thinking about “Course Templates” – I’d been thinking Templates were course design templates. No. They’re course offering templates, used for grouping course offerings based on an organizational hierarchy. So, if you offer a course like Math 251, there’s a Course Template called “Math 251” and all offerings (instances of a course with students actually taking it) are just Course Offerings that use the “Math 251” template. A course like “Math 251 – Fall 2013” is a Course Offering of “Math 251” (and also belongs to the “Fall 2013” semester org unit, and likely has Sections enabled so that Math 251 L01–L10 are contained within the single Course Offering). Sounds complicated, but once it clicked, I realized it should help to keep things nicely organized.

Also, the distinction between Sections and Groups was useful – Sections are defined by the Registrar – they’re official subdivisions of a Course Offering, and will be pulled from our PeopleSoft SIS integration – while Groups are informal ad hoc subdivisions created by the instructor(s) to put students into smaller bunches to work on stuff together[4].
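To make the hierarchy concrete, here’s a rough sketch of the model as I understand it – the class names mirror the concepts above, not Desire2Learn’s actual API, and the Math 251 data is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Section:
    name: str   # official subdivision, fed from the SIS (e.g. "L01")

@dataclass
class Group:
    name: str   # informal, instructor-created (e.g. "Project Team 3")

@dataclass
class CourseOffering:
    name: str                # e.g. "Math 251 - Fall 2013"
    semester: str            # the semester org unit it also belongs to
    sections: list[Section] = field(default_factory=list)  # Registrar-defined
    groups: list[Group] = field(default_factory=list)      # instructor ad hoc

@dataclass
class CourseTemplate:
    name: str                # e.g. "Math 251"
    offerings: list[CourseOffering] = field(default_factory=list)

# The "Math 251" template, with a Fall 2013 offering holding ten sections
math251 = CourseTemplate("Math 251")
fall = CourseOffering("Math 251 - Fall 2013", semester="Fall 2013")
fall.sections = [Section(f"L{n:02d}") for n in range(1, 11)]
math251.offerings.append(fall)
```

One template, many offerings, each offering holding its own official Sections and informal Groups – that’s the part that finally clicked.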


Al Essa

I’ve seen Al online, but this was my first time seeing him in person. He’s a really interesting presenter, and made me overcome some of my resistance to analytics and enterprise data analysis. I’m still kind of pissed off at what the Gates Foundation is doing to education, but there’s something interesting going on with analytics, if we’re careful about how we interpret the data, and what we do with it.

One thing I’m really interested in exploring with the Analytics/Insights package is the modelling of curriculum and learning outcomes within a course, department, faculty, and across campus. This has some really interesting implications for the development and refinement of courses to make sure students are given the full range of learning experiences that will help them succeed in their program(s).

Navigation & Themes

Lindsay – Calgary Catholic School District

As an aside, one of the big reasons we went with D2L was because the CCSD and CBE (and SAIT, and Bow Valley College, and a bunch of other institutions in Calgary) use it, so we’ll have much more opportunity for collaboration and sharing of resources.

One thing we’ll have to figure out is what the default course layout and permissions should be – what tools/content should be presented by default, and what will instructors be able to change? I’d been thinking we should just hand the keys over to the instructors, and let them change everything if they want. But we may need to think more about how to set up the default layout, and which bits need to be adjusted by instructors. This comes down to the maturity level of the users and of the organization, and to the level of training we’ve been able to provide by the time they get there… I’ll also be looking to keep things as simple as possible, while providing access to deeper tools for advanced users. Interesting tension there…

Quick & Dirty Implementation

Michel Singh, La Cite Collegial

Michel (and team) migrated about 5,000 FTE students from Blackboard CE to D2L. They signed a contract in May 2012, and launched on September 1, 2012. The technology portion of the implementation and migration worked great, but training of instructors was the big risk and weak spot (sounds familiar…).

They did a “cold turkey” migration – everyone jumped into the pool together.

Some lessons from Michel:

  1. Have a plan and choose your battles
  2. Manage expectations (admin and faculty)
  3. Share responsibilities among your group
  4. Have empathy for users
  5. Structure your support team
  6. Leverage video and tutorials
  7. Celebrate small successes
  8. Maintain good communication with D2L

They used Course Templates in an interesting way – although course content isn’t automatically copied from Templates to Offerings, they used Templates as a central place for the curriculum review team to build the course and add resources, which could then be manually copied down into the Offering for use with students. Nice separation, while allowing collaboration on course design.

Victoria University D2L Implementation

Lisa Germany

They took on a wider task – refreshing the campus eLearning environment – which included migrating from WebCT to D2L, in addition to curriculum reform and restructuring the university. Several eLearning tools were added or replaced, including D2L, Echo 360, Bb Collaborate, and TurnItIn, along with auditing of processes and data retention across systems.

Lessons learned:

  1. Understand how teaching & learning activities interact with university systems and determine high-level and detailed business requirements before going to tender (systems map, in context with connections between tools/platforms/data/people).
  2. Involve strategic-thinking people in selection process (not a simple tech decision – there are wider implications)
  3. Make sure requirements are written by people who have done something like this before…
  4. Involve the legal team from the start, so they aren’t the bottleneck at the end. cough
  5. Have a good statement of work outlining vendor and university roles/expectations.
  6. Don’t do it over the summer break! cough
  7. Have a business owner who is able to make initial decisions.

Program priorities (similar to what ours are):

  1. Time (WebCT expires – in our case, timeline driven by Bb license expiring in May 2014)
  2. Quality (don’t step backward, don’t screw it up)
  3. Scope (keep it manageable for all stakeholders)

They took a 3 semester timeline as well:

  1. Pilot
  2. Soft launch
  3. Cutover

DN: So, I’m feeling surprisingly good about the timeline we have, from the perspective of the technology. The biggest tech hurdle we have will be PeopleSoft integration, and that’s doable. It’s the training and user adoption that will kill this…

One Learning Environment

Greg Sabatine – UGuelph

In a way, as a new D2L campus, this was kind of a “don’t do it this way” kind of session. Guelph was the first D2L client, a decade ago, and has kind of accreted campus-wide adoption over the years. So, initial decisions didn’t scale or adapt to support additional use-cases…

They run 11 different instances of self-hosted D2L, on different upgrade schedules (based on the needs of the faculties using each instance – Medicine is on a different academic schedule than Agriculture, so the servers can’t share an upgrade timeline, etc…). I wonder what our D2L-hosted infrastructure will do to us along these lines – minor upgrades are apparently seamless now, with no downtime, but I wonder what Major Upgrades will do[5].

Looking at the Org Structure they use, we’ll likely have to mimic it, so that each Faculty can have some separation and control. So, our org structure would likely look like:

University > Faculty > Department > Course Offering

They did some freaky custom coding stuff to handle student logins, to point students to the appropriate Org Home Page depending on their program enrolment. Man, I hope we don’t have to do anything as funky as that…

Community of Collaboration in D2L

Barry Robinson – University System of Georgia

They use D2L with 30 out of 31 institutions in the system[6]. Over 10,000 instructors. 120 LMS Administrators. 315,000 students across the system. Dang. That’s kinda big. They use a few groups to manage D2L and to make decisions:

  1. Strategic – Strategic Advisory Board – 15 reps from across the system, making the major decisions, meeting quarterly
  2. Strategic – Steering Committee – monthly meetings to guide implementation…
  3. Operational – LMS Admin Retreats – bi-annual sessions where the 120 LMS Admins get together to talk about stuff
  4. Tactical – LMS Admins Weekly Conference – every Thursday at 10am, an online meeting for the admins to discuss issues etc…

Community resources:

  • A D2L course for the community to use to discuss/share things.
  • Listserv
  • Emergency communication system
  • Helpdesk tickets
  • Vendor – Operational Level Guidelines
  • Vendor – SLAs

They migrated 322,347 courses from Bb Vista 8, and had over 99% success rate on course migrations…

D2L for Collaboration & Quality Assurance

Mindy Lee – Centennial College

They review curriculum every 3-5 years. They previously used wikis to collect info, but then had export-to-Word hell. They needed to transition to continuous improvement, rather than static snapshots. Now they use a D2L course shell to share info, and then build a Word document to report on it after the fact.

D2L Curriculum Review course shells:
* 1 for the program – a common place for all courses in a program. Meta.
* 1 for each course under review – used to document evidence for the report (content simply uploaded as files in the course). Serves as a community resource for instructors teaching the course – shared content, rubrics, etc…

DN: This last part is actually pretty cool – the curriculum review course site is used as an active resource by instructors who later teach the course. It’s not a separate static artifact. It’s part of the course design/offering/review/refinement process. Tie this into the Course Outcomes features, and there’s a pretty killer curriculum mapping system…

Maximize LOR Functionalities

Lamar State College

DN: the D2L LOR feature is actually one of the big things we’re looking to roll out. Which is kind of funny/ironic, given my history with building learning object repositories…

They use the LOR to selectively publish content to repositories within the organization – you can set audiences for content.

Would it make sense to use a LOR for just-in-time support resources? (similar to the videos available from D2L)?

From Contract to Go-Live in 90 Days


DN: 90 days. Holy crap. Our timeline looks downright relaxed in comparison. This should be interesting…

They had permanent committees guiding decisions and implementation, and a few ad hoc committees:

  • LMS Guidance (CIO etc…)
  • LMS Futures (Faculty members…)
  • LMS Implementation (Tech team / IT)

On migration: they encouraged faculty to “re-engineer” courses rather than to simply copy them over from the old LMS. “If a course is really simple, it’s better to just recreate it in D2L. If it’s complex, or has a LOT of content, migrate it.”

Phased launch – don’t enable every feature at first – it’s overwhelming, and places an added burden on the training and support teams. Best to stage features in phases – the key Learning Environment first, then other features like ePortfolio, LOR, Analytics, etc… once you’re up and running.
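A phased rollout like that could be written down as a simple feature schedule – purely illustrative, since D2L has no config file like this, and the phase groupings below are my own guesses based on the notes above:

```python
# Hypothetical phase groupings -- not a D2L configuration format.
ROLLOUT_PHASES = {
    1: ["Content", "Grades", "Discussions", "Quizzes"],  # core Learning Environment
    2: ["ePortfolio", "LOR"],                            # once people are comfortable
    3: ["Analytics"],                                    # after data has accumulated
}

def features_enabled(phase: int) -> list[str]:
    """Everything switched on up to and including the given phase."""
    return [feature
            for p in sorted(ROLLOUT_PHASES)
            if p <= phase
            for feature in ROLLOUT_PHASES[p]]
```

The point of writing it down, in whatever form: support and training teams can see exactly what’s live at each stage, instead of fielding questions about features nobody has enabled yet.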

Train the trainers first (and key überusers).

Work with early adopters – they will make or break adoption in a faculty.

They ran several organized, small configuration drop-in sessions, each focused on a specific tool or task. Don’t try to do everything in one session…

8 Months vs. 8 Weeks: Rapid LMS Implementation

Thomas Tobin – Northeastern Illinois University

First, any session that starts with a soundtrack and an Egyptian-Pharaoh-as-LMS-metaphor thought exercise has GOT to be good.

Much of this should be common sense, but it’s super-useful to see how it lays out in the context of an LMS implementation…

On building the list of key stakeholders – ask each stakeholder “who else should we include? why?” – don’t miss anyone. They get cranky about that.

Identify the skeptics, and recruit them.

The implementation PM must have the authority to make decisions on the fly, or the project will stall.

Develop a charter agreement – define the scope, roles, goals, etc… so people know what’s going on.

The detailed Gantt chart is not for distribution. It has too much detail, changes regularly, and will freak people out.

Plan tasks in parallel vs. serial – break things out, delegate, and let them do their jobs. Success relies on multiple people working together, not single-worker-bottlenecks.

Use a gated plan for milestones and releases – and celebrate (small) successes. (but what do you do about failures?)

Designate 1 person as “schedule checker” – doesn’t have to be the project manager (actually, may be useful to be someone else…)

Assess existing and new risks regularly.

Do a “white glove” review – review and test all settings and features. So, the ePortfolio tool is supposed to be enabled. Is it? Were permissions set so people could actually use it? Were they trained? Does it work? etc…

Unified Communications at the Univ. System of Georgia

David Disney

Interesting talk – he pointed out the importance of making sure end users have current information on the status of tools/networks/services, so they’re not left guessing. I pointed out that if people have to monitor a status website to see how things are doing, that may be a symptom of larger problems…

They have a cool website for monitoring key services across the University System of Georgia.

  1. we’re doing a small-scale pilot this Fall semester, without full integration to PeopleSoft. Winter 2014 will be a “soft launch” open to anyone wanting to use it (with some faculties deciding to switch over 100% for January), and our Blackboard 8 server will be sunsetted – we’re taking a page from Google’s playbook – in May 2014, so the Spring 2014 semester will be the first with 100% use of Desire2Learn.
  2. because I sure wasn’t feeling confident about being able to say that before the Fusion conference…
  3. well, some of the photos…
  4. and the recent D2L acquisition of Wiggio should make the use of Groups REALLY interesting…
  5. the Desire2Learn community site was just down for 2 days for a major upgrade to the latest version of D2L – will this kind of thing be necessary going forward from 10.2?
  6. they mentioned this several times – with the ominous but unspoken question of “what’s up with the 31st institution?”

UofC Collaborating for Learning Conference 2013

The Teaching & Learning Centre at the University of Calgary is putting on a conference on May 15-16 2013, intended to bring together people who are interested in collaboration for learning (and teaching).

Collaboration for learning

Dr. Gary Poole is the keynote speaker, and the call for proposals is out (closing March 1 2013). Hopefully, we’ll see a good mix of folks from various fields. (yes, I mean you. vain.)

Canadian Learning Commons conference session on DS106

blurb about the conference via @ppival:

On May 7-9, 2012 the University of Calgary hosted the 6th Canadian Learning Commons Conference. The theme of the conference was New Media, New Fluencies and Life Skills Development: Preparing Learners for the 21st Century.

I was asked to do a session, and worked up a presentation describing how the DS106 course experience can be framed as a student-centric learning commons, placing the student in the role of teacher (and vice versa). Wherein, I used the words “cool” and “awesome” entirely too often.

Probably the biggest “holy crap” moment in the presentation, if there was one, was the Inspire site built by students in the course. Students, deciding they needed better tools to share and showcase each other’s work. So they built it. Cool. Awesome.


Session proceedings, including my presentation on DS106, are now up on the UofC DSpace collection. A repository, if you will. Of learning-object-like resources.

Northern Voice 2010

Lots of insanely smart, funny, interesting people at Northern Voice. The conference was just gravy. Also, I got to think through some of my plans with these insanely smart, funny and interesting people, and think I’ve got a much better handle on both my MSc research proposal, and what I need to do on campus as part of my Day Job™.

Thursday: Online Community Enthusiasts – put on by BCCampus/SCOPE – an all-day workshop on facilitating/fostering/participating in online communities

(photos: online community enthusiasts, enthusiastic, problems, graphic)

Friday: Northern Voice Day 1 – Bryan Alexander’s keynote on Mystery. #altmoosecamp sessions

(photos: Bryan, epic atrium, unbroken, many eyes, I have an iPad, good and evil, dave, photocamp, bryan abides, D’Arcy (by Jon))

Saturday: Northern Voice Proper – Chris Messina keynote, edu jam session at Casa del Lamb-McPhee.

(photos: saturday keynote, alex and kris, shaggy kris, grant, grant playing, grant and brian jam 1–3, musical)

Sunday: Mother’s Day. Hanging out with Harry and my Vancouver family before heading home to be with mine.

(photos: avid reader, iona)