phdnotes – D’Arcy Norman dot net

consolidating phd notes (Tue, 26 Jul 2016)

I started a new blog site, running the fantastic Known blogging platform on a fresh subdomain on my webspace at Reclaim Hosting. The intention was to give myself a place to think out loud about stuff I’m working on or thinking about for my PhD program. I started publishing some stuff, and then realized that having a separate site for that was awkward. There was no real need to separate and disconnect that content from the Day Job™ content or from the rest-of-my-life content.

So. I just imported the 8 whole posts I’d published over there, into my blog here. They’re now in a separate category called, creatively enough, phdnotes. Yeah. I added a navigation link to the theme, and there’s an RSS feed just for those posts (does anyone else still do RSS?). I’ll be posting stuff there as my program starts up (officially kicks off in September) and I start to get ideas about what I’d like to work on.


Ideas on the documentation and interpretation of interactions in a classroom environment (Sun, 17 Jul 2016)

Some rough notes of some ideas I hope to work on, potentially as part of my PhD program.

My Master’s thesis was based on the use of social network and discourse analysis in an online course, to attempt to understand the differences in student activity and interactions across two different online platforms and course designs. Tools like Gephi and NodeXL are available to anyone teaching online: feed in the data (system-generated activity logs, raw discussion text, Twitter hashtags, search queries, etc.) and get a powerful visualization of how the students interacted. It struck me that the tools are so much richer for online interactions than they are for offline (or blended) face-to-face interactions.
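The structure those tools work with is simpler than the visualizations suggest: a weighted edge list plus per-node counts. A toy illustration in plain Python (the reply log, names, and measures here are made up, not data from the thesis):

```python
# Toy illustration: build a weighted interaction graph from reply
# logs - the same structure Gephi or NodeXL would render visually.
from collections import Counter

# Hypothetical log entries: (author_of_reply, author_replied_to)
reply_log = [
    ("alice", "bob"), ("bob", "alice"), ("carol", "alice"),
    ("alice", "bob"), ("dave", "carol"),
]

# Edge weights: how often each pair interacted (undirected).
edges = Counter(tuple(sorted(pair)) for pair in reply_log)

# A crude centrality measure: total interactions per participant.
degree = Counter()
for (a, b), weight in edges.items():
    degree[a] += weight
    degree[b] += weight

print(edges[("alice", "bob")])  # 3
print(degree["alice"])          # 4
```

Gephi or NodeXL would take the same edge list and handle layout, colouring, and interactive filtering; the point is only that the underlying data is this small.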

As part of our work in the Taylor Institute, we work closely with instructors and students in classroom-based face-to-face courses, in support of their teaching and learning as well as their research and dissemination about what they learn while teaching (and learning) in the Institute. That is something that could definitely use visualization tools similar to Gephi and NodeXL, as ways to document and share the patterns of interactions between students in various experimental course designs and classroom activities.

There are several layers that need simultaneous documentation and analysis in a classroom, including at least:

  1. Environment. The design of the learning spaces and technologies available in those spaces.
  2. Performance. What people actually do while participating in the session.
  3. Learning. This includes course design, instructional design, and the things that people take away from the session(s).


Environment

At the most basic level, this includes the architectural, design, and technology integration schematics. What are the dimensions of the space? Where is the “front” of the space? What kinds of furniture are in the space? How is it arranged? How can it be re-arranged by participants? How is functionality within the space controlled? Who has access to the space during the sessions? Who is able to observe?

This kind of documentation might also be informed by theatre research methods, including scenography, where participants document their interpretation of the space in various forms, and how it shaped their interactions with each other (and, by extension, their teaching and/or learning).


Performance

What do people (instructors, students, TAs, other roles) do during the session? This might involve raw documentation through video recording of the session, which might then be post-processed to generate data for interpretation. Who is “leading” parts of the session? What is the composition of participants (groups? solo? large-class lecture? other?) Who is able to present? To speak? To whom? How are participants collaborating? Are they creating content/media/art/etc.? How are they doing that?

There is some existing work on this kind of documentation, but I think it gathers too much data, making it either too intrusive or too difficult to manage. Ogan & Gerritsen’s work on using Kinect sensors to record HD video and dot matrices from a session is interesting. McMaster’s LiveLab has been exploring this for a while, but its implementation is extremely complicated, couldn’t be replicated in other spaces without significant investment, and would be difficult in a classroom setting.

This layer might also be a candidate for methods such as classroom ethnography or microethnography – both provide rich data for interpretation, but both are incredibly resource-intensive, requiring much time and labour to record, analyze, code, and interpret the data. I think this is where the development of new tools – the field of computational ethnography – might come into play. What if the interactions and performances could be documented, and data generated, in real time (or near real time) through computerized tools that record, process, manipulate, and interpret the raw data to generate logs akin to the system-generated activity logs used in the study of online learning?
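As a sketch of what such generated logs might look like – the event schema, actors, and actions below are entirely hypothetical, just a guess at the shape of the output:

```python
# Hypothetical classroom events (imagined sensor or coder output),
# serialized into rows shaped like an LMS activity log.
import csv
import io

events = [
    {"t": 0.0,  "actor": "instructor", "action": "speaks",        "target": "class"},
    {"t": 12.5, "actor": "student_03", "action": "asks_question",  "target": "instructor"},
    {"t": 14.0, "actor": "instructor", "action": "responds",       "target": "student_03"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["t", "actor", "action", "target"])
writer.writeheader()
writer.writerows(events)
log_text = buf.getvalue()

print(log_text.splitlines()[0])  # t,actor,action,target
```

Records in this form could then feed the same network and discourse analysis tools already used for online courses.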

There are likely many other research methods employed in theatre which might be useful in this context. I’m taking a research methods course in the fall semester that should help there…


Learning

Most of the evaluation of learning will be domain-specific, within the realm of the course being taught in the classroom session. But there may be other aspects of student learning that could be used – perhaps a subset of NSSE? Rovai’s Classroom Community Scale? Garrison, Anderson and Archer’s Community of Inquiry model?

What might this look like?

I put together some super-rough sketches of what microethnographic documentation of a classroom session might look like. I have a few ideas for how the documentation may be automated, and need to do a LOT more reading before I try building anything.


ethnography links (Mon, 11 Jul 2016)

Gathering links on ethnography, microethnography, etc., to help flesh out the ideas around computational microethnography for documenting and analyzing classroom interactions.

Thinking about documenting and visualizing interactions (Sat, 09 Jul 2016)

Some super-rough sketches of some ideas for documenting and visualizing interactions between people in a learning space. Lots of work left to refine the ideas and then to try implementing them…


some light reading on technology and robots as tutors (Thu, 23 Jun 2016)
  • Bartneck, C., Kulić, D., Croft, E., & Zoghbi, S. (2008). Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. International Journal of Social Robotics, 1(1), 71-81.
  • Burgard, W., Cremers, A. B., Fox, D., & Hähnel, D. (1998). The interactive museum tour-guide robot. Aaai/Iaai.
  • Castellano, G., Paiva, A., Kappas, A., Aylett, R., Hastie, H., Barendregt, W., et al. (2013). Towards Empathic Virtual and Robotic Tutors. In Artificial Intelligence in Education (Vol. 7926, pp. 733-736). Berlin, Heidelberg: Springer Berlin Heidelberg.
  • Corrigan, L. J., Peters, C., & Castellano, G. (2013). Identifying Task Engagement: Towards Personalised Interactions with Educational Robots. 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII), 655-658.
  • Dautenhahn, K. (2007). Socially intelligent robots: dimensions of human-robot interaction. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 362(1480), 679-704.
  • Ganeshan, K. (2007). Teaching Robots: Robot-Lecturers and Remote Presence (Vol. 2007, pp. 252-260).
  • Gockley, R., Bruce, A., Forlizzi, J., Michalowski, M., Mundell, A., Rosenthal, S., et al. (2005). Designing robots for long-term social interaction. 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1338-1343.
  • Han, J. (2010). Robot-aided learning and r-learning services.
  • Han, J., Hyun, E., Kim, M., Cho, H., Kanda, T., & Nomura, T. (2009). The Cross-cultural Acceptance of Tutoring Robots with Augmented Reality Services. Jdcta.
  • Harteveld, C., & Sutherland, S. C. (2015). The Goal of Scoring: Exploring the Role of Game Performance in Educational Games. the 33rd Annual ACM Conference (pp. 2235-2244). New York, New York, USA: ACM.
  • Howley, I., Kanda, T., Hayashi, K., & Rosé, C. (2014). Effects of social presence and social role on help-seeking and learning. the 2014 ACM/IEEE international conference (pp. 415-422). New York, New York, USA: ACM.
  • Kanda, T., Hirano, T., Eaton, D., & Ishiguro, H. (2004). Interactive Robots as Social Partners and Peer Tutors for Children: A Field Trial. Human-Computer Interaction, 19(1), 61-84.
  • Kardan, S., & Conati, C. (2015). Providing Adaptive Support in an Interactive Simulation for Learning: An Experimental Evaluation. the 33rd Annual ACM Conference (pp. 3671-3680). New York, New York, USA: ACM.
  • Kennedy, J., Baxter, P., & Belpaeme, T. (2015). The Robot Who Tried Too Hard: Social Behaviour of a Robot Tutor Can Negatively Affect Child Learning. the Tenth Annual ACM/IEEE International Conference (pp. 67-74). New York, New York, USA: ACM.
  • Kenny, P., Hartholt, A., Gratch, J., & Swartout, W. (2007). Building Interactive Virtual Humans for Training Environments. Presented at the Proceedings of I/ITSEC.
  • Kiesler_soccog_08.pdf. (n.d.). Kiesler_soccog_08.pdf. Retrieved June 15, 2016, from
  • Kopp, S., Jung, B., Lessmann, N., & Wachsmuth, I. (2003). Max – A Multimodal Assistant in Virtual Reality Construction. Ki.
  • Lee, D.-H., & Kim, J.-H. (2010). A framework for an interactive robot-based tutoring system and its application to ball-passing training. 2010 IEEE International Conference on Robotics and Biomimetics (ROBIO) (pp. 573-578). IEEE.
  • Leyzberg, D., Spaulding, S., & Scassellati, B. (2014). Personalizing robot tutors to individuals’ learning differences. the 2014 ACM/IEEE international conference (pp. 423-430). New York, New York, USA: ACM.
  • Leyzberg, D., Spaulding, S., Toneva, M., & Scassellati, B. (2012). The physical presence of a robot tutor increases cognitive learning gains.
  • Lin, R., & Kraus, S. (2010). Can automated agents proficiently negotiate with humans? Communications of the ACM, 53(1), 78-88.
  • Mitnik, R., Recabarren, M., Nussbaum, M., & Soto, A. (2009). Collaborative robotic instruction: A graph teaching experience. Computers & Education, 53(2), 330-342.
  • Mubin, O., Stevens, C. J., Shahid, S., Mahmud, A. A., & Dong, J.-J. (2013). A review of the applicability of robots in education. Technology for Education and Learning, 1(1).
  • Nkambou, R., Belghith, K., Kabanza, F., & Khan, M. (2005). Supporting Training on a Robotic Simulator using a Flexible Path Planner. AIED.
  • Nomikou, I., Pitsch, K., & Rohlfing, K. J. (Eds.). (2013). Robot feedback shapes the tutor’s presentation: How a robot’s online gaze strategies lead to micro-adaptation of the human’s conduct. Interaction Studies, 14(2), 268-296.
  • Peterson, I. (1992). Looking-Glass Worlds. Science News, 141(1), 8-10+15.
  • Rizzo, A., Lange, B., Buckwalter, J. G., Forbell, E., Kim, J., Sagae, K., et al. (n.d.). SimCoach: an intelligent virtual human system for providing healthcare information and support. International Journal on Disability and Human Development, 10(4).
  • Ros, R., Coninx, A., Demiris, Y., Patsis, G., Enescu, V., & Sahli, H. (2014). Behavioral accommodation towards a dance robot tutor. the 2014 ACM/IEEE international conference (pp. 278-279). New York, New York, USA: ACM.
  • Saerbeck, M., Schut, T., Bartneck, C., & Janse, M. D. (2010). Expressive robots in education: varying the degree of social supportive behavior of a robotic tutor. the 28th international conference (pp. 1613-1622). New York, New York, USA: ACM.
  • Satake, S., Kanda, T., Glas, D. F., Imai, M., Ishiguro, H., & Hagita, N. (2009). How to approach humans? Strategies for social robots to initiate interaction. Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 109-116.
  • Serholt, S., Basedow, C. A., Barendregt, W., & Obaid, M. (2014). Comparing a humanoid tutor to a human tutor delivering an instructional task to children. 2014 IEEE-RAS 14th International Conference on Humanoid Robots (Humanoids 2014), 1134-1141.
  • Shin, N., & Kim, S. (n.d.). Learning about, from, and with Robots: Students’ Perspectives. RO-MAN 2007 – the 16th IEEE International Symposium on Robot and Human Interactive Communication, 1040-1045.
  • Swartout, W. (2010). Lessons Learned from Virtual Humans. AI Magazine, 31(1), 9-20.
  • The (human) science of medical virtual learning environments. (2011). Philosophical Transactions of the Royal Society of London B: Biological Sciences, 366(1562), 276-285.
  • Toombs, A. L., Bardzell, S., & Bardzell, J. (2015). The Proper Care and Feeding of Hackerspaces: Care Ethics and Cultures of Making. the 33rd Annual ACM Conference (pp. 629-638). New York, New York, USA: ACM.
  • Vollmer, A.-L., Lohan, K. S., Fischer, K., Nagai, Y., Pitsch, K., Fritsch, J., et al. (2009). People modify their tutoring behavior in robot-directed interaction for action learning. 2009 IEEE 8th International Conference on Development and Learning (pp. 1-6). IEEE.
  • Walters, M. L., Dautenhahn, K., Koay, K. L., Kaouri, C., Boekhorst, R., Nehaniv, C., et al. (2005). Close encounters: spatial distances between people and a robot of mechanistic appearance. 5th IEEE-RAS International Conference on Humanoid Robots, 2005., 450-455.
  • Yannier, N., Israr, A., Lehman, J. F., & Klatzky, R. L. (2015). FeelSleeve: Haptic Feedback to Enhance Early Reading. the 33rd Annual ACM Conference (pp. 1015-1024). New York, New York, USA: ACM.
  • You, S., Nie, J., Suh, K., & Sundar, S. S. (2011). When the robot criticizes you…: self-serving bias in human-robot interaction. the 6th international conference (pp. 295-296). New York, New York, USA: ACM.
experimental soundscape, mark II (Tue, 07 Jun 2016)

I just sat in the atrium and shaped this soundscape as people were walking through, trying to simulate some kind of response to motion. It’s still pretty muddy, but I think there’s something there. The trick will be in getting it to be unobtrusive and ambient while still providing information…

experimental soundscape (Tue, 31 May 2016)

I played with a few layers of modulating soundscapes – one is a humpback whale track; the other, samples from a Speak & Spell. Not sure how annoying this might get in a longer-playing session, but there are some cool effects that might be useful when connected to inputs such as location, speed, vector, group size, etc.

on pretention – simpler is gooder (Sun, 29 May 2016)

Just realized that last post about the soundscape app for iOS sounded horribly pretentious. Wow. Not sure I’ll get to Thing Explainer level, but I’m going to try really hard to break out of jargon.

So. What I was meaning to say was that I’m messing around with a cool iOS app for making computer generated sounds (not music, but not noise). I want to try playing with it to see what kinds of sounds I can make with it, and am wondering about how those sounds might be triggered or changed based on signals such as people moving through a room or something else. I started playing with converting movement into signals and that kind of works. Now I need to figure out what to do with that data – what kinds of sounds or visuals might be interesting in response to what people are doing in a space.

synthetic modulated ambient soundscapes (Fri, 27 May 2016)

Trying out Soundscaper as a way to explore synthetic modulated ambient soundscapes, to see what kinds of sounds might work for algorithmic generation from spatial data…

prototyping atrium-sized theremin (Thu, 26 May 2016)

I’ve been exploring some hacking to prototype a sonification experiment – the idea was to build a way to provide audio biofeedback, shaping the soundscape within a space in response to movement and activity. I prototyped a quick mockup using Python and imutils.

It started as a “skunkworks” project idea:

An atrium-sized theremin, so a person (or a few people, or a whole gaggle of people) could make sounds (or, hopefully, music) by moving throughout the atrium. A theremin works by responding to the electrical field around a person’s body – normal-sized theremins respond to hand movements. An atrium-sized theremin might respond to where a person walks or stands in the atrium, or how they move. I have absolutely NO idea how to do this, but think it could be a fun way to gently nudge people to explore motion and position in a space. Bonus points for adding some form of synchronized visualization (light show? Digital image projection? Something else?)

So I started hacking stuff together to see what might work, and also to see if I could do it. I got the basic motion detection working great, using the imutils Python library. I then generated raw frequencies to approximate notes (based on the X/Y coordinates of an instance of motion).
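The coordinate-to-note mapping can be sketched roughly like this – not the original script; the frame width, pitch range, and quantization are all assumptions for the sake of illustration:

```python
# Sketch of mapping a motion event's x coordinate to a note frequency:
# quantize the position to a semitone in a chosen MIDI range, then
# compute the equal-temperament frequency (A4 = MIDI 69 = 440 Hz).

def motion_to_frequency(x, frame_width=640, low_midi=48, high_midi=84):
    """Map an x coordinate (0..frame_width) to a note frequency in Hz."""
    span = high_midi - low_midi
    midi_note = low_midi + round((x / frame_width) * span)
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# Motion at the centre of the frame lands on MIDI 66 (F#4):
print(round(motion_to_frequency(320), 1))  # 370.0
```

Quantizing to semitones (rather than emitting the raw proportional frequency) is what makes the output sound like notes instead of sliding sirens.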

    Turn your volume WAY down. It sounds like crap and is horribly loud. But the concept worked. Motion tracking by a webcam overlooking the atrium of the Taylor Institute (the webcam was only there for the recording of this demo – it’s not a permanent installation), run through motion detection and an algorithm that calculates frequencies for notes played by each instance of movement during a cycle (the “players” count).

    I updated the code after making this recording to refresh the motion detection buffer more frequently, so things like sunlight moving across a polished floor don’t trigger constant notes.
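A toy one-dimensional version of that fix, treating each frame as a single brightness value – not the actual code; a real pipeline does this per pixel (OpenCV's accumulateWeighted serves the same role):

```python
# Why refreshing the background buffer matters: a running average
# absorbs slow changes (sunlight drifting across a polished floor)
# into the background, so only abrupt changes register as motion.

def detect_motion(frames, alpha=0.5, threshold=10):
    """Return indices of frames that differ sharply from the running background."""
    background = float(frames[0])
    hits = []
    for i, frame in enumerate(frames[1:], start=1):
        if abs(frame - background) > threshold:
            hits.append(i)
        else:
            # Refresh the background only from still frames, so drift
            # is absorbed but a moving person is not.
            background = alpha * frame + (1 - alpha) * background
    return hits

# Brightness drifts slowly (sunlight), then jumps (a person moving):
print(detect_motion([100, 102, 104, 106, 160, 106]))  # [4]
```

Without the refresh step, the slow drift would eventually exceed the threshold against a stale background and trigger constant false notes.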

Next up: try to better explore what soundscapes could be algorithmically generated or modified in response to the motion input. Possibly using Csound?

    and an updated version with improved motion detection (and annoying audio stripped out):