phdnotes – D’Arcy Norman dot net
https://darcynorman.net – no more band-aids

consolidating phd notes
https://darcynorman.net/2016/07/25/consolidating-phd-notes/ – Tue, 26 Jul 2016 01:36:52 +0000

I started a new blog site, running the fantastic Known blogging platform on a fresh subdomain on my webspace at Reclaim Hosting. The intention was to give me a place to think out loud about stuff I’m working on or thinking about for my PhD program. I started publishing some posts, and then realized that having a separate site for that was awkward. There was no real need to separate and disconnect that content from the Day Job™ content or from the rest-of-my-life content.

So. I just imported the 8 whole posts I’d published over there, into my blog here. They’re now in a separate category called, creatively enough, phdnotes. Yeah. I added a navigation link to the theme, and there’s an RSS feed just for those posts (does anyone else still do RSS?). I’ll be posting stuff there as my program starts up (officially kicks off in September) and I start to get ideas about what I’d like to work on.

[Screenshot, 2016-07-25]

Ideas on the documentation and interpretation of interactions in a classroom environment
https://darcynorman.net/2016/07/16/ideas-on-the-documentation-and-interpretation-of-interactions-in-a-classroom-environment/ – Sun, 17 Jul 2016 01:27:30 +0000

Some rough notes on ideas I hope to work on, potentially as part of my PhD program.

My Master’s thesis was based on using social network analysis and discourse analysis in an online course, to try to understand differences in student activity and interactions across two different online platforms and course designs. Tools like Gephi and NodeXL are available to anyone teaching online: feed in the data (system-generated activity logs, raw discussion text, Twitter hashtags, search queries, etc.) and get a powerful visualization of how the students interacted. It struck me that these tools are so much richer for online interactions than they are for offline (or blended) face-to-face interactions.
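To make that concrete, here is a minimal sketch (not code from the thesis) of the kind of pipeline those tools support: read an interaction log, build a who-replied-to-whom graph, compute a simple centrality measure, and export a file Gephi can open. The file name and column names are hypothetical.

```python
import csv
import networkx as nx  # pip install networkx

# Build a directed "who replied to whom" graph from a (hypothetical) discussion
# export with columns: author, reply_to. Real LMS logs will look different.
G = nx.DiGraph()
with open("discussion_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        author, reply_to = row["author"], row["reply_to"]
        if not reply_to:
            continue  # top-level posts don't add an edge
        # weight edges by how often one student replies to another
        if G.has_edge(author, reply_to):
            G[author][reply_to]["weight"] += 1
        else:
            G.add_edge(author, reply_to, weight=1)

# Simple centrality measures hint at who anchors the conversation
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:5])

# Export for visualization and further analysis in Gephi
nx.write_gexf(G, "interactions.gexf")
```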

As part of our work in the Taylor Institute, we work closely with instructors and students in classroom-based face-to-face courses, supporting their teaching and learning as well as their research and dissemination about what they learn while teaching (and learning) in the Institute. That work could definitely use visualization tools similar to Gephi and NodeXL, as ways to document and share the patterns of interaction between students in various experimental course designs and classroom activities.

There are several layers that need simultaneous documentation and analysis in a classroom, including at least:

  1. Environment. The design of the learning spaces and technologies available in those spaces.
  2. Performance. What people actually do while participating in the session.
  3. Learning. This includes course design, instructional design, and the things that people take away from the session(s).

Environment

At the most basic level, this includes the architectural, design, and technology integration schematics. What are the dimensions of the space? Where is the “front” of the space? What kinds of furniture are in the space? How is it arranged? How can it be re-arranged by participants? How is functionality within the space controlled? Who has access to the space during the sessions? Who is able to observe?

This kind of documentation might also be informed by theatre research methods, including scenography, where participants document their interpretation of the space in various forms, and how it shaped their interactions with each other (and, by extension, their teaching and/or learning).

Performance

What do people (instructors, students, TAs, other roles) do during the session? This might involve raw documentation through video recording of the session, which could then be post-processed to generate data for interpretation. Who is “leading” parts of the session? What is the composition of participants (groups? solo? large-class lecture? other?)? Who is able to present? To speak? To whom? How are participants collaborating? Are they creating content/media/art/etc.? How are they doing that?

There is some existing work on this kind of documentation, but I think it gathers too much data, making it either too intrusive or too difficult to manage. Ogan & Gerritsen’s work on using Kinect sensors to record HD video and dot matrices from a session is interesting. McMaster’s LiveLab has been exploring this for a while, but its implementation is extremely complicated: it couldn’t be replicated in other spaces without significant investment, and would be difficult to run in a classroom setting.

This layer might also be a candidate for methods such as classroom ethnography or microethnography – both provide rich data for interpretation, but both are incredibly resource intensive, requiring a great deal of time and labour to record, analyze, code, and interpret the data. I think this is where the development of new tools – the field of computational ethnography – might come into play. What if the interactions and performances could be documented, and data generated, in real time (or near real time) through computerized tools that record, process, manipulate, and interpret the raw data to produce logs akin to the system-generated activity logs used in the study of online learning?
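As a thought experiment, here is one way such a machine-generated classroom log could be structured so it resembles the activity logs from online platforms. This is purely a sketch – the event fields, participant IDs, and file format are assumptions, not an existing system.

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class InteractionEvent:
    """One observed classroom interaction, analogous to a row in an LMS activity log."""
    timestamp: float       # seconds since epoch
    actor: str             # anonymized participant ID, e.g. "S07" or "INSTR"
    action: str            # e.g. "speaks", "moves_to", "writes_on_whiteboard"
    target: Optional[str]  # who or what the action is directed at, if anything
    zone: Optional[str]    # rough location in the room, e.g. "pod-3", "front"

def log_event(event: InteractionEvent, path: str = "session_log.jsonl") -> None:
    # Append as JSON Lines so the log can later be fed into the same kinds of
    # network/discourse analysis tools used for online course data.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")

# An automated tracker (or an observer with a tablet) might emit events like:
log_event(InteractionEvent(time.time(), "S07", "speaks", target="INSTR", zone="pod-3"))
log_event(InteractionEvent(time.time(), "INSTR", "moves_to", target=None, zone="pod-3"))
```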

There are likely many other research methods employed in theatre which might be useful in this context. I’m taking a research methods course in the fall semester that should help there…

Learning

Most of the evaluation of learning will be domain-specific, within the realm of the course being taught in the classroom session. But there may be other aspects of student learning that could be used – perhaps a subset of the NSSE? Rovai’s Classroom Community Scale? Garrison, Anderson, and Archer’s Community of Inquiry model?

What might this look like?

I put together some super-rough sketches of what microethnographic documentation of a classroom session might look like. I have a few ideas for how the documentation may be automated, and need to do a LOT more reading before I try building anything.

[learningspaces sketch thumbnail]

ethnography links
https://darcynorman.net/2016/07/10/ethnography-links/ – Mon, 11 Jul 2016 04:55:53 +0000

Gathering links on ethnography, microethnography, etc., to help flesh out the ideas around computational microethnography in documenting and analyzing classroom interactions.

https://links.darcynorman.net/bookmarks.php/dnorman/ethnography
https://links.darcynorman.net/bookmarks.php/dnorman/microethnography

Thinking about documenting and visualizing interactions
https://darcynorman.net/2016/07/09/thinking-about-documenting-and-visualizing-interactions/ – Sat, 09 Jul 2016 22:23:22 +0000

Some super-rough sketches of some ideas for documenting and visualizing interactions between people in a learning space. Lots of work left to refine the ideas and then to try implementing them…

[learningspaces sketch thumbnail]

some light reading on technology and robots as tutors
https://darcynorman.net/2016/06/23/some-light-reading-on-technology-and-robots-as-tutors/ – Thu, 23 Jun 2016 18:46:28 +0000
  • Bartneck, C., Kulić, D., Croft, E., & Zoghbi, S. (2008). Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. International Journal of Social Robotics, 1(1), 71-81. http://doi.org/10.1007/s12369-008-0001-3
  • Burgard, W., Cremers, A. B., Fox, D., & Hähnel, D. (1998). The interactive museum tour-guide robot. Aaai/Iaai.
  • Castellano, G., Paiva, A., Kappas, A., Aylett, R., Hastie, H., Barendregt, W., et al. (2013). Towards Empathic Virtual and Robotic Tutors. In Artificial Intelligence in Education (Vol. 7926, pp. 733-736). Berlin, Heidelberg: Springer Berlin Heidelberg. http://doi.org/10.1007/978-3-642-39112-5_100
  • Corrigan, L. J., Peters, C., & Castellano, G. (2013). Identifying Task Engagement: Towards Personalised Interactions with Educational Robots. 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII), 655-658. http://doi.org/10.1109/ACII.2013.114
  • Dautenhahn, K. (2007). Socially intelligent robots: dimensions of human-robot interaction. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 362(1480), 679-704. http://doi.org/10.1098/rstb.2006.2004
  • Ganeshan, K. (2007). Teaching Robots: Robot-Lecturers and Remote Presence (Vol. 2007, pp. 252-260).
  • Gockley, R., Bruce, A., Forlizzi, J., Michalowski, M., Mundell, A., Rosenthal, S., et al. (2005). Designing robots for long-term social interaction. 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1338-1343. http://doi.org/10.1109/IROS.2005.1545303
  • Han, J. (2010). Robot-aided learning and r-learning services.
  • Han, J., Hyun, E., Kim, M., Cho, H., Kanda, T., & Nomura, T. (2009). The Cross-cultural Acceptance of Tutoring Robots with Augmented Reality Services. Jdcta. http://doi.org/10.1.1.698.6657
  • Harteveld, C., & Sutherland, S. C. (2015). The Goal of Scoring: Exploring the Role of Game Performance in Educational Games. the 33rd Annual ACM Conference (pp. 2235-2244). New York, New York, USA: ACM. http://doi.org/10.1145/2702123.2702606
  • Howley, I., Kanda, T., Hayashi, K., & Rosé, C. (2014). Effects of social presence and social role on help-seeking and learning. the 2014 ACM/IEEE international conference (pp. 415-422). New York, New York, USA: ACM. http://doi.org/10.1145/2559636.2559667
  • Kanda, T., Hirano, T., Eaton, D., & Ishiguro, H. (2004). Interactive Robots as Social Partners and Peer Tutors for Children: A Field Trial. Human-Computer Interaction, 19(1), 61-84. http://doi.org/10.1207/s15327051hci1901&2_4
  • Kardan, S., & Conati, C. (2015). Providing Adaptive Support in an Interactive Simulation for Learning: An Experimental Evaluation. the 33rd Annual ACM Conference (pp. 3671-3680). New York, New York, USA: ACM. http://doi.org/10.1145/2702123.2702424
  • Kennedy, J., Baxter, P., & Belpaeme, T. (2015). The Robot Who Tried Too Hard: Social Behaviour of a Robot Tutor Can Negatively Affect Child Learning. the Tenth Annual ACM/IEEE International Conference (pp. 67-74). New York, New York, USA: ACM. http://doi.org/10.1145/2696454.2696457
  • Kenny, P., Hartholt, A., Gratch, J., & Swartout, W. (2007). Building Interactive Virtual Humans for Training Environments. Presented at the Proceedings of I/ITSEC.
  • Kiesler_soccog_08.pdf. (n.d.). Retrieved June 15, 2016, from http://sfussell.hci.cornell.edu/pubs/Manuscripts/Kiesler_soccog_08.pdf
  • Kopp, S., Jung, B., Lessmann, N., & Wachsmuth, I. (2003). Max – A Multimodal Assistant in Virtual Reality Construction. Ki.
  • Lee, D.-H., & Kim, J.-H. (2010). A framework for an interactive robot-based tutoring system and its application to ball-passing training. 2010 IEEE International Conference on Robotics and Biomimetics (ROBIO) (pp. 573-578). IEEE. http://doi.org/10.1109/ROBIO.2010.5723389
  • Leyzberg, D., Spaulding, S., & Scassellati, B. (2014). Personalizing robot tutors to individuals’ learning differences. the 2014 ACM/IEEE international conference (pp. 423-430). New York, New York, USA: ACM. http://doi.org/10.1145/2559636.2559671
  • Leyzberg, D., Spaulding, S., Toneva, M., & Scassellati, B. (2012). The physical presence of a robot tutor increases cognitive learning gains.
  • Lin, R., & Kraus, S. (2010). Can automated agents proficiently negotiate with humans? Communications of the ACM, 53(1), 78-88. http://doi.org/10.1145/1629175.1629199
  • Mitnik, R., Recabarren, M., Nussbaum, M., & Soto, A. (2009). Collaborative robotic instruction: A graph teaching experience. Computers & Education, 53(2), 330-342. http://doi.org/10.1016/j.compedu.2009.02.010
  • Mubin, O., Stevens, C. J., Shahid, S., Mahmud, A. A., & Dong, J.-J. (2013). A REVIEW OF THE APPLICABILITY OF ROBOTS IN EDUCATION. Technology for Education and Learning, 1(1). http://doi.org/10.2316/Journal.209.2013.1.209-0015
  • Nkambou, R., Belghith, K., Kabanza, F., & Khan, M. (2005). Supporting Training on a Robotic Simulator using a Flexible Path Planner. AIED.
  • Nomikou, I., Pitsch, K., & Rohlfing, K. J. (Eds.). (2013). Robot feedback shapes the tutor’s presentation: How a robot’s online gaze strategies lead to micro-adaptation of the human’s conduct. Interaction Studies, 14(2), 268-296. http://doi.org/10.1075/is.14.2.06pit
  • Peterson, I. (1992). Looking-Glass Worlds. Science News, 141(1), 8-10+15. http://doi.org/10.2307/3976251
  • Rizzo, A., Lange, B., Buckwalter, J. G., Forbell, E., Kim, J., Sagae, K., et al. (n.d.). SimCoach: an intelligent virtual human system for providing healthcare information and support. International Journal on Disability and Human Development, 10(4). http://doi.org/10.1515/IJDHD.2011.046
  • Ros, R., Coninx, A., Demiris, Y., Patsis, G., Enescu, V., & Sahli, H. (2014). Behavioral accommodation towards a dance robot tutor. the 2014 ACM/IEEE international conference (pp. 278-279). New York, New York, USA: ACM. http://doi.org/10.1145/2559636.2559821
  • Saerbeck, M., Schut, T., Bartneck, C., & Janse, M. D. (2010). Expressive robots in education: varying the degree of social supportive behavior of a robotic tutor. the 28th international conference (pp. 1613-1622). New York, New York, USA: ACM. http://doi.org/10.1145/1753326.1753567
  • Satake, S., Kanda, T., Glas, D. F., Imai, M., Ishiguro, H., & Hagita, N. (2009). How to approach humans? Strategies for social robots to initiate interaction. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI 2009), 109-116. http://doi.org/10.1145/1514095.1514117
  • Serholt, S., Basedow, C. A., Barendregt, W., & Obaid, M. (2014). Comparing a humanoid tutor to a human tutor delivering an instructional task to children. 2014 IEEE-RAS 14th International Conference on Humanoid Robots (Humanoids 2014), 1134-1141. http://doi.org/10.1109/HUMANOIDS.2014.7041511
  • Shin, N., & Kim, S. (n.d.). Learning about, from, and with Robots: Students’ Perspectives. RO-MAN 2007 – the 16th IEEE International Symposium on Robot and Human Interactive Communication, 1040-1045. http://doi.org/10.1109/ROMAN.2007.4415235
  • Swartout, W. (2010). Lessons Learned from Virtual Humans. AI Magazine, 31(1), 9-20. http://doi.org/10.1609/aimag.v31i1.2284
  • The (human) science of medical virtual learning environments. (2011). Philosophical Transactions of the Royal Society of London B: Biological Sciences, 366(1562), 276-285. http://doi.org/10.1098/rstb.2010.0209
  • Toombs, A. L., Bardzell, S., & Bardzell, J. (2015). The Proper Care and Feeding of Hackerspaces: Care Ethics and Cultures of Making. the 33rd Annual ACM Conference (pp. 629-638). New York, New York, USA: ACM. http://doi.org/10.1145/2702123.2702522
  • Vollmer, A.-L., Lohan, K. S., Fischer, K., Nagai, Y., Pitsch, K., Fritsch, J., et al. (2009). People modify their tutoring behavior in robot-directed interaction for action learning. 2009 IEEE 8th International Conference on Development and Learning (pp. 1-6). IEEE. http://doi.org/10.1109/DEVLRN.2009.5175516
  • Walters, M. L., Dautenhahn, K., Koay, K. L., Kaouri, C., Boekhorst, R., Nehaniv, C., et al. (2005). Close encounters: spatial distances between people and a robot of mechanistic appearance. 5th IEEE-RAS International Conference on Humanoid Robots, 2005., 450-455. http://doi.org/10.1109/ICHR.2005.1573608
  • Yannier, N., Israr, A., Lehman, J. F., & Klatzky, R. L. (2015). FeelSleeve: Haptic Feedback to Enhance Early Reading. the 33rd Annual ACM Conference (pp. 1015-1024). New York, New York, USA: ACM. http://doi.org/10.1145/2702123.2702396
  • You, S., Nie, J., Suh, K., & Sundar, S. S. (2011). When the robot criticizes you…: self-serving bias in human-robot interaction. the 6th international conference (pp. 295-296). New York, New York, USA: ACM. http://doi.org/10.1145/1957656.1957778
experimental soundscape, mark II
https://darcynorman.net/2016/06/07/experimental-soundscape-mark-ii/ – Tue, 07 Jun 2016 21:02:18 +0000

I just sat in the atrium and shaped this soundscape as people were walking through, trying to simulate some kind of response to motion. It’s still pretty muddy, but I think there’s something there. The trick will be in getting it to be unobtrusive and ambient while still providing information…

experimental soundscape
https://darcynorman.net/2016/05/31/experimental-soundscape/ – Tue, 31 May 2016 14:40:03 +0000

I played with a few layers of modulating soundscapes – one is a humpback whale track, the other, samples from a Speak & Spell. Not sure how annoying this might get in a longer-playing session, but there are some cool effects that might be useful when connected to inputs such as location, speed, vector, group size, etc.
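As a rough illustration of what “connected to inputs” might mean, here is a tiny sketch that maps motion-derived values onto generic sound parameters. The parameter names and ranges are invented for illustration; they don’t correspond to any particular app.

```python
def map_motion_to_sound(speed, group_size, x_norm, max_speed=3.0, max_group=20):
    """Map motion-derived inputs onto generic sound parameters (all outputs 0..1).

    speed      -- estimated movement speed (e.g. metres per second)
    group_size -- number of people currently detected in the space
    x_norm     -- horizontal position in the space, normalized to 0..1
    """
    return {
        "volume":   min(group_size / max_group, 1.0),  # busier space -> louder
        "mod_rate": min(speed / max_speed, 1.0),       # faster movement -> faster modulation
        "pan":      x_norm,                            # position -> stereo placement
    }

# e.g. five people, moderate movement, toward the left of the space:
print(map_motion_to_sound(speed=1.2, group_size=5, x_norm=0.3))
```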

on pretention – simpler is gooder
https://darcynorman.net/2016/05/28/on-pretention-simpler-is-gooder/ – Sun, 29 May 2016 00:41:14 +0000

Just realized that the last post, about a soundscape app for iOS, sounded horribly pretentious. Wow. Not sure I’ll get to Thing Explainer level, but I’m going to try really hard to break out of jargon.

So. What I was meaning to say was that I’m messing around with a cool iOS app for making computer-generated sounds (not music, but not noise). I want to try playing with it to see what kinds of sounds I can make, and am wondering how those sounds might be triggered or changed based on signals such as people moving through a room. I started playing with converting movement into signals, and that kind of works. Now I need to figure out what to do with that data – what kinds of sounds or visuals might be interesting in response to what people are doing in a space.

synthetic modulated ambient soundscapes
https://darcynorman.net/2016/05/26/trying-out-soundscaper-as-a-way-to-explore-synthetic-modulated-ambient-soundscapes-to-see-what-kinds-of-sounds-might-work-for-algorithmic-generation-from-spatial/ – Fri, 27 May 2016 03:03:00 +0000

Trying out Soundscaper as a way to explore synthetic modulated ambient soundscapes, to see what kinds of sounds might work for algorithmic generation from spatial data…

http://motion-soundscape.blogspot.ca/2015/02/soundscaper-experimental-sound-mini-lab.html

prototyping atrium-sized theramin
https://darcynorman.net/2016/05/26/prototyping-atrium-sized-theramin/ – Thu, 26 May 2016 21:56:29 +0000

I’ve been exploring some hacking to prototype a sonification experiment – the idea was to build a way to provide audio biofeedback, shaping the soundscape within a space in response to movement and activity. I prototyped a quick mockup using Python and imutils.

It started as a “skunkworks” project idea:

An atrium-sized theremin, so a person (or a few people, or a whole gaggle of people) could make sounds (or, hopefully, music) by moving throughout the atrium. A theremin works by responding to the electrical field around a person’s body – normal-sized theremins respond to hand movements. An atrium-sized theremin might respond to where a person walks or stands in the atrium, or how they move. I have absolutely NO idea how to do this, but think it could be a fun way to gently nudge people to explore motion and position in a space. Bonus points for adding some form of synchronized visualization (light show? digital image projection? something else?)

So I started hacking stuff together to see what might work, and also to see if I could do it. I got the basic motion detection working great using the imutils Python library. I then generated raw frequencies to approximate notes, based on the X/Y coordinates of each instance of motion.
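For anyone curious, here is a minimal sketch of the approach – not my actual prototype code – using OpenCV and imutils: detect moving regions against a running background average, then map each region’s centroid to a frequency. The frequency range, frame width, and thresholds are assumptions, and audio synthesis is left out (print the frequencies, or feed them to whatever synth you like).

```python
# Rough sketch of the motion-to-notes idea, assuming a webcam at index 0.
import cv2
import imutils

FREQ_LOW, FREQ_HIGH = 220.0, 880.0  # assumed pitch range (roughly A3 to A5)

def motion_frequencies(frame, avg, min_area=500):
    """Return (notes, avg): one (x, y, freq) per moving region, plus the updated background."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
    if avg is None:
        # first frame seeds the running background average
        return [], gray.copy().astype("float")

    # keep refreshing the background so slow changes (like sunlight on the floor)
    # fade in rather than triggering constant notes
    cv2.accumulateWeighted(gray, avg, 0.5)
    delta = cv2.absdiff(gray, cv2.convertScaleAbs(avg))
    thresh = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.dilate(thresh, None, iterations=2)
    cnts = imutils.grab_contours(
        cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE))

    h, w = gray.shape
    notes = []
    for c in cnts:
        if cv2.contourArea(c) < min_area:
            continue  # ignore tiny flickers
        x, y, bw, bh = cv2.boundingRect(c)
        cx, cy = x + bw / 2.0, y + bh / 2.0
        # horizontal position picks the pitch; vertical could drive volume, timbre, ...
        freq = FREQ_LOW + (cx / w) * (FREQ_HIGH - FREQ_LOW)
        notes.append((cx, cy, freq))
    return notes, avg

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    avg = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = imutils.resize(frame, width=500)
        notes, avg = motion_frequencies(frame, avg)
        for cx, cy, freq in notes:  # each moving region is a "player"
            print(f"player at ({cx:.0f},{cy:.0f}) -> {freq:.1f} Hz")
        cv2.imshow("motion", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```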

Turn your volume WAY down. It sounds like crap and is horribly loud. But the concept worked: motion tracking from a webcam overlooking the atrium of the Taylor Institute (the webcam was only there for recording this demo – it’s not a permanent installation), run through motion detection and an algorithm that calculates a note frequency for each instance of movement detected during a cycle (the “players” count).

I updated the code after making this recording to refresh the motion-detection buffer more frequently, so things like sunlight moving across a polished floor don’t trigger constant notes.

Next up: try to better explore what soundscapes could be algorithmically generated or modified in response to the motion input. Possibly using Csound?

And an updated version, with improved motion detection (and the annoying audio stripped out):
