consolidating phd notes

I started a new blog site, running the fantastic Known blogging platform on a fresh subdomain on my webspace at Reclaim Hosting. The intention was to give myself a place to think out loud about stuff I’m working on or thinking about for my PhD program. I started publishing some stuff, and then realized that having a separate site for that was awkward. There was no real need to separate and disconnect that content from the Day Job™ content or from the-rest-of-my-life content.

So. I just imported the 8 whole posts I’d published over there into my blog here. They’re now in a separate category called, creatively enough, phdnotes. Yeah. I added a navigation link to the theme, and there’s an RSS feed just for those posts (does anyone else still do RSS?). I’ll be posting stuff there as my program starts up (it officially kicks off in September) and I start to get ideas about what I’d like to work on.



I’ve been frustrated by how much time I burn away fidgeting with social media. Lately, it’s been essentially a form of self-regulation or self-soothing as it feels like civilization is melting down. Trump stumbles over pronouncing a 5-letter acronym fed to him on a teleprompter? Ugh. To Twitter! Etc.

The world isn’t melting down. I need to snap out of the pattern of just pissing away time on social media. So, I’ve deleted the Twitter and Facebook apps from my phone and iPad, and I’ve added a handy blocklist to the /etc/hosts file on my Mac that blocks everything (even MySpace and Orkut! Thank Jebus!).
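For the curious, the hosts-file trick is just a list of entries that resolve social media domains to a dead address. A minimal sketch (my actual blocklist covers many more domains than this):

```
# /etc/hosts — blackhole social media domains by pointing them at 0.0.0.0
0.0.0.0 twitter.com
0.0.0.0 www.twitter.com
0.0.0.0 facebook.com
0.0.0.0 www.facebook.com
0.0.0.0 myspace.com
0.0.0.0 orkut.com
```

Editing the file needs sudo, and flushing the DNS cache (`sudo dscacheutil -flushcache` on the Mac) makes it take effect immediately.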

Anyway. I’m not deleting any accounts. I’m not disappearing. I’m (hopefully) just snapping out of this pattern of fidgeting with social media rather than doing literally anything else that is more interesting and productive and relevant to anything – even nothing. Life is too short for that kind of bullshit.


Ideas on the documentation and interpretation of interactions in a classroom environment

Some rough notes of some ideas I hope to work on, potentially as part of my PhD program.

My Master’s thesis used social network and discourse analysis in an online course to try to understand the differences in student activity and interactions across two different online platforms and course designs. Tools like Gephi and NodeXL are available to anyone teaching online: feed in the data (system-generated activity logs, raw discussion text, Twitter hashtags, search queries, etc.) and get a powerful visualization of how the students interacted. It struck me that the tools are so much richer for online interactions than they are for offline (or blended) face-to-face interactions.
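As a concrete sketch of the kind of pipeline involved: reduce a raw discussion log to a weighted edge list that Gephi or NodeXL can import as a CSV. The log format here is invented; Gephi’s Source/Target/Weight CSV import is the only real interface assumed.

```python
import csv
import io
from collections import Counter

# (who_replied, to_whom) pairs pulled from a discussion-forum log
# (hypothetical data — a real system log would need its own parser)
reply_log = [
    ("alice", "bob"), ("bob", "alice"), ("carol", "alice"),
    ("dave", "alice"), ("carol", "bob"), ("alice", "carol"),
    ("carol", "alice"),
]

# collapse repeated interactions into weighted edges
edge_weights = Counter(reply_log)

# Gephi's import wizard recognizes Source,Target,Weight column headers
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["Source", "Target", "Weight"])
for (source, target), weight in sorted(edge_weights.items()):
    writer.writerow([source, target, weight])

edge_csv = out.getvalue()
```

Loading that CSV as an edge table in Gephi produces the interaction graph directly; centrality measures and layout then show who the hubs of the discussion were.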

As part of our work in the Taylor Institute, we work closely with instructors and students in classroom-based face-to-face courses, in support of their teaching and learning as well as their research and dissemination about what they learn while teaching (and learning) in the Institute. That is something that could definitely use visualization tools similar to Gephi and NodeXL, as ways to document and share the patterns of interactions between students in various experimental course designs and classroom activities.

There are several layers that need simultaneous documentation and analysis in a classroom, including at least:

  1. Environment. The design of the learning spaces and technologies available in those spaces.
  2. Performance. What people actually do while participating in the session.
  3. Learning. This includes course design, instructional design, and the things that people take away from the session(s).


Environment

At the most basic level, this includes the architectural, design, and technology-integration schematics. What are the dimensions of the space? Where is the “front” of the space? What kinds of furniture are in the space? How is it arranged? How can it be re-arranged by participants? How is functionality within the space controlled? Who has access to the space during the sessions? Who is able to observe?

This kind of documentation might also be informed by theatre research methods, including scenography, where participants document their interpretation of the space in various forms, and how it shaped their interactions with each other (and, by extension, their teaching and/or learning).
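To make the Environment layer concrete, the answers to those questions could be captured as structured data alongside each session recording. A rough sketch (the field names and example values are my own invention, not any existing schema):

```python
from dataclasses import dataclass, field

@dataclass
class LearningSpace:
    """Structured documentation of a learning space for one session."""
    name: str
    dimensions_m: tuple            # (width, depth) in metres
    front: str                     # where the "front" of the space is, if anywhere
    furniture: list = field(default_factory=list)
    rearrangeable: bool = True     # can participants move the furniture?
    control: str = ""              # who/what controls functionality (AV, lighting)
    observers: list = field(default_factory=list)

# hypothetical record for one studio session
studio = LearningSpace(
    name="Learning Studio D",
    dimensions_m=(12.0, 9.0),
    front="north-wall projector",
    furniture=["mobile pods", "whiteboards", "collaboration stations"],
    control="instructor console + station touch panels",
    observers=["researchers, via lecture capture"],
)
```

A record like this could be attached to every recorded session, so the environment layer travels with the performance data instead of living in a separate floor plan.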


Performance

What do people (instructors, students, TAs, other roles) do during the session? This might involve raw documentation through video recording of the session, which could then be post-processed to generate data for interpretation. Who is “leading” parts of the session? What is the composition of participants (groups? solo? large-class lecture? other?) Who is able to present? To speak? To whom? How are participants collaborating? Are they creating content/media/art/etc.? How are they doing that?

There is some existing work on this kind of documentation, but I think it gathers too much data, making it either too intrusive or too difficult to manage. Ogan & Gerritsen’s work on using Kinect sensors to record HD video and dot matrices from a session is interesting. McMaster’s LIVELab has been exploring this for a while, but its implementation is extremely complicated, couldn’t be replicated in other spaces without significant investment, and would be difficult to use in a classroom setting.

This layer might also be a candidate for methods such as classroom ethnography or microethnography – both of these methods provide rich data for interpretation, but both are incredibly resource intensive, requiring much time and labour to record, analyze, code, and interpret the data. I think this is where the development of new tools – the field of computational ethnography – might come into play. What if the interactions and performances could be documented and data generated in realtime (or near realtime) through the use of computerized tools to record, process, manipulate, and interpret the raw data to generate logs akin to the system-generated activity logs used in the study of online learning?
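A sketch of what that might look like: classroom observations captured as timestamped records, in the same shape as the system-generated activity logs used to study online learning. The event names and fields here are invented, not any existing instrument.

```python
import json
from datetime import datetime, timezone

def log_event(actor, action, target=None, location=None):
    """Build one activity-log record for a classroom interaction."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "target": target,
        "location": location,
    }

# hypothetical events that sensors or observers might emit in (near) realtime
session_log = [
    log_event("instructor", "poses_question", target="whole_class", location="front"),
    log_event("student_07", "responds", target="instructor", location="pod_2"),
    log_event("student_12", "joins_group", target="pod_2", location="pod_2"),
]

# one JSON record per line, the usual shape for downstream log analysis
serialized = "\n".join(json.dumps(event) for event in session_log)
```

A stream like this could feed the same network-analysis tooling used for online courses, without a human coder transcribing hours of video first.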

There are likely many other research methods employed in theatre which might be useful in this context. I’m taking a research methods course in the fall semester that should help there…


Learning

Most of the evaluation of learning will be domain-specific, and within the realm of the course being taught in the classroom session. But there may be other aspects of student learning that could be used – perhaps a subset of the NSSE? Rovai’s Classroom Community Scale? Garrison, Anderson, and Archer’s Community of Inquiry model?

What might this look like?

I put together some super-rough sketches of what microethnographic documentation of a classroom session might look like. I have a few ideas for how the documentation may be automated, and need to do a LOT more reading before I try building anything.


comments on facebook

These comments were started in response to a friend, who was taking a stand against Facebook and their take-it-or-leave-it end user license agreement (EULA). They’re not the most profound comments, nor the most well-crafted, but I think they need to exist (also) outside of Facebook’s corporate walled garden. Ironically, after I posted the first comment, the Facebook iPad app prompted me to take a survey about how (un)comfortable I was with the state of Facebook, with specific questions asking about the algorithmic feed. So, I filled it in to indicate that I am very (VERY) uncomfortable with the algorithmic news feed…

From the Facebook post that triggered my responses:

OK, then: I do not give Facebook or any entities associated with Facebook permission to use my pictures, information, messages or posts, both past and future. With this statement, I give notice to Facebook it is strictly forbidden to disclose, copy, distribute, or take any other action against me based on this profile and/or its contents. The content of this profile is private and confidential information. The violation of privacy can be punished by law (UCC 1-308- 1 1 308-103 and the Rome Statute). NOTE: Facebook is now a public entity. All members must post a note like this. If you prefer, you can copy and paste this version. If you do not publish a statement at least once it will be tactically allowing the use of your photos, as well as the information contained in the profile status updates.

And, my responses:

No. By using Facebook, you give them the right to do everything outlined in their EULA. You don’t have to like it, but you agreed to it by activating your account. I’m seriously considering nuking my Facebook account (again, for maybe the third time) because of Facebook’s creepiness and overreaching, and their messing around with privacy and experimentation with algorithms – I can’t trust their algorithmic news feed because I have no idea how it works (but I do – it is obviously optimized to maximize eyeball-time rather than to act as a news feed). But Facebook is where many of my extended family exist online, and where many of my non-online-innovation friends hang out. So, I’m stuck. Nuke my FB account to withdraw from corporate greed, or keep connected with friends and family, while choking back the distaste. Sigh.

I’d guess that unless FB is reclassified as a utility rather than a proprietary social network, there’s not much hope. It’s completely their game, and if we don’t like it, we have to leave. Or, governments have to step in to say it’s more than a social network and needs to be regulated to ensure we have fair and equitable access to the information managed on our behalf. It’s now the biggest news publisher, with no transparency on editorial oversight over the algorithms. Kind of a scary thing to have in a democracy…

and so, here we are. Democracy vs. Capitalism. When many (most?) people are unclear about the definitions of either. When everyone is a lottery ticket (or a TLC reality show contract) away from being a millionaire, they identify with successful capitalists and against “the people” (who are then recast as freeloaders and bums). When popularity and fame are equated with democratic representation, we’re left with reality show dropouts as viable contenders for the most powerful governmental position on the planet. Holy shit this is scary stuff.

Collaboration station demo

The Taylor Institute has 5 learning studios, designed for active and collaborative learning. People who are using the space have access to some great technology to support their work, including 37 “collaboration stations” (we really need to come up with a better name for those…).

Here’s a quick-ish demo of the basic functionality provided by the stations, recorded using the lecture capture system built into the learning studios.

Technologies mentioned and/or used in the video:

some light reading on technology and robots as tutors

  • Bartneck, C., Kulić, D., Croft, E., & Zoghbi, S. (2008). Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. International Journal of Social Robotics, 1(1), 71-81.
  • Burgard, W., Cremers, A. B., Fox, D., & Hähnel, D. (1998). The interactive museum tour-guide robot. Aaai/Iaai.
  • Castellano, G., Paiva, A., Kappas, A., Aylett, R., Hastie, H., Barendregt, W., et al. (2013). Towards Empathic Virtual and Robotic Tutors. In Artificial Intelligence in Education (Vol. 7926, pp. 733-736). Berlin, Heidelberg: Springer Berlin Heidelberg.
  • Corrigan, L. J., Peters, C., & Castellano, G. (2013). Identifying Task Engagement: Towards Personalised Interactions with Educational Robots. 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII), 655-658.
  • Dautenhahn, K. (2007). Socially intelligent robots: dimensions of human-robot interaction. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 362(1480), 679-704.
  • Ganeshan, K. (2007). Teaching Robots: Robot-Lecturers and Remote Presence (Vol. 2007, pp. 252-260).
  • Gockley, R., Bruce, A., Forlizzi, J., Michalowski, M., Mundell, A., Rosenthal, S., et al. (2005). Designing robots for long-term social interaction. 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1338-1343.
  • Han, J. (2010). Robot-aided learning and r-learning services.
  • Han, J., Hyun, E., Kim, M., Cho, H., Kanda, T., & Nomura, T. (2009). The Cross-cultural Acceptance of Tutoring Robots with Augmented Reality Services. Jdcta.
  • Harteveld, C., & Sutherland, S. C. (2015). The Goal of Scoring: Exploring the Role of Game Performance in Educational Games. the 33rd Annual ACM Conference (pp. 2235-2244). New York, New York, USA: ACM.
  • Howley, I., Kanda, T., Hayashi, K., & Rosé, C. (2014). Effects of social presence and social role on help-seeking and learning. the 2014 ACM/IEEE international conference (pp. 415-422). New York, New York, USA: ACM.
  • Kanda, T., Hirano, T., Eaton, D., & Ishiguro, H. (2004). Interactive Robots as Social Partners and Peer Tutors for Children: A Field Trial. Human-Computer Interaction, 19(1), 61-84.
  • Kardan, S., & Conati, C. (2015). Providing Adaptive Support in an Interactive Simulation for Learning: An Experimental Evaluation. the 33rd Annual ACM Conference (pp. 3671-3680). New York, New York, USA: ACM.
  • Kennedy, J., Baxter, P., & Belpaeme, T. (2015). The Robot Who Tried Too Hard: Social Behaviour of a Robot Tutor Can Negatively Affect Child Learning. the Tenth Annual ACM/IEEE International Conference (pp. 67-74). New York, New York, USA: ACM.
  • Kenny, P., Hartholt, A., Gratch, J., & Swartout, W. (2007). Building Interactive Virtual Humans for Training Environments. Presented at the Interservice/Industry Training, Simulation and Education Conference (I/ITSEC).
  • Kiesler_soccog_08.pdf. (n.d.). Retrieved June 15, 2016.
  • Kopp, S., Jung, B., Lessmann, N., & Wachsmuth, I. (2003). Max – A Multimodal Assistant in Virtual Reality Construction. Ki.
  • Lee, D.-H., & Kim, J.-H. (2010). A framework for an interactive robot-based tutoring system and its application to ball-passing training. 2010 IEEE International Conference on Robotics and Biomimetics (ROBIO) (pp. 573-578). IEEE.
  • Leyzberg, D., Spaulding, S., & Scassellati, B. (2014). Personalizing robot tutors to individuals’ learning differences. the 2014 ACM/IEEE international conference (pp. 423-430). New York, New York, USA: ACM.
  • Leyzberg, D., Spaulding, S., Toneva, M., & Scassellati, B. (2012). The physical presence of a robot tutor increases cognitive learning gains.
  • Lin, R., & Kraus, S. (2010). Can automated agents proficiently negotiate with humans? Communications of the ACM, 53(1), 78-88.
  • Mitnik, R., Recabarren, M., Nussbaum, M., & Soto, A. (2009). Collaborative robotic instruction: A graph teaching experience. Computers & Education, 53(2), 330-342.
  • Mubin, O., Stevens, C. J., Shahid, S., Mahmud, A. A., & Dong, J.-J. (2013). A review of the applicability of robots in education. Technology for Education and Learning, 1(1).
  • Nkambou, R., Belghith, K., Kabanza, F., & Khan, M. (2005). Supporting Training on a Robotic Simulator using a Flexible Path Planner. AIED.
  • Nomikou, I., Pitsch, K., & Rohlfing, K. J. (Eds.). (2013). Robot feedback shapes the tutor’s presentation: How a robot’s online gaze strategies lead to micro-adaptation of the human’s conduct. Interaction Studies, 14(2), 268-296.
  • Peterson, I. (1992). Looking-Glass Worlds. Science News, 141(1), 8-10+15.
  • Rizzo, A., Lange, B., Buckwalter, J. G., Forbell, E., Kim, J., Sagae, K., et al. (n.d.). SimCoach: an intelligent virtual human system for providing healthcare information and support. International Journal on Disability and Human Development, 10(4).
  • Ros, R., Coninx, A., Demiris, Y., Patsis, G., Enescu, V., & Sahli, H. (2014). Behavioral accommodation towards a dance robot tutor. the 2014 ACM/IEEE international conference (pp. 278-279). New York, New York, USA: ACM.
  • Saerbeck, M., Schut, T., Bartneck, C., & Janse, M. D. (2010). Expressive robots in education: varying the degree of social supportive behavior of a robotic tutor. the 28th international conference (pp. 1613-1622). New York, New York, USA: ACM.
  • Satake, S., Kanda, T., Glas, D. F., Imai, M., Ishiguro, H., & Hagita, N. (2009). How to approach humans? Strategies for social robots to initiate interaction. 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 109-116.
  • Serholt, S., Basedow, C. A., Barendregt, W., & Obaid, M. (2014). Comparing a humanoid tutor to a human tutor delivering an instructional task to children. 2014 IEEE-RAS 14th International Conference on Humanoid Robots (Humanoids 2014), 1134-1141.
  • Shin, N., & Kim, S. (2007). Learning about, from, and with Robots: Students’ Perspectives. RO-MAN 2007 – the 16th IEEE International Symposium on Robot and Human Interactive Communication, 1040-1045.
  • Swartout, W. (2010). Lessons Learned from Virtual Humans. AI Magazine, 31(1), 9-20.
  • The (human) science of medical virtual learning environments. (2011). Philosophical Transactions of the Royal Society B: Biological Sciences, 366(1562), 276-285.
  • Toombs, A. L., Bardzell, S., & Bardzell, J. (2015). The Proper Care and Feeding of Hackerspaces: Care Ethics and Cultures of Making. the 33rd Annual ACM Conference (pp. 629-638). New York, New York, USA: ACM.
  • Vollmer, A.-L., Lohan, K. S., Fischer, K., Nagai, Y., Pitsch, K., Fritsch, J., et al. (2009). People modify their tutoring behavior in robot-directed interaction for action learning. 2009 IEEE 8th International Conference on Development and Learning (pp. 1-6). IEEE.
  • Walters, M. L., Dautenhahn, K., Koay, K. L., Kaouri, C., Boekhorst, R., Nehaniv, C., et al. (2005). Close encounters: spatial distances between people and a robot of mechanistic appearance. 5th IEEE-RAS International Conference on Humanoid Robots, 2005., 450-455.
  • Yannier, N., Israr, A., Lehman, J. F., & Klatzky, R. L. (2015). FeelSleeve: Haptic Feedback to Enhance Early Reading. the 33rd Annual ACM Conference (pp. 1015-1024). New York, New York, USA: ACM.
  • You, S., Nie, J., Suh, K., & Sundar, S. S. (2011). When the robot criticizes you…: self-serving bias in human-robot interaction. the 6th international conference (pp. 295-296). New York, New York, USA: ACM.

experimental soundscape, mark II

I just sat in the atrium and shaped this soundscape as people were walking through, trying to simulate some kind of response to motion. It’s still pretty muddy, but I think there’s something there. The trick will be in getting it to be unobtrusive and ambient while still providing information…