By Humphrey Obuobi ’18


You really need to take a step back and realize what it is you’re doing with your life every day — or rather, how it is that you do it.

Let’s face it — we’re incredibly needy. As history has run its course and civilization has advanced alongside it, the basic needs of simpler times have evolved into an entirely new beast, one that changes form from individual to individual. Thankfully, our innovative potential has enabled us to turn the resources of our environment into tools for our survival. This, in turn, has led to a world inundated with technology — intelligent tools built to meet the demands of our environments.

Computers are the tools of our time. The internal processes and the complex engineering and mathematics that power these devices allow us to traverse an otherwise insurmountable mountain of data. It’s a pretty magical black box; the challenge is designing an interface through which man can exchange information with machine. After all, this machine is still a tool; to be truly useful to us, it must be designed to transform the esoteric 1s and 0s of its language into manageable, natural information for the user.

Take the experience of using Microsoft’s Surface series, advertised as a “2-in-1” laptop and tablet. Everything from the graphical interface to the physical usability of the device is designed with the user in mind. With a simple swipe of your fingers on the screen, you can flip through an entire database of articles, datasheets, and more as if paging through a book of records. You can use the Surface Pen, designed specifically for this computer, to physically write on the surface of the screen, converting traditional kinesthetic note-taking into an entirely new experience — one that is intuitive, convenient, and, frankly, beautiful.

It is this experience that is at the heart of human-computer interaction, or HCI. Defined as the “study of how people interact with computers and to what extent computers are or are not developed for successful interaction with human beings,” HCI aims to take the computer’s basic system of inputs and outputs and wrap it in a user-friendly package (5). To fully capture this experience, the field covers everything from industrial design to the cognitive and behavioral sciences (as the example above suggests). The term ‘HCI’ itself finds its origins in a 1983 book by Stuart Card, Thomas Moran, and Allen Newell titled “The Psychology of Human-Computer Interaction,” which describes the nature of what was then a very basic interaction (1). As the field progressed and the breadth and depth of technology increased, more scholars and technologists alike began to contribute to this train of thought, focusing on how technology could improve the interface between computation and user. By 1997 (by which point companies like Apple Inc. had already begun to seriously innovate on the user-experience side of things), papers began to surface in the psychology community investigating the usability of computer systems and how it could be improved using principles from psychology and computer science (2). In sum, the last 40 years of innovation have uncovered many new dimensions of the field, including user input, functionality, and more.

Of course, one of the fundamental aspects of modern HCI is the visual component; after all, we’re incredibly visually oriented creatures. The first computers were entirely focused on the calculations they were built to perform; visualizing the data was hardly a possibility, much less a concern. By the 1960s, however, some companies had already begun to look into better ways of communicating between the two entities. Digital Equipment Corporation, or DEC, released computers with digital displays in its commercial lineup, including the best-selling PDP-8 (still the size of a refrigerator). Visual displays have since evolved to dynamically present everything imaginable, from subsets of a dataset to information and pictures about a monument you just walked past — again, tailored to the demands of the user. Especially interesting has been the rise of virtual- and augmented-reality innovations, which create a new type of interaction by seamlessly merging virtual data with the physical world, lowering the barrier to comfort with some technologies.

Functional innovations in human-computer interaction tend to be a little more diverse. The explosion of touchscreen devices in the mid-2000s — pushed by the invention and subsequent proliferation of capacitive touchscreens — was a huge advance in integrating natural motions (swiping, tapping, etc.) with computer-generated elements. Similarly, styluses have found their place in digitizing notes, equations, and the like, mirroring traditional, comfortable styles of input. And yet, even within these categories, there are deeper considerations that greatly affect the usability of the final product. The allowance of multi-touch gestures on trackpads and touchscreens, for example, revolutionized the way people interacted with technology, as natural motions became applicable to technological surfaces. These methods of change (among endless others) govern the way we push the field forward — and with them in mind, it’s worth seeing how individuals plan to disrupt the field, both in academia and outside it.

Given the MIT Media Lab’s focus on “encouraging the most unconventional mixing and matching of seemingly disparate research areas,” the institution as a whole is an absolute breeding ground for such interdisciplinary work (3). On one end of the Media Lab, we find the work of Hiroshi Ishii in a field the average person has never heard of: tangible electronics. His lab’s most well-known work, TRANSFORM, is akin to a large three-dimensional screen that morphs and creates contours according to some computational input; in their words, the work is an example of “how shape display technology can be integrated into our everyday lives” (9). By adding the dimensions of dynamic and tangible processes, the screens open up a whole host of potential applications, including accessibility in homes, data visualization, remote manipulation of objects, and more. The research is grounded in a general idea of dynamic integration in HCI: the user should have a role in defining the uses of the device through an active process of use and refinement. Yet the final product betrays none of the complexities of the background research — it is much closer to a functional piece of art than to a mere extension of computational means.

The Responsive Environments group works on a more “macro” level, using sensor data and computation embedded in everyday spaces to improve the environments we live in (4). To that end, they’ve developed data-driven elevator music, wearable transducers for exploring one’s surroundings through sound, wearable lighting, and plenty more “invisible innovations” that effectively integrate technology into everyday life.

Although our conversation has focused on the more fundamental interactive elements between man and machine, there is one area of interest that has not yet been mentioned: human-robot interaction. The Personal Robots group at the MIT Media Lab researches everything that helps create a more natural interface between humans and robotic personal assistants, whose actions and responses to human stimuli should aim to enhance social interaction. The field brings together even more aspects of computer, mechanical, and electrical engineering, as the team investigates everything from natural language processing to the materials and motions that make for a natural social interaction. One of the group’s most well-known projects, Huggable, is a tool meant to enhance the “human social network” in medicine, education, and nearly anything else; through a series of mechanical parts chosen for their “lifelike motion” and a system for gathering and integrating data from the user, the Huggable is akin to an intelligent, emotive companion (6). While the product is still being refined, it’s a glimpse into what could be.

The fact that technology is now so greatly integrated into our everyday lives is a testament to its evolution in terms of human-computer interaction. As companies and research labs alike continue to work towards more intuitive platforms for HCI, the prospects of the field are bright; the fact that we don’t always know how we make words fly onto a computer screen or flip between applications on a phone is a great thing indeed and a sign of things to come.



  1. Card, S., Moran, T., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: L. Erlbaum Associates.
  2. Carroll, J. (1997). Human-computer interaction: Psychology as a science of design. Annual Review of Psychology, 48, 61–83.
  3. Research Groups and Projects | MIT Media Lab. (n.d.). Retrieved November 8, 2015.
  4. Responsive Environments | MIT Media Lab. (n.d.). Retrieved November 8, 2015.
  5. Rouse, M. (2005, September 1). What is HCI (human-computer interaction)? Retrieved November 8, 2015.
  6. Stiehl, W., Lieberman, J., Breazeal, C., Basel, L., Cooper, R., Knight, H., . . . Purchase, S. (2006). The Huggable: A therapeutic robotic companion for relational, affective touch. Proceedings of the 3rd IEEE Consumer Communications and Networking Conference (CCNC 2006).
  7. Surface Pro 3. (n.d.). Retrieved November 8, 2015.
  9. Vink, L., Kan, V., Nakagaki, K., Leithinger, D., Follmer, S., Schoessler, P., . . . Ishii, H. (2015). TRANSFORM as adaptive and dynamic furniture. Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15).