Just had a short piece, After Interaction, published in Interactions magazine.
<snip>… I want to argue that as a concept, interaction hinges on an outmoded notion of technology in use. I’ll argue that technology use is, in fact, already and always has been about a lot more than human-machine interactions (at least in how interaction is regularly imagined in HCI and IxD). I want to suggest that what we have been doing by both investigating and designing technology is participating in and to some extent configuring dense, interconnected relationships of humans and non-humans. That is, we have been assembling and reassembling human-machine hybrids, often in great numbers. And rather than working at a neatly defined interface, we have knitted together and entangled ourselves in these interwoven networks of relations, and go on doing so…</snip>
Read the full piece here.
I wanted to write you a short note in response to your Interactions article, which was very inspiring and thoughtful, and chimed with my own thoughts recently, alongside, as you’d expect, some disagreements :-). I’ve been trying to think of an HCI without interaction, or rather, presumptuously, a ‘post-interaction HCI’ and what that might be, although I was thinking more of how things might do things differently, rather than of re-conceptualising existing worlds of technology and people.
As I read it, your argument had two steps. The first is the familiar concern with rethinking material/non-material relationships, and recasting relationships between hybrid entities in more networked, entangled descriptions.
Potentially productive though these moves might be, I do see them making a rather dangerous mistake. Things and people are different, just as bodies and minds are different sorts of things. As Ryle says, it’s a category mistake to start treating them as the same thing, in terms such as ‘charismatic machine’ (poetic though it is). And as the “other Ryle” said, it’s dangerous when language goes on holiday.
This mistake can lead us into nonsense and confusion, though in the short term there is a quick profit to be had. The actant move lets us take more seriously two oft-neglected aspects of objects: first, their authored, constructed and used nature, and second, their complexity and interactions. You can see this in Latour’s old example of the sleeping policeman, an object set down to stop others driving fast. And there is a nice double mix there: person/car, traffic authority/sleeping policeman. By thinking about hybrids you open the artefact up to all the sorts of relationships it is embedded in. Talking about the ‘interface’ to the sleeping policeman would be like talking about roads as an ‘interface’, so it can be tempting to talk about hybrids instead, and that speaks to the way in which artefacts are both constructed and used.
The second benefit comes from starting to see how these devices interact with each other in complex ways. So, for example, the sleeping policeman slows the traffic and improves the area, which causes young families to move in, and then the area gentrifies, and so on. You have these complex interactions going on between people and things, and then all sorts of normal accidents, wicked problems, etc.
So what goes wrong? The problem is that things are categorically different from people, and if we attribute human verbs to things (like decide, think, want, etc.) we will end up making all sorts of mistakes. That is to say, we come to treat humans like determinate things (correlations, causations), and to treat objects as if they make judgements, can be moral, and so on. But perhaps as important, the ontology becomes a confusing distraction, and we spend our time on the shock value of particular terms and concepts, rather than focusing on the workings or politics of particular situations.
So I think this is quite a nice benefit, but one bought at serious conceptual cost. Clearly we are not confused by the idea of, say, the nuclear bomb being “immoral”. We don’t mean by that that the physical object itself made a judgement, but that it is the product of an immoral process. The problem comes when we start saying things like a nuclear power station is “immoral”. Here a sensible argument can get hopelessly confused by the interaction of moral judgements about practical things (like how to invest billions of pounds) with the things themselves. It’s confusing to talk about the immoral power station, and while it’s the sort of thing Guardian headline writers might do, taking it too seriously does conceptual damage.
Not that any of this is particularly new terrain, but as always perhaps we are catching up with it in HCI.
So then in your article you make a nice move: pointing out that the technical objects we might describe as being ‘interacted with’ are actually complex hybrids entangled with each other, and you make this nice point about Engelbart’s movements. Surely, if we want to understand intelligently what Engelbart was doing, we need to think about more than just the interaction; that’s just one part of the whole thing. And that seems nice.
But another way of coming at interaction is through interactional sociology, and how, say, the Goodwins approach it. One argument there is that you can get your hook, some grip on this huge beast (existence!), by looking at particular, specific human interactions. I remember a talk Charles Goodwin gave once where he took up the idea of human rights, and the notion of legal rights and responsibilities. Yet the talk itself took its purchase in interaction, and in the idea that how we interact with a person might in fact be the ordinary source of much of how these theoretical legal judgements are ordinarily made.
So maybe interaction might, in the end, not be such a terrible starting point for thinking through human-machine relationships, since it is ‘where the action is’?
My thoughts in that direction were to point out that we already have some quite unusual new interactions with technology which seem to me quite interesting. Fitbits, for example, support this rather odd ‘track’ and then occasionally ‘view’ interaction. Smartwatches have this very, very old ‘glance’ mechanism. Even Hue lightbulbs have this ‘supporting other activities’ interaction. All of which seems quite new to me.