Published “After Interaction”

Just had a short piece, After Interaction, published in Interactions magazine.


<snip>… I want to argue that as a concept, interaction hinges on an outmoded notion of technology in use. I’ll argue that technology use is, in fact, already and always has been about a lot more than human-machine interactions (at least in how interaction is regularly imagined in HCI and IxD). I want to suggest that what we have been doing by both investigating and designing technology is participating in and to some extent configuring dense, interconnected relationships of humans and non-humans. That is, we have been assembling and reassembling human-machine hybrids, often in great numbers. And rather than working at a neatly defined interface, we have knitted together and entangled ourselves in these interwoven networks of relations, and go on doing so…</snip>

Read the full piece here.

2 thoughts on “Published “After Interaction””

  1. I wanted to write you a short note with a response to your Interactions article, which was very inspiring and thoughtful, and chimed with my own thoughts recently, alongside, as you’d expect, some disagreements :-). I’ve been trying to think of an HCI without interaction, or rather, presumptuously, a ‘post-interaction HCI’ and what that might be, although I was thinking more of how things might do things differently, rather than re-conceptualising existing worlds of technology and people.

    As I saw your argument, it had two steps. The first is the familiar concern with rethinking material/non-material relationships, and recasting relationships between hybrid entities in more networked, entangled descriptions.

    Now, potentially productive though one could see these moves being, I do see them making a rather dangerous mistake. Because things and people are different — just as bodies and minds are different sorts of things. Like Ryle says, it’s a category mistake to start to try and treat them as the same thing — such as in terms like ‘charismatic machine’ (poetic though it is). As the “other Ryle” said, it’s dangerous when language goes on holiday.

    This mistake can lead us into nonsense and confusion. But there is, in the short term, a quick profit to be had. The actant move lets us take more seriously two oft-neglected aspects of objects — first, their authored, constructed and used nature, and second, their complexity and interactions. You can see that in Latour’s old example of the sleeping policeman; there the object is set to try and stop others driving fast. And there is a nice double mix there — person/car, traffic authority/sleeping policeman. By thinking about hybrids here you open up the artefact to all the sorts of relationships it is embedded in. Talking about the ‘interface’ to the sleeping policeman would be like talking about roads as an ‘interface’; so it can be tempting to talk about hybrids, and it speaks to the way in which artefacts are constructed but also used.

    The second benefit comes from starting to see how these devices interact with each other in complex ways. And so, for example, we might see that the sleeping policeman slows the traffic and improves the area, which causes young families to move to the area, and then the area to gentrify, and so on. So you have these complex interactions going on between people and things. And then all sorts of normal accidents, wicked problems, etc.

    So what goes wrong? The problem is that things are different from people (as in categorically), and if we attribute human verbs (like decide, think, want, etc.) to things, we will end up making all sorts of mistakes. That is to say, to treat humans like determinate things (correlations, causations), and to treat objects as if they make judgements, can be moral, and so on. But perhaps as important, the ontology becomes a confusing distraction, and we spend our time using the shock value of particular terms and concepts, rather than focusing on the workings or politics of particular situations.

    And I think that this is quite a nice benefit, but at serious conceptual cost. Clearly we are not confused about the idea of (for example) the nuclear bomb being “immoral”. We don’t mean by that that the physical object itself made a judgement, but that it is the product of an immoral process. The problem comes when we start saying things like a nuclear power station is “immoral”. Here we find that a sensible argument can get hopelessly confused by the interactions of moral judgements about practical things (like how to invest billions of pounds), and the things themselves. It’s confusing to talk about the immoral power station, and while it’s the sort of thing Guardian headline writers might do, taking it too seriously does conceptual damage.

    Not that any of this is particularly new terrain, but as always, perhaps we are catching up with it in HCI.

    So then in your article you make a nice move by pointing out that the technical objects we might describe in terms of being ‘interacted with’ are actually these complex hybrids entangled with each other — and you make this nice point about Engelbart’s movements. Surely, if we want to intelligently understand what Engelbart was doing, we need to think about more than just the interaction. That’s just one part of this whole thing! And that seems nice.

    But another way of coming at interaction is to think of interactional sociology, and how, say, the Goodwins approach sociology. One argument there is that you can get your hook — get some grip on this huge beast (existence!) — by looking at particular, specific human interactions. So I remember a talk Charles Goodwin did once where he used the idea of human rights, and the notion of legal rights and responsibilities. Yet the talk itself took its purchase in interaction, and the idea that whether we can interact with a person might in fact be the ordinary source of how many of these theoretical legal judgements are ordinarily made.

    So maybe interaction might, in the end, not be such a terrible starting point for thinking through and starting on human-machine relationships, since it is ‘where the action is’?

    My thoughts in that direction were to point out that we already have some quite unusual new interactions with technology which seem to me quite interesting. So, for example, Fitbits support this rather odd ‘track’ and then occasionally ‘view’ interaction. Smartwatches have this very, very old glance mechanism. Even Hue lightbulbs do this ‘supporting other activities’ interaction. Which all seems quite new to me.
