Published “After Interaction”

Just had a short piece, After Interaction, published in Interactions magazine.

<snip>… I want to argue that as a concept, interaction hinges on an outmoded notion of technology in use. I’ll argue that technology use is, in fact, already and always has been about a lot more than human-machine interactions (at least in how interaction is regularly imagined in HCI and IxD). I want to suggest that what we have been doing by both investigating and designing technology is participating in and to some extent configuring dense, interconnected relationships of humans and non-humans. That is, we have been assembling and reassembling human-machine hybrids, often in great numbers. And rather than working at a neatly defined interface, we have knitted together and entangled ourselves in these interwoven networks of relations, and go on doing so…</snip>

Read the full piece here.

2 thoughts on “Published ‘After Interaction’”

  1. I wanted to write you a short note in response to your Interactions article, which was very inspiring and thoughtful, and chimed with my own recent thoughts, alongside, as you’d expect, some disagreements :-). I’ve been trying to think of an HCI without interaction, or rather, presumptuously, a ‘post-interaction HCI’ and what that might be, although I was thinking more of how things might be done differently, rather than re-conceptualising existing worlds of technology and people.
    As I saw it, your argument had two steps. The first is the familiar concern with rethinking material/non-material relationships, and recasting relationships between hybrid entities in more networked, entangled descriptions.
    Potentially productive though these moves may be, I do see them making a rather dangerous mistake, because things and people are different, just as bodies and minds are different sorts of things. As Ryle says, it’s a category mistake to try to treat them as the same thing, as in terms like ‘charismatic machine’ (poetic though it is). And as the “other Ryle” said, it’s dangerous when language goes on holiday.
    This mistake can lead us into nonsense and confusion. But there is, in the short term, a quick profit to be had. The actant move lets us take more seriously two oft-neglected aspects of objects: first, their authored, constructed and used nature, and second, their complexity and interactions. You can see this in Latour’s old example of the sleeping policeman; there the object is set to try to stop others driving fast. And there is a nice double mix there: person/car, traffic authority/sleeping policeman. By thinking about hybrids here you open up the artefact to all the sorts of relationships it is embedded in. Talking about the ‘interface’ to the sleeping policeman would be like talking about roads as an ‘interface’, so it can be tempting to talk about hybrids, since that speaks to the way in which artefacts are both constructed and used.
    The second benefit comes from starting to see how these devices interact with each other in complex ways. So, for example, we might see that the sleeping policeman slows the traffic and improves the area, which causes young families to move in, the area to gentrify, and so on. So you have these complex interactions going on between people and things, and then all sorts of normal accidents, wicked problems, etc.
    So what goes wrong? The problem is that things are categorically different from people, and if we attribute human verbs to them (like decide, think, want, etc.) we will end up making all sorts of mistakes. That is to say, treating humans like determinate things (correlations, causations), and treating objects as if they make judgements, can be moral, and so on. But perhaps as important, the ontology becomes a confusing distraction, and we spend our time trading on the shock value of particular terms and concepts, rather than focusing on the workings or politics of particular situations.
    So I think this is quite a nice benefit, but it comes at serious conceptual cost. Clearly we are not confused by the idea of (for example) the nuclear bomb being “immoral”. We don’t mean by that that the physical object itself made a judgement, but that it is the product of an immoral process. The problem comes when we start saying things like a nuclear power station is “immoral”. Here a sensible argument can get hopelessly confused by the interaction of moral judgements about practical matters (like how to invest billions of pounds) with the things themselves. It’s confusing to talk about the immoral power station, and while it’s the sort of thing Guardian headline writers might do, taking it too seriously does conceptual damage.
    Not that any of this is particularly new terrain, but as always, perhaps we are catching up with it in HCI.
    So then in your article you make a nice move from pointing out that we need to think about the ways in which technical objects, which we might describe in terms of being ‘interacted with’, are actually these complex hybrids entangled with each other, and you make this nice point about Engelbart’s movements. Surely, if we want to intelligently understand what Engelbart was doing, we need to think about more than just the interaction. That’s just one part of the whole thing! And that seems nice.
    But another way of coming at interaction is to think of interactional sociology, and how, say, the Goodwins approach sociology. One argument there is that you can get your hook, some grip on this huge beast (existence!), by looking at particular, specific human interactions. I remember a talk Charles Goodwin gave once where he used the idea of human rights, and the notion of legal rights and responsibilities. Yet the talk itself took its purchase in interaction, and the idea that whether we can interact with a person might in fact be the ordinary source of how many of these theoretical legal judgements are ordinarily made.
    So maybe interaction might, in the end, not be such a terrible starting point for thinking through human-machine relationships, since it is ‘where the action is’.
    My thoughts in that direction were to point out that we already have some quite unusual new interactions with technology which seem to me quite interesting. For example, Fitbits support this rather odd ‘track’ and then occasionally ‘view’ interaction. Smartwatches have this very, very old ‘glance’ mechanism. Even Hue lightbulbs have this ‘supporting other activities’ interaction. Which all seems quite new to me.
