Nesta kindly invited me to one of their ‘hot topics’ events a couple of weeks ago to present a provocation on AI and human-computer interaction. They also asked me to write a few words that they’ve now published on "The Long + Short" blog here. I append the original text of my provocation below.
I came across this photo on my computer today (sorry, I’ve looked to see if I can attribute it to someone, but have so far failed). It’s a lovely image in its own right, playing with a vintage quality to the future, but in this context I think it does invite the question ‘is this the limit of our imaginations?’ I’d like to suggest AI might open us up to so much more.
It seems to me that again and again, in our visions of human-computer interaction, we keep coming back to a peculiar notion of human intelligence as a benchmark for what we want our machines to behave like — chess players, Go players, conversational agents, car drivers, humanoid robots, etc.
To me, this seems to be a terribly restrictive idea of intelligence, one that limits our imaginations and constrains what the innovations in machine learning and AI could offer. To be blunt: our clunky ideas of human intelligence, smartness, emotion, and so on, are things we’re so deeply invested in, intellectually and culturally, that they distract us from far more promising possibilities.
To get us thinking differently, I want to experiment a little with some questions about what we think intelligence is, and what kind of intelligence we might want in the machines we’re building and that we’ll eventually come to live with. Through a different way of viewing the challenges, I want to ask what other kinds of intelligence we might just imagine in our interactions with machines.
To help develop this line of questioning, let me begin with a parable of sorts, one that might seem to diverge from the topic at hand but that, I hope to show, carries an apposite lesson:
For decades, animal behaviourists have invested their research energies in assessing whether birds can talk and whether, with that talk, they exhibit higher-functioning cognitive abilities, that is, ones closer to our own. In the laboratory — through all sorts of experimental configurations — mynah birds, parrots, macaws, etc. have been pushed and prodded to talk. What’s hardly surprising is that the results point towards conclusive evidence that birds have a less sophisticated cognitive capacity than, let us say, other highly evolved nonhumans such as primates, and of course ourselves. In experimental conditions, birds perform badly (as a matter of fact, they do their utmost to sabotage the equipment). To put it another way, it turns out that mynahs, macaws, parrots, and the like just don’t like to talk under the experimental conditions they are subjected to.
However, outside the laboratory, at least in a limited number of cases, it seems that if people invest in developing a relationship with these birds, one in which needs and desires are understood to be things that are negotiated and developed over time — what the philosopher and ethologist Vinciane Despret refers to as “a constant movement of attunement” — they can start to talk, and they can at the end of a day (literally) end up saying things like: “You be good, see you tomorrow. I love you.”

Now, my point here isn’t yet another argument for or against anthropomorphising birds, animals or even machines. My interest — again, taking from Despret — is in how we might start to ask a different set of questions. In the case of birds, it’s not whether we can generalise to say that birds are intrinsically like/unlike humans or, more specifically, whether they can talk like humans. Rather, can we ask what the conditions are through which we can begin to talk with them, and through which they might talk back?
To bring this back to our topic, can we ask what the conditions would be for something akin to intelligence to surface in our human-machine interactions? What questions do we need to ask of those things we interact with to allow an intelligence to surface? It’s this turn to a thinking-about-the-conditions that are created and a what-might-just-be-possible that invites us to ask some very different questions, questions not about some intrinsic quality of animal or machine intelligence, but about humans and nonhumans altogether, about a wider set of relations and entanglements that bring intelligence into being.
So what might these different conditions be? And how might we imagine something else through the entanglements between humans and machines? With questions like these I think we open ourselves up to a vast array of possibilities, but let me offer just one idea to illustrate. I want to suggest that through the infrastructural capacities of vastly distributed systems and the production of data, we could begin to see the conditions for difference.
Take the example of IP geolocation from MaxMind, the US-based provider of “IP intelligence”. Through a system designed to locate people using their connected devices, we see how certain demographic categories are sustained and cemented: what do people living here buy? Where do nefarious internet activities originate? And, in some cases, we see how the technical particularities of such an algorithmic system can give rise to a so-called glitch where, because the system returns a single default location for addresses it can only resolve to a coarse region, the people and households who happen to live at those default coordinates are inadvertently accused of criminal activity. If there’s any intelligence here, it’s invested in counting and bucketing people into coarse and problematic socio-economic categories.
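To make the mechanics concrete, here is a minimal sketch of the kind of lookup such a system performs, using MaxMind’s geoip2 Python library against a local GeoLite2 database. The database path and the IP address below are placeholders, and the final check is my own simplified illustration of a coarse, region-level fallback, not MaxMind’s actual serving logic:

```python
# A sketch of a local IP geolocation lookup with MaxMind's geoip2
# library. "GeoLite2-City.mmdb" and the IP address are placeholders
# (the IP is from the reserved documentation range).
import geoip2.database
import geoip2.errors

with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
    try:
        rec = reader.city("203.0.113.7")  # placeholder IP
    except geoip2.errors.AddressNotFoundError:
        rec = None  # the database has no record for this address

    if rec is not None:
        print(rec.country.iso_code)                # e.g. "US"
        print(rec.city.name)                       # often None for coarse records
        print(rec.location.latitude, rec.location.longitude)
        print(rec.location.accuracy_radius)        # uncertainty, in kilometres

        # Illustration (my simplification): when an address resolves only
        # to a country, the returned coordinates stand in for that whole
        # region, so millions of unrelated households can be "located"
        # at a single representative point on the map.
        if rec.city.name is None:
            print("coarse fallback: coordinates are a regional centroid, "
                  "not an address")
```

The point is less the API than what the record encodes: a pair of coordinates plus an accuracy radius that can span hundreds of kilometres, which downstream systems routinely ignore, treating a regional centroid as if it were a street address.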
My question would be to ask how this configuration of geography, people and technology might be changed to surface something else. How might the conditions be altered so that populations are seen not in overly simplistic and at times error-prone ways, but in ways that open us up to different possibilities? How might they be used, for instance, to understand how people could be counted differently, how new classifications might be surfaced that open us up to the other ways we inhabit and build relations to spaces? And what if the specific algorithmic limitations of the system weren’t bracketed off and treated as noise, but used to ask who is not being counted here, and how might they be?
I wouldn’t want to pretend to have any concrete or even half-baked answers here, but I think we need to take seriously the invitation to ask different questions like these. They will be what surfaces an intelligence that is more than mimicry, one invested, instead, in how we hope to live our lives. Thus, we might ask: what is it each of us might learn from using a system that responds, intelligently, to the relations between ourselves and place? What, derived from a panoply of data sources and billions of human-machine interactions, might each of us develop a sense of? How might each of us truly accomplish something, together, with an emerging back and forth of engagements and interactions?
In interacting with an intelligence of this sort, we may not need to know its inner workings, just as we don’t need to know the inner workings of someone else (or, for that matter, of another species) to talk to them. What we do need are the conditions to actively produce something in common, to bit by bit “result in shared perspectives, intelligences and intentions, resemblances, inversions and exchanges of properties.”

Notes:
1. I humbly borrow this phrasing from Vinciane Despret, who has invested a career in figuring out the right questions to ask of animals and most recently published the fabulous book “What would animals say if we asked the right questions?”
2. This is taken from Despret’s book. She cites the following as the source: Griffin, D. (1992). Animal Minds. Chicago: Chicago University Press.
3. Vinciane Despret (2008). The Becomings of Subjectivity in Animal Worlds. Subjectivity, 23(1), p. 125.
4. Irene Pepperberg reports that her parrot Alex used to say this to her every evening. It has been widely reported, not least in the Telegraph. I first came across the story via Despret, in both her book and her 2008 paper in Subjectivity.
5. Vinciane Despret (2008). The Becomings of Subjectivity in Animal Worlds. Subjectivity, 23(1), p. 135.