Artificial Intelligence: asking the right questions

Nesta kindly invited me to one of their 'hot topics' events a couple of weeks ago to present a provocation on AI and human-computer interaction. They also asked me to write a few words that they've now published on the "The Long+Short" blog here. I append the original text of my provocation below.

I came across this photo on my computer today (sorry, I've looked to see if I can attribute it to someone, but so far failed). It's a lovely image in its own right, playing with a vintage quality to the future, but in this context I think it does invite the question 'is this the limit of our imaginations?' I'd like to suggest AI might open us up to so much more.

Children with robot


 
Asking the Right Questions

It seems to me that again and again, in our visions of human-computer interaction, we keep coming back to a peculiar notion of human intelligence as a benchmark for what we want our machines to behave like: chess players, go players, conversational agents, car drivers, humanoid robots, etc.

To me, this seems to be a terribly restrictive idea of intelligence, one that limits our imaginations and constrains what the innovations in machine learning and AI could offer. To be blunt: our clunky ideas of human intelligence, smartness, emotion, and so on, are things we're so deeply invested in, intellectually and culturally, that they distract us from far more promising possibilities.

To get us thinking differently, I want to experiment a little with some questions about what we think intelligence is, and what kind of intelligence we might want in the machines we're building and that we'll eventually come to live with. Through a different way of viewing the challenges, I want to ask: what other kinds of intelligence might we imagine in our interactions with machines?

To help develop this line of questioning, let me begin with a parable of sorts, one that might seem to diverge from the topic at hand, but that, I hope to show, has an apposite lesson:

For decades, animal behaviourists have invested their research energies in assessing whether birds can talk and whether, with that talk, they exhibit higher-functioning cognitive abilities, that is, ones closer to our own. In the laboratory, through all sorts of experimental configurations, mynah birds, parrots, macaws, etc. have been pushed and prodded to talk. What's hardly surprising is that the results point towards conclusive evidence that birds have a less sophisticated cognitive capacity than, let us say, other highly evolved nonhumans such as primates, and of course ourselves. In experimental conditions, birds perform badly (as a matter of fact, they do their utmost to sabotage the equipment). To put it another way, it turns out that mynahs, macaws, parrots, and the like just don't like to talk under the experimental conditions they are subject to.

However, outside the laboratory, at least in a limited number of cases, it seems that if people invest in developing a relationship with these birds, one in which needs and desires are understood to be things that are negotiated and developed over time (what the philosopher and ethologist Vinciane Despret refers to as "a constant movement of attunement"), then the birds can start to talk, and at the end of a day they can (literally) end up saying things like:

"You be good, see you tomorrow."
"I love you."

Now, my point here isn't yet another argument for or against anthropomorphising birds, animals or even machines. My interest, again taking from Despret, is in how we might start to ask a different set of questions. In the case of birds, it's not whether we can generalise to say that birds are intrinsically like/unlike humans or, more specifically, whether they can talk like humans. Rather, can we ask what the conditions are through which we can begin to talk with them, and through which they might talk back?

To bring this back to our topic, can we ask what the conditions would be for something akin to intelligence to surface in our human-machine interactions? What questions do we need to ask of those things we interact with to allow an intelligence to surface? It's this turn to thinking about the conditions that are created, and about what might just be possible, that invites us to ask some very different questions: questions not about some intrinsic quality of animal or machine intelligence, but about humans and nonhumans altogether, about a wider set of relations and entanglements that bring intelligence into being.

So what might these different conditions be? And how might we imagine something else through the entanglements between humans and machines? With questions like these I think we open ourselves up to a vast array of possibilities, but let me offer just one idea to illustrate. I want to suggest that through the infrastructural capacities of vastly distributed systems and the production of data, we could begin to see the conditions for difference.

Take the example of IP geolocation from MaxMind, the US-based provider of "IP intelligence". Through a system designed to locate people using their connected devices, we see how certain demographic categories are sustained and cemented: what do people living here buy? Or where do nefarious internet activities originate? And, in some cases, we see how the technical particularities of such an algorithmic system can give rise to a so-called glitch where, because of their geographic location, people and households are inadvertently accused of criminal activity. If there's any intelligence here, it's invested in counting and bucketing people into coarse and problematic socio-economic categories.
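The mechanics behind such a glitch are worth making concrete. Below is a toy sketch of my own (not MaxMind's actual code or data; the network ranges, coordinates and precision labels are invented for illustration) of how a lookup that can only resolve an address to country level may still answer with a single latitude/longitude, so that every unlocatable device appears to "live" at one physical spot, which may well be somebody's actual home.

```python
# A toy illustration of country-centroid snapping in IP geolocation.
# All rows are hypothetical; the ranges are reserved documentation/test networks.
from ipaddress import ip_address, ip_network

# Hypothetical database rows: (network, latitude, longitude, precision)
GEO_DB = [
    (ip_network("81.2.69.0/24"), 51.5142, -0.0931, "city"),       # resolves to a city
    (ip_network("203.0.113.0/24"), 39.8283, -98.5795, "country"),  # only "somewhere in the US"
]

def locate(ip: str):
    """Return (lat, long, precision) for an IP, or None if unknown."""
    addr = ip_address(ip)
    for network, lat, lon, precision in GEO_DB:
        if addr in network:
            return lat, lon, precision
    return None

# Both lookups return coordinates, but the second is really a country-wide
# guess snapped to a single point; the household at that point inherits
# every suspicion attached to millions of unlocatable devices.
print(locate("81.2.69.15"))    # (51.5142, -0.0931, 'city')
print(locate("203.0.113.7"))   # (39.8283, -98.5795, 'country')
```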

My question would be to ask how this configuration of geography, people and technology might be changed to surface something else. How might the conditions be altered so that populations are seen not in overly simplistic and at times error-prone ways, but in ways that open us up to different possibilities? How might they be used, for instance, to understand how people could be counted differently, how new classifications might be surfaced that open us up to the other ways we inhabit and build relations to spaces? And what if the specific algorithmic limitations of the system weren't bracketed off and treated as noise, but used to ask who is not being counted here, and how might they be?
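To make that last question a little less abstract, here is a minimal sketch, again entirely my own invention and continuing the hypothetical lookup above, in which the system's coarse results are reported as a population of the effectively uncounted, rather than snapped to a point and passed downstream as if they were precise.

```python
# A sketch of treating the system's limitations as signal rather than noise:
# tally lookups by their precision instead of collapsing everything to points.
from collections import Counter

def coverage_report(lookups):
    """Separate confidently placed devices from those the system cannot place."""
    tally = Counter(precision for *_coords, precision in lookups)
    placed = tally.get("city", 0)
    unplaced = sum(n for p, n in tally.items() if p != "city")
    return {"located": placed, "effectively uncounted": unplaced}

results = [(51.5142, -0.0931, "city"),
           (39.8283, -98.5795, "country"),
           (39.8283, -98.5795, "country")]
print(coverage_report(results))  # {'located': 1, 'effectively uncounted': 2}
```

The point of the sketch is only that "who is not being counted" becomes a visible output of the system, something to be asked about, rather than a glitch to be bracketed off.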

I wouldn't want to pretend to have any concrete or half-baked answers here, but I think we need to take seriously the invitation to ask different questions like these. They will be what surface an intelligence that is more than mimicry and invested, instead, in how we hope to live our lives. Thus, we might ask: What is it each of us might learn from using a system that responds, intelligently, to the relations between ourselves and place? What, derived from a panoply of data sources and billions of human-machine interactions, might each of us develop a sense of? How might each of us truly accomplish something, together, with an emerging back and forth of engagements and interactions?

In interacting with an intelligence of this sort, we may not need to know its inner workings, just as we don't need to know the inner workings of someone else (or for that matter of another species) to talk to them. What we do need are the conditions to actively produce something in common, to bit by bit result in "shared perspectives, intelligences and intentions, resemblances, inversions and exchanges of properties."

Notes
1. I humbly borrow this phrasing from Vinciane Despret, who has invested a career in figuring out the right questions to ask of animals and most recently published the fabulous book "What Would Animals Say If We Asked the Right Questions?"
2. This is taken from Despret's book. She cites the following as the source: Griffin, D. (1992). Animal Minds. Chicago: University of Chicago Press.
3. Vinciane Despret (2008). The Becomings of Subjectivity in Animal Worlds. Subjectivity, 23(1), p. 125.
4. Irene Pepperberg reports that her parrot Alex used to say this to her every evening. It has been widely reported, not least in the Telegraph. I first came across the story via Despret, in both her book and her 2008 paper in Subjectivity.
5. See the original story from Kashmir Hill here and a recent followup Guardian piece here.
6. Vinciane Despret (2008). The Becomings of Subjectivity in Animal Worlds. Subjectivity, 23(1), p. 135.
