Artificial Intelligence: asking the right questions

Nesta kindly invited me to one of their ‘hot topics’ events a couple of weeks ago to present a provocation on AI and human-computer interaction. They also asked me to write a few words, which they’ve now published on their “The Long + Short” blog here. I append the original text of my provocation below.
I came across this photo on my computer today (sorry, I’ve looked to see if I can attribute it to someone, but so far failed). It’s a lovely image in its own right, playing with a vintage quality to the future, but in this context I think it does invite the question ‘is this the limit of our imaginations?’ I’d like to suggest AI might open us up to so much more.
Children with robot
 

Asking the Right Questions

It seems to me that again and again, in our visions of human-computer interaction, we keep coming back to a peculiar notion of human intelligence as a benchmark for what we want our machines to behave like: chess players, Go players, conversational agents, car drivers, humanoid robots, and so on.
To me, this seems to be a terribly restrictive idea of intelligence, one that limits our imaginations and constrains what the innovations in machine learning and AI could offer. To be blunt: our clunky ideas of human intelligence, smartness, emotion, and so on, are things we’re so deeply invested in, intellectually and culturally, that they distract us from far more promising possibilities.
To get us thinking differently, I want to experiment a little with some questions about what we think intelligence is, and what kind of intelligence we might want in the machines we’re building and that we’ll eventually come to live with. Through a different way of viewing the challenges, I want to ask: what other kinds of intelligence might we imagine in our interactions with machines?
To help develop this line of questioning, let me begin with a parable of sorts, one that might seem to diverge from the topic at hand but that, I hope to show, has an apposite lesson:
For decades, animal behaviourists have invested their research energies in assessing whether birds can talk and whether, in talking, they exhibit higher-functioning cognitive abilities, that is, ones closer to our own. In the laboratory, through all sorts of experimental configurations, mynah birds, parrots, macaws and the like have been pushed and prodded to talk. Hardly surprisingly, the results point towards conclusive evidence that birds have a less sophisticated cognitive capacity than, let us say, other highly evolved nonhumans such as primates, and of course ourselves. In experimental conditions, birds perform badly (as a matter of fact, they do their utmost to sabotage the equipment). To put it another way, it turns out that mynahs, macaws, parrots, and the like just don’t like to talk under the experimental conditions they are subjected to.
However, outside the laboratory, at least in a limited number of cases, it seems that if people invest in developing a relationship with these birds, one in which needs and desires are understood to be things negotiated and developed over time (what the philosopher and ethologist Vinciane Despret refers to as “a constant movement of attunement”), they can start to talk, and at the end of a day (literally) they can end up saying things like:

You be good, see you tomorrow.
I love you.

Now, my point here isn’t yet another argument for or against anthropomorphising birds, animals or even machines. My interest, again taking from Despret, is in how we might start to ask a different set of questions. In the case of birds, it’s not whether we can generalise to say that birds are intrinsically like or unlike humans or, more specifically, whether they can talk like humans. Rather, can we ask what the conditions are through which we can begin to talk with them, and through which they might talk back?
To bring this back to our topic, can we ask what the conditions would be for something akin to intelligence to surface in our human-machine interactions? What questions do we need to ask of those things we interact with to allow an intelligence to surface? It’s this turn, to thinking about the conditions that are created and about what might just be possible, that invites us to ask some very different questions: questions not about some intrinsic quality of animal or machine intelligence, but about humans and nonhumans altogether, about a wider set of relations and entanglements that bring intelligence into being.
So what might these different conditions be? And how might we imagine something else through the entanglements between humans and machines? With questions like these I think we open ourselves up to a vast array of possibilities, but let me offer just one idea to illustrate. I want to suggest that through the infrastructural capacities of vastly distributed systems and the production of data, we could begin to see the conditions for difference.
Take the example of IP geolocation from MaxMind, the US-based provider of “IP intelligence”. Through a system designed to locate people using their connected devices, we see how certain demographic categories are sustained and cemented: what do people living here buy? Where do nefarious internet activities originate? And, in some cases, we see how the technical particularities of such an algorithmic system can give rise to a so-called glitch where, because of their geographic location, people and households are inadvertently accused of criminal activity. If there’s any intelligence here, it’s invested in counting and bucketing people into coarse and problematic socio-economic categories.
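To make the mechanics of that glitch a little more concrete, here is a minimal sketch, in Python, of how a lookup with a country-level fallback coordinate can pin millions of otherwise unresolvable addresses onto a single household. Everything in it is hypothetical (toy data, made-up names, documentation-reserved example IPs); it is not MaxMind’s actual database or API, though the reported story did involve a default point near the geographic centre of the US.

```python
# A toy, hypothetical model of IP geolocation with a country-level
# fallback. The table, names and coordinates are illustrative
# assumptions, not MaxMind's actual database or API.

# Hypothetical lookup table: IP prefix -> (latitude, longitude)
KNOWN_PREFIXES = {
    "203.0.113": (51.5074, -0.1278),  # a city-level fix for this block
}

# Where only the country is known, one "centre of the country" point is
# returned for millions of otherwise unrelated addresses. In the story
# above, that default point coincided with a real household.
COUNTRY_CENTROID = (38.0, -97.0)  # roughly the centre of the contiguous US

def geolocate(ip: str) -> tuple[float, float]:
    prefix = ip.rsplit(".", 1)[0]
    if prefix in KNOWN_PREFIXES:
        return KNOWN_PREFIXES[prefix]  # precise match where one exists
    # ...otherwise every unresolvable IP "lives" at the fallback point,
    # and any suspicion attached to those IPs lands there too.
    return COUNTRY_CENTROID

for ip in ["203.0.113.7", "198.51.100.23", "192.0.2.99"]:
    print(ip, "->", geolocate(ip))
```

The point of the sketch is how unremarkable the logic is: the glitch isn’t exotic, it’s a default value doing exactly what defaults do.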
My question would be to ask how this configuration of geography, people and technology might be changed to surface something else. How might the conditions be altered so that populations are seen not in overly simplistic and at times error-prone ways, but in ways that open us up to different possibilities? How might they be used, for instance, to understand how people could be counted differently, how new classifications might be surfaced that open us up to the other ways we inhabit and build relations to spaces? And what if the specific algorithmic limitations of the system weren’t bracketed off and treated as noise, but used to ask who is not being counted here, and how might they be?
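Staying with the toy sketch above, that last question can be made almost embarrassingly literal: rather than discarding fallback hits as noise, count them. The snippet below assumes the hypothetical geolocate and COUNTRY_CENTROID defined earlier.

```python
# Rather than bracketing off the fallback as noise, tally it: a first,
# crude way of asking "who is not being counted here?" Reuses the
# hypothetical geolocate() and COUNTRY_CENTROID from the sketch above.
from collections import Counter

def audit(ips):
    tallies = Counter()
    for ip in ips:
        # Group lookups by whether the system could actually place them.
        placed = geolocate(ip) != COUNTRY_CENTROID
        tallies["resolved" if placed else "unplaced"] += 1
    return tallies

print(audit(["203.0.113.7", "198.51.100.23", "192.0.2.99"]))
# -> Counter({'unplaced': 2, 'resolved': 1})
```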
I wouldn’t want to pretend to have any concrete or even half-baked answers here, but I think we need to take seriously the invitation to ask different questions like these. They will be what surfaces an intelligence that is more than mimicry, one invested, instead, in how we hope to live our lives. Thus, we might ask: what is it each of us might learn from using a system that responds, intelligently, to the relations between ourselves and place? What, derived from a panoply of data sources and billions of human-machine interactions, might each of us develop a sense of? How might each of us truly accomplish something, together, with an emerging back and forth of engagements and interactions?
In interacting with an intelligence of this sort, we may not need to know its inner workings, just as we don’t need to know the inner workings of someone else (or, for that matter, of another species) to talk to them. What we do need are the conditions to actively produce something in common, to bit by bit “result in shared perspectives, intelligences and intentions, resemblances, inversions and exchanges of properties.”

Notes:
1. I humbly borrow this phrasing from Vinciane Despret, who has invested a career in figuring out the right questions to ask of animals and most recently published the fabulous book “What Would Animals Say If We Asked the Right Questions?”
2. This is taken from Despret’s book. She cites the following as the source: Griffin, D. (1992). Animal Minds. Chicago: University of Chicago Press.
3. Vinciane Despret (2008). The Becomings of Subjectivity in Animal Worlds. Subjectivity, 23(1), p. 125.
4. Irene Pepperberg reports that her parrot Alex used to say this to her every evening. It has been widely reported, not least in the Telegraph. I first came across the story via Despret, in both her book and her 2008 paper in Subjectivity.
5. See the original story from Kashmir Hill here and a recent follow-up Guardian piece here.
6. Vinciane Despret (2008). The Becomings of Subjectivity in Animal Worlds. Subjectivity, 23(1), p. 135.
