CHI 2018 papers.

Anja Thieme, Cynthia L. Bennett, Cecily Morrison, Edward Cutrell and Alex Taylor (2018) "I can do everything but see!" – How People with Vision Impairments Negotiate their Abilities in Social Contexts. In Proceedings of CHI '18. ACM Press.

Ari Schlesinger, Kenton O'Hara and Alex Taylor (2018) Let's Talk about Race: Identity, Chatbots, and AI. In Proceedings of CHI '18. ACM Press.

Very happy to have contributed to two papers being presented at the upcoming CHI conference this year. One reports on work with blind and vision impaired people that a few of us have been involved in, in different ways (see here). Broadly, we've used the piece to reflect on the relations between vision impairment and artificial intelligence, and to set out directions for a possible design space.

The second paper picks up on a new theme for me, but one closely related to past reflections and design work around machine intelligence. With the fantastic Ari Schlesinger (GA Tech) leading the research, we examine the challenges faced in handling race talk (and racism) in human–bot interactions. Taking both Tay AI and the blacklist as starting points, we take seriously the computational underpinnings of chatbots and conversational agents, to underscore both the role they play in sustaining troubling racial categories and the conditions they make possible for more just and equitable ways forward.

Abstract — This research takes an orientation to visual impairment (VI) that does not regard it as fixed or determined alone in or through the body. Instead, we consider (dis)ability as produced through interactions with the environment and configured by the people and technology within it. Specifically, we explore how abilities become negotiated through video ethnography with six VI athletes and spectators during the Rio 2016 Paralympics. We use generated in-depth examples to identify how technology can be a meaningful part of ability negotiations, emphasizing how these embed into the social interactions and lives of people with VI. In contrast to treating technology as a solution to a 'sensory deficit', we understand it to support the triangulation process of sense-making through provision of appropriate additional information. Further, we suggest that technology should not try to replace human assistance, but instead enable people with VI to better identify and interact with other people in-situ.
Abstract — Why is it so hard for chatbots to talk about race? This work explores how the biased contents of databases, the syntactic focus of natural language processing, and the opaque nature of deep learning algorithms cause chatbots difficulty in handling race-talk. In each of these areas, the tensions between race and chatbots create new opportunities for people and machines. By making the abstract and disparate qualities of this problem space tangible, we can develop chatbots that are more capable of handling race-talk in its many forms. Our goal is to provide the HCI community with ways to begin addressing the question, how can chatbots handle race-talk in new and improved ways?
