The position papers — from a wonderful mix of people — are all online here. My own text was a short but rambling piece on some still underdeveloped ideas. I’ve been trying to think a little more critically about my role as an academic and a Microsoft researcher. Predictably, in combination, the roles raise all sorts of questions and frictions for me. Increasingly, I’ve directed my efforts at thinking about the worlds I’ve helped to enact and asking whether they are the kinds of worlds I would want to live in.
We present findings from a year-long engagement with a street and its community. The work explores how the production and use of data is bound up with place, both in terms of physical and social geography. We detail three strands of the project. First, we consider how residents have sought to curate existing data about the street in the form of an archive with physical and digital components. Second, we report endeavours to capture data about the street’s environment, especially of vehicle traffic. Third, we draw on the possibilities afforded by technologies for polling opinion. We reflect on how these engagements have: materialised distinctive relations between the community and their data; surfaced flows and contours of data, and spatial, temporal and social boundaries; and enacted a multiplicity of ‘small worlds’. We consider how such a conceptualisation of data-in-place is relevant to the design of technology.
Abstract: Computational biology is a nascent field reliant on software coding and modelling to produce insights into biological phenomena. Extreme claims cast it as a field set to replace conventional forms of experimental biology, seeing software modelling as a (more convenient) proxy for bench-work in the wet-lab. In this article, we deepen and complicate the relations between computation and scientific ways of knowing by discussing a computational biology tool, BMA, that models gene regulatory networks. We detail the instabilities and frictions that surface when computation is incorporated into scientific practice, framing the tensions as part of knowing-in-progress — the practical back and forth in working things out. The work exemplifies how software studies — and careful attention to the materialities of computation — can shed light on the emerging sciences that rely on coding and computation. Further, it puts to work a standpoint that sees computation as tightly entangled with forms of scientific knowing and doing, rather than a wholesale replacement of them.
Berman, E. P. (2014). Not Just Neoliberalism: Economization in US Science and Technology Policy. Science, Technology, &amp; Human Values, 39(3), 397–431.
The title of this paper says it all really. It’s good, though, to have a cogent argument about the relations between ideology, policy and the changes in how science is being done. I, for one, slip all too easily into an accusatory refrain when talking about — and usually criticising — what I’ve seen as the neoliberal (non)interventionist policy direction in education and science. Elizabeth Berman presents a much more measured position and convinces me that the shift is better understood as an economization, as she calls it: a broader move towards prioritising scientific research and innovation vis-à-vis the economy, and specifically towards seeing them as economic inputs. This recognises the tensions, complications and competing interests that have run through the changing status of the sciences (in the US, but similarly, I think, in the UK).
Something I think Berman leaves open is the relationship between science and innovation. She makes it clear that science and innovation become inextricably linked when science is seen in economic terms. I want, though, to better understand the nexus. Indeed, by conflating science and technology (“S&amp;T” as Berman refers to it), I think there are further complications here that need unravelling, ones pointing to the entanglements of science and technology, and to where progress or innovation sits between (or around) them. Can we talk of technology without innovation? If S&amp;T are two parts of a unit, how can we disentangle innovation?
Abstract: What does the abundance of data and proliferation of data-making methods mean for the ordinary person, the person on the street? And, what could they come to mean? In this paper, we present an overview of a year-long project to examine just such questions and complicate, in some ways, what it is to ask them. The project is a collective exercise in which we – a mixture of social scientists, designers and makers – and those living and working on one street in Cambridge (UK), Tenison Road, are working to think through how data might be materialised and come to matter. The project aims to better understand the specificities and contingencies that arise when data is produced and used in place. Mid-way through the project, we use this commentary to give some background to the work and detail one or two of the troubles we have encountered in putting locally relevant data to work. We also touch on a methodological standpoint we are working our way into and through, one that we hope complicates the separations between subject and object in data-making and opens up possibilities for a generative refiguring of the manifold relations.