Organized Session

SHOT Joint Session

Another Vast Machine II: Data, Models, and Simulations in the Human Sciences

Organizer

Emily Klancher Merchant

UC Davis

Chair

Tabea Cornel

New College of Florida

Session Abstract

Ten years ago, Paul Edwards argued that the "vast machine" of meteorology had neared completion. Global systems of weather observation, data manipulation, and data interpretation have not tamed the climate, but they have established (almost) undeniable facts about it, including the existence of global warming. More recently, data, models, and simulations have become essential to the human sciences. Recidivism algorithms, genetic diagnostics, and multivariate pattern analysis are only three examples of the numerous technologies on which scientists rely to establish facts about humanity and predict the future of individual humans. How did these technologies originate? What are the consequences, if any, of applying to humans technologies originally developed for non-human systems? How stable have such technologies and related concepts been across time, space, and cultures? To what extent have these technologies reinforced or eroded power dynamics within and between scientific communities? In what ways have they helped or hindered the establishment of (almost) undeniable facts about humanity?

Presenter 1

Physicians of the Future: Reconfiguring the Patient's Chart for the Production of Usable Data

Michael J. Neuss

Vanderbilt University Medical Center

Abstract

The electronic health record is now instrumental not just to the delivery of individual patient care, but to the generation of knowledge with profound significance for drug development, studies of population health, genetic analyses, and myriad other interests. Ownership of that data has become a central concern for patient and privacy advocates, who have drawn attention to the implications of using patient data when significant financial stakes are involved. This paper examines the origins, in the 1960s and 1970s, of the computer systems that made today's electronic health record possible, focusing particular attention on reforms to the paper (i.e., analog) record that set the stage for a reorganization of data legible to the computer, and thus usable by the other actors (researchers, administrators, payors) with an interest in that data. Specifically, problem-oriented charting (as in Lawrence Weed's famous and now ubiquitous SOAP-style clinical documentation) recast the patient in terms of multiple diagnoses, each requiring a more synthetic description of its supporting evidence. Systems including Weed's won praise for their apparent positive effects in clinical care, but an additional effect, I argue, was to situate the physician as a curator of patient data that was increasingly fed into electronic systems.

Presenter 2

Excavating the Origins of Sociogenomics

Emily Klancher Merchant

UC Davis

Abstract

In November 2019, a New Jersey start-up called Genomic Prediction announced the first pregnancy achieved using its signature "expanded pre-implantation genomic testing" process. This process tests not only for genetic variants that may impact health, but also for variants that are thought to govern intelligence. The paper proposed here documents the history of the science that has made such testing possible: social science genomics or sociogenomics. It identifies two intersecting origins for sociogenomics. The first is the eugenics of the late nineteenth century, which launched the search for evidence that intelligence, behavior, and socioeconomic status are inherited biologically. This search was reinvigorated by the psychological subfield of behavior genetics in the middle of the twentieth century, and has today been taken up by the economists of the Social Science Genetic Association Consortium, which organized the studies that underpin Genomic Prediction's new product. But without big datasets that include both social and genomic data, such research would not have been possible. The second origin of sociogenomics is the move toward large-scale sociological research on the social determinants of health. Longitudinal cohort studies funded by the National Institutes of Health began to collect saliva samples and other biomarkers in the first decade of the twenty-first century, using genomic data to control for unobserved heterogeneity and treating epigenetic data as dependent variables. This paper examines how, once social and genomic data had become available for one sociogenomic project (research on the social determinants of health), they were co-opted into a very different sociogenomic project (research on the genetic determinants of socioeconomic status), and from there into an explicitly eugenic consumer product (expanded pre-implantation genomic testing).

Presenter 3

The New Face of Race in the Epigenetic Age: Towards a Survey of Past Trauma, the Creation of Predictive Technologies and Their Limits

Élodie Grossi

Université Toulouse Jean Jaurès

Abstract

This paper will examine the ways epigenetic technologies have emerged over the last 15 years and the power dynamics at work between the various scientific communities (anthropologists, philosophers, geneticists, etc.) that use this new data to advance multiple social, political, and scientific claims. In recent years, many studies invoking epigenetic mechanisms have focused on nutrition, examining the epigenetic effects of stress in African-American populations who suffered the trauma of slavery, or on prenatal stress transmitted from African-American mothers to their offspring. According to some studies, this traumatic memory is transmitted through a transgenerational mechanism and induces a modification of the epigenome (understood as a key variable in the expression of an individual's genes) in a large number of individuals whose ancestors, still according to these studies, underwent a metabolic change related to slavery, due notably to nutritional deprivation. While the transgenerational transmission of trauma is still questioned by many researchers today, and thus not universally accepted by the scientific community, activists in favor of reparations for slavery, as well as anthropologists and philosophers, increasingly cite this cause-and-effect reasoning as 'proof' that race has indeed entered the body through the epigenome. While most studies have focused on the effects of past historical trauma (slavery) on contemporary populations, epigenetics is also framed as a predictive technology concerning the future health prospects of various population groups. This paper will therefore cast light on the intertwinement between epigenetic research that mobilizes the concept of race as a bio-social variable and the social sciences and humanities, which theorize race as a social construct or a cultural variable.

Presenter 4

Algorithmic Bias and Norms about the Past: The Thin Line between Predicting and Creating the Future

Emanuele Ratti

University of Notre Dame

Abstract

In the last few years, there has been growing concern about the problem of 'algorithmic bias'. While 'bias' refers to a systematic error due to a deviation from a norm or standard, in the case of algorithms there are different proposals as to which norms or standards should be considered. In this talk, I will characterize algorithmic bias by developing a notion of 'norm' for the algorithmic context. In particular, I will show that in algorithmic systems aimed at prediction, 'bias' refers to a negative moral evaluation of the past instances (i.e., the data sets used to train algorithms) that are used to generate predictive inferences, even in the absence of a deviation from an epistemic or technical norm. I will articulate this idea by discussing two cases. The first is a case from genomics, where an algorithm is trained to generate a model summarizing the features associated with transcription start sites. Here, past instances are used to investigate phenomena in keeping with a basic epistemic norm of the natural sciences: past instances serve as standards for predicting the future behavior of natural phenomena. However, in cases where algorithms are applied to humans, epistemic norms may not be the only relevant norms. The second case I will discuss is predictive policing, which is used to target individuals or neighborhoods on the basis of crime statistics. While these algorithms may not be 'biased' in any epistemic sense, they are usually stigmatized as such. This is because the past that they generalize is not seen merely as predicting the future, but, in the long term, as creating the conditions for an unwanted future by fostering the reiteration of patterns of segregation. The fact that we do not want the past to happen again rests on non-epistemic considerations. Therefore, value-laden considerations about the past instances used to train the system may constitute the norm that is being violated.
