Gerardo Con Diaz
University of California Davis
This session addresses new directions in the study of medical algorithms. Chaired by Mary Mitchell (Purdue), the panel investigates how computerization and algorithmic decision-making have transformed U.S. healthcare from the late 1940s to the present. Speakers draw on methods for the study of twentieth-century technoscientific development, including the histories of computing and data, the history of medicine, legal studies, and media studies. First, a paper by Andrew Lea (Hopkins) examines the development, in the 1940s, of the Cornell Medical Index, an extensive questionnaire intended to aid the automation of diagnosis. Lea argues that as physicians and engineers worked to computerize diagnosis, they were forced to reckon with fundamental epistemological and ontological questions about the nature of clinical reasoning and the definitions of disease. Next, Jeremy Greene (Hopkins) addresses a collaboration between Oakland-based physicians and computer scientists who, in the 1960s, built an entirely new kind of clinic around a mainframe computer. This computer-centered clinic became a model for public health and private practice, and Congress later drew on it to computerize the newly inaugurated Medicare and Medicaid programs. Third, Con Diaz (UC Davis) addresses how views on software patenting developed in the 1970s shaped the legal frameworks that governed the patenting of diagnostic algorithms by the century's end and beyond. He shows that patent battles surrounding drug delivery methods have infused U.S. patent law with legal-doctrinal conflicts born from the increasing commercial and sociotechnical interdependence of the software and pharmaceutical industries. Finally, Hannah Zeavin (UC Berkeley) examines contemporary efforts to automate mental health therapy through app-based technologies such as iHelp. She argues that these applications collapse the categories of wellness, stress, labor management, and mental health care, and discusses their gamification.
Computerizing Diagnosis: Minds, Medicine, and Machines in Twentieth-Century America
Andrew Lea
Johns Hopkins University
In the mid-twentieth century, physicians, engineers, mathematicians, philosophers, and others started to think about the ways in which a new technology might be applied to an old problem. These interdisciplinary teams designed computer programs that aimed to automate one of the most fundamental tasks of medicine: diagnosis. Using two case studies, this talk explores how early efforts to computerize medical diagnosis and decision-making raised philosophical and moral questions that lie at the heart of medical practice. As physicians and engineers worked to computerize diagnosis, they were forced to reckon with fundamental epistemological and ontological questions: questions about the nature of clinical reasoning and the definitions of disease. In creating computerized diagnostic systems, early developers articulated new ideas about how physicians think and how disease categories are defined. Sometimes these ideas and categories were spawned by their engagements with computers; other times, pre-existing ideas came to be codified within subsequent computer programs. Moral questions traveled alongside these philosophical ones: the computerization of diagnosis raised profound moral and professional uncertainties about ceding important medical decisions to a machine.
The Automated Clinic: Computerized Health Testing and the Architecture of Prevention, 1960-1980
Jeremy Greene
Johns Hopkins University
This paper chronicles the first Kaiser Permanente Automated Multiphasic Health Testing clinic in Oakland, California, designed in 1964 as a new architectural interface between patients, digital sensors, and the IBM 1440 buried in its core. The brainchild of Morris Collen, an electrical engineer, physician, and foundational figure in the field of medical informatics, the clinic at 3772 Howe Street soon became a globally visible model for computerized health screening, prevention, and diagnosis, able to process a person off the street into an electronic medical record in under three hours. By revisiting the entire clinic as a man-machine interface, and by placing the task of health screening within Kaiser Permanente's new self-definition as a prototypical Health Maintenance Organization (HMO), this paper argues that the automated multitesting clinic broke down barriers that previously separated the public health act of disease screening from the medical act of diagnosis. In these automated clinics, the computer did not merely store and tabulate data but could also hazard a diagnosis through a continually adjusted series of diagnostic algorithms, recommend additional forms of testing, and actively generate individualized "normal values" on a patient-by-patient basis over time. As the Oakland clinic became a model for public health and private practice in the late 1960s, the U.S. Public Health Service deployed mobile versions to expand access to healthcare and address health disparities in rural and urban settings. Though support waned over the course of the 1970s, this model of algorithmic prevention, screening, and diagnosis had powerful impacts on data practices in clinical medicine and public health, and it illustrates both the promises and limitations of information technology in expanding access to care and reducing health disparities. This paper draws on Collen's papers, Kaiser Permanente archives, oral histories, and popular, clinical, and public health literature.
Prometheus' Patents: Owning Medical Algorithms in the 21st Century
Gerardo Con Diaz
University of California Davis
In 2012, the U.S. Supreme Court ruled that a diagnostic method that involves a natural correlation between two quantities is not eligible for patent protection. In a unanimous opinion that pharmaceutical and software firms eagerly awaited, the Justices sided with the Mayo Clinic, an academic medical center that sought to invalidate two patents licensed exclusively to a commercial laboratory called Prometheus Labs. Mayo v. Prometheus, as the case is commonly known, has fundamentally reshaped the U.S. patent system's relationships with medical algorithms, personalized medicine, and data processing. Lawyers and legal scholars have thoroughly documented its legacy, but the conflict belongs to a broader, largely untold history of how patent law has facilitated the joint development of medical algorithms and information technology in the United States.
This paper examines the history of Mayo v. Prometheus to argue that patent battles surrounding drug delivery systems have infused U.S. patent law with legal-doctrinal conflicts born from the increasing commercial and sociotechnical interdependence of the U.S. software and pharmaceutical industries. The paper draws primarily on the records of U.S. federal courts, trade periodicals, and documents from the U.S. Patent and Trademark Office. It advances the study of medical algorithm patents, connecting the interdisciplinary literature on medical informatics with the business and legal history of computing. It also shows how the history of patent law can allow scholars to integrate a range of themes of interest to HSS and SHOT scholars alike: tinkering with data, therapeutics, ownership, and algorithm studies.
Auto-Intimacy: Algorithmic Therapies
Hannah Zeavin
University of California Berkeley
This paper engages with therapeutic treatment by automated therapies, interrogating what therapy becomes when the therapist is a computational actor. It opens with an overview of very early attempts to write a responsive algorithm in the 1960s and 1970s. In this moment of experimentation with automated therapies, two strains of work emerged: the simulation and detection of a disordered mind, in the hopes of automating intake, diagnosis, and education; and the simulation of a therapist, toward the dream of automating therapeutic treatment. The attempt to simulate the therapist's role already presumes a theoretical comfort with removing the human actor as therapist. I trace these developments into our present and then turn to a brief discussion of the politics and "gamification" of contemporary psychological applications such as "Ellie," "Joyable," and "iHelp," which attempt to assist persons with a wide range of mental health disorders in managing their behavior and moods. These applications, which are frequently offered by employers to employees, collapse the categories of wellness, stress, labor management, and mental health care. This paper contends that, unlike human therapists, who listen to the speech of their patients as respondents, algorithm-based therapies listen, or read, solely in order to respond; they can't not. They listen via a variety of mechanisms, including retrieval-based decision trees, scripts, and paralinguistic vocal monitoring. They offer outputs following inputs, regulated by a governing body of rules and decisions. Yet algorithmic therapies also rely on a most intimate computing, in which a rich set of relationships obtains between the user and the therapeutic apparatus. Computer-based and computer-guided therapies are, in this way, a reiteration of self-help, joining pre-electronic traditions of instruction and mediation by a non-human other (the workbook, the diary).