Organized Session

Fairness by Calculation: Four Centuries of Algorithmic Aspirations

Organizer

William Deringer

Massachusetts Institute of Technology

Chair

Theodora Dryer

New York University / AI Now Institute

Session Abstract

The aspiration to create automated systems that provide "fair" solutions to delicate social decisions is a striking and controversial trend in contemporary public life, shaping how governments provide public resources, banks and insurers determine which customers are deserving, employers hire, police departments police, and more. Incisive academic and popular critiques have shown that fairness algorithms often exacerbate the inequities they purport to remedy. This quest for fairness-by-calculation both sustains and challenges central scholarly narratives in the history of science and technology. For example, algorithmic systems might be seen as simply the newest techniques of "mechanical objectivity," as described by Theodore Porter (1995), part of a longstanding pattern of delegating politically sensitive decisions to rigid protocols to deflect accusations of bias and bolster trust in institutions. Yet earlier exercises in mechanical objectivity, like cost-benefit analysis in public works projects, succeeded in part by translating contentious choices into putatively transparent numbers open to public review; contemporary algorithms are avowedly opaque, often proprietary, and resistant to outside scrutiny. Historical research can excavate the configuration of technical practices, epistemic virtues, institutional structures, political tensions, and cultural meanings undergirding current algorithmic ambitions. How did political communities come to accept that fraught matters of justice might be resolved through recondite numbers? How has that commitment evolved? How does the present moment continue, or depart from, longer trends? This panel examines three telling episodes, spanning the 17th century to the 21st, in which mathematics and computation were invoked in pursuit of fairness. These include fights between agrarian landlords and tenants, debates over legislative apportionment, and efforts to define fair lending practices and criminal punishments.

Presenter 1

Just Fines: Mathematical Tables, Church Landlords, and Fair Algorithms, c. 1628

William Deringer

Massachusetts Institute of Technology

Abstract

In the 1610s and 1620s, a new computational technology took hold in England: printed mathematical tables for compound interest and discounting ("present value") problems. Historians of finance and accounting have long recognized the arrival of these paper tools, predecessors of ubiquitous modern techniques like "discounted cash flow." Yet the early history of these tables remains hazy. What did early-seventeenth-century users do with these tables? Who used them? Why did those calculations arise when and where they did? This paper recovers the hidden story behind exponential discounting, with one obscure but influential text, Ambrose Acroyd's Tables of Leasses and Interest (1628), as a guide. Two key facts emerge. First, despite the prominence of discounting in modern financial applications like actuarial assessment and asset valuation, these early discounting calculations were not confined to England's nascent financial sector. Rather, their foremost use related to agricultural property, specifically in assessing the upfront "fines" tenants were required to pay landlords for initiating or renewing leases on farms. Second, among the leading "early adopters" were institutions of the Church of England: bishops, cathedrals, and colleges. I argue that discounting tables were not tools of instrumental rationality or products of a new capitalist mentality, but tools of social accommodation aimed at providing "reasonable" solutions to a contentious valuation problem. Yet the particular calculative solutions encoded in the early seventeenth century had unexpected, and not necessarily fair, consequences into the eighteenth century and beyond. This early modern tale offers a vivid example of how one community first turned to complicated mathematical calculations, to an algorithm, to resolve social conflicts about fairness and justice, offering genealogical insights for our own algorithmic age.

Presenter 2

Voting by Algorithm: The 1930s Fight over Proportional Representation in Cambridge, MA

Alma Steingart

Columbia University

Abstract

In the late 1930s, the residents of Cambridge, Massachusetts, were engulfed in controversy while trying to decide whether the city should adopt a system of proportional representation for its city council. Emotions ran high, with opponents and supporters accusing one another of bringing an end to nothing less than democratic rule. The question before the citizens was whether to replace the existing voting mechanism with one that asked citizens to rank their choices among candidates and then applied an algorithm to determine the winning council members. Could fairness, as supporters argued, be "optimized" to ensure residents' votes would not be "wasted"? Or was fairness dependent on intelligibility and transparency? Cambridge was not the only American city to consider and eventually adopt this new voting system, but it is the only one in which the system remains in place today. In this talk, I survey the 1930s fight in Cambridge to ask: how has the methodology of ranked voting been wedded to democratic ideals such as fair representation and increased political participation? I pay particular attention to the ways in which the technical mechanisms underlying the new voting system were interpreted by both sides of the political debate.

Presenter 3

Algorithmic Fairness and Actuarial Politics

Rodrigo Ochigame

Massachusetts Institute of Technology

Abstract

As national and regional governments form expert commissions to regulate "automated decision-making," a new corporate-sponsored field of research proposes to formalize the elusive ideal of "fairness" as a mathematical property of algorithms and especially of their outputs. Computer scientists, economists, lawyers, lobbyists, and policy reformers wish to hammer out, in advance or in place of regulation, algorithmic redefinitions of "fairness" and such legal categories as "discrimination," "disparate impact," and "equal opportunity." My paper compares this new field to previous efforts to answer questions of fairness through the use of probabilistic and statistical algorithms. In particular, I examine "actuarial" practices of individualized risk classification in private insurance firms, consumer credit bureaus, and police departments during the twentieth century. My analysis focuses on regulatory disputes over the meaning of "fairness" in actuarial systems in the latter half of the century, disputes that involved corporations, governments, and civil rights and feminist activists. I compare the technical arguments, cultural assumptions, and political stakes of these twentieth-century disputes with those of present debates on "algorithmic fairness." The present debates have a broader scope, ranging from older actuarial systems like credit scoring to newer computational technologies such as facial recognition and automated targeting in drone warfare.

Commentator

Theodora Dryer

New York University / AI Now Institute