What is a bias? Standard philosophical views of both implicit and explicit bias focus this question on the representations one harbors, e.g., stereotypes or implicit attitudes, rather than on the ways in which those representations (or other mental states) are manipulated. I call this approach representationalism. In this paper, I argue that representationalism, taken as a general theory of psychological social bias, is a mistake because it conceptualizes bias in ways that do not fully capture the phenomenon. Crucially, this view fails to capture a heretofore neglected possibility: a bias that influences an individual's beliefs about or actions toward others, yet is nowhere represented in that individual's cognitive repertoire. In place of representationalism, I develop a functional account of psychological social bias that characterizes it as a mental entity that takes propositional mental states as inputs and returns propositional mental states as outputs in a way that instantiates social-kind inductions. This functional characterization leaves open which mental states and processes bridge the gap between inputs and outputs, ultimately highlighting the diversity of candidates that can serve this role.
Algorithmic Bias: On the Implicit Biases of Social Technology, 2020, Synthese, https://doi.org/10.1007/s11229-020-02696-y
Often machine learning programs inherit social patterns reflected in their training data without any directed effort by programmers to include such biases. Computer scientists call this algorithmic bias. This paper explores the relationship between machine bias and human cognitive bias. In it, I argue that similarities between algorithmic and cognitive biases indicate a disconcerting sense in which sources of bias emerge out of seemingly innocuous patterns of information processing. The emergent nature of this bias obscures the existence of the bias itself, making it difficult to identify, mitigate, or evaluate using standard resources in epistemology and ethics. I demonstrate these points in the case of mitigation techniques by presenting what I call 'the Proxy Problem': one reason biases resist revision is that they rely on proxy attributes, seemingly innocuous attributes that correlate with socially sensitive attributes and thereby serve as proxies for them. I argue that in both the human and algorithmic domains, this problem presents a common dilemma for mitigation: attempts to discourage reliance on proxy attributes risk a tradeoff with judgment accuracy. This problem, I contend, admits of no purely algorithmic solution.
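To make the Proxy Problem concrete, the sketch below trains a toy classifier that never sees a protected attribute, yet whose decisions diverge across the unseen groups because a correlated stand-in feature carries the information. This is my own minimal illustration with synthetic data and an invented feature name (a hypothetical 'zip_region'), not code from the paper.

```python
# Minimal synthetic illustration of the Proxy Problem (not from the paper).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute; the model is never given it.
group = rng.integers(0, 2, size=n)

# Proxy feature: agrees with group membership 80% of the time
# (e.g., a residential region shaped by segregation).
zip_region = (group + (rng.random(n) < 0.2)) % 2

# Training labels encode historical disparities tied to the proxy.
label = ((zip_region + rng.random(n)) > 1.2).astype(int)

# Train WITHOUT the protected attribute.
X = zip_region.reshape(-1, 1)
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# Decisions nonetheless diverge across the (unseen) protected groups.
for g in (0, 1):
    print(f"group {g}: positive decision rate = {pred[group == g].mean():.2f}")
```

Dropping the protected attribute from the inputs accomplishes nothing here, since the proxy carries the correlation; but dropping the proxy as well would degrade predictive accuracy, which is exactly the tradeoff the dilemma describes.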
The Psychology of Bias, 2020, in An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind, Erin Beeghly and Alex Madva (eds.), Routledge, Penultimate Draft
What’s going on in the head of someone with an implicit bias? Psychological and philosophical attempts to answer this question have centered on one of two distinct data patterns displayed in studies of individuals with implicit biases: divergence and rational responsiveness. However, explanations focused on these different patterns provide different, often conflicting answers to the question. In this chapter, I provide a literature review that addresses these tensions in data, method, and theory in depth. I begin by surveying the empirical data concerning patterns of divergence and rational responsiveness. Next, I review the psychological theories that attempt to explain these patterns. Finally, I suggest that tensions in the psychological study of implicit bias highlight the possibility that implicit bias is, in fact, a heterogeneous phenomenon, and thus, future work on implicit bias will likely need to abandon the idea that all implicit biases are underwritten by the same sorts of states and processes.
Are Algorithms Value-Free? Feminist Theoretical Virtues in Machine Learning, provisionally forthcoming in Journal of Moral Philosophy, special issue on "Justice, Power, and the Ethics of Algorithmic Decision-Making", Annette Zimmerman (ed.), Penultimate Draft
As inductive decision-making procedures, machine learning programs make inferences that are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments from feminist philosophy of science to machine learning programs to make the case that the resources required to respond to these inductive challenges render critical aspects of their design constitutively value-laden. I demonstrate these points specifically in the case of recidivism algorithms, arguing that contemporary debates concerning fairness in criminal justice risk-assessment programs are best understood as iterations of traditional arguments from inductive risk and demarcation, thereby establishing the value-laden nature of automated decision-making programs. Finally, in light of these points, I address opportunities for relocating the value-free ideal in machine learning and the limitations that accompany them.
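One way to see an internal value-laden design choice point is the decision threshold of a risk classifier. The sketch below is my own illustration on synthetic scores, not code from the paper: where the threshold sits trades false positives against false negatives, so fixing it requires weighing the relative costs of the two error types, a judgment the evidence alone does not settle.

```python
# Illustration (not from the paper): a classifier's decision threshold is a
# value-laden choice point, trading false positives for false negatives.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Synthetic ground truth and noisy "risk scores" for a toy binary outcome.
truth = rng.integers(0, 2, size=n)
score = np.clip(0.35 + 0.3 * truth + rng.normal(0.0, 0.25, size=n), 0.0, 1.0)

for threshold in (0.3, 0.5, 0.7):
    flagged = score >= threshold
    # Conditional error rates: P(flagged | truth=0) and P(not flagged | truth=1).
    fpr = (flagged & (truth == 0)).mean() / max((truth == 0).mean(), 1e-9)
    fnr = (~flagged & (truth == 1)).mean() / max((truth == 1).mean(), 1e-9)
    print(f"threshold={threshold:.1f}: "
          f"false-positive rate {fpr:.2f}, false-negative rate {fnr:.2f}")
```

No threshold in this sweep is error-free; selecting one amounts to deciding which kind of mistake, and whose burden, matters more, which is the sense in which the design choice is constitutively value-laden.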
Works in Progress:
Drafts Available Upon Request
Are Implicit Biases Implicit Biases?
Despite the controversies surrounding implicit bias, it nonetheless seems central to our everyday understanding of the phenomenon that it involves unconscious prejudices toward members of marginalized groups. It might then be surprising to learn that the overwhelming sentiment among social psychologists now denies this, asserting instead that we do have introspective access to the attitudes measured by the Implicit Association Test. In other words, implicit biases are not implicit biases. In this paper, I argue that claims alleging the conscious (in)accessibility of bias are fraught with ambiguity and confusion with respect to the two central concepts these claims involve: bias and conscious accessibility. I address the question of whether implicit biases are implicit biases by addressing both whether implicit biases are implicit and whether implicit biases are biases.
The Unity and Disunity of Psychological (Social) Bias
What states and processes realize a bias? In this paper, I argue that social biases are not unique to any particular level of cognitive architecture and that the states and processes that constitute biases will depend on the wider psychological system in which they’re embedded. Perceptual social biases, for example, will be constituted by obvious, superficial perceptual attributives, like the perceived lightness of someone’s skin. Theory-based social biases, by contrast, will be constituted by a complex inferential pattern that tacitly assumes certain stereotypical properties are causally dependent on some underlying, hidden essence. This analysis of bias has an important practical consequence: since the states and processes that constitute biases are system-dependent, no one mitigation technique will be universally effective. Our most effective debiasing techniques will be tailored to how mental systems globally operate.
Proxies Aren't Intentional, They're Intentional
This paper concerns 'The Proxy Problem': machine learning programs often utilize seemingly innocuous features as proxies for socially sensitive attributes, posing various challenges for the creation of ethical algorithms. I argue that to address this problem, we must first settle the prior question of what it means for an algorithm that only has access to seemingly neutral features to be using those features as ‘proxies’ for, and so to be making decisions on the basis of, protected class features. I argue against theories of proxy discrimination in law and political theory that rely on overly intellectual views of the intentions of the agents involved or on overly deflationary views that reduce proxy use to mere statistical correlation. Instead, drawing on insights from the philosophy of language and mind, I adopt an anti-individualist account of representational content and argue for a constitutive account of ‘contentful proxy use’. On this view, proxies are meaningfully about socially sensitive features when and only when they constitutively depend on discriminatory practices against members of marginalized groups.