Research

Publications

"Embracing Self-Defeat in Normative Theory"

Forthcoming in Philosophy and Phenomenological Research

Some normative theories are self-defeating. They tell us to respond to our situations in ways that bring about outcomes that are bad by the theories' own lights and that could have been avoided. Across a wide range of debates in ethics, decision theory, political philosophy, and formal epistemology, many philosophers treat the fact that a normative theory is self-defeating as sufficient grounds for rejecting it. I argue that this widespread and consequential assumption is false. In particular, I argue that a theory can be self-defeating and still internally consistent, action-guiding, and suitable as a standard for criticism.

"Coherence as Joint Satisfiability" (with Camilo Martinez)

Forthcoming in Australasian Journal of Philosophy

According to many philosophers, rationality is, at least in part, a matter of one’s attitudes cohering with one another. Theorists who endorse this idea have devoted much attention to formulating various coherence requirements. Surprisingly, they have said very little about what it takes for a set of attitudes to be coherent in general. We articulate and defend a general account on which a set of attitudes is coherent just in case and because it is logically possible for the attitudes to be jointly satisfied in the sense of jointly fitting the world. In addition, we show how the account can help adjudicate debates about how to formulate various rational requirements.


Work in Progress (email me for drafts)

[Title redacted]

Revise and resubmit at Journal of Moral Philosophy

Almost all philosophical discussion of collective action problems relies on a counterfactual mode of thinking: whether any given individual should do her part (e.g. vote in a national election) depends on what would happen, were she to do so. This way of thinking is associated with causal decision theory. Plausibly, in many cases it would make no morally significant difference whether any given individual did her part, so counterfactual thinking lets each of us off the hook. However, I argue that if we consider these situations from the perspective of causal decision theory's main rival, evidential decision theory, the ethics of collective action looks very different. On an evidentialist analysis, whether an individual should do her part depends on what's likely to happen, given that she does so. When others' actions are correlated with yours, they are more likely to do their respective parts, given that you do yours. What you do is evidence for what others will do, even if your action has no causal influence on their actions. In these conditions, evidential decision theory says that you should do your part even if it wouldn't make a difference.

[Title redacted] (under review)

How does information about what you would do, of your own volition, bear on what you ought to do? On the one hand, it seems reckless to ignore this information. On the other hand, it’s plausible that what you ought to do is a matter of what is in your control, not of how you would exercise that control. I argue that what you would do is relevant to what you ought to do, precisely because it bears on which possibilities are in your control. Following through on a given course of action is only an option when you can commit to it in some way that guarantees your success. Since there are typically many different ways of committing to an action, we are extremely limited in our knowledge of which possibilities are or are not in our control. To deal with this limitation, I defend a form of deliberative pluralism, according to which we should think of our option sets as tentative, simplified idealizations. Sometimes, we should deliberate more like “Actualists.” Other times, we should reason more like “Possibilists.” However, each approach is useful only as a starting point for further deliberation and self-prediction.

"Knowledgeable Moral Mathematics" (with Errol Lord)

We challenge the assumption—common to all standard decision-theoretic arguments for doing one's part in a collective action problem—that there is some chance that your action will trigger a change in the morally relevant outcome. We focus on the case of refraining from buying chicken in order to reduce chicken suffering. We argue that a typical consumer is in a position to know, based on statistical evidence, that her purchase won't trigger any change in chicken suffering. Given a plausible knowledge-first decision theory, it is not the case that refraining maximizes expected value, because the decision-theoretic likelihood of reducing the suffering of chickens is zero.