Daniel Wodak: Varieties of Normativity

This month we are in conversation with Daniel Wodak. Daniel completed his PhD in Philosophy at Princeton in 2016. He is now an Assistant Professor of Philosophy at Virginia Tech.


Welcome to Legal-Phi, Daniel! What brought you to philosophy?

Daniel: When I was in my penultimate year of high school Luke Russell (a philosopher at Sydney University) taught a course, Mind and Morality, for my high school and its sister school. I don’t really know why I enrolled; I was pretty bored of most of my subjects at the time. The course was a revelation. The philosophy of mind was fun and interesting, and the method of arguing fascinated me. But the moral philosophy is what really hooked me. And Luke was a great teacher. He ended up being one of my Honours advisors, years later.

You’ve been working primarily in metaethics. When and why did you start to develop an interest in the field?

Daniel: I think I followed a common trajectory. At first my main interests were in practical ethics, then I thought my commitments there were hopelessly unsystematic and subject to disagreement, so to solve the problems there I needed to do normative ethics; then similar reasoning about normative ethics made me think I needed to do metaethics. I now think that reasoning is flawed: it’s far from obvious that we can or should derive our commitments in normative and practical ethics from commitments in metaethics. But I’m still glad I went down this route, as I really love metaethics.

How about your interest in legal philosophy?

Daniel: This was largely thanks to Kevin Walton. When I was finishing my law degree (after completing my degree in philosophy, before applying to grad school in philosophy) Kevin taught most of the jurisprudence courses at the Sydney Law School. I took all of them. We only had one year where we could take electives; half of my eight subjects that year were with Kevin. The courses covered a lot of the core of general jurisprudence—Austin, Hart, Dworkin, Raz—and then some. And through these courses I finally became gripped by many other philosophical problems that’d never moved me before, such as vagueness.

Some of your work deals with questions about normativity and normative authority. What is it for a standard to be normative? And what does it take for a standard to have normative authority?

Daniel: To my mind, ‘normativity’ and ‘normative authority’ are both technical terms. I use ‘normative’ very broadly to pick out standards (systems of obligations, permissions, and recommendations) that are expressed using deontic modals (‘must’, ‘can’, ‘ought’). In this sense, I think it’s obvious that morality and etiquette are both normative. I use the term ‘normative authority’ to pick out some special sense in which some standards are (to use common glosses) necessarily or intrinsically important: a common thought is that morality is authoritative, but etiquette is not. Some reserve ‘normative’ for the authoritative stuff. But I don’t think there’s any use arguing over the terminology here. The same philosophical issues will arise, regardless of the terminology that we use to describe them. So we should just use these terms as we think is best, and be as clear as possible about what we mean by them.

That said, one reason why I like my usage is that it helps bring to the fore some neglected philosophical questions. There’s a lot of attention on standards like morality that people think are authoritative. Far less attention is given to standards like etiquette that are thought to be normative in some non-authoritative way—standards that are ‘merely formally normative’, or ‘mere formalities’ for short. This is a shame; mere formalities are fascinating in their own right, and so is the distinction between ‘normativity’ and ‘normative authority’.

In your paper Mere Formalities you claim that standards such as etiquette and grammar are ‘not really normative’. Why is that?

Daniel: Well, it’s not so much that I want to argue that etiquette and grammar in particular are ‘not really normative’. Rather, I take it that it is commonly claimed that some standards—perhaps etiquette and grammar, but nothing hangs on these examples—are ‘not really normative’ or are ‘not genuinely normative’. Some philosophers want to argue that claims like these can be dismissed: they’re conceptually confused, or just empty rhetoric. I argue against that view in part because—as Michael Ridge notes at one point—these claims seem to be common in folk discourse. We recognize that according to etiquette, we “have to” set the table with the knife on the right, but then we ask: “Do I really have to do that?” And this question doesn’t seem confused, or to be an instance of empty rhetoric. So I think we should seek to find a charitable interpretation of what we mean by such claims. The same holds, I think, for similar claims that are common amongst very different theorists (positivists, natural law theorists) in philosophy of law: many are committed to the thought that legal obligations give us ‘legal reasons’, and then say that those ‘legal reasons’ are ‘not genuine reasons’. We should find a charitable interpretation of what they mean.

There are various options here, but the best one, I think, starts by noticing how similar these claims are to uses of ‘genuine’ and ‘real’ to demarcate reality from fiction in sentences like “Sherlock is a detective, but he’s not really a detective.” From here, I argue for a form of fictionalism about mere formalities. I’m not sure that this form of fictionalism works. But I think it has a lot going for it. I should also note that one of the reasons why I think fictionalism has a lot going for it is that there are many different options and resources available to fictionalists—far more than people often think. Philosophers often write as if fictionalists must endorse a very specific view (one like the error-theoretic fictionalism in Richard Joyce’s The Myth of Morality). And I don’t think that’s right, in part because I think fictionalists should be building their views off of the rich philosophical literature on fictions themselves (novels, legal fictions, make-believe, and so on). In other words, I think fictionalists should take fictions more seriously, and so should their detractors.

In the same paper you also hold that, given that standards such as etiquette and grammar are not really normative, scepticism about the possibility of normative authority should be rejected. Could you explain why this is so?

Daniel: The broader dialectic here is a bit complex, but here’s a way in. A very common way of thinking about normative authority is that we start with an account of what all normative standards have—what David Copp calls ‘generic normativity’—and add the special feature that makes some of those standards authoritative. What we add, standardly, is a normative feature, like ‘necessary reason-giving force’. So etiquette et al. are mere formalities because they lack this feature, and morality et al. are authoritative because they have this feature.

Enter the skeptic. She takes at face value folk normative discourse, in which we relativize normative terms to a plurality of standards. Philippa Foot formalized this by subscripting ‘ought’ et al.: ‘oughtM’ and ‘oughtE’ stand for the moral ought and the ought of etiquette. Then we ask: if we start with these relativized notions, which aren’t authoritative, how can we use them to explain a non-relativized notion that is authoritative? How will we get what Foot called a “free floating and unsubscripted” sense of ‘ought’ or ‘reason’ out of ‘oughtM’ and ‘oughtE’ (or ‘reasonM’ and ‘reasonE’)? Then the skeptics—especially Evan Tiffany and Derek Baker—can press even further. A large part of the conceptual role for a notion of normative authority was meant to be that it can non-arbitrarily resolve conflicts between standards: when oughtM tells you to do one thing and oughtE tells you to do another, we’re meant to ask what you ought to do in a free-floating, unsubscripted sense. But now, they press, the problem re-emerges. If the unsubscripted ought sides with oughtM and against oughtE, we now have another conflict: between ‘ought’ and ‘oughtE’. How do we non-arbitrarily resolve that conflict? Do we need to appeal to yet another sense of ‘ought’? And so on.

My answer to this question is quite long already, so I won’t go into how fictionalism helps address the skeptical challenge. (You can read about it here.) But I hope to have at least explained the challenge and why we should find it gripping. Far too many philosophers, to my mind, either ignore this challenge entirely, answer it glibly with vague metaphors, or act as if they can wave the word ‘reason’ around as a magical wand that makes any further need for explanation dissipate. The skeptical challenge works just as well in terms of ‘reason’.

We often see philosophers making domain-relative ought claims: ‘we legally ought to do x, but we morally ought not to do x’. Despite this, it is still plausible to think that ‘ought’ has a single, invariant meaning across different domains. In Expressivism and Varieties of Normativity you state that expressivists cannot provide a good explanation of this invariant character of ‘ought’. Why do you think this is so?

Daniel: It’s not just philosophers who do this. It’s an obvious feature of everyday discourse. Often the standard to which ‘ought’ (or ‘must’, or ‘can’, or whatever) is relativized is left implicit: in saying ‘You can’t double dribble’, the claim that double-dribbling is forbidden is implicitly relativized to the rules of basketball, rather than to etiquette or morality. Despite this, there are many good reasons to think that there’s a single meaning of ‘ought’—and ditto for ‘must’, and ‘can’, and so on—when it is relativized to different standards. But expressivists have said very little about this because they focus on the moral ‘ought’, or on some unrelativized or all-things-considered sense of ‘ought’. So for starters, we should ask expressivists to explain how their view captures this single, invariant meaning of ‘ought’ when it is relativized to etiquette or basketball or any other member of this open-ended set of normative standards.

I argue that expressivists face a serious challenge here. Take a view on which ‘ought’ always expresses the speaker’s plans (that’s its invariant meaning), but the object of those plans changes depending on the relevant standard. For Gibbard, for instance, “Rationally, A ought to φ” expresses the speaker’s plan for A to φ, whereas “Morally, A ought to φ” expresses the speaker’s plan to blame A if A does not φ. This is a bit rough, and as stated it is incomplete (what about “legally” and so on?). But it’s enough to put the general problem on the table. Consider a statement like this: “Morally, Antigone ought not bury Polynices; but if she buries Polynices, I rationally ought not blame her.” The claim about morality expresses the speaker’s plan to blame Antigone if she buries Polynices. The claim about rationality expresses the speaker’s plan not to blame Antigone if she buries Polynices. Those plans are inconsistent. So for Gibbard, what the speaker says is inconsistent: it’s as if she’s said p and not-p. This is counterintuitive, and it points to a general problem for expressivists that is hard to avoid. To get the result that ‘ought’ expresses a single, invariant meaning, they need to posit that it expresses the same type of desire-like attitude (plans, approval, whatever); but once they posit that, their views end up predicting that speakers express inconsistent desire-like attitudes when making claims about what they ought to do according to different standards. That is, expressivist views that explain univocality end up making bad predictions like this: if “Morally, A ought to φ” and “Morally, A ought not φ” are inconsistent, then “Morally, A ought to φ” and “Legally, A ought not φ” turn out to be inconsistent too.

There could be a good solution for expressivists here. But I don’t know of one. And to my mind, this issue points to a broader lesson: expressivists shouldn’t be allowed to analyse normative language in a piecemeal fashion (focusing only on, say, the moral ‘ought’); they need to offer us a broader view about all the varieties of normativity.

In What Does ‘Legal Obligation’ Mean? you also talk about the view according to which terms like ‘ought’ and ‘obligation’ have a univocal meaning. Nowadays, not many legal philosophers seem to hold this view. But would anything in legal philosophy change if this view turned out to be correct?

Daniel: Historically, many philosophers of law held this view, as H.L.A. Hart noted. But the dominant view in contemporary philosophy of law—associated most with Joseph Raz and Scott Shapiro, as well as many others, including Leslie Green and Jules Coleman—is that ‘obligation’ has a distinctly moral meaning in legal contexts. This view is somewhat obscure. But I offer a good way to clarify it. Think of modifiers like ‘utilitarian’. When we talk of someone’s ‘utilitarian’ obligations, what we mean is their moral obligations according to utilitarianism. In this sense, ‘utilitarian’ modifies ‘obligation’ by picking out a (theoretical) perspective on morality. According to Raz and Shapiro, that’s how ‘legal’ modifies ‘obligation’ too. My main idea in ‘What Does ‘Legal Obligation’ Mean?’ was that we should seek to determine whether this view is true by appealing to linguistic data (which has not been done in this context). Here’s a way to do so: look at how these modifiers can be stacked. We can say: ‘Antigone has a utilitarian obligation to bury Polynices’, or say ‘Antigone has a utilitarian moral obligation to bury Polynices’; both are acceptable, and obviously equivalent. If Raz and Shapiro are right that ‘legal’ modifies ‘obligation’ in the same way that ‘utilitarian’ does, we should find the same pattern when we stack ‘legal’ and ‘moral’. But we don’t. Consider ‘Antigone has a legal obligation not to bury Polynices’ and ‘Antigone has a legal moral obligation not to bury Polynices’; I’m not sure that the second sentence is acceptable at all, but if it is it’s clearly not equivalent to the first. If I’m right about this point, it doesn’t just give us good reason to endorse the common view outside of philosophy of law that ‘obligation’ is univocal and ‘legal’ and ‘moral’ modify it in the same way. It also has implications for the many arguments from Raz, Shapiro, Coleman and so on that invoke their view that ‘obligation’ has a moral meaning in legal contexts: none of them work.

What is normative quietism? And why do you think it is incompatible with realism about normativity?

Daniel: Normative quietism is a view that’s difficult to characterize. According to its proponents, it’s a form of non-skeptical non-naturalist moral realism: normative quietists are committed to the view that there are normative facts and properties, and they don’t reduce to natural facts and properties, and we can know about them. But its proponents also insist that this commitment doesn’t carry the metaphysical and epistemological commitments that everyone else has thought. That’s the part that makes the view ‘quietist’. There’s some onus on the quietist to provide a positive view about what it is for moral properties to exist and how we know about them. The best view here that I know of is T.M. Scanlon’s view in Being Realistic About Reasons. On this view, there are different domains of statements (about math, or about morality, or …), and they have their own internal standards for when those statements are true, and hence when the entities mentioned in those statements exist; we just use those internal standards, so long as the domains don’t license inconsistent commitments. So for numbers to exist just is for statements about numbers to be true according to the standards of math, and we can know they exist by applying the standards of math. And that’s it: numbers don’t have to exist in some Platonic realm, or what have you. This view raises many objections, the main one being that it makes existence come too cheaply. But as stated, quietists are often willing to just bite the bullet on this. Why, they think, should we care about some heavyweight notion of EXISTENCE, when their lightweight notion serves us just fine?

I’ve argued that quietism—or at least, Scanlon’s version thereof—is incompatible with realism because it makes the existence of reasons too cheap. Scanlon might not care about some heavyweight notion of EXISTENCE. But he cares a lot about reasons, and so do other normative quietists: they’re committed to the realist view that reasons can’t just depend on our attitudes, or our conventions, or anything like that. But that commitment is hard to maintain once we apply a quietist view to domains like etiquette, where we have statements about reasons that are true according to the relevant internal standards, and where it’s clear that the truth of those statements about reasons does depend on our attitudes or conventions. I also argue that these standards can license new statements about reasons, without generating inter-domain conflicts (between morality and etiquette, say). But even if that latter argument fails, I’m not sure that it undermines the broader point. Scanlon’s view about inter-domain conflict is pretty unsatisfying. If scientific and supernatural discourse turns out to conflict, why—for a quietist like Scanlon—should we resolve that conflict in favor of science? It’s hard to supply a good answer that doesn’t give up the game. In any case, the basic move in my paper is to argue that if normative quietism is true, we can create reasons just by changing our attitudes or conventions, in a way that is incompatible with realism about reasons. And I think that argument is more likely to put dialectical pressure on quietists who will bite many bullets about EXISTENCE, but want reasons to be sacrosanct.

You have also done some work on the morality of pronoun use. In a paper co-authored with Robin Dembroff you defend a pretty radical claim: that we have a duty not to use any gender-specific pronoun. Why do you think we have this duty?

Daniel: Robin and I give three arguments for this claim in the paper (‘He/She/They/Ze’). The simplest, and perhaps the strongest, comes from thinking about how gender-specific pronouns generate dilemmas where people have to either deceive others or disclose private information. Think of all the cases where people use ‘he’ or ‘she’ to refer to others, or others’ sexual partners, without actually knowing the gender identity of the referent. You might by default end up using ‘she’ to refer to a transgender man, or a genderqueer person, or the male partner of a gay man. And in any such case, you put someone in a position where they must either tacitly affirm something false (that they or their sexual partner is a woman), and in that sense deceive others, or explicitly disclose information that they wanted to keep to themselves. People should have autonomy over whether, when, and where they disclose information about their gender identity or sexuality, especially in a world where disclosing that information can put them at great risk. And they should be able to exercise that autonomy without tacitly deceiving others, for the sake of their own integrity, and for the sake of their reputations. Encoding a gender binary into English third-person pronouns generates dilemmas here quite pervasively. The best solution here, Robin and I argue, is to use a gender-neutral pronoun, like ‘they’ or ‘ze’, for everyone. That way, you don’t presuppose anything about someone’s gender identity when you refer to them, and don’t presuppose anything about their sexuality when you refer to someone’s sexual partner. Pronouns become neutral about gender, in the way that they are (and should be!) neutral about other aspects of our social identities, such as race, class, religion, and so on.

This proposal, Robin and I have both found, generates a lot of pushback. Some of it tracks important considerations. But some of it is downright silly. So many smart people have objected (to us, or in other contexts) that ‘they’ is a plural pronoun, so it’d just be terrible for me to say about Robin, for instance, “They are a great philosopher, and a great friend”: ‘they’ and ‘are’ are syntactically plural, but I’d be using these words in relation to Robin, who is one person. What a calamity! That’s the objection. Strikingly, none of the people who make this point seem to notice that on our view third-person pronouns would function in exactly the same way that second-person pronouns already function. If I say to Robin “You are a great philosopher, and a great friend”, ‘you’ and ‘are’ are syntactically plural, but Robin is still one person. If that is perfectly tolerable, why would it be intolerable to use ‘they’ the same way? The fact that so many smart people make such obviously bad objections should cause them to reconsider whether they’re just grasping for post hoc rationalizations for their views.

What have you been working on lately? Any future projects you can tell us about?

Daniel: In the first part of this summer I’ll be trying to finish up five papers. The one that’s probably most relevant to this crowd is a co-authored paper with Sam Chilovi about Hume’s is/ought gap and what relevance, if any, it has to philosophy of law. Hume’s is/ought gap is often framed as a central challenge to legal positivism. (This is a central motif in Scott Shapiro’s Legality, for instance.) Our basic point is that Hume’s is/ought gap is best understood as a thesis about logical entailment relations between statements, whereas legal positivism is best understood as a thesis about grounding relations between facts, so there’s a significant gap between the two. In other words, even if no set of descriptive ‘is’ statements can entail any normative ‘ought’ statements, it doesn’t follow that normative facts about what you legally ought to do cannot be grounded in descriptive facts about what is the case. We set out what further theses you have to subscribe to in order for Hume’s is/ought gap to pose a genuine problem for legal positivism, and then show that if those theses are indeed true, the problem that’s actually generated is much broader than many people would think: it also undermines, for instance, many prominent versions of moral non-naturalism.

Could you name two papers in legal or moral philosophy that, in your view, haven’t received as much attention as they should?

Daniel: I’m going to name two papers that have received a lot of attention, but less attention from legal and moral philosophers than they deserve. The first is Angelika Kratzer’s 1977 ‘What ‘Must’ and ‘Can’ Must and Can Mean’. This paper is a classic in the literature on the semantics of modals in general, and it includes explicit discussion of ‘must’ and ‘can’ in legal contexts. It has had a lot of uptake in the more formal discussions of the semantics of normative terms (like deontic modals) in metaethics. But it has been neglected in discussions of the semantics of those same normative terms in legal philosophy, and I think that’s regrettable. If philosophers of law are going to discuss the meaning of the legal ‘must’ and ‘can’, they should engage with the voluminous and sophisticated broader literature on deontic modals.

The second is Sarah-Jane Leslie’s 2017 ‘The Original Sin of Cognition’. This paper, along with Rae Langton’s work on pornography, is what first sparked my interest in social and political philosophy of language, which is both theoretically fascinating and politically pertinent. But while Langton’s work has generated a lot of discussion within moral philosophy itself (as well as in work on speech act theory), Leslie’s work has mostly been discussed in relation to the semantics and psychology of ‘generic generalizations’ (statements like, to borrow two of her examples, “Mosquitos carry the West Nile virus” and “Muslims are terrorists”). This is a shame, in part because the moral issues raised warrant serious attention in discussions of discrimination and prejudice. I also think it’s harder for people to keep screaming that objections to how people speak are all instances of political correctness gone mad once they engage with sophisticated discussions of why small linguistic differences in how we describe social groups make dramatic differences to the (often false and harmful) information that they communicate, including to small children. (Full disclosure: Sarah-Jane and I co-authored two papers on these issues, one of which was also with Marjorie Rhodes.)

What kind of writer are you? Do you usually plan each step carefully before writing?

Daniel: Not really. I think about a paper a lot before putting pen to paper, or fingers to keyboard, or whatever. I might scrawl a little on some blank pages, but that’s about all I do by way of planning. Then I’ll write a very rough first draft very quickly—I just force myself to write out a fairly complete version of the idea I’ve been playing around with, as that helps me see the parameters of the whole project far more clearly than outlines ever will. That process takes me a few days. It often takes me a long time to fix these drafts up though.

My impression is that this is a somewhat unusual writing process. But for me it feels very natural. It works well in the winters and summers—when I can write a lot uninterrupted. But it’s not well suited to writing new papers during semesters when I’m teaching; I can edit in dribs and drabs, but I struggle to write new papers that way.

What was the most helpful advice someone gave you while you were in grad school?

Daniel: One piece of advice that really stuck with me was from my advisor, Michael Smith, right after I finished Generals (my qualifying exams). I didn’t do brilliantly on those exams, and I wasn’t entirely sure where I’d gone wrong. Michael said that I was trying to write for professional philosophers, but what I needed to do was write for smart novices: that is, don’t try to speak to and impress the cognoscenti; explain and motivate everything from the ground up. It’s a hard ideal to meet, especially when journals have strict word limits, but it’s still an important goal to strive towards.

Could you name two books in philosophy and two books outside philosophy that have influenced you the most?

Daniel: Within philosophy, probably Michael Smith’s The Moral Problem and Robert Nozick’s Anarchy, State, and Utopia. David Braddon-Mitchell taught a class on The Moral Problem while I was at Sydney. I ended up writing my undergraduate thesis on it. It lays out the central issues and the available positions so clearly. I love how systematic it is. Anarchy, State, and Utopia is completely different. In a sense, it’s systematic: Nozick is building a case for libertarianism. But what’s wonderful to me about that book is that it’s full of so many interesting distinctions, asides, thought experiments, and one-off arguments. In other words: I love how creative and wide-ranging it is. I’m not sure if these are the two books in philosophy that have influenced me the most. But I’ve been influenced a great deal by both in the way that I aspire to do philosophy, far more so than in the specific views that either book defends.

Outside of philosophy, I’m even less sure what’s influenced me the most. But here are two books that influenced me a lot. One is José Saramago’s The Gospel According to Jesus Christ. I read this when I was a teenager, and I mulled over it for a long while afterwards: it was such a provocative and rewarding investigation of guilt, and so full of interesting instances where characters like the Pastor push back, in a Socratic fashion, on common moral views.

The other is What We See When We Read by Peter Mendelsund. It’s fairly recent, but it influenced me a lot in two ways. First, it made me realize quite vividly that the way I read isn’t unusual or defective. I’m pretty literal. I don’t form mental pictures when I read. I’d had the impression, from talking to others, that there were basically little movies playing in their heads when they read novels. (When, say, the Harry Potter films came out, so many people told me that the characters didn’t look the way they’d pictured them. I hadn’t pictured them.) This book made me realize that other people are far more like me than I’d thought: if they’re forming mental pictures at all, the pictures are either incomplete or inconsistent. The second point of influence was more indirect. I’ve long been attracted to the view that our own minds are mostly opaque to us: we don’t have very immediate and reliable access to what we want, intend, believe, feel, and so on. (Some of this, incidentally, comes from being a migraineur. A common experience for me is the sudden realization that a migraine has crept up on me, and I’ve been in progressively increasing pain for the past several hours.) Prior to reading What We See When We Read I hadn’t thought much about opacity in relation to the imagination. It’s an interesting case because imagining is an active enterprise.

You can find more information about Daniel here.

 
