One project that I have been pottering away at recently is an attempt to cash out what could be meant by something like Philip Pettit's theory of 'freedom as non-domination' (meaning that we are free to the extent that we can go about our affairs without the arbitrary intervention of another party). To do so I develop a framework for highlighting some important modal considerations that are involved in our political deliberation. One of the dimensions on which we evaluate courses of action is how likely they are to achieve their aims. I believe that we could provide a framework, perhaps even a semantics, for this evaluative dimension. My proposal is to discriminate amongst courses of action according to their 'resilience to counterfactuals' (as I call it), meaning the range of contingencies wherein they would succeed. One course of action is more resilient to counterfactuals than another if there are more possible worlds (within some specified degree of similarity to the actual world) where that course could successfully be pursued. The idea is that this might be a useful way to articulate how we distinguish between secure states of affairs and precarious ones – in the secure state of affairs, there are more contingencies in which we can accomplish what we might set out to do. Secure states of affairs lead to plans with greater resilience to counterfactuals.
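To fix ideas, here is a toy computational sketch of the comparison I have in mind. Everything in it – the worlds, the similarity measure, the success conditions – is an invented stand-in, and nothing here is meant to settle what similarity between worlds actually amounts to.

```python
def resilience(plan_succeeds, worlds, similarity, threshold):
    """Count the worlds within the similarity threshold of the actual
    world in which the plan would succeed."""
    nearby = [w for w in worlds if similarity(w) >= threshold]
    return sum(1 for w in nearby if plan_succeeds(w))

# Worlds represented as dicts of contingencies (purely illustrative).
worlds = [
    {"strike": False, "funding": True},
    {"strike": True,  "funding": True},
    {"strike": False, "funding": False},
    {"strike": True,  "funding": False},
]

# Treat all four worlds as relevantly nearby for this toy example.
similarity = lambda w: 1.0

# Plan 1 succeeds only if there is funding and no strike;
# plan 2 succeeds whenever there is funding.
plan1 = lambda w: w["funding"] and not w["strike"]
plan2 = lambda w: w["funding"]

resilience(plan1, worlds, similarity, 0.5)  # 1
resilience(plan2, worlds, similarity, 0.5)  # 2
```

The point of the sketch is just the ordering: plan 2 succeeds in more of the nearby worlds than plan 1, and so counts as more resilient to counterfactuals.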
An application of the project, a straightforward and I hope informative one, is that it would allow us to discriminate between the status of different political rights based on how secure those rights are (meaning, how resilient to counterfactuals they are). I also think, and am willing to defend, that considerations of this type are one of the reasons why political rights are such a central feature of our social lives (and would be even if there were no natural rights): legal protections of this kind cut down the range of contingencies that you need to secure yourself against, and thus give you a greater safety margin within which to deliberate towards your chosen end.
This approach has important advantages over the decision-theoretic approach for judging among possible courses of action. Attempts to do feasibility studies in the usual decision-theoretic way -- looking at the probability that certain situations would arise, assigning outcomes to the situations and deriving their expected utilities -- run into serious problems, given that the result of such studies depends on what people intend to do, but people very often radically change their courses of action given the circumstances. This means that you have to redo the feasibility study for each change of plan by anybody in the environment, leading to a combinatorial explosion of things to consider (if re-doing the sums is even possible at all), which very quickly makes the results of such a study intractable. The resilience-to-counterfactuals approach handles this feature of situations much better, since the fact that there are a number of different ways someone could try to achieve their goals is a premise of the analysis.
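The contrast can be made vivid with a small sketch. On the resilience picture we can measure an agent's security with respect to a goal by counting the worlds in which some available plan succeeds, so alternative plans enter as a premise of the analysis rather than as a trigger for redoing it. All the names below are invented for illustration.

```python
def goal_resilience(plans, worlds):
    """Count worlds in which at least one of the agent's plans succeeds."""
    return sum(1 for w in worlds if any(plan(w) for plan in plans))

# Invented contingencies, as before.
worlds = [
    {"strike": False, "funding": True},
    {"strike": True,  "funding": True},
    {"strike": False, "funding": False},
]

by_negotiation = lambda w: w["funding"] and not w["strike"]  # needs calm and cash
by_loan        = lambda w: w["funding"]                      # needs only cash

# Adding an alternative plan can only widen the set of favourable worlds;
# no expected-utility table has to be recomputed when a plan changes.
goal_resilience([by_negotiation], worlds)           # 1
goal_resilience([by_negotiation, by_loan], worlds)  # 2
```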
There is more to be said here, like how a formal semantics for such a framework might look -- a possible-worlds semantics is very suggestive here, but there are kinks to work out. I also want to see how this compares to other approaches in the literature. I have a decent idea of how it relates to Pettit's proposal, but given the influence of the 'freedom as non-domination' theory, there are a lot of different ways this gets cashed out in the literature. One development in particular I want to take a closer look at is Dowding and Van Hees on 'counterfactual success', which seems similar but importantly different. There is also the question of how much we can conclude about some particular plan's resilience to counterfactuals. If A has a plan to acquire X that is relatively resilient to counterfactuals, does this have significance beyond just that particular plan? Does it tell us something about all A's possible X-involving plans? (I would hope so.) Does it tell us something about A's security in general? Or perhaps only about A's security in his or her X-involving domains. (What is an X-involving domain, then?) These questions are so suggestive to me that I've proposed this as a possible PhD project. However it might turn out, this is something I've spent some time thinking about in the past, and I'm likely to keep coming back to this issue for the foreseeable future.
Monday, January 31, 2011
Wednesday, January 19, 2011
My purpose here is to present a formal framework for the relationship between intuitions and concrete particular judgements in moral cases and the general theories which are supposed to capture them. The framework gives that relationship an interpretation in intuitionistic (or constructivist) logic, with which I hope to provide an interesting way to articulate the difference between theoretical and anti-theoretical approaches to ethics. On this interpretation, the propositions to be evaluated are particular moral judgements, and they are evaluated in terms of whether they are endorsed by the general theory in question. What the particular judgements or the theory are is unimportant for my present purposes, since what I am after here is to make a general point about the way these things might relate.
The feature of intuitionistic logic which interests me here is that it gives a way to articulate how theories might only imperfectly account for the particular cases they are supposed to address. On intuitionistic logic, a theory is gradually constructed over successive stages of reasoning, where there are conclusions you can reach at later stages which would be unwarranted at earlier ones. In the paradigmatic interpretation, concerning mathematical reasoning, the propositions are mathematical theorems, and to assert a proposition is to claim that there is a mathematical proof of it (and asserting a negation means that there is a proof that the proposition can't be true). The theorems proved at stage n can then be used to prove further theorems at stage n+1 which could not be proved or disproved with the resources available at stage n. Of particular interest here is that intuitionistic logic allows for a final stage of reasoning, where there exists either a proof or a disproof for every proposition. At this final stage intuitionistic logic is equivalent to classical logic (for instance, the law of excluded middle then holds and, more importantly, the logic becomes complete – but only at the final stage).
My claim here is that theoretical approaches to ethics are ones where, if we give a suitable intuitionistic interpretation to the field of particular judgements and the general theory, the approach asserts that there is some (actual or possible) final stage of theory construction where every particular judgement is comprehensively captured in the theory. Anti-theoretical approaches deny this. The interpretation would be as follows: there is some theory T at stage s which validates inferences to certain moral judgements (A, B, C, etc.) – this is meant as a straightforward analogy to the mathematical case. This allows for an interesting contrast between approaches which allow for a final stage of theory construction and ones that don't. If we were at the final stage of our theory T (let's call it TΩ), then the content and import of every particular judgement would be accounted for by T. In contrast, if we aren't (and might never be) at TΩ, then there remains some particular judgement on the strength of which T can be modified, and needs to be if T is to be a comprehensive guide to our moral lives. This seems to me to be equivalent to the anti-theoretical claim in ethics: that there is no systematisation of our ethical practice which can hope to capture the variety and complexity of that practice, but that there will always be a remainder which is not accounted for in the theory.
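A minimal sketch of the staged picture, with invented stages and judgements, might run as follows. The 'final stage' claim then amounts to this: at TΩ (written T_omega below) every judgement is settled one way or the other, which is just where excluded middle comes to hold.

```python
# Stages of theory construction; all labels are invented for illustration.
stages = ["T1", "T2", "T_omega"]
judgements = ["A", "B", "C"]

# At each stage, a judgement is 'proved', 'refuted', or undecided (absent).
validates = {
    "T1":      {"A": "proved"},
    "T2":      {"A": "proved", "B": "proved"},
    "T_omega": {"A": "proved", "B": "proved", "C": "refuted"},
}

def decided_everywhere(stage):
    """Excluded middle holds at a stage iff every judgement is settled."""
    return all(j in validates[stage] for j in judgements)

decided_everywhere("T2")       # False: C is still open at T2
decided_everywhere("T_omega")  # True: the final stage settles everything
```

On this toy rendering, the theoretical approach asserts that some stage like T_omega exists (actually or possibly); the anti-theoretical approach denies that any stage ever settles everything.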
My claim is very limited – I am not proposing something like a logic in which to couch our moral reasoning. But even in the limited role as a way to make an important distinction, my proposal here has its virtues. Casting the development of an ethical theory as a series of constructive stages matches very well with, for instance, the view of our understanding of ethics as an expanding circle of moral concern, where as we develop our moral sensibilities we come to see duties to things we hadn't considered important earlier. It is also a natural ally of something like the reflective equilibrium approach to moral reasoning, where we have a to-and-fro between general theories and particular moral judgements, each being revised on the strength of the other. But intuitionistic logic is not the same as reflective equilibrium: for one thing, the latter isn't a logic; for another, more troublingly, intuitionistic logic doesn't allow for the type of oscillation between different versions of particular claims that reflective equilibrium requires.
What I mean is that, once we reach some stage Tn where moral claims A, B and C are validated (given comprehensive and convincing support by T), then at no T after Tn will A, B and C fail to be validated. Reflective equilibrium isn't like that: up until the final equilibrium is reached, everything is up for modification. This leads to what I see as the greatest problem for my proposal: interpreting particular moral claims in the way we do theorems in mathematics might not be very convincing, especially because we are often very willing to give up particular judgements or intuitions which seemed certain to us earlier.
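The difference can be put as a simple persistence check: in the staged picture the set of validated judgements can only grow, while a reflective-equilibrium history may retract earlier judgements. The histories below are, again, invented for illustration.

```python
def persistent(history):
    """history: list of sets of validated judgements, in stage order.
    Persistence holds iff each stage's validations carry over to the next."""
    return all(earlier <= later
               for earlier, later in zip(history, history[1:]))

staged      = [{"A"}, {"A", "B"}, {"A", "B", "C"}]  # monotone growth
equilibrium = [{"A"}, {"A", "B"}, {"B", "C"}]       # A retracted at the end

persistent(staged)       # True
persistent(equilibrium)  # False: reflective equilibrium can drop A
```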
I have two replies to this worry, one ambitious and another more modest. The ambitious reply would try to dodge this problem by being very careful about what the scope of the particular moral claims is. Instead of having a proposition A like 'intentionally killing another human being is always wrong', which we might believe to a greater or lesser extent given variations in our background theories and our moral theory as a whole, we can see the pronouncements at earlier stages of T to be on propositions like B = 'intentional killing of a human being without special societal sanction is always wrong' and C = 'there is some appropriate societal sanction which would allow for the intentional killing of another human being', where T1 might validate B but not C, whereas T2 might also validate C's negation (if our moral theory ended up being a pacifist one, for instance). Having finer-grained judgements of this type makes it far more believable that we can at some stage of the reasoning set some of them in stone (at least as regards the further development of T). The more modest reply, in contrast, would be to point out that the distinction I'm trying to make between theory and anti-theory holds even if there are just two or three stages of reasoning, where when we go to TΩ we settle almost every moral question in one bound. This is not a far-fetched state of affairs: if we take many defenders of various theories at their word, then when we carefully and conscientiously apply the theory in question we at least have a procedure for answering any moral question (difficult to apply as it might be).
There is a further worry, about whether the claim that a theoretical approach to ethics would entail asserting that there is some final stage for the theory in question is too strong a claim – it at least seems to assert the existence of a complete decision procedure which either asserts or denies any possible particular moral judgement, which many people who see themselves as opposing moral anti-theory would also deny. But I will leave that question unanswered at the moment.