Tuesday, April 15, 2014

Will sex workers be replaced by robots? (A Precis)


Daryl Hannah, Blade Runner

I recently published an article in the Journal of Evolution and Technology on the topic of sex work and technological unemployment (available here, here and here). It began by asking whether sex work, specifically prostitution (as opposed to other forms of labour that could be classified as “sex work”, e.g. pornstar or erotic dancer), was vulnerable to technological unemployment. It looked at contrasting responses to that question, and also included some reflections on technological unemployment and the basic income guarantee.

Though I say so myself, I thought the arguments in the paper were interesting, and I’d like to hear what other people think about them. But since people are busy, and may not be inclined to read the full 8,000 words, I thought I would provide a brief precis of the main arguments here. That might persuade some to read the full thing, and others to offer their opinions. So that’s what I’m going to do. I’m going to focus solely on the arguments relating to the replacement of sex workers by robots, leaving the basic income arguments out.

This is the first time I’ve ever tried to summarise my own work on the blog — I usually focus on the work of others — and it comes with the caveat that there is much more detail and supporting evidence in the original article. I’m just giving the bare bones of the arguments here. No doubt everyone else whose work I’ve addressed on this blog wishes I added a similar caveat before all my other posts. In my defence, I hope that such a caveat is implied in all these other cases.


1. The Case for the Displacement Hypothesis
Those who think that prostitutes could one day be rendered technologically unemployed by sophisticated sexual robots are defenders of something I call the “displacement hypothesis”:

Displacement Hypothesis: Prostitutes will be displaced by sex robots, much as other human labourers (e.g. factory workers) have been displaced by technological analogues.

As I note in the article, a defence of the displacement hypothesis is implicit in the work of several writers. The most notable of these is, perhaps, David Levy, whose 2007 book Love and Sex with Robots remains the best single-volume work on this topic. In the article, I try to clarify and strengthen the defence of the displacement hypothesis.

I argue that it depends on two related theses:

The Transference Thesis: All the factors driving demand for human prostitutes can be transferred over to sex robots, i.e. the fact that there is demand for the former suggests that there will also be demand for the latter.
The Advantages Thesis: Sex robots will have advantages over human prostitutes that will make them more desirable/more readily available.

I then proceed to consider the arguments in favour of both.

The argument for the transference thesis depends on a close analysis of the factors driving demand for human prostitution. Extrapolating from several empirical studies of human demand, these factors can be reduced to four general categories: (i) people demand prostitutes because they are seeking the kind of emotional connection/attachment that is typical in romantic human sexual relationships; (ii) people demand prostitutes because they are seeking sexual variety (both in terms of partners and types of sex act); (iii) people demand prostitutes because they desire sex that is free from the complications and expectations of non-commercial sex (basically, the inverse of the first reason); and (iv) people demand prostitutes because they are unable to find sexual partners through other means.

To defend the transference thesis, one simply needs to argue that sex robots can cater to all four of these demands. So you must argue that it will be possible to create sex robots that develop emotional bonds with their users (or that refrain from doing so, for those who want complication-free sex); that it will be possible to create sex robots that cater to the need for variety; and that it will be possible to supply sex robots to those who are unable to find sexual partners by other means.

The argument for the advantages thesis depends on identifying all the ways in which sex robots could be more desirable and more readily available than human prostitutes. In the article, I list four types of advantage that sex robots could have over human sex workers. First, there are the legal advantages: prostitution is illegal in several countries whereas the production of sex robots is not (I also suggested that sex robots could cater to currently illegal forms of sexual deviance, though this is more controversial). Second, there are the ethical advantages: less need to worry about trafficking or objectification. Third, there are the health risk advantages: less risk of contracting STDs (though this depends on sanitation). Fourth, and finally, there are the advantages of production and flexibility: it might be easier to produce sex robots en masse to cater for demand, and to re-programme them to cater to new desires.

When combined, I suggest that the transference thesis and the advantages thesis present a good case for the displacement hypothesis. An argument diagram summarising what I have said and clarifying the logical connections is provided below.




2. The Case for the Resiliency Hypothesis
Although I accept that there is a reasonable case for the displacement hypothesis, one of my primary goals in the article is to suggest that there is also a case to be made for the contrasting view. Thus, I introduce something I call the “resiliency hypothesis”:

Resiliency Hypothesis: Prostitution is likely to be resilient to technological unemployment, i.e. demand for and supply of human sexual labour is likely to remain competitive in the face of sex robots.

As with the displacement hypothesis, the case for the resiliency hypothesis rests on two theses:

The Human Preference Thesis: Ceteris paribus, if given the choice between sex with a human prostitute or a robot, many (if not most) humans will prefer sex with a human prostitute.
The Increased Supply Thesis: Technological unemployment in other industries is likely to increase the supply of human prostitutes.

In retrospect, I possibly should have called the second of these the “Increased Supply and Competitiveness Thesis”, since the claim is not just that there will be an increased supply, but that those drawn into sex work will do everything they can to remain competitive against sex robots (thereby countering some of the advantages robots have over humans). I think this is clear in how I defend the thesis in the article, just not in the name I gave it.

Anyway, I rested my defence of the human preference thesis on three arguments and bits of evidence. The first was largely an argument from philosophical intuition. I suggested that it seems plausible to suppose that we would prefer human sex partners to robotic ones. I based this on the belief that ontological history matters to us in matters both related and unrelated to sex. Thus, for example, we care about where food or fine art comes from: it’s more valuable if it has the right ontological history (not just because it looks or tastes better). We also seem to care about where our sexual partners come from: witness, for example, the reaction to transgendered persons, who are sometimes legally obliged to disclose their gender history. (I’m not saying that this reaction is a good thing, just that it is present).

It has been pointed out to me — by Michael Hauskeller — that my ontological history argument may simply beg the question. It assumes that sex robots will have an ontological history that fails to excite us as much as the ontological history of human sex workers, but that is the very issue under debate: would we prefer humans to robots? On reflection, Hauskeller looks to be right about this. Additional evidence is needed to show that the ontological history we desire is a human one. I would also add that if our concern with ontological history is irrational or prejudiced, it may be possible to overcome it. Thus, even if humans are preferred in the short term, they may not be in the long term.

Fortunately, there were two other arguments for the human preference thesis. One was based on some polling data suggesting that humans were not all that willing to have sex with a robot (though I did critique the poll as well). The other was based on the uncanny valley hypothesis. I reviewed some of the recent empirical literature suggesting that this is a real effect, and argued that it might not even be a valley.

The defence of the increased supply thesis rested on a simple argument (the numbering may look a bit weird here but remember that’s because everything I’ve said is going into an argument diagram at the end):


  • (16) An increasing number of jobs, including highly skilled jobs, are vulnerable to technological unemployment. 
  • (17) If an increasing number of jobs are vulnerable to technological unemployment, people will be forced to seek other forms of employment (all else being equal). 
  • (18) When making decisions about which form of employment to seek, people are likely to be attracted to forms of employment: (i) in which there is a preference for human labour over robotic labour; (ii) with low barriers to entry; and (iii) which are comparatively well-paid. 
  • (19) Prostitution satisfies all three of these conditions (i) - (iii). 
  • (11) Therefore, there is likely to be an increased supply of human prostitution.


I looked at each of the premises of this argument in the paper, though I focused most attention on premise (19). In support of this, I considered evidence from economic studies of prostitution. I also followed this with some argumentation on the ways in which human prostitutes could counter the advantages that sex robots have over them.

That gives us the following argument diagram.



That’s it then. I hope this clarifies the case for the displacement and resiliency hypotheses. For more detail and supporting evidence please consult the original article. There is also some follow-up in the article about the implications of all this for the basic income guarantee.

Monday, April 14, 2014

Should we bet on radical enhancement?



(Previous Entry, Series Index)

This is the third part of my series on Nicholas Agar’s book Truly Human Enhancement. As mentioned previously, Agar stakes out an interesting middle ground on the topic of enhancement. He argues that modest forms of enhancement — i.e. up to or slightly beyond the current range of human norms — are prudentially wise, whereas radical forms of enhancement — i.e. well beyond the current range of human norms — are not. His main support for this is his belief that in radically enhancing ourselves we will lose certain internal goods. These are goods that are intrinsic to some of our current activities.

I’m offering my reflections on parts of the book as I read through it. I’m currently on the second half of Chapter 3. In the first half of Chapter 3, Agar argued that humans are (rightly) uninterested in the activities of the radically enhanced because they cannot veridically engage with those activities. That is to say: because they cannot accurately imagine what it is like to engage in those activities. I discussed this argument in the previous entry.

As Agar himself notes, the argument in the first half of the chapter only speaks to the internal goods of certain human activities. In other words, it argues that we should keep enhancements modest because we shouldn’t wish to lose goods that are intrinsic to our current activities. This ignores the possible external goods that could be brought about by radical enhancement. The second half of the chapter deals with these.


1. External Goods and the False Dichotomy
It would be easy for someone reading the first half of chapter 3 to come back at Agar with the following argument:

Trumping External Goods Argument: I grant that there are goods that are internal and external to our activities, and I grant that radical enhancement could cause us to lose certain internal goods. Still, we can’t dismiss the external goods that might be possible through radical enhancement. Suppose, for example, that a radically enhanced medical researcher (or team of researchers) could find a cure for cancer. Wouldn’t it be perverse to forgo this possibility for the sake of some internal goods? Don’t certain external goods (which may be made possible by radical enhancement) trump internal goods?

The proponent of this argument is presenting us with a dilemma, of sorts. He or she is saying that we can stick with the internal and external goods that are possible with current or slightly enhanced human capacities, or we can go for more and better external goods. It would seem silly to opt for the former when the possibilities are so tantalising, especially given that Agar himself acknowledges that new internal goods may be possible with radically enhanced abilities.

The problem with this argument is that it presents us with a false dilemma. We don’t have to pick and choose; we can have the best of both worlds. How so? Well, as Agar sees it, we don’t have to radically enhance our abilities in order to secure the kinds of external goods evoked by the proponent of the trumping argument. We have other kinds of technology (e.g. machines and artificial intelligences) that can help us to do this.

What’s more, as Agar goes on to suggest, these other kinds of technology are far more likely to be successful. Radical forms of enhancement need to be integrated with the human biological architecture. This is a tricky process because you have to work within the constraints posed by that architecture. For example, brain-computer interfaces and neuroprosthetics, currently in their infancy, face significant engineering challenges in trying to integrate electrodes with neurons. External devices, with some user-friendly interface, are much easier to engineer, and don’t face the same constraints.

Agar illustrates this with a thought experiment:

The Pyramid Builders: Suppose you are a Pharaoh building a pyramid. This takes a huge amount of back-breaking labour from ordinary human workers (or slaves). Clearly some investment in worker enhancement would be desirable. But there are two ways of going about it. You could either invest in human enhancement technologies, looking into drugs or other supplements to increase the strength, stamina and endurance of workers, maybe even creating robotic limbs that graft onto their current limbs. Or you could invest in other enhancing technologies such as machines to sculpt and haul the stone blocks needed for construction. 
Which investment strategy do you choose?

The question is a bit of a throwaway since, obviously, Pharaohs are unlikely to have the patience for investment of either sort. Still, it seems like the second investment strategy is the wiser one. We’ve had machines to assist construction for a long time now that aren’t directly integrated with our biology. They are extremely useful, going well beyond what is possible for a human. This suggests that the second option is more likely to be successful. Agar argues that this is all down to the integration problem.


2. Gambling on radical enhancement: is it worth it?
I think it’s useful to reformulate Agar’s argument using some concepts and tools from decision theory. I say this because many of Agar’s arguments against radical enhancement seem to rely on claims about what we should be willing (or unwilling) to gamble on when it comes to enhancement. So it might be useful to have one semi-formal illustration of the decision problems underlying his arguments, which can then be adapted for subsequent examples.

We can do this for the preceding argument by starting with a decision tree. A decision tree is, as the name suggests, a tree-like diagram that represents the branching possibilities you confront every time you make a decision. The nodes in this diagram either depict decision points or points at which probabilities affect different outcomes (sometimes we think of this in terms of “Nature” making a decision by determining the probabilities, but this is just a metaphor).

Anyway, the decision tree for the preceding argument works something like this. At the first node, there is a decision point: you can opt for radical enhancement or modest (or no) enhancement. This then branches out into two possible futures. In each of those futures there is a certain probability that we will secure the kinds of external goods (like cancer cures) alluded to by the proponent of the trumping argument, and a certain (complementary) probability that we won’t. So this means that either of our initial decisions leads to two further possible outcomes. This gives us four outcomes in total:

Outcome A: We radically enhance, thereby losing our current set of internal goods, and fail to secure trumping external goods.
Outcome B: We radically enhance, thereby losing our current set of internal goods, but succeed in securing trumping external goods.
Outcome C: We modestly enhance (or don’t enhance at all), keeping our current set of internal goods, and fail to secure trumping external goods through other technologies.
Outcome D: We modestly enhance (or don’t enhance at all), keeping our current set of internal goods, but succeed in securing trumping external goods through other technologies.

This is all depicted in the diagram below.




With the diagram in place, we have a clearer handle on the decision problem confronting us. Even without knowing what the probabilities are, or without even having a good estimate for those probabilities, we begin to see where Agar is coming from. Since radical enhancement always seems to entail the loss of internal goods, modest enhancement looks like the safer bet (maybe even a dominant one). This is bolstered by Agar’s argument that we have good reason to suppose that the probability of securing the trumping external goods is greater through the use of other technologies. Hence, modest enhancement really is the better bet.
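
To make that reasoning concrete, here is a toy numerical version of the decision problem. I should stress that the utilities and probabilities below are entirely my own illustrative assumptions (Agar assigns no numbers); the only structural claims doing real work are that keeping internal goods adds value, so that C beats A and D beats B, and that the chance of securing the trumping external goods is at least as high on the modest branch:

```python
# Toy formalisation of the enhancement decision tree. All numbers are
# illustrative assumptions; Agar assigns no utilities or probabilities.

# Utilities for the four outcomes, on an arbitrary scale. Keeping
# internal goods adds value (C > A, D > B); securing the trumping
# external goods also adds value (B > A, D > C).
utilities = {"A": 0, "B": 20, "C": 10, "D": 30}

# Assumed probabilities of securing the trumping external goods on each
# branch. Agar claims p_modest >= p_radical, because machines and AI can
# deliver the external goods without facing the integration problem.
p_radical = 0.4
p_modest = 0.5

ev_radical = (1 - p_radical) * utilities["A"] + p_radical * utilities["B"]
ev_modest = (1 - p_modest) * utilities["C"] + p_modest * utilities["D"]

print(f"Expected value, radical enhancement: {ev_radical}")  # 8.0
print(f"Expected value, modest enhancement:  {ev_modest}")   # 20.0
```

Under those structural assumptions, modest enhancement comes out ahead however the exact numbers are varied, which is the sense in which it looks like a dominant option.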

There are a couple of problems with this formalisation. First, the proponent of radical enhancement may argue that it doesn’t accurately capture their imagined future. To be precise, the proponent could argue that I haven’t factored in the new forms of internal good that may be made possible with radically enhanced abilities. That’s true, and that might be a relevant consideration, but bear in mind that those new internal goods are, at present, entirely unknown. Is it not better to stick with what we know?

Second, I think I’m being a little too coarse-grained in my description of the possible futures involved. I think it’s odd to suggest, as the decision tree does, that there could be a future in which we never achieve certain trumping external goods. That would suppose that there could be a future in which there is no progress on significant moral problems at our current level of technology. That seems unrealistic to me. Consequently, I think it might be better to reformulate the decision tree with a specific set of external goods in mind (e.g. a cure for cancer, an end to world hunger, a reduction in childhood mortality, and so on).


3. The External Mind Objection
There is another objection to Agar’s argument that is worth addressing separately. It is one that he himself engages with. It is the objection from the proponent of the external mind thesis. This thesis can be characterised in the following manner:

External Mind Thesis: Our minds are not simply confined to our skulls or bodies. Instead, they spill out into the world around us. All the external technologies and mechanisms (e.g. calculators, encyclopedias) we use to help us think and interact with the world are part of our “minds”.

The EMT has been famously defended by Andy Clark (and David Chalmers). Clark argues that the EMT implies that we are all cyborgs because of the way in which technology permeates our lives. The EMT can be seen to follow from a functionalist theory of mind.

The thing about the EMT is that it might also suggest that the distinction Agar draws between different kinds of technological enhancement is an unprincipled one. Agar wants to argue that technologies that enhance by being integrated with our biology are different from technologies that enhance by providing us with externally accessible user interfaces. An example would be the difference between a lifting machine like a forklift and a strength-enhancing drug that allows us to lift heavier objects. The former is external and non-integrated; the latter is internal and integrated. The defender of the EMT argues that this is a distinction without a difference. Both kinds of technological assistance are part of us, part of how we interact with and think about the world.

Agar could respond to this by simply rejecting the EMT, but he doesn’t do this. He thinks the EMT may be a useful framework for psychological explanation. What he does deny, however, is its usefulness across all issues involving our interactions with the world. There may be some contexts in which the distinction between the mind/body and the external world counts for something. For example, in the study of the spread of cancer cells, the distinction between what goes on in your body and what goes on in the world outside it is important (excepting viral forms of cancer). Likewise, the distinction between what goes on in our heads and what goes on outside might count for something. In particular, if we risk losing internal goods through integrated enhancement, why not stick with external enhancement? This doesn’t undermine Clark’s general point that we are “cyborgs”; it just says that there are different kinds of cyborg existence, some of which might be more valuable to us than others.

I don’t have any particular issue with this aspect of Agar’s argument. It seems correct to me to say that the EMT doesn’t imply that all forms of extension are equally valuable.

That brings us to the end of chapter 3. In the next set of entries, I’ll be looking at the arguments in chapter 4, which have to do with radical enhancement and personal identity.

Sunday, April 13, 2014

Veridical Engagement and Radical Enhancement



(Previous Entry) (Series Index)

This is the second post in my series on Nicholas Agar's new book Truly Human Enhancement. The book offers an interesting take on the enhancement debate. It tries to carve out a middle ground between bioconservatism and transhumanism, arguing that modest enhancement (within or slightly beyond the range of human norms) is prudentially valuable, but that radical enhancement (well beyond the range of human norms) may not be.

As noted in the previous entry, the purpose of this series is to share my reflections on the book as I work my way through the chapters. Today's post is the first of two on the contents of chapter 3. To follow that chapter, you need to familiarise yourself with the conceptual framework set out in chapter 2. Fortunately, I covered that in the previous entry. I recommend reading that before proceeding with this post. I'm serious about this: if you don't know what is meant by terms like "prudential value", "intrinsic value" or "internal goods", then you will miss out on aspects of this discussion.

Anyway, assuming you are familiar with these concepts, we can proceed. Chapter 3 is entitled "What interest do we have in superhuman feats?". It is an appropriate title. The chapter itself looks at two related arguments that respond to that question. The first holds that we have little interest in superhuman feats, at least in terms of their relationship to intrinsically valuable internal goods. The second holds that we might have great interest in them, if they were the only way of bringing about certain external goods, but as it happens they aren't the only way of doing this.

I'm going to look at each of these arguments over the next two posts, starting today with the first.


1. Are we uninterested in superhuman sports and games?
To support the first argument, Agar uses some illustrations from the world of human sports and games. The illustrations supposedly demonstrate that we do as a matter of fact lack an interest in superhuman versions of these activities. This is then used as the springboard for an argument about why we lack this interest.

The first example is that of the marathon, specifically Haile Gebrselassie's victory in the Berlin marathon in 2008. Gebrselassie ran that marathon in 2hr 03mins 59secs, which then sparked a debate about whether we would soon see a sub-two-hour marathon. Agar suggests that Gebrselassie's achievement and the subsequent debate are interesting to us; that we can relate to and value these possibilities.

Contrast this with a (for now) hypothetical superhuman marathon. Agar refers to Robert Freitas's idea of the respirocyte. This is a one-micron-wide nanobot that could be used to replace human haemoglobin. This could massively increase the oxygen-carrying capacity of our blood, allowing us to run at sprint speed for 15 minutes or more. If we enhanced ourselves with respirocytes, the traditional 26.2 mile marathon would no longer be of interest. We would have to invent a new race, perhaps a 262 mile marathon, to create a challenge worthy of our abilities. Agar's suggestion is that we are less interested and less excited by this possibility.

That example might not work for you. So here's another, with a much starker contrast. Consider the game of chess. As you all know, Garry Kasparov -- probably the greatest human chess player of all time -- was defeated by the IBM computer Deep Blue in 1997. Since then, computers have been decisively better than humans at chess (though teams of computers and humans are still better than computers alone).

Nevertheless, despite the clear superiority of computers over human beings, we are not interested in or engaged by the prospect of computer-against-computer competitions (unless, perhaps, we are computer programmers). Human competitions still take place and still dominate the popular imagination. Why is this?


2. Veridical Engagement and Simulation Theory
Agar answers this question by appealing to the concept of veridical engagement. We can define this in the following manner:

Veridical Engagement: We veridically engage with an activity or state of being when we can (more or less) accurately imagine ourselves performing that activity or being in that state.

This definition is mine, not Agar's. I based it on what he wrote but there may be some differences. He speaks solely to activities since the two examples he uses (marathon running and chess) are activities, but I've broadened it out to cover states of being since they would also seem to fit with his argument, and to be relevant to the enhancement debate. I've also added the "more or less" bit before "accurately imagine". When he initially introduces the concept, Agar only refers to "accurately imagine", but later he acknowledges that this comes in degrees. So I think, for him, the imagining does not need to be perfect, just close to reality.

Why is the concept helpful? In essence, Agar argues that our lack of interest in superhuman feats can be explained by our inability to veridically engage with those feats. We have no interest in the achievements of Deep Blue because we cannot think like a computer. To think like Deep Blue would require us to compute 200,000,000 positions per second. We could at best perform a very poor facsimile of this. That's very different from how we engage with Kasparov's achievements. As Agar himself puts it:

No matter how soundly Deep Blue beats Kasparov, a human player will always play chess in ways that interests human spectators to a greater degree than Deep Blue and its successors. Human chess players of modest aptitude can read Kasparov's annotations and thereby gain insight into his stratagems. Kasparov's chess play is vastly superior to that of his fans. But he, presumably as a very young player, passed through a stage in his development [that was]...similar to that of his fans. 
(Agar, 2014, p. 41)

Agar offers us a psychological theory that accounts for our ability to veridically engage with certain activities and states of being. This is simulation theory, which holds that we understand the behaviour of other human beings by performing a simulation of the mental processes that lie behind that behaviour. Gregory Currie has used this to explain how we engage with fiction. It also helps to explain why we resort to anthropomorphism when imagining non-human animal behaviour.

So the upshot here for Agar is that we don't care about superhuman endeavours because we can't veridically engage with them. Agar is quick to point out that this doesn't mean that superhuman feats are devoid of intrinsic value. It could be that once we become superhuman, we will find our new capacities thrilling and begin to appreciate a whole new set of goods (like how we appreciate new things when we transition from childhood to adulthood). Nevertheless, it does suggest, to him at least, that superhuman activities and states of being lack intrinsic value to us, right now, as ordinary human beings. It'd be better to stick with the intrinsic goods that currently excite our imaginations.


3. Some thoughts and criticisms
I can appreciate what Agar is trying to do in this part of chapter 3. He is trying to flesh out his anthropocentric ideal of enhancement. He is trying to explain how it could be that enhancement up to, or slightly beyond, the current range of human norms is prudentially valuable, but enhancement outside of that range is not. I do, however, have a couple of critiques and queries.

The first has to do with the nature of the argument being presented. I take it that Agar is trying to present an argument from prudential axiology. That is: from premises about what we ought to prudentially value to conclusions about how radical enhancement might negatively impact on those values. That would be consistent with his stated aims from earlier chapters. The problem is that the argument he presents doesn't seem to be like that. It seems to be a purely factual argument about what interests or excites us and why. It's an explanation of one of our psychological quirks, not a defence of a principled normative distinction. At least, it reads that way to me.

Agar could perhaps respond by suggesting that his argument is based on intuitions about particular cases. In other words, he could argue that we intuitively find superhuman feats less prudentially valuable, as is obvious from our reaction to these cases. Arguments from intuition are certainly venerable in axiological debates, but he doesn't seem to adopt this approach directly. Furthermore, if this is what he is doing, it renders the explanation in terms of veridical engagement somewhat superfluous, however interesting it may be. Or, at least, it does so provided that Agar doesn't think that the notion of veridical engagement is itself axiologically significant. Might he believe that? I'm not sure, and I'm not sure why it would be.

This brings me to another point, which has to do with making claims about our capacity to veridically engage with certain activities. This is a dangerous game since what seems experientially out of reach to some may seem less so to others. I certainly have this feeling in relation to the superhuman marathon runners that Agar imagines. I just don't see what's so difficult to imagine about their experiences. I can imagine running at sprint speed; and I can imagine running for a very long time. Why couldn't I imagine both together? Seems like it just requires adding together experiences that I'm already capable of veridically engaging with. It just requires more of the same.

Now, you may respond by saying that this is just one example: Agar's case doesn't stand or fall on this one example. And I happen to think that this is right (I certainly think Agar hits the nail on the head with respect to computer chess: I don't think we can veridically engage with that style of chess-play). My only point is that my reaction to the superhuman marathon could indicate that cases of truly radical enhancement are harder to find than we might think. For example, hyperextended lifespans might be deemed "radical" enhancements by some, but it would seem possible to veridically engage with them: they are longer versions of what we already have. Admittedly, Agar has a chapter on this later in his book where he will no doubt argue that this view of hyperextended lifespan is wrong. I haven't read that yet.

Anyway, that's what I'm thinking so far. In the next post, I'll look at the second argument from chapter three. That argument claims that not only would radical enhancement deprive us of certain intrinsic goods, it would also be unnecessary for achieving certain external goods.

Saturday, April 12, 2014

The Badness of Death and the Meaning of Life (Series Index)



Albert Camus once said that suicide is the only truly serious philosophical question. Is life worth living or not? Should we fear our deaths or hasten them? Is life absurd or overflowing with meaning? These are questions to which I am repeatedly drawn. Consequently, I have written quite a few posts about them over the years. Below, you'll find a complete list, in reverse chronological order, along with links.

Enjoy.


1. The Achievementist view of Meaning in Life
My most recent foray into the debate about the meaning of life was my analysis of Steven Luper's "achievementist" account of meaning in life. Although I find the account intriguing, I'm not entirely convinced.




2. William Lane Craig and the "Nothing But" Argument
This post critiques William Lane Craig's argument that, because humans are nothing but collections of molecules, their lives are devoid of moral value. Although ostensibly framed as a contribution to the debate on morality and religion, the argument also has significance for those who are interested in the meaning of life.


3. Scientific Optimism, Techno-utopianism and the Meaning of Life
This post looks at an argument from Dan Weijers. The argument claims that if we combine naturalism with a degree of techno-utopianism we arrive at a robust account of meaning in life. This contrasts quite dramatically with Craig's belief that naturalism entails the end of meaning.

4. Are we Cosmically Significant?
If you look up at the stars at night, it's easy to become overawed at the vastness of our universe. It is so mind-bogglingly large and we are so small. Does this fact make our lives less significant? Guy Kahane argues that it doesn't. This post analyses his argument. 


5. Must we Pursue Good Causes to Have Meaningful Lives?
Philosopher Aaron Smuts defends the Good Cause Account (GCA) of meaning in life. According to this account, our lives are meaningful in virtue of and in proportion to the amount of objective good for which they are causally responsible. These two posts cover his defence of the GCA.


6. Revisiting Nagel on the Absurdity of Life
Thomas Nagel has probably written the most famous paper on the absurdity of life. Many people refer to this paper for knockdown critiques of "bad" arguments for the absurdity of life, while ignoring the fact that Nagel himself thinks that life is absurd. In this two-part series I revisit Nagel's famous paper. I suggest that some of his knockdown critiques are not so good, and I outline Nagel's own defence of the absurdity of life.


7. Should we Thanatise our Desires?
The ancient philosophy of Epicureanism has long fascinated me. Epicureans developed some interesting arguments about our fear of death and developed a general philosophy of life. One key element of this philosophy was that we should live in a way that is compatible with our eventual deaths. One way to do this was to thanatise our desires, i.e. render them immune to being thwarted or unfulfilled by death. This post asks whether this is sensible advice.


8. The Lucretian Symmetry Argument
Lucretius was a follower of Epicureanism. In one of the passages from his work De Rerum Natura, he defends something that has become known as the symmetry argument. This argument claims that death is not bad for us because it is like the period of non-existence before our births. In other words, it claims that pre-natal non-being is symmetrical to post-mortem non-being. Many philosophers dispute this claim of symmetry. In these two posts, I look at some recent papers on this famous argument.

9. Would Immortality be Desirable?
If we assume that death is bad, does it follow that immortality is desirable? Maybe not. Bernard Williams's famous paper, "The Makropulos Case: Reflections on the Tedium of Immortality", makes this case. In these three posts, I look at Aaron Smuts's updated defence of this view. Smuts rejects Williams's argument, as well as the arguments of others, and introduces a novel argument against the desirability of immortality.


10. Is Death Bad or Just Less Good?
This is another series of posts about Epicureanism. In addition to the Lucretian symmetry argument, there was another famous Epicurean argument against the badness of death. That argument came from Epicurus himself and claimed that death was nothing to us because it was an experiential blank. In these four posts, I look at Aaron Smuts's defence of this Epicurean argument.

11. Theism and the Meaning of Life
The links between religion and the meaning of life are long-standing. For many religious believers, it is impossible to imagine a meaningful life in a Godless universe. One such believer is William Lane Craig. These two posts look at Gianluca Di Muzio's critique of Craig's view.

12. Harman on Benatar's Better Never to Have Been
Anti-natalism is arguably the most extreme position one can take on the value of life and death. Anti-natalists believe that coming into existence is a great harm, and consequently that we have a duty not to bring anyone into being. The most famous recent defence of anti-natalism is David Benatar's book Better Never to Have Been (Oxford: OUP, 2006). In these three posts, I look at Benatar's arguments and Elizabeth Harman's critiques thereof.


13. Podcasts on Meaning in Life
Back when I used to do podcasts, I did two episodes on meaning in life. One looked at a debate between Thomas Nagel and William Lane Craig on the absurdity of life; the other looked at the possibility of living a transcendent life without God.


14. Wielenberg on the Meaning of Life
This is a frustratingly incomplete series on Erik Wielenberg's arguments about the meaning of life. In my defence, it was my earliest foray into the topic, and I've covered many similar arguments since. One for the die-hards only, I suspect.


Friday, April 11, 2014

The Objective and Anthropocentric Ideals of Enhancement



Nicholas Agar has written several books about the ethics of human enhancement. In his latest, Truly Human Enhancement, he tries to stake out an interesting middle ground in the enhancement debate. Unlike the bioconservatives, Agar is not opposed to the very notion of enhancing human capacities. On the contrary, he is broadly in favour of it. But unlike the radical transhumanists, he does not embrace all forms of enhancement.

The centrepiece of his argument is the distinction between radical forms of enhancement — which would push us well beyond what is normal or possible for human beings — and modest forms of enhancement — which work within the extremes of human experience. Agar argues that in seeking radical forms of enhancement, we risk losing our entire evaluative framework, i.e. the framework that tells us what is good or bad for beings like us. That is something we should think twice about doing.

I'm currently working my way through Agar's book, and I thought it might be worth sharing some of my reflections on it as I do. This is something I did a few years back when reading his previous book, Humanity's End. In my reflections, I'm going to focus specifically on chapters 2, 3 and 4 of the book. I will write these reflections as I read the chapters. This means I will be writing from a position of ignorance: I won't know exactly where the argument is going in the next chapter when I write. I think this can make for a more interesting experience from both a writer's and a reader's perspective.

Anyway, I'll kick things off today by looking at chapter 2. In this chapter, Agar introduces some important conceptual distinctions, ones he promises to put to use in the arguments in later chapters. This means the chapter is light on arguments and heavy on definitional line-drawing. But that's okay.

The main thrust of the chapter is that there is a significant difference between two ideals of enhancement: (i) the objective ideal and (ii) the anthropocentric ideal. The former is embraced by transhumanists like Ray Kurzweil and Max More; the latter is something Agar himself embraces. To understand the distinction, we first need to look at the definition of enhancement itself, and then at the concept of prudential value. Let's do that now.


1. What is enhancement?
The definition of enhancement can be contentious. This is something I've covered in my own published work. Some people equate enhancement with "improvement", but that equation tends to stack the deck against the opponents of enhancement. After all, who could object to improving human beings? If we want to engage with the debate in a more meaningful and substantive way, we can't simply beg the question against the opponents of enhancement like this.

For this reason, Agar tries to adopt a value-neutral definition of enhancement:

Human Enhancement: The use of technology - usually biotechnology - to move our capacities beyond the range of what is normal for human beings.

This definition does two important things. First, it focuses our attention on our "capacities", whatever they may be. This is important because, as we'll see below, capacities, and their connection to certain goods, are an essential part of Agar's conceptual framework. Second, it defines enhancement in relation to human norms or averages, not moral norms or values. This is important because it is what renders Agar's definition value-free.

Still, as Agar himself seems to note (I say "seems" because he doesn't make this connection explicit), there is something oddly over-inclusive about this definition. If it really were the case that pushing human capacities beyond the normal range sufficed to count as enhancement, then we would have some pretty weird candidates for potential human enhancement technologies. For example, it would seem to imply that a drug that allowed us to gain massive amounts of weight -- well beyond the normal human range of weight gain -- would count as an enhancing drug. Surely that can't be right?

For this reason, Agar seems to endorse the approach of Nick Bostrom, which is to assert that there are certain kinds of human capacity that are "eligible" candidates for enhancement (e.g. intelligence, beauty, height, stamina) and certain others that are not (e.g. the capacity to gain weight). The problem is that this re-introduces value-laden assumptions. Ah well. Definitions are tough sometimes.


2. Prudential Value: Between Intrinsic and Instrumental Value
Agar's argument is about the prudential value of enhancement. That is to say: the value of being enhanced from an individual's perspective. The question he asks is: is enhancement good for me? His argument is not about the permissibility or moral value of enhancement. If we focus on enhancement from those perspectives — for example, if we were to focus on enhancement from the perspective of the public good — different issues and arguments would arise.

As Agar notes, there are two aspects to prudential value:

Instrumental Value: Something is instrumentally prudentially valuable if it brings about, or causes to come into being, other things that are good for the individual.
Intrinsic Value: Something is intrinsically prudentially valuable if it is good for the individual in and of itself, not because it brings about something else.

To add more complexity to the distinction, Agar also introduces the concepts of external and internal goods. This is something he derives from the work of Alasdair MacIntyre, who explains the difference with an analogy to the game of chess.

MacIntyre says that playing chess can produce certain external goods. For example, if I am a successful chess player, I might be able to win prize money at chess tournaments. The prize money would be an external good: a causal product of my success at chess. But there are other goods that are internal to the game itself. In playing the game, I experience the good of, say, strategic planning, careful rational thought about endgame and opening, and so forth. These goods are instantiated by the process of playing chess. They are not mere causal products of it.

Why is this important? Well, because Agar urges us to evaluate our human capacities in terms of both their instrumental value (i.e. their tendency to produce external goods) and their intrinsic value (i.e. their tendency to help us instantiate internal goods). This is where the contrast between the objective and anthropocentric ideals of enhancement becomes important.

I have one comment about Agar's view of capacities and goods before proceeding to discuss the differences between the objective and anthropocentric ideals. I think the relationship between our capacities and external goods is tolerably clear. Agar is simply saying that our capacities are instrumentally valuable when they help us to bring about certain external goods (e.g. greater wealth, happiness, artwork, scientific discoveries and so forth). The relationship between capacities and internal goods is less clear. Agar says "we assign intrinsic value to a capacity according to the internal goods it yields", but I wonder what he means by "yields" here. It can't be (can it?) that our capacities themselves instantiate internal goods? Rather, it would seem to be that our capacities allow us to do things, engage in certain activities (like chess playing), that instantiate certain internal goods. At least, that's how I understand the relationship.


3. The Objective Ideal of Enhancement
It is possible to measure objective degrees of enhancement. For example, if we take a capacity like stamina or intelligence, we can measure the amount of improvement in those capacities by adopting widely used metrics (e.g. bleep tests and IQ tests). We might quibble with some of those metrics, but it is still at least possible to measure objective rates of improvement along them. Other capacities or attributes might be more difficult to measure objectively (e.g. can we measure capacity for moral insight when the concept of morality is so contested?), but even in those cases it might be possible to come up with an objective measurement. It will just be a highly contentious one.

These contentions need not concern us here. All that matters is that there is some possibility of objective measurement. Provided that there is, we can understand the objective ideal of enhancement. This ideal has a very straightforward view of the relationship between human enhancement and prudential value. It says that as we increase the objective degree of enhancement (i.e. as we go up the scale of intelligence, moral insight, stamina, beauty, lifespan etc.), so too do we go up the scale of prudential value. There may be diminishing rates of marginal return — e.g. the first 400 years of added lifespan might count for more than the second 400 — but, and this is the critical point, there is never a negative relationship between the degree of enhancement and the degree of prudential value. This is illustrated in the diagram below.




Agar argues that many in the transhumanist community embrace the objective ideal of enhancement. They think that the more enhanced we become, the more prudential value we will have. He cites Ray Kurzweil and Max More as two exemplars of this attitude. His suggestion is that this comes from an instrumentalist approach to the value of our capacities; a belief that they matter because they help us to realise certain external goods; not because they instantiate intrinsic goods.


4. The Anthropocentric Ideal of Enhancement
This sets up the contrast with the anthropocentric ideal. This ideal has a different view of the relationship between enhancement and prudential value. Instead of it being the case that prudential value always increases in direct relation to increases in objective degrees of enhancement, it is sometimes the case that the relationship reverses. For example, an extra 100 IQ points might increase the degree of prudential value, but an extra 500 might actually decrease it. This idea is illustrated in the diagram below.



Agar's suggestion is that the anthropocentric ideal allows for this kind of relationship because it includes intrinsic value and internal goods in its assessment of prudential value. The anthropocentric ideal suggests that there are certain things that are good for us now (as human beings) that might be lost if we push the objective degree of enhancement too far. These are goods that are internal to some of our current types of activity.
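
To see the contrast at a glance, here is a toy sketch of the two curves. The specific functional forms are my own stand-ins; Agar commits only to the qualitative shapes (a monotonic increase with diminishing returns for the objective ideal; a possible rise-then-reversal for the anthropocentric ideal):

```python
import numpy as np

# Toy value curves for the two ideals of enhancement. The functional
# forms are illustrative stand-ins, not anything Agar specifies.
degree = np.linspace(0, 10, 101)  # objective degree of enhancement

# Objective ideal: prudential value always rises with the degree of
# enhancement, though with diminishing marginal returns.
objective_value = np.log1p(degree)

# Anthropocentric ideal: value rises for modest enhancement, but the
# relationship can reverse once internal goods start to be lost.
anthropocentric_value = degree * np.exp(-degree / 4)

peak = degree[np.argmax(anthropocentric_value)]
print(f"Anthropocentric curve peaks at degree {peak:.1f}, then declines.")
print(f"Objective curve is still rising at degree {degree[-1]:.0f}.")
```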

Agar is adamant that the anthropocentric and objective ideals are not alternatives to one another. That is to say: it is not the case that one of those ideals is right and one is wrong. They are both simply different ways of looking at enhancement and measuring its value. Furthermore, the anthropocentric ideal doesn't necessarily assume that all forms of enhancement reach a point of decline. This is something that needs to be assessed on a case by case basis.

Despite these admonitions, it seems clear that his goal is to argue that the anthropocentric ideal is too often neglected by proponents of enhancement; and to argue that the negative relationship does arise in some important cases. The purpose of chapters 3 and 4 is to flesh out these arguments.

I'm interested in seeing where all of this goes. I appreciate the conceptual framework Agar is building, but I'm concerned about his use of the external/internal goods distinction and how it maps onto our understanding of human capacities. It seems to me like an objective ideal of enhancement (one that accepts the positive relationship) need not deny or obscure internal goods. But that depends on how exactly we understand the relationship between capacities and internal goods. I'll hold off on any judgment until I've read the subsequent chapters.

Wednesday, April 9, 2014

Equality, Fairness and the Threat of Algocracy: Should we embrace automated predictive data-mining?



I’ve looked at data-mining and predictive analytics before on this blog. As you know, there are many concerns about this type of technology and the increasing role it plays in our lives. Thus, for example, people are concerned about the oftentimes hidden way in which our data is collected prior to being “mined”. And they are concerned about how it is used by governments and corporations to guide their decision-making processes. Will we be unfairly targeted by the data-mining algorithms? Will they exercise too much control over socially important decision-making processes? I’ve reviewed some of these concerns before.

Today, I want to switch tack and, instead of focusing on the moral and political concerns with these technologies, I want to look at a moral and political argument in their favour. The argument comes from Tal Zarsky. It claims that the increasing use of automated predictive analytics should be welcomed because it can help to eliminate racial and ethnic biases that permeate our social decision-making processes. It also argues that resistance to this technology could be attributable to a fear amongst the majority that they will lose their comfortable and privileged position within society.

This strikes me as an interesting and provocative argument. I want to give it a fair hearing in this post. To do this, I’ll break my discussion down into three subsections. First, I’ll clarify the nature of the technology under debate. Second, I’ll outline Zarsky’s argument. Third, I’ll look at some potential problems with this argument.

The discussion is based on two articles from Zarsky, which you can find here and here.


1. What exactly are we talking about?

Zarsky’s argument is about the way in which data-mining algorithms can be used to make predictions about individual behaviour. The argument operates in a world dominated by jargon like “data-mining”, “big data”, “predictive analytics” and so forth. This jargon is often ill-defined and poorly understood. Fortunately, Zarsky takes the time out to define some of the key concepts and to specify exactly what his argument is about.

The first key concept is that of “data-mining” which Zarsky defines in the following manner:

Data-Mining: The non-trivial process of identifying valid, novel, potentially useful and ultimately understandable patterns in data.

There is a sense in which we all engage in a degree of data-mining, so defined. The difference nowadays comes from the fact that we are living in the era of “big data”, in which vast datasets are available, and which cannot be mined without algorithmic assistance.

As Zarsky notes, there are several different kinds of data-mining. At a first pass, there is a distinction between descriptive and predictive data-mining. The former is used simply to highlight and explain the patterns in existing datasets. For example, data-mining algorithms could be used to identify significant patterns in experimental data, which can in turn be used to confirm or challenge scientific theories. Predictive data-mining is, by way of contrast, used to make predictions about future events on the basis of historical datasets. Classic examples might be the mining of phone records and internet activity to predict who is likely to carry out a terrorist attack, or the mining of historical purchasing decisions to predict future purchasing decisions. It is the predictive kind of data-mining that interests Zarsky (I call this, along with others, “predictive analytics” as it is about analysing datasets to make predictions about the future).
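
To give a concrete (and entirely invented) flavour of the purchasing example, here is a minimal sketch of predictive data-mining using an off-the-shelf classifier. Nothing here comes from Zarsky, who discusses the technique in the abstract; the features, data and choice of model are my own placeholders:

```python
# Minimal sketch of predictive data-mining: learn from historical
# purchasing data, then predict the behaviour of a new customer.
# All features, data and the model choice are invented placeholders.
from sklearn.linear_model import LogisticRegression

# Historical dataset: one row per customer,
# columns = [age, number of past purchases, days since last visit]
X_history = [
    [25, 10, 2],
    [40, 1, 90],
    [33, 25, 5],
    [50, 0, 200],
    [29, 7, 14],
    [61, 2, 120],
]
y_history = [1, 0, 1, 0, 1, 0]  # 1 = purchased again, 0 = did not

model = LogisticRegression()
model.fit(X_history, y_history)

# Predict the purchasing behaviour of a previously unseen customer.
new_customer = [[35, 12, 7]]
print(model.predict_proba(new_customer))  # [P(no purchase), P(purchase)]
```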

In addition to this, there is a distinction between two different kinds of data “searches”:

Subject-based searches: Search datasets for known/predetermined patterns (typically relating to specific people or events).
Pattern-based searches: Search datasets for unknown/not predetermined patterns.

Zarsky’s argument is concerned with pattern-based searches. These are interesting insofar as they grant a greater degree of “autonomy” to the algorithms sorting through the data. In the case of pattern-based searches, the algorithms find the patterns that human analysts and governmental agents might be interested in; they tell the humans what to look out for.

All of which brings us to the thorny issue of human involvement. Again, as Zarsky notes, humans can be more or less involved in the data-mining process. At present, they are still quite heavily involved, constructing datasets to be mined and defining (broadly) the parameters within which the algorithms work. Furthermore, it is typically the case that humans review the outputs of the algorithms and decide what to do with them. Indeed, in the European Union, this is a legal requirement. Article 15 of Directive 95/46/EC requires human review of any automated data-processing that could have a substantial impact on an individual’s life.

There are, however, exceptions to this requirement and it is certainly technically feasible to create systems that reduce or eliminate human input. Part of the reason for this comes from the existence of two different styles of data-mining process:

Interpretable Processes: This refers to any data-mining process which is based on factors and rationales that can be reduced to human language explanations. In other words, processes which are interpretable and understandable by human beings.
Non-interpretable Processes: This refers to any data-mining process which is not based on factors or rationales that can be reduced to human language explanations. In other words, processes which are not interpretable and understandable by human beings.

The former set of processes allow for significant human involvement, both in terms of setting out the rationales and factors that will be used to guide the data-mining, and in terms of explaining those rationales and factors to a wider audience. The latter set of processes reduce, and may ultimately eliminate, human involvement. This is because in these cases the software makes its decision based on thousands (maybe hundreds of thousands) of variables which are themselves learned through the data analysis process, i.e. they are not set down in advance by human programmers.
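
One way to appreciate the distinction is to compare two standard model types, as in the sketch below (again, my own illustration rather than Zarsky’s). A shallow decision tree’s verdict can be read out as plain-language rules; a neural network’s verdict rests on learned weights that offer no comparable rationale:

```python
# Contrast between an interpretable and a (comparatively)
# non-interpretable data-mining process. The data are invented.
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X = [[25, 10, 2], [40, 1, 90], [33, 25, 5], [50, 0, 200]]
y = [1, 0, 1, 0]

# Interpretable: the fitted tree can be printed as human-readable
# rules, e.g. "if purchases <= 5.5 then predict no purchase".
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["age", "purchases", "days_since"]))

# Non-interpretable (by comparison): the network makes a prediction,
# but its rationale is buried in learned weights.
net = MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000).fit(X, y)
print(net.predict([[35, 12, 7]]))  # a prediction, with no explanation
```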

In his writings, Zarsky sometimes suggests that interpretable processes are preferable, at least from a transparency perspective. That said, in order for his fairness and equality argument to work, it’s not clear that interpretable processes are required. Indeed, as we are about to see, minimising the ability of humans to interfere with the process seems to be the motivation for that argument. I return to this issue later. For the time being, let’s just look at the argument itself.


2. The Equality and Fairness Argument
To get off the ground, Zarsky’s argument demands that we make an assumption. We must assume that predictive analytics can, as a matter of fact, be useful, i.e. that it can successfully identify likely terrorist suspects, tax evaders, violent criminals, or whatever. If it can’t do that, then there’s really no point in discussing it.

Furthermore, when assessing the merits of predictive analytics we must take care not to consider it in isolation from its alternatives. In other words, we can’t simply focus on the merits and demerits of predictive analytics by itself, without also considering the merits and demerits of the policies that are likely to be used in its stead. This is an important point. Governments have legitimate aims in trying to reduce things like terrorism, tax evasion and violent crime. If they are not using predictive analytics to accomplish those aims, they’ll be using something else. These comparators must be factored into the argument. If it turns out that predictive analytics is comparatively better than its alternatives, then it may be more desirable than we think.

But that simply raises the question: what are the comparators? In his most detailed discussion, Zarsky identifies five alternatives. For present purposes, I’m going to simplify and just talk about one: any system in which humans decide who gets targeted. This could actually cover a wide variety of different policies; all that matters is that they share this one feature. This is to be contrasted with an automated system that runs entirely on the basis of predictive data-mining algorithms.

With all this in mind, we can proceed to the argument proper. The argument works from a simple motivating premise: it is morally and politically better if our social decision-making processes do not arbitrarily and unfairly target particular groups of people. Consider the profiling debate in relation to anti-terrorism and crime-prevention. One major concern with profiling is that it is used to arbitrarily target and discriminate against certain racial and ethnic minorities. That is something we could do without. If people are going to be targeted by such measures, they need to be targeted on legitimate grounds (i.e. because they are genuinely more likely to be terrorists or to commit crimes).

Working from that motivating premise, Zarsky then adds the comparative claim that automated predictive analytics will do a better job of eliminating arbitrary prejudices and biases from the process. That gives us the following argument:


  • (1) It is better, ceteris paribus, if our social decision-making processes do not arbitrarily and unfairly target particular groups of people.
  • (2) Social decision-making processes that are guided by automated predictive analytics are less likely to do this than processes that are guided by human beings.
  • (3) Therefore, it would be better, ceteris paribus, to have social decision-making processes that are guided by automated predictive analytics.


Let’s probe premise (2) in a little more depth. Why exactly is this likely to be true? To back it up, Zarsky delves into the literature on implicit and unconscious biases. Those who are familiar with this literature will know that a variety of experiments in social psychology reveal that even when decision-makers don’t think they are being racially or ethnically prejudiced, they often are. This is because they subconsciously and implicitly associate people from certain racial and ethnic backgrounds with other negative traits. If you like, you can perform an implicit association test (IAT) on yourself to see whether you exhibit such biases.

Zarsky’s point is simply that the algorithms at the heart of predictive analytical programmes will not be susceptible to the same kinds of hidden bias, especially if they are automated and the capacity of human beings to override them is limited. As he himself puts it:

[A]utomation introduces a surprising benefit. By limiting the role of human discretion and intuition and relying upon computer-driven decisions this process protects minorities and other weaker groups. 
(Zarsky, 2012, pg. 35)

Zarsky builds upon this by suggesting that one of the sources of opposition to automated, algorithm-based decision-making could be the privileged majorities who benefit from the current system. They may actually fear the indiscriminate nature of the automated process. If the process is guided by a human, the majorities can appeal to human prejudices in order to secure more favourable, less intrusive outcomes. If the process is guided by a computer, they won’t be able to do this. Consequently, some of the burden of enforcement and prevention mechanisms will be shifted onto them, and away from the minorities who currently bear the brunt of those mechanisms.


3. Problems and Conclusions
That’s the argument in outline form. The next question is whether it is persuasive. That’s a difficult question to answer in the space of a blog post like this, and it is one I am still pondering. Nevertheless, there are a few obvious, general points of criticism.

The first is that premise (2) might actually be wrong. It may be that predictive analytics is just as biased and prejudiced as human decision-making. This could arise for any number of reasons, some of which Zarsky acknowledges. For example, the datasets that are fed into the algorithms could themselves be the products of biased human policies on data collection. Likewise, the sorting algorithms might have built-in biases that we can’t fully understand or protect against. This is something that could be exacerbated if the whole process is non-interpretable.
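The dataset worry can be made vivid with one more sketch (my own, with invented data): if past enforcement over-sampled one group, a model trained on those records reproduces the skew, even though no one explicitly programmed the bias in:

```python
# "Bias in, bias out": the labels record who was *caught*, not who offended.
from sklearn.tree import DecisionTreeClassifier

# Each row: [group_membership, risk_score]. Suppose group 1 was
# historically policed far more heavily, so its members dominate the
# positive labels regardless of their actual risk scores.
X = [[1, 0.2], [1, 0.3], [1, 0.4], [0, 0.8], [0, 0.9], [0, 0.7]]
y = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Two individuals with identical risk scores get opposite predictions,
# purely because of group membership inherited from the skewed data.
print(model.predict([[0, 0.5], [1, 0.5]]))  # -> [0 1]
```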

All of which brings me to another obvious point of criticism. The “ceteris paribus” clause in the first premise is significant. While it is indeed true that — all else being equal — we prefer to have unbiased and unprejudiced decision-making systems, all else may not be equal here. Elsewhere on this blog, I have outlined something I call the “threat of algocracy”. This is a threat to the legitimacy of our social decision-making processes that is posed by the incomprehensibility, non-interpretability and opacity of certain kinds of algorithmic control. The threat is important because, according to most theories of procedural justice, any public procedure that issues coercive judgments should be understandable by those who are affected by it. The problem is that this may not be the case if we hand control over to the automated processes recommended by Zarsky.

He himself acknowledges this point by highlighting how we prefer to have human decision-makers because at least we can engage with them at a human level of rational thought and argumentation: we can identify their assumptions and spot their faulty logic (if indeed it is faulty). But Zarsky has a response to this worry. He can fall back on the desirability of interpretable predictive analytics. In other words, he can argue that we can have the best of both worlds: unbiased decision-making, coupled with human comprehensibility. All we have to do is make sure that the rationales and factors underlying the automated predictive algorithms can be explained to human beings.

That might be a satisfactory solution, but I’m not entirely convinced. One reason for this is that I think having interpretable processes might re-open the door to the kinds of biased human decision-making that originally motivated Zarsky’s argument. The more humans can understand and shape the process, the more scope there is for their unconscious biases to affect its outputs. So perhaps the lack of bias and the degree of comprehensibility are in tension with one another. Perhaps additional solutions are needed to get the best of both worlds (e.g. moral enhancement)?

I think that question is a nice point on which to end.

Tuesday, April 8, 2014

The Achievementist View of Meaning in Life



What makes for a meaningful life? There are many proposed answers to this question. Some argue that God is necessary for a meaningful life; some argue that objectively fulfilling projects are necessary; some argue that the satisfaction of desires is enough; and some argue that nothing could make our lives meaningful. In today’s post I want to take a look at Steven Luper’s answer to that question.

Luper defends something he calls the “Achievementist View” of meaning in life. According to this, meaning is dependent upon our achieving certain goals. This is a highly subjectivist theory of meaning, and it distinguishes meaning from other related properties such as “purpose” and “well-being”. It also highlights the connections between meaning and seemingly unrelated properties like “identity”.

In what follows, I outline the main constituents of Luper’s theory. I generally refrain from overly critical comments. I’m primarily interested in setting out the theory and eliciting feedback from readers, not in critiquing it. This is because I find the theory both perplexing and intriguing. I find it perplexing because it seems to fall foul of several obvious objections. But I’m nevertheless intrigued because Luper is well aware of these objections, and brushes them aside with conviction.

Consequently, I’m left wondering whether there isn’t more to the theory than first meets the eye. In particular, I’m left wondering whether it doesn’t actually accurately capture what a meaningful life looks like from the “inside”, i.e. from the perspective of the one who lives it. I should add that I’m also interested in the theory because it suggests that certain forms of technological assistance can actually undermine the meaningfulness of our lives.

I base this discussion on Luper’s contribution to the Cambridge Companion to Life and Death.


1. A Quick Overview of the Achievementist View
The Achievementist View, at least as Luper defines it, is based on two key ideas:

The Whole Life Thesis: What bears meaning is the entirety of one’s life, not just particular parts or aspects thereof.
The Achievementist Thesis: What confers meaning on the whole of one’s life is whether one has achieved one’s aims.

The first of these is interesting insofar as it is denied by others. Some think that meaning arises out of particular moments or temporal slices of one’s life. Some think a combination of both is needed. For example, Thaddeus Metz, in his recent book about the meaning of life, argues that both the whole life and particular parts thereof constitute its meaningfulness. Interesting though this debate is, it need not concern us greatly here (except at the end when we look at some arguments for the absurdity of life).

It is the second thesis that is the important one. It claims that in order to have meaning, one must have a life plan: a set of coherent goals or ends that one wishes to achieve. It is only if those ends are achieved that one lives a meaningful life. Luper is adamant that this is very different from a desire-fulfillment theory of meaning. One can have one’s desires fulfilled without actually achieving anything.

Consider a simple example. One of my desires might be to laugh and have a good time. Going to see the stand-up comedian Louis CK could enable me to do both. But this wouldn’t mean that I had achieved anything in having those desires fulfilled. Quite the contrary, in fact. It is the other party — Louis CK in this instance — who is doing all the desire-fulfilling work for me. I am simply a passive recipient and beneficiary of his achievements.

The achievementist rejects this passive model. In order to achieve one’s ends, some active agency-like involvement in the task is required. Thus, for example, suppose one of my aims is to become completely self-sufficient in the production and maintenance of my own food supply. So I go out and buy the necessary animals and plant seeds. I dig up my land, plant the seeds, house and feed the animals, look after them through good times and bad. At the end of this process I can be said to have achieved something. If I simply hire another person to do all the work, I’ll have achieved nothing.

I find this view particularly interesting in light of the (increasing) role of technology in aiding our desire-fulfillment. At the moment this role is still limited. A satellite navigation system will help me to get to my destination, but for the time being I’m still doing the driving. Thus, for the time being I’m still playing an active part in achieving my goal of getting to that place. But what if technology completely takes over? What if we each have a team of robot assistants to cater to our every desire? Will that rob us of meaning in life? If the achievementist view is to be believed, it would. Perhaps this is something we should guard against.


2. Achievements and Purposes
The concept of an achievement is closely-related to that of a purpose. A purpose represents some object or end of one’s life; an achievement is an object or end that confers meaning. Nevertheless, purposes are distinct from achievements.

One major reason for this is that “purpose” has a faintly “externalist” or “objectivist” ring to it. In other words, people often talk about life’s purpose when they mean to refer to something that is external to and larger than the agent him or herself. Luper rejects any attempt to collapse the achievementist view into such an objectivist view. For him, the purposes at the heart of the achievementist view are dependent on self-directed goals.

This raises an obvious issue: can anyone (e.g. God) dictate to you what makes your life meaningful? In other words, can another agent set goals for you and can your achievement of those goals confer meaning on your life? Luper’s answer is a nuanced (and I presume religiously agnostic) one. He rejects Kurt Baier’s view that purposes conferred by God turn us into mere instruments or tools in His own life plan. Instead, Luper thinks that we could, meaningfully, form part of another being’s life plan. But this would require joint planning. Our achievements could involve work with a community of like-minded individuals. Nevertheless, we are always, on Luper’s view, gatekeepers of our own meaning. We must always play an active role in deciding what the goals of our lives will be.


3. Meaning and Identity
There is also a close and important relationship between meaning and identity, but not in the sense that “identity” is typically debated by philosophers. As it is typically debated by philosophers, the concept of identity is understood in terms of numerical identity, i.e. in terms of that set of properties (if any) that makes it true to say that “I” am the same person now as I was five years ago. This conception of identity has no direct bearing on the issue of meaning, except in the limited sense that existence over time might be important to our achievements.

There is, however, another concept of identity which has an important bearing on the issue of meaning. To avoid confusion, Luper introduces a new label for this concept: critical identity. This is the set of personal properties that makes our lives worth living. More precisely, it is the set of critical features, i.e. personal properties, the loss of which would make us indifferent to our continued survival.

Luper breaks this concept of critical identity down into several parts. In particular, he highlights the notion of a conative identity, an identity we take on that gives purpose and direction to our lives (as the achievementist view demands). The conative identity is essential for meaning, but it is not the only thing that is critical. The cultivation of a moral identity might also be critical, so Luper leaves the door open to possibilities like these in his account of critical identity.

The important thing for Luper is that the critical identity is not something we are born with, nor is it something that we necessarily acquire. It is something that we need time to develop and must choose to take on. Hence it is possible, on his account, to live a completely directionless and purposeless life, one utterly devoid of meaning or critical identity. Furthermore, it is possible on his account for “us” — in the sense of our numerically identical selves — to survive the loss of our critical identities. But that loss will, as far as Luper’s concerned, be phenomenologically equivalent to our deaths: once we lose our critical identities, we lose the will to live.


4. Meaning and Welfare
There is often felt to be a close connection between meaning and well-being. Indeed, some theorists think that meaning reduces to well-being. Luper encourages us to resist this reduction. He argues instead that there are important differences between meaning and well-being.

He illustrates this by referring to one of the main (but not sole) constituents of our well-being, namely: our happiness. This is often interpreted in terms of our conscious pleasure or amusement. It is one of the things that is intrinsically good for us. There could, of course, be many other things that are intrinsically good for us. And our well-being will be determined by our share of this total set of intrinsically good (for us) things. But we’ll focus on the happiness example for now because everyone seems to agree that, even if there are other intrinsic goods, happiness must be part of the picture.

Luper accepts that achievements and intrinsic goods often go hand-in-hand, which is why it is so tempting to reduce meaning to welfare. But there are at least two important distinctions. The first is that meaning is not summative in the way that welfare is. Generally speaking, and ceteris paribus, it is better to have more welfare than less. In other words, the more happy experiences you can add to your life, the more welfare that life will be said to have had. But an achievement confers meaning on a life even if the individual whose life it is had merely one goal to achieve. Quantity does not matter.

The other important distinction has to do with the obvious potential for meaning and welfare to diverge. For example, it is possible, on Luper’s account, to live a life full of well-being and happiness, and yet devoid of achievements. Luper thinks we should try to avoid such a life. He argues, using Nozick’s experience machine as his starting point, that meaning is a greater good than happiness. He also argues that although a certain minimum degree of happiness might be needed in order to make life worth living, we are better off if we aim for happiness indirectly through the pursuit of our goals. For it is often in achieving our aims that we experience the greatest satisfaction.

We must also accept two unwelcome implications of the achievementist view. The first is that it allows for a meaningful life to be a very unhappy one (as mentioned above). The second is that it allows for a meaningful life to be a partially evil one. This second implication is interesting. It follows because, on the achievementist view, all that matters is the achievement of our self-directed goals. These goals could include ones that involve the shirking of our moral responsibilities and duties. Luper illustrates this by reference to the life of Paul Gauguin, the famous artist who shirked his responsibilities to his wife and family by moving to Tahiti to paint.


5. Meaning and Absurdity
A common sticking point in the debate about meaning in life is the belief that nothing could provide us with meaning; that our lives are fundamentally absurd. Luper identifies two separate strands of argument underlying the absurdist case and claims that the achievementist can resist both.

The first argument is the argument from fragility or precariousness. This stems from the observation that our lives are far too fragile to sustain meaning. We can have as many goals or projects as we like, but they can all be snuffed out in an instant. Luper gives the poignant example of the children who were permanently entombed by the eruption of Mt. Vesuvius in AD 79. But that is simply a poignant example. We are all, in a sense, living in the shadow of the volcano: constrained, limited and ultimately expunged by factors beyond our control. True, the strength of those factors can wax and wane over time, but they are always there.

Luper thinks the achievementist can easily sidestep this worry about fragility. Again, what matters from the achievementist perspective is that our self-directed goals are achieved. All we need to do is to insulate those goals from the constraints and limitations we face. Thus we can pick modest goals, ones that are tailored to our particular circumstances, and focus on those. The meaningfulness of our lives will not be diminished.

This answer raises another worry. It seems to allow for extremely modest or trivial goals to count as meaningful. For example, someone whose life project is to count all the grains of sand on a particular patch of beach could, on this view, live a meaningful life (provided the goal is achieved). But that seems wrong. Many people think that some goals are meaning-conferring and some are not. To be precise, they think one has to pursue goals of objective worth in order to live a meaningful life. This view is shared by many of the leading contemporary theorists of meaning, e.g. Susan Wolf, Thaddeus Metz, Aaron Smuts and Erik Wielenberg.

Luper rejects their theories by arguing that the objectivist view is “difficult to defend” (he never says why). He also tries to neutralise the problem by arguing that even if trivial goals do count on the achievementist view, people who think about their life plans and try to create a critical self will tend to pick more serious goals anyway. So it seems like Luper is trying to have it both ways: objectively valuable goals aren’t needed on his account, but they’ll tend to be pursued by those who take it seriously. I find this problematic.

The second argument adopted by the absurdist is the argument from finitude or mortality. This stems from the common concern that our lives are finite; that our goals, even when achieved, will not be permanent; and that permanency is needed to make our lives meaningful. This is a common belief among the religious. Unsurprisingly, Luper rejects it. Part of the reason for this is that the argument may arise solely because we have faulty goals or aims, such as the goal of permanency or immortality. Since these cannot be achieved, we should drop them and focus on things that are attainable. They will provide us with the meaningfulness that we need.

Luper accepts that the length of a life can have an impact on its meaning. The shorter the time we have, the less opportunity for achievement. Nevertheless, he thinks the impact of mortality on welfare and happiness is more significant. As noted, these goods tend to be summative: the more the merrier. And finitude definitely limits the volume of positive experiences we can have.

One final point emerges from this discussion of mortality and meaning. Luper notes that many people feel that their lives are less meaningful as the spectre of death approaches, and they become experientially absorbed in the process of dying. While not wishing to deny the reality of those subjective experiences, he argues that the achievementist view resists any claim that life is less meaningful as a result of those experiences. What counts for the achievementist is whether goals have been achieved across the totality of one’s life (the Whole Life Thesis). Those achievements are not diminished by the process of dying.


6. Conclusion
Okay, so that brings us to the end of this summary of the achievementist view. As you can see, it offers a highly subjectivist theory of meaning in life. It claims that meaning is entirely determined by the achievement of self-directed goals. These goals can be trivial, selfish and even partly evil. That does not matter. All that matters is that the subject pursuing them perceives them as being worth his or her time.

It is this last point that makes me wonder whether Luper’s theory captures what it is to live a meaningful life from the “inside”. Maybe this is something that the more common objectivist theories neglect?