Posthuman dilemma?
Kip Werking's moral dilemma, arising from the fourth anthropocentric conceit, seems to present a problem for the posthuman. I don't think it does.
Kip focuses purely on reproduction as the source of goals and values, and doesn't mention the other things that are part and parcel of reproductive fitness. In particular, he fails to consider the need to survive long enough to reproduce.
Transhumanism threatens our utilitarian sensibilities further in the limiting case of "universal orgasm."
For that limiting case to come to pass, what is needed to support us orgasmic beings? We need energy for orgasm, even if only to move the right chemicals to the right receptors. If transhumans are organic, who ensures we are fed and kept free from disease? If virtual, who provides runtime? Rust never sleeps. Mutation never sleeps. Who wages the ongoing war against the second law of thermodynamics?
With advances in science, the cost of supporting survival might drop, but I'm skeptical that it will ever reach zero. We do not have infinite resources, and there will always be competition for the limited ones. Anyone who spends life in permanent brain reward, with no motivation to do anything but enjoy it, will be out-competed. The Humans -> Happy Grey Goo scenario is a completely unrealistic limiting case unless the hypothetical goo can survive, spread and dominate without the support of a biologically diverse environment. Any artificial reward system that reduces real fitness within a system will be actively selected against (at least in the long term).
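The selection argument can be made concrete with a toy simulation. The sketch below is a minimal Wright-Fisher-style model of my own devising; the fitness values, population size and generation count are arbitrary assumptions for illustration, not claims about real biology. It pits a reward system coupled to survival against one decoupled from it:

```python
import random

# Toy Wright-Fisher model of two heritable reward strategies:
#   "coupled"   - reward tracks survival/reproduction (relative fitness 1.0)
#   "decoupled" - reward is pure self-stimulation, no fitness payoff (0.9)
# All numbers here are illustrative assumptions.
FITNESS = {"coupled": 1.0, "decoupled": 0.9}
POP_SIZE = 10_000

population = ["coupled"] * (POP_SIZE // 2) + ["decoupled"] * (POP_SIZE // 2)

for generation in range(81):
    if generation % 20 == 0:
        frac = population.count("decoupled") / POP_SIZE
        print(f"gen {generation:3d}: decoupled fraction = {frac:.4f}")
    # Each individual's chance of parenting offspring in the next
    # generation is proportional to its fitness.
    weights = [FITNESS[s] for s in population]
    population = random.choices(population, weights=weights, k=POP_SIZE)
```

With these assumed numbers, even a modest 10% fitness cost collapses the decoupled strategy from half the population to around 1.5% within forty generations: permanent bliss is a short-lived experiment unless someone else pays its survival costs.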
When Kip discusses ethics and presents two apparent alternatives that result in a dilemma, both alternatives assume we can break the connection between happiness and genes, i.e., between motivation and survival. We may find ways to speed up our own evolution, but it is wrong to think that this frees us from selection pressures. Breaking that connection can only ever be a short-term strategy.
We now understand that most of the things that make us happy, and the things that make us feel morally right, have resulted from our own evolution. Our crude reward systems and moral feelings have been honed for survival as a communal species. Many of nature's experiments fail. Too much aggression and anger: fail. Too contented and unmotivated: fail. These traits can survive in a population, but only at limited proportions, and human societies work out methods to keep them under control, because failure to do so is fatal.
Now consider that the means of changing the nature of humans is no longer limited to nature's tedious pace. We make the changes when we're ready, but there are 6 billion people here, and we won't all change at once. What happens if one part of the world's population changes its own reward systems to disconnect them from survival mechanisms? Survival comes with a cost, and creatures that don't pay it in some way won't survive. Whatever changes we make must remain compatible with survival. We can try to decouple our happiness subgoal from the genetic supergoal, but that can never be a long-term successful strategy.
Kip needs to ask why the happiness subgoal is so strongly coupled to the genetic supergoal in the first place. It didn't appear by magic. The existence of a happiness subgoal is a predictable outcome of evolution, but it's not an outcome of the mechanism for generating change, which is effectively random; the predictable part comes from selection. When humans manipulate their own nature, they're only adding another mechanism for generating change, not changing the rules that determine survival. Selection still applies. Survival still has costs.
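To see why a new change mechanism doesn't change the rules, extend the toy model above with a random "rewiring" step standing in for deliberate self-modification (the rewiring rate is another arbitrary assumption). Variation is now injected continually, yet selection still caps the decoupled strategy at a low equilibrium, echoing the earlier point that such traits persist only at limited proportions:

```python
import random

FITNESS = {"coupled": 1.0, "decoupled": 0.9}
POP_SIZE = 10_000
REWIRE_RATE = 0.01  # per-individual chance of flipping strategy; assumed

population = ["coupled"] * (POP_SIZE // 2) + ["decoupled"] * (POP_SIZE // 2)

for generation in range(400):
    # Selection: same rule as before.
    weights = [FITNESS[s] for s in population]
    population = random.choices(population, weights=weights, k=POP_SIZE)
    # New mechanism for generating change: random rewiring of reward systems.
    population = [
        ("decoupled" if s == "coupled" else "coupled")
        if random.random() < REWIRE_RATE
        else s
        for s in population
    ]

# Despite constant rewiring, selection holds the decoupled fraction near
# the mutation-selection balance, REWIRE_RATE / fitness cost = 0.01 / 0.1,
# i.e. about 10% of the population, not 100%.
print(population.count("decoupled") / POP_SIZE)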
But that's all about the long term. Is there no moral dilemma in the short term?
The predicted moral dilemma that Kip describes in detail only arises if we contemplate making completely arbitrary changes to our reward systems. If we create beings who are rewarded for non-adaptive or anti-social behaviour, then we who implement those changes are the maladaptive ones. Creating a conscious, intelligent creature with no connection from supergoal through subgoal to behaviour would be short-sighted at the least and grossly immoral at the worst.
Kip says:
I will show that the utilitarian arguments that ethicists use to justify human behavior would just as well justify the behavior of HS2, HS3, and HS4. Yet the behavior of these others is intuitively wrong.
and later:
HS3 becomes interesting when we consider the spectrum of possibilities for X1. X1 might be CMB or similar behavior. Alternatively, X1 could be positively maladaptive behaviors at the local scale. For example, the BRM of HS3s might be such that HS3s feel rewarded not for CMB but for being destitute, anorexic, insomniac, sexually abstinent child murderers. HS3s might delight in setting themselves on fire and laugh while their families burn.
So why would any scientist devise X1 to reward behaviours like that? In fact, how is that different from a despot choosing to reward cruel and inhuman behaviour in his or her minions? Rewarding people to make them do immoral things is immoral. You can't distance yourself from the person who behaves under your influence and deny that you bear any moral responsibility. A scientist who manipulates a person's reward system at an intrinsic level, in a way that causes that person to want to do maladaptive things, is behaving immorally and maladaptively. The screwed-up behaviour of the victim is a predictable outcome of the action.
The "moral dilemma" supposedly intrinsic to the fourth anthropocentric conceit is not intrinsic to the conceit, not intrinsic to the human state, but is a result of assuming that the transhuman project is somehow likely to create beings who are rewarded for maladaptive behaviours. Given the stated intent of the project to improve the human condition and reduce suffering, why should we even consider such a course? It seems to me to reduce to:
"Doctor, it hurts when I do this."
"Well, don't do that."