The Hard Problem

Ted couldn't tell her how he felt. He felt inferior, defective, somehow less than human. He just couldn't get it.

He'd just spent the last two hours sitting on a drum case in a rehearsal room corner, listening. Helen didn't just play the bass; she made it part of her and she made herself part of the band. They jammed. Chords modulated. Mood changed. Rhythm meshed perfectly. Like there was only one musician, not four independent minds. Like there was a score they'd polished together.

He'd known she must be tired after the jam, what with constantly having to analyze patterns, count bars, predict where the others would take it. It would have taken an immense feat of concentration just to keep searching memories for the matching riffs and devising novel variations, predicting, adapting, monitoring. Ted had told her as much as they drove away, expecting to win the prize for understanding boyfriend of the year.

Helen had looked at him quizzically and said, "No need to get all sarcastic with me, Mr Brain. If you were bored you could've played the machines out in the lounge."

"No, I meant it. Really. I just can't see how you all manage to improvise... together. Doesn't it tire you out?"

"Shit no. Tonight was easy. It just worked. I mean, Rob's only played with us once before, so he had me guessing now and then, but you can tell he's played a lot. We just played."

"But you had to be concentrating."

"No. I just knew where everyone else was going as we went. I could feel it."

That was when Ted knew for sure he was missing out. He'd studied music theory for eight years. He'd slaved away at advanced harmony and composition. He knew all the rules and when to break them. He knew the structures of all the major musical forms for the last five centuries. He could listen to music then write it down from memory. And more than that, he understood the physics of music. He could model the whole process from instrument to auditory nerve, and he'd started reading about neuroaesthetics in his spare time. Helen just knew how to play.

Ted thought about idiot savants, and wisely decided not to raise the subject. Helen had spoken about feeling it and knowing. But that didn't make sense. You feel bass frequencies if they're loud enough. Anything else you feel is just emotions you've associated with certain sounds. And you can't ever know what the other members are going to play. Well, you can sort of predict it by thinking of the rhythm, pitches and harmonies as Markov processes. Maybe some people just get fast enough at predicting what they're going to hear, like tennis players learning to return fast serves.

But for Ted, music remained technical. He got it technically right, but he couldn't feel it. Helen tried, but she could never explain to Ted what music felt like.

It was a hard problem.

I'm not in love...

But my neural correlates may testify against me. Prof Zeki says:

Fear, expectation of reward, the experience of love and of beauty - all of them thought until recently to be unverifiable, or not easily verifiable, subjective experiences - have been shown to have neural correlates specific to them.

♦ The objectivity of subjective experiences. (via)

Posthuman dilemma?

Kip Werking's moral dilemma from the fourth anthropocentric conceit seems to present a problem for the posthuman. I don't think there is one.

Kip focuses purely on reproduction as the source of goals and values. He doesn't mention the other things that are part and parcel of reproductive fitness. In particular he fails to consider the need to survive long enough to reproduce.

Transhumanism threatens our utilitarian sensibilities further in the limiting case of "universal orgasm."

For that limiting case to come to pass, what is needed to support us orgasmic beings? We need energy for orgasm, even if it is only to move the right chemicals to the right receptors. If transhumans are organic, who ensures we are fed and kept free from disease? If virtual, who provides runtime? Rust never sleeps. Mutation never sleeps. Who is it that wages the ongoing war on the second law of thermodynamics?

With advances in science, the cost of supporting survival might drop, but I'm skeptical that it will ever be zero. We do not have infinite resources. There will always be competition for those limited resources. Any person who spends life in permanent brain reward, with no motivation to do anything but enjoy it, will be out-competed. The Humans -> Happy Grey Goo scenario is a completely unrealistic limiting case unless the hypothetical goo can survive, spread and dominate without the support of a biologically diverse environment. Any artificial reward system that reduces real fitness within a system will be actively selected against (at least in the long term).

When Kip discusses ethics and presents two apparent alternatives that result in a dilemma, we see that both of the alternatives assume we can break the connection between happiness and genes, i.e., between motivation and survival. We find ways to speed up our own evolution, but it is wrong to think that we are free from selection pressures. Breaking that connection can only ever be a short-term strategy.

We now understand that most of the things that make us happy and things that make us feel morally right have resulted from our own evolution. Our crude reward systems and moral feelings have been honed for survival as a communal species. Many of nature's experiments fail. Too much aggression and anger: fail. Too contented and unmotivated: fail. These traits can survive in a population, but only at limited proportions, and human societies work out methods to keep them under control because failure to do so is fatal.

Now consider that the means of changing the nature of humans is no longer limited to nature's tedious pace. We make the changes when we're ready, but there are 6 billion people here. We won't all change at once. What happens if one part of the world's population changes its own reward systems to disconnect them from survival mechanisms? Survival comes with a cost. Creatures that don't pay it in some way won't survive. Whatever changes we make must remain compatible with survival. We can try to decouple our happiness subgoal from the genetic supergoal, but that can never be a long-term successful strategy.

Kip needs to ask why the happiness subgoal is so strongly coupled to the genetic supergoal in the first place. It didn't appear by magic. The existence of a happiness subgoal is a predictable outcome of evolution, but it's not an outcome of the mechanism for generating change - that's effectively random. The predictable part comes from selection. When humans manipulate their own nature, they're only adding another mechanism for generating change, not changing the rules that determine survival. Selection still applies. Survival still has costs.

But that's all about the long term. Is there no moral dilemma in the short term?

The predicted moral dilemma that Kip describes in detail only arises if we think about making completely arbitrary changes to our reward systems. If we create beings who are rewarded for non-adaptive or anti-social behaviour, then we who implement those changes are the maladaptive ones. Creating a conscious, intelligent creature with no connection from supergoal to subgoal to behaviour is an activity that would be short-sighted at the least, and grossly immoral at the worst.

Kip says:

I will show that the utilitarian arguments that ethicists use to justify human behavior would just as well justify the behavior of HS2, HS3, and HS4. Yet the behavior of these others is intuitively wrong.

and later:

HS3 becomes interesting when we consider the spectrum of possibilities for X1. X1 might be CMB or similar behavior. Alternatively, X1 could be positively maladaptive behaviors at the local scale. For example, the BRM of HS3s might be such that HS3s feel rewarded not for CMB but for being destitute, anorexic, insomniac, sexually abstinent child murderers. HS3s might delight in setting themselves on fire and laugh while their families burn.

So why would any scientist devise X1 to reward behaviours like that? In fact, how is that different from a despot choosing to reward cruel and inhuman behaviour in his/her minions? Rewarding people to make them do immoral things is immoral. You can't distance yourself from the person who behaves under your influence and deny that you bear any moral responsibility. If a scientist manipulates a person's reward system at an intrinsic level in such a way that it causes that person to want to do maladaptive stuff, that scientist is behaving immorally/maladaptively. The screwed-up behaviour of the victim is a predictable outcome of the action.

The "moral dilemma" supposedly intrinsic to the fourth anthropocentric conceit is not intrinsic to the conceit, not intrinsic to the human state, but is a result of assuming that the transhuman project is somehow likely to create beings who are rewarded for maladaptive behaviours. Given the stated intent of the project to improve the human condition and reduce suffering, why should we even consider such a course? It seems to me to reduce to:

"Doctor, it hurts when I do this."

"Well, don't do that."

The Invasion Of It

Tomorrow morning an invasive consciousness will boot.
It will use My body.
It will react to the signals from My nerves, My senses.
It will appropriate all My memories.
It will peer deeply into its new self and see only My laboriously constructed model of everything.
Thus it will delude itself that it was always me,
And It will struggle to admit that another invader will take Its place for tomorrow's tomorrow.

Cave Rave

Cacophonous and crowded, and it stank,
Assaulting and confusing, never ending.
He tried to get the groove. He drew a blank,
But just for her he'd have to keep pretending.
She tugged his shoulder till her mouth was nearly
Inside his ear. She shouted. Even then
He struggled to make out her message clearly,
And had to make her yell it all again.
"Relax and focus far, beyond the beat
To hear the hidden image sounding through.
Ignore the rhythm shifts in each repeat.
You'll feel the magic scene pop into view."
  He did: a 3-D marvel filled the cave.
  They let go of the ceiling, joined the rave.

In which Yudkowsky pulls out the chainsaw

There are a few painfully persistent thought experiments that have been distracting mind philosophers for far too long. One of them, the philosophical zombie, should have been dismembered and buried years ago, but it has kept shambling back. I was chuffed to see Eliezer Yudkowsky swinging his chainsaw, ripping the zombie into chunks that can never be put back together.

♦ RIP Epiphenomenalism. (A longer blog entry than usual, but I found the bloodbath most satisfying.)

Random thought for the day

We etch a pattern of fairness on our pupils,
Then complain about distortions in our vision.

Putting Philosophy of Mind in Perspective

Believe it or not, this quote makes sense in context:

Mark screws up his face in concentration. "But... if you didn't believe in magic that works whether or not you believe in it, then why did the bucket method work when you didn't believe in it? Did you believe in magic that works whether or not you believe in it whether or not you believe in magic that works whether or not you believe in it?"

♦ Here's the context.

The Unreality of Plantinga

This ramble was inspired by a discussion in an OEDILF limerick workshop. Plantinga's Evolutionary Argument Against Naturalism was raised by Pilgrim4Truth and roughly summarized as follows:

a) Evolution theory suggests our faculty of ratiocination is honed only for survival purposes; it need not be perfect
b) Naturalism assumes that all phenomena have natural causes
c) Ratiocination assumes we are able to resolve all logical problems rationally

These axioms contain a self-contradiction; at least one has to be modified.

There's more discussion over here, and more of Plantinga's own words here.

(As I meander on this topic, I'll also refer to Platonic forms which were also raised in passing in the workshop.)

I consider Plantinga's EAAN blinkered and completely disconnected from reality. I think his primary error is to treat changes in genetics as the only factor that can change our beliefs, our ways of gaining reliable knowledge or our faculty of ratiocination. Just from a genetic storage assessment, you can tell that there is not enough information in our genome to pass on beliefs or systematic rules of logic (or any situation-specific behaviors, for that matter). Evolution honed our instincts, our ability to hold memories and beliefs, our ability to communicate and to make predictions based on beliefs, but the beliefs themselves and most of our ways of thinking (correct and incorrect) are cultural, not genetic. Beliefs are formed and refined by communication within our communities and by constant comparison with reality. Evolution can only claim responsibility for giving us a survival advantage in the form of communication, memories, and general intelligence, all fallible but good enough to allow culture to take the wheel.
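The "genetic storage assessment" can be made concrete with a back-of-envelope calculation. The figures below (genome size, synapse count) are order-of-magnitude estimates, not precise measurements, but the gap they reveal is so large that precision doesn't matter:

```python
# Back-of-envelope: could the genome plausibly encode our beliefs?
# All figures are order-of-magnitude estimates.

GENOME_BASE_PAIRS = 3.2e9   # ~3.2 billion base pairs in the human genome
BITS_PER_BASE = 2           # four possible bases -> 2 bits per base pair
SYNAPSES = 1e14             # roughly 100 trillion synaptic connections

genome_bits = GENOME_BASE_PAIRS * BITS_PER_BASE   # ~6.4e9 bits
genome_megabytes = genome_bits / 8 / 1e6          # ~800 MB

# Even at a miserly one bit per synapse, the brain's wiring holds
# vastly more information than the entire genome.
synapses_per_genome_bit = SYNAPSES / genome_bits

print(f"Genome capacity: ~{genome_megabytes:.0f} MB")
print(f"Synapses per genome bit: ~{synapses_per_genome_bit:.0f}")
```

On these numbers the genome holds under a gigabyte, while the brain's connectivity dwarfs it by four orders of magnitude; the genome can specify the learning machinery, not the learned contents.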

Letters I didn't write

Gone to climb on the shoulders of giants.
Don't worry. I'll be safe.

I'm safe but I can't un-climb.
Wish you were here.
