Sunday, April 17, 2011

A Really Elegant Apologia For Transhumanism


"A Really Elegant Apologia For Transhumanism"

(c) 2007, 2011 by Jordan S. Bassior



It's here:

I advise anyone who has ever thought about this issue, from any perspective, to read the original post (and the comments are also fascinating), but to summarize:

The essential argument being made is that "transhumanism" is simply "humanism" (or humane-ness) extended to deal with new technological possibilities. One would, all other things being equal, rather save or improve a life than take or degrade it? In that case, it is moral to apply new technologies (again, all other things being equal) to save and improve life as much as possible. There is no difference in essence between carrying someone out of the path of a train, giving them penicillin to treat an infection, employing gene therapy to cure a hereditary disease, or employing nanotechnology to render someone immortal. There is no set point of life or happiness at which the value of further life or happiness switches from positive to negative.

I have always believed this, intuitively: Mr. Yudkowsky has explained it logically, and put it better than I could. The core of his argument is outlined here:

Transhumanism is simpler - requires fewer bits to specify - because it has no special cases. If you believe professional bioethicists (people who get paid to explain ethical judgments) then the rule “Life is good, death is bad; health is good, sickness is bad” holds only until some critical age, and then flips polarity. Why should it flip? Why not just keep on with life-is-good?

...

As far as a transhumanist is concerned, if you see someone in danger of dying, you should save them; if you can improve someone’s health, you should. There, you’re done. No special cases. You don’t have to ask anyone’s age.

You also don’t ask whether the remedy will involve only “primitive” technologies (like a stretcher to lift the six-year-old off the railroad tracks); or technologies invented less than a hundred years ago (like penicillin) which nonetheless seem ordinary because they were around when you were a kid; or technologies that seem scary and sexy and futuristic (like gene therapy) because they were invented after you turned 18; or technologies that seem absurd and implausible and sacrilegious (like nanotech) because they haven’t been invented yet. Your ethical dilemma report form doesn’t have a line where you write down the invention year of the technology. Can you save lives? Yes? Okay, go ahead. There, you’re done.

and he deals elegantly with the obvious objection most people raise:

But - you ask - where does it end? It may seem well and good to talk about extending life and health out to 150 years - but what about 200 years, or 300 years, or 500 years, or more? What about when - in the course of properly integrating all these new life experiences and expanding one’s mind accordingly over time - the equivalent of IQ must go to 140, or 180, or beyond human ranges?

Where does it end? It doesn’t. Why should it? Life is good, health is good, beauty and happiness and fun and laughter and challenge and learning are good. This does not change for arbitrarily large amounts of life and beauty. If there were an upper bound, it would be a special case, and that would be inelegant.

and he sums it up beautifully:

So that is “transhumanism” - loving life without special exceptions and without upper bound.

Which was always my sense of things. If it's OK for you to live at age 25, then it's OK for you to live at 50, or 100, or 200, or 400, or, for that matter, from the Big Bang until the heat-death of the Universe. Life is of value, and it does not cease to be of value when it is extended. If it is good to have the mind of a dog, then it is better to have the mind of an ape, and still better to have the mind of an average human, or an average genius, or an average supergenius, or a Transcendent being. Intelligence is of value, and its value does not turn negative above a certain level.

As Yudkowsky points out, there may be physical limits to life extension or intellectual augmentation. If so, that is simply the nature of the Universe. But if we can extend a person's life or increase his intelligence, then (with the person's consent, of course) all other things being equal it is better to do it than not to.

It's simple humanity to value transhumanity.

END.

2 comments:

  1. Isn't immortality part of the reason society is so stagnant in The City and the Stars?

  2. Yes, and one of the drawbacks of a society of immortals probably would be that it would change more slowly. On the other hand, its citizens could afford slower change (since they would be immortal), and if they were faced with a crisis, they would have the tremendous advantage that individuals of high skill and intelligence, once they appeared in the first place, would tend to stick around forever. It balances out -- and wouldn't you want to keep on living until you wanted to die, rather than dying randomly of inevitable physical decay?
