Embracing Death


I wrote this presentation for a weekend futurist getaway. With so much focus currently on overcoming death, I thought I’d take the naturalist perspective for a moment and look at the evolutionary advantages of death, while using machine learning models to support my claims.

I argue that death is necessary, and that a futuristic version of death would be to die a little each day (and be born a little each day). Over a period of time all of you would die, and a new you (including massive brain restructuring and new limbs) would come into being. This is less resource-intensive than the natural ‘starting from scratch’ system we currently have, while escaping the other option of simply living forever and gaining complexity into eternity. It avoids overtraining, and it enables evolution within a life, instead of through the creation of new life.
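
To make the contrast concrete, the ‘die a little each day’ scheme can be sketched as gradual component replacement in a toy model. This is a minimal sketch; the function names and the 1%-per-day renewal rate are my own illustrative assumptions, not something from the talk:

```python
import random

def daily_renewal(params, fraction=0.01):
    """Replace a small random fraction of 'params' with fresh values.

    After roughly 1/fraction days, nearly every component has been
    reborn, yet the system stayed functional at every point in between.
    """
    reborn = params[:]
    for i in range(len(reborn)):
        if random.random() < fraction:
            reborn[i] = random.gauss(0, 1)  # a freshly "born" component
    return reborn

def from_scratch(size):
    """The current natural system: discard everything, start over."""
    return [random.gauss(0, 1) for _ in range(size)]

random.seed(0)
being = [0.0] * 100            # a fully "trained" individual
for day in range(500):         # 500 days at 1% renewal per day
    being = daily_renewal(being)
renewed = sum(1 for v in being if v != 0.0)
print(f"{renewed}/100 components renewed")  # almost all of the old self is gone
```

The gradual path never pays the full cost of `from_scratch`, and there is no single moment of total loss; the old self fades out as the new one fades in.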

There were two major rebuttals to my argument after I gave the talk, and I discuss them here:

  • Surely it’s best to store all the information you’ve ever experienced instead of removing it from your being? More information is better, yes?

I disagree: better information is better. Too many regulations slow down innovation. There comes a point where information is no longer relevant; the individual doesn’t need it, and it’s a burden to search through and carry it with us. It’s fine if the collective consciousness stores an archived memory of it (historical documents in a library or on the internet) that we can refer to in deep searches, but to be maximally dynamic and agile, both mentally and physically, we should put a limit on our own information storage. Garbage collection (DEATH) is important. Memory capacity is finite, and select memories, from all of history and from now, should be kept and weighted according to their importance and their independence from time. The game of Go is deeply complex despite having only a handful of rules; if you can distill life to deep underlying principles that are timeless, you can afford a shorter memory. Processing power (focus of attention) is the key to being able to do this.
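
The weighting idea can be sketched as a toy garbage collector over memories. The `Memory` fields and the importance-times-timelessness score are my own illustrative assumptions, not a claim about how minds actually work:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    content: str
    importance: float   # how much the memory matters
    timeless: float     # 0 = tied to its moment, 1 = a timeless principle

def prune(memories, capacity):
    """Garbage-collect down to 'capacity' memories.

    Score each memory by importance weighted by independence from
    time, so timeless principles outlive situational detail.
    """
    scored = sorted(memories, key=lambda m: m.importance * m.timeless,
                    reverse=True)
    return scored[:capacity]

store = [
    Memory("what I ate last Tuesday", importance=0.2, timeless=0.0),
    Memory("fire burns", importance=0.9, timeless=1.0),
    Memory("a friend's old phone number", importance=0.6, timeless=0.3),
    Memory("kindness compounds", importance=0.8, timeless=0.9),
]
kept = prune(store, capacity=2)
print([m.content for m in kept])  # → ['fire burns', 'kindness compounds']
```

With a hard capacity of two, only the timeless principles survive; the situational details are the ones that die.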

  • How do I ensure goodness in my future self or prevent the evil AI? I want to put in safeguards so that my future self is moral.

I disagree, and I give up control to my future self: they can make moral decisions at the time. In the past it was the right thing to kill your own meat in order to survive, but in the future this may be seen as evil. ‘Goodness’ is subjective and must change with the environment one exists in.
