Immediately after the publication of this article, I received an invitation to join the scientific board of the Lifeboat Foundation. The invitation was informal in style, as if written by an old friend familiar with my day-to-day research, calling me by my first name, Raphaël (with the accent, well done!), and noting that “We just had Ray Kurzweil and Nobel Laureates Sir Clive W.J. Granger, Eric S. Maskin, and Wole Soyinka join our Board so you would have some good company!”.
I suspect that anyone publishing a scientific paper that receives some media coverage in the area of nanotechnology may receive similar invitations. And some may have joined such good company without noticing the problematic nature of the foundation's claims.
The Lifeboat Foundation promotes a set of ideas which can be summarized as follows:
1) a technological “singularity” is coming and humanity is facing existential risks, e.g. risks that non-friendly superintelligence or misuse of molecular nanotechnology “annihilate Earth-originating intelligent life or permanently and drastically curtail its potential”;
2) to avoid these risks, the Foundation has assembled “some of the best minds on the planet working on programs to enable our survival”, so please buy our books and donate to fund our research.
Here is Richard Jones on the “singularity” in a post entitled “Our faith in technology”:
“Belief in the singularity, then, as well as being a symptom of a particular moment of rapid technological change, should perhaps be placed in that tradition of millennial, utopian thinking that’s been a recurring feature in Western thought for many centuries.”