FYI: The Oxford Future of Humanity Institute is holding a conference on global catastrophic risks on July 17-20, 2008, at Oxford (in the UK).
I'll be there, as will Robin Hanson and Nick Bostrom.
Deadline for registration is May 26th, 2008. Registration is £60.
While on the topic of conferences that might interest this crowd, Aging 2008 will take place on June 27th in Los Angeles at UCLA.
"Leading scientists and thinkers in stem cell research and regenerative medicine will gather in Los Angeles at UCLA for Aging 2008 to explain how their work can combat human aging, and to discuss the sociological implications of developing rejuvenation therapies. Aging 2008 is free, with advance registration required."
More details here:
For what it's worth, I'm posting my thoughts about the future of mankind on Ben Goertzel's AGIRI forum tomorrow. The content may be of interest to the FHI.
Does sound good, and not far to go for me. I'll see if I can fish out £60 and find someone who wants to go to a conference on how wrong it could all go. I fear task one might be the easier of the two.
My thoughts on the future of mankind:
1) Near-term primary goal: maximize productive person/yrs.

2) Rearrange capital flows to prevent productive person/yrs from being lost to obvious causes (i.e. UN Millennium Development Goals and invoking sin-taxes), with an effort to offer pride-saving win-win situations. Re-educate said workforce. Determine optimum resource allocation towards civilization redundancy efforts based upon negative-externality-accounting revised (higher) economic growth projections. Isolate states exporting anarchy or not attempting to participate in the globalized workforce. Begin measuring the purchasing-parity-adjusted annual cost to provide a Guaranteed Annual Income (GAI) in various nations.

3) Brainstorming of industries required to maximize longevity, and to handle the technologies and wield the social systems essential for safely transitioning first to a medical/health society, then to a leisure society.

4) Begin reworking bilateral and global trade agreements to reward actors who subsequently trend towards #3. Begin building a multilateral GAI fund to reward actors who initiate #5.

5) Mass education of society towards health/medical and other #3 sectors. Begin dispensing GAI to the poor who are trending towards education/employment relevant to #3 sectors.

6) Conversion of non-essential workforces to health/medical R+D and other #3 sectors. Hopefully the education GAI load will fall and the fund can focus upon growing to encompass a larger GAI population base in anticipation of the ensuing leisure society.

7) Climax of the medical/health R+D workforce.

8) Mature medical ethics needed. Mature medical AI safeguards needed. Education in all medical-AI-relevant sectors. Begin measuring AI medical R+D advances vs. human-researcher medical R+D advances.

9) Point of inflection where it becomes vastly more efficient to develop AI medical R+D systems than to educate researchers (or not, if something like real-time human trials bottlenecks software R+D). The subsequent surplus medical/health labour force necessitates a global GAI by now at the latest. AI medical R+D systems become a critical societal infrastructure, and human progress in the near term will be limited by the efficacy and safety (i.e. from computer viruses) of these programs.

10) Leisure society begins. Diminishing returns from additional resource allocations towards AI medical R+D. Maximum rate of annual longevity gains.

11) Intensive study of mental health problems in preparation for #13. Brainstorming of the surveillance infrastructures needed to wield engineering technologies as powerful as Drexlerian nanotechnology. Living spaces will resemble the nested security protocols of a modern microbiology lab. Potentially powerful occupations and consumer goods will require increased surveillance. Brainstorming metrics to determine the most responsible handlers of a #13 technology (I suggest something like the CDI Index as a ranking).

12) Design blueprints for surveillance tools like quantum-key encryption and various sensors must be ready either before powerful engineering technologies are developed, or be among the first products created using the powerful technology. To maintain security, for some applications it may be necessary to engineer entire cities from scratch. Sensors should be designed to maximize human privacy rights. There is a heightened risk of WWIII from this period on, until just after the technology is developed.

13) A powerful engineering technology is developed (or not). The risk of global tyranny is the highest since 1940. Civilization-wide surveillance is achieved to ensure no WMDs are unleashed and no dangerous technological experiments are run. A technology like the ability to cheaply manufacture precision diamond products could unleash many sci-fi-ish applications, including interstellar space travel and the hardware required for recursively improving AI software (AGI). This technology would signal the end of capitalism and patent regimes. A protocol for encountering technologically inferior ETs might be required. Safe AGI/AI software programs would be needed before the desired humane applications should be used. Need mature sciences of psychology and psychiatry to assist the benevolent administration of this technology. Basic human rights, goods and services should be administered to all where tyrannical regimes don't possess military parity.

14) Weaponry, surveillance, communications and spacecraft developed to expand the outer perimeter of surveillance beyond the Solar System. Twin objectives: to ensure no WMDs, such as rogue AGI/AI programs, super-high-energy physics experiments, kinetic-impactor meteors, etc., are created; and to keep open the possibility of harvesting the resources required to harness the most powerful energy sources in the universe. The latter objective may require the development of physics experiments and/or AGI that conflicts with the former objective. The latter objective will require a GUT/TOE. Developing a GUT may require the construction of a physics experimental apparatus that should be safe to use. Need a protocol for dealing with malevolent ETs at approximate technological parity with humanity. Need a protocol to accelerate the development of dangerous technologies like AGI and Time Machines if the risks from these are deemed less than the threat from aliens; there are many game-theoretic encounter scenarios to consider. This protocol may be analogous to dealing with malevolent/inept conscious or software actors that escape the WMD surveillance perimeter.

16) If mapping the energy stores of the universe is itself safe/sustainable, or if using the technologies needed to do so is safe, begin expanding a universe energy-survey perimeter, treating those who attempt to poison future energy resources as pirates.

17) If actually harnessing massive energy resources, or using the technologies required to do so, is dangerous, a morality will need to be defined that determines a tradeoff of person/yrs lost vs. potential energy resources lost. The potential to unleash Hell Worlds, Heavens and permanent "in-betweens" is of prime consideration. Assuming harnessing massive energy resources is safe (doesn't end the local universe) and holds a negligible risk of increasing the odds of a Hell World or "in-betweens", I suggest at this point invoking a Utilitarian system like Mark Walker's "Angelic Hierarchy", whereby from this point on conscious actors begin amassing "survival credits". As safe energy resources dry up towards the latter part of a closed universe (or when atoms decay), trillions of years from now, actors who don't act to maximize this dwindling resource base will be killed to free up the resources required to later mine potentially uncertain/dangerous massive energy resources. The same applies if the risk of unleashing Hell Worlds or destroying reality is deemed too high to pursue mining the energy resource: a finite resource base suggests that those hundred-trillion-year-old actors with high survival-credit totals live closer to the end of the universe, as long as enforcing such a morality is itself not energy intensive. A Tiplerian Time Machine may be the lever here; using it or not might determine the net remaining harvestable energy resources and the quality-of-living hazard level in taking different courses of action.

18a) An indefinite Hell World. 18b) An indefinite Heaven World. 18c) End of the universe for conscious actors, possibly earlier than necessary because of a decision that fails to harness a dangerous energy source. If enforcing a "survival credit" administrative regime is energy intensive, the moral system will be abandoned at some point and society might degenerate into cannibalism.
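Step 2's proposal to measure the purchasing-parity-adjusted annual cost of a GAI in various nations can at least be sketched as back-of-the-envelope arithmetic. This is a minimal illustration only; the function name, the example nation and every figure below are invented placeholders, not real data:

```python
# Toy estimate of the annual cost of a Guaranteed Annual Income (GAI),
# adjusted for purchasing power parity (PPP). All numbers are invented
# placeholders for illustration.

def gai_annual_cost(population, recipients_share, gai_local, ppp_factor):
    """Return the annual GAI cost in PPP-adjusted (international) dollars.

    population       -- total population of the nation
    recipients_share -- fraction of the population receiving the GAI
    gai_local        -- yearly payment per recipient, in local currency
    ppp_factor       -- local-currency units per international dollar
    """
    return population * recipients_share * gai_local / ppp_factor

# Hypothetical nation: 50M people, 20% of them recipients, a payment of
# 12,000 local units per year, and 4 local units per international dollar.
cost = gai_annual_cost(50_000_000, 0.20, 12_000, 4.0)
print(f"PPP-adjusted annual GAI cost: ${cost / 1e9:.0f} billion")
```

Comparing this figure across nations (with real population, payment and PPP inputs) is what step 2's measurement program would amount to.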
Moderate danger now: a rather unknown kind of phenomenon, e.g. traveling black holes in space, which could make a bang like the impact of a big asteroid. Great danger as a slowly accelerating process: changes to the ecosystem which leave the atmosphere lacking a secure amount of oxygen after 10,000 years.
Greatest danger as a quickly accelerating development: Artificial Intelligence in war. The World-model http://www.singinst.org/ourresearch/publications/GISAI/mind/worldmodel.html of General Intelligence and Seed AI is the reason I'm posting this. I'd like to ask whether Modelearth - http://www.modelearth.org/ - could be something like the World-model. In any case, I'm a fan of what Hearthstone, the creator of Modelearth, has published. This isn't real science, so it's rather off-topic for a conference like Global Catastrophic Risks - http://www.global-catastrophic-risks.com/ -. Still, before I make any estimates in terms of computer-generated models, I'd like to ask: could this become a risk in approximately ten years, due to progress in computer technologies?
Basically, three very different scenarios for a model can be outlined: 1st "best-case", 2nd "mean-case", and 3rd "worst-case". Anyone who reads http://www.modelearth.org/artics.html will find the elementary concept of the scenario on which Modelearth is based. Thus, Modelearth is definitely the Best-Case Scenario in the form of web pages.
Air travel as usual, including the infrastructure that supports jet-engine airplanes, follows the Mean-Case Scenario. But since the engines of normal airplanes run on non-renewable energy resources, the tendency is towards the Worst-Case Scenario. The current practice of long cruises in the Sondola Airship http://shintoist.com doesn't support Modelearth in reality, but its tendency runs in the opposite direction from air travel as usual. It's the only alternative in progress towards a sustainable ecology.
Actually, there are many countries with ever-growing populations. Simply put: the more people, the more food, the more transportation. Not everyone is a farmer, so either we must travel to the food or the food must be transported to us. Industrialized countries cannot produce technologies that circumvent the transportation of goods over long distances without great changes in society. The economic change may be a reduced value of money. Thus, when consumers can't buy food, they die of malnutrition. Is this economic "solution" to overpopulation good for Modelearth? Well, most people don't think so!
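The population-food-transport relationship above can be turned into exactly the kind of toy best/mean/worst-case model mentioned earlier. A minimal sketch follows; all of the growth rates are invented assumptions for illustration, not Modelearth data:

```python
# Toy three-scenario projection: food supply per person over time.
# All growth rates are invented assumptions, not real data.

def project(pop_growth, supply_growth, years, pop=1.0, supply=1.0):
    """Compound population and food supply annually for `years` years
    and return the final supply-per-person ratio (1.0 = today's level)."""
    for _ in range(years):
        pop *= 1 + pop_growth
        supply *= 1 + supply_growth
    return supply / pop

# (annual population growth, annual food-supply growth) per scenario
scenarios = {
    "best":  (0.005, 0.015),  # slowing population, improving supply
    "mean":  (0.010, 0.010),  # supply just keeps pace with people
    "worst": (0.015, 0.000),  # growth continues, supply stagnates
}
for name, (pg, sg) in scenarios.items():
    ratio = project(pg, sg, years=50)
    print(f"{name:>5}-case: supply per person after 50 yr = {ratio:.2f}")
```

In the worst case the ratio falls below 1.0, which is the "consumers can't buy food" outcome described above; the mean case holds steady at 1.0 by construction.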
Overpopulation is the keyword which rational thinkers must defend against the agenda of some influential political leaders. Certain lawmakers restrict various kinds of marriage and keep control over the way in which humans mate and multiply. Thus, it's bad for that meme-shaping business to address overpopulation as the very cause of many dangers, with World War Three as a real existential risk. Also see "Extinction Risk: Demonstration Necessary?" - http://www.acceleratingfuture.com/michael/blog/?p=827 -. No thanks, an interactive model can be scary enough!
Cross-posted: June 10th, 2008 http://lifeboat.com/blog/?p=133#comments
What's up with some of these weird-ass comments?
MGR: that already happened. Ben Jones: you just heard about this now? Phillip and Robomoon: ever considered getting your own blog?
Phillip and Robomoon deserve reputation points for putting this much thought-work into the topics of existential risk and optimizing our future reality (Phillip in particular).