The running 11-year average of global temperature has not flattened since 1990; it has continued upward at almost the same pace, with only a moderate decrease in slope since the outlier year 1998. The global mean temperature for 2000-2010 is significantly higher than for 1990-2000.
That is not "flat since the 90s". The only way to get "flat since the 90s" is to compare 1998 to various more recent years, noting that it was nearly as hot as 2005, 2010, etc., and slightly hotter than other years in the 2000s, as if one year matters as much as ten in a noisy data set.
If he had said "flat since 1998", that might be technically true in a way, but it's a little like saying the stock market has been flat since 2007.
That doesn't even consider using climate knowledge to adjust for some of the variance, for instance that El Niño years are hotter, and that 1998 saw the biggest El Niño on record.
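Here's a minimal sketch of the one-year-versus-ten point, using purely synthetic numbers (the trend, noise level, and 1998 spike below are invented for illustration, not real temperature data):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1980, 2011)

# Synthetic anomalies: a steady warming trend plus noise, with an
# artificial El Nino spike added in 1998 (all numbers invented).
anomaly = 0.02 * (years - 1980) + rng.normal(0.0, 0.1, len(years))
anomaly[years == 1998] += 0.25

# An 11-year running mean barely notices the single outlier year...
running = np.convolve(anomaly, np.ones(11) / 11, mode="valid")
print("first vs. last 11-year window:", running[0], running[-1])

# ...and the decade means still show a clear rise.
print("1990-1999 mean:", anomaly[(years >= 1990) & (years <= 1999)].mean())
print("2000-2009 mean:", anomaly[(years >= 2000) & (years <= 2009)].mean())
```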
Don't worry, I did just reread it, and it is just as I remembered: a lot of applause lights for the crowd that believes the current state of climate science is driven by funding pressure from the US government's DoE. His "argument" is based almost exclusively on the tone of popular texts, and on anecdotal evidence that Joe Romm was an asshole pushing bad policy at DoE during the Clinton administration. He ignores what happened during the eight years of a GWB administration that was actively hostile to the people Romm favored.
Temperatures are described as "flat since the 90s", which rests on a massive misreading of the data: it gives one exceptionally hot year (1998) the same evidentiary weight as the eight of the ten hottest years on record that have occurred since then. Conveniently, when he wants to spread FUD about the current state of climate science, he will talk about natural variability and uncertainty in the climate. OTOH, he judges the shape of the data since the 1990s in a way that completely ignores that variability and uncertainty.
"Bollocks" is spot on, and I absolutely treat his writings on global warming as evidence against his other opinions. That said, I am hardly a fan, and consider his argumentation logically weak, full of applause lights and other confusing nonsense across the board. Generally in agreement with lukeprog.
I've read as much of him as I have because he is from a vastly different tribe and is willing to express taboo opinions, which include some nuggets of truth or interesting mistakes worth thinking about.
Taken.
As with last year, I would prefer different wording on the P(religion) question. "More or less" is so vague as to allow a lot of very different answers depending on how I interpret it, and I didn't even properly consider the "revealed" distinction noted in a comment here.
I appreciate the update on the singularity estimate for those of us whose P(singularity) is between epsilon and 50+epsilon.
I still wonder if we can tease out the differences between current logistical/political problems and the actual effectiveness of the science on the cryonics question. Once again I gave an extremely low probability even though I would give a reasonable (10-30%) probability that the science itself is sound or will be at some point in the near future. Or perhaps it is your intention to let a segment of the population here fall into a conjunctiveness trap?
On the CFAR migraine treatment question I thought as follows:
Gur pbeerpg nafjre jbhyq qrcraq ba jung lbh xarj nobhg gur crefba. Sbe nalbar noyr gb cebprff naq haqrefgnaq gur hgvyvgl genqrbssf naq jub jnf fhssvpvragyl ybj vapbzr gung O pbhyq pbaprvinoyl or n orggre pubvpr, V jbhyq tvir gurz obgu bcgvbaf N naq O naq rkcynva gur genqrbss pnershyyl, be nggrzcg gb nfpregnva gurve $inyhr bs 1 srjre zvtenvar ol bgure dhrfgvbaf naq gura znxr gur pbeerpg erpbzzraqngvba onfrq ba gung.
Gjb guvatf ner dhvgr pyrne gb zr:
1: pubbfvat gur zbfg rssvpvrag gerngzrag va grezf bs zvtenvarf erzbirq cre qbyyne vf irel pyrneyl gur jebat nafjre.
2: sbe >90% bs crbcyr va gur evpu jbeyq, gur pbeerpg nafjre fubhyq or N.
I am a massive N on the Myers-Briggs astrology test; yes, I scored 96% for openness on the Big Five.
I suspect our responses to questions like "I am an original thinker" have a lot to do with our social context. Right now, the people I run into day to day are fairly representative of the general population, with little skew toward the intellectual or original other than "people who hold down decent jobs, or did so until they retired". It doesn't take a great lack of humility to realize that, compared to most of these people, I am a brilliant and original thinker.
OTOH, it's not like I'm Feynman or something. If I were working somewhere that filtered strongly for intelligence, like a hot tech startup or academia, and had done so for long enough, I would probably feel relatively average and be very focused on how to bridge the gap between me and those a level or two above, vs. having only a dim awareness of the vast gap in intellect and originality between my associates and the typical person.
You say that "There will never be any such thing", but your reasons explain only why the problem is hard, and much harder than one might think at first, not why it is impossible. Surely the kind of tech needed for self-driving cars, perhaps an order of magnitude more complicated, would make it possible to have safe, convenient, cheap flying cars or their functional equivalent.
At worst, the reasons you state would make it AI-complete, and even that seems unreasonably pessimistic.
It's only a crazy thing to do if you are pretty sure you will need/want the insurance for the rest of your life. If you aren't sure, then you are paying a bunch of your investment money for insurance you might decide you don't need (and in fact, you definitely won't need it financially once you have self-funded).
If you are convinced that cryonics is a good investment, and don't have the money to fund it out of current capital, then that seems like a good reason to buy some kind of life insurance, and a universal life policy is probably one of the better ways to do it.
It's probably a bit more expensive than buying term life and investing the difference[1] (see the toy comparison after the footnote), if you can and will invest reasonably well. Investing isn't actually all that complicated, but it is just complicated enough to be vulnerable to akrasia problems. Someone who geeks out on financial decisions and doesn't find them uncomfortable or boring work may be better off doing it themselves. Others should go for the UL policy.
If you have the money to fund it, some kind of trust is likely to be a much cheaper option for legal protection than an insurance policy.
[1] There are some tax advantages to investing within the UL that can make it less expensive than term+invest for those who have already maxed out their tax-deferred savings in 401(k)/IRA/etc.
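As a toy version of that term-plus-invest comparison (every number below is a made-up assumption: premiums, return, and horizon, not a quote for any real policy):

```python
# Hypothetical: buy term life and invest the premium difference,
# instead of a universal life policy. All figures invented for illustration.
ul_annual_premium = 1200.0    # assumed UL premium
term_annual_premium = 300.0   # assumed term premium, same death benefit
annual_return = 0.05          # assumed after-tax investment return
years = 30

side_fund = 0.0
for _ in range(years):
    side_fund = (side_fund + ul_annual_premium - term_annual_premium) * (1 + annual_return)

print(f"Side fund after {years} years: ${side_fund:,.0f}")
# Whether this beats the UL policy's cash value depends on fees, the tax
# treatment mentioned in [1], and whether you actually keep investing
# the difference.
```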
" It is the view that if the only ways Z and A differ is that Z has a higher population, and lower quality of life, then Z is preferable to A. This may not be how Parfit is correctly interpreted, but it is a common enough interpretation that I think it needs to be attacked."
Generally it's a good idea to think twice and reread before assuming that a published and frequently cited paper is saying something so obviously stupid.
Your edit doesn't help much at all. You talk about what others "seem to claim", but the argument you have claimed Parfit is making is so obviously nonsensical that it would lead me to wonder why anyone cites his paper at all, or why any philosophers or mathematicians have bothered to refute or support its conclusions with more than a passing snark. A quick Google search on the term "Repugnant Conclusion" leads to a Wikipedia page that is far more informative than anything you have written here.
Not even close. The primary content of the OP is based on a straw man due to a massive misunderstanding of the mathematical arguments about the Repugnant Conclusion.
The conclusion of what Parfit actually demonstrated goes something more like this:
For any coherent mathematical definition of utility such that there is some additive function that allows you to sum the utility of many people to determine U(population), the following paradox exists:
Given any world A with positive utility, there exists at least one other world B, with more people and less average utility per person, which your utility system will judge to be better, i.e. U(B) > U(A).
Parfit does not conclude that you necessarily reach world B by maximizing reproduction from world A, nor that every world with more people and less average utility is better. Only worlds with higher total utility are judged "better". This of course implies either more resources, or more utility-efficient use of resources, in the "better" world.
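To make the algebra concrete, here's a toy calculation (the populations and utilities are arbitrary illustrative numbers):

```python
# World A: fewer people, higher average utility.
n_a, avg_a = 1_000, 10.0
# World B: far more people, much lower (but still positive) average utility.
n_b, avg_b = 200_000, 0.1

u_a = n_a * avg_a  # total utility of A = 10,000
u_b = n_b * avg_b  # total utility of B = 20,000

# Any additive total-utility rule judges B better than A,
# even though each life in B is only "just barely worth living".
assert u_b > u_a
print(u_a, u_b)
```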
The cable channel analogy would be to say "As long as every extra cable channel I add provides at least some constant positive utility epsilon>0, even if it is vanishingly small, there is some number of cable channels I can put into your feed that will make it worth $100 to you." Is this really so hard to accept? It seems obviously true even if irrelevant to real life where most of us would have diminishing marginal utility of cable channels.
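The arithmetic behind the analogy is trivial; with an arbitrary example value of epsilon:

```python
import math

epsilon = 0.001  # assumed dollars of utility per extra channel
target = 100.0   # the $100 the feed is supposed to be worth

# Enough channels at epsilon each always crosses the $100 line.
print(math.ceil(target / epsilon))  # 100000
```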
Parfit's point is that it is hard for the human brain to accept the possibility that some world with uncounted numbers of people with lives just barely worth living could possibly be better than any world with a bunch of very happy high utility people (he can't accept it himself), even though any algebraically coherent system of utility will lead to that very conclusion.
John Maxwell's comment gets to the heart of the issue, the term "just barely worth living". Philosophy always struggles where math meets natural language, and this is a classic example.
The phrase "just barely worth living" conjures up an image of a life that is barely better than the kind of never-ending torture/loneliness scenario where we might consider encouraging suicide.
But the taboos against suicide are strong. Even putting aside taboos, suicides cause large amounts of collateral damage. Most obviously, anyone who has emotional or family connections to a suicide will suffer. Even people who are very isolated will have some connection, and their suicide could trigger grief or depression in anyone who encounters them or their story. There are also some very scary studies showing suicide and accident rates going up in the aftermath of publicized suicides or accidents, due to lemming-like social programming in humans.
So it is quite rational for most people not to consider suicide until their personal utility is highly negative, if they care at all about the people or world around them. For most of us, a life just above the suicide threshold would be a negative-utility life, and a fairly large negative at that.
A life with utility of positive epsilon is not a life of sadness or pain, but a life that we would just barely choose to live, as a disembodied soul given the choice between life X and non-existence. Such a life, IMO, will be comfortably clear of the suicide threshold and would represent an improvement in the world. Why wouldn't it? It is, by definition, a life that someone would choose to have rather than not have! How could that not improve the world?
Given this interpretation of "just barely worth living", I accept the so-called Repugnant Conclusion, and go happily on my way calculating utility functions.
RC is just the mirror image of the tortured person versus 3^^^^3 persons with dust specks in their eyes debate.
Tabooing "life just barely worth living", and then shutting up and multiplying, led me to realize that the so-called Repugnant Conclusion wasn't repugnant after all.
My understanding is that the "appeal to authority fallacy" is specifically about appealing to irrelevant authorities. Quoting a physicist on their opinion about a physics question within their area of expertise would make an excellent non-fallacious argument. On the other hand, appealing to the opinion of say, a politician or CEO about a physics question would be a classic example of the appeal to authority fallacy. Such people's opinions would represent expert evidence in their fields of expertise, but not outside them.
I don't think the poster's description makes this clear and it really does suggest that any appeal to authority at all is a logical fallacy.
I wouldn't necessarily read too much into your calibration question, given that it's just one question, and there was something of a gotcha.
One thing I learned from doing calibration exercises is that I tended to be much too tentative with my 50% guesses.
When I answered the calibration question, I used my knowledge of other math that either had to come before him or couldn't have, to narrow the possible window of his birth down to about 200 years. Random chance would then have given me about a 20% shot. I thought I had somewhat better information than random chance within that window, so I estimated my confidence (IIRC) at 30%. I was, alas, wrong, but I'm pretty confident that I would get around 30% of problems with a similar profile correct. If this problem was tricky, then it is more likely than average to be one that people get wrong in a large set, but this will be balanced by problems which are straightforward.
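As a quick illustration of that last claim, here's a simulation of a hypothetical question set where I assign 30% to every answer (the numbers are invented for illustration):

```python
import random

random.seed(0)
trials, assigned_p = 10_000, 0.30

# If my 30% estimates are well calibrated, I should be right on roughly
# 30% of questions with this profile, even though any single gotcha
# question can easily land among the wrong 70%.
hits = sum(random.random() < assigned_p for _ in range(trials))
print(hits / trials)  # approximately 0.30
```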
Not to suggest that this result isn't evidence of LW's miscalibration. In fact, it's strong enough evidence for me to throw into serious doubt the last survey's finding that we were better calibrated than a normal population. OTOH neither bit of evidence is terribly strong. A set of 5-10 different problems would make for much stronger evidence one way or the other.