All of philosophytorres's Comments + Replies

It's pretty telling that you think there's no chance that anyone who doesn't like your arguments is acting in good faith. I say that as someone who actually agrees that we should (probably, pop. ethics is hard!) reject total utilitarianism on the grounds that bringing someone into existence is just obviously less important than preventing a death, and that this means that longtermists are calling for important resources to be misallocated. (That is true of any false view about how EA resources should be spent, though!) But I find your general tone of 'people... (read more)

Possible worst outcomes of the coronavirus epidemic

Also worth noting: if the onset of global catastrophes is random, then global catastrophes will tend to cluster together, so we might expect another global catastrophe before this one is over. (See the "clustering illusion.")

3avturchin2yI once took a look [https://philarchive.org/rec/TURGIT] at the clustering illusion, and found research showing that in interconnected systems it is not an illusion: any correlation significantly increases the probability of clustering: Downarowicz, T., & Lacroix, Y. (2011). The law of series. Ergodic Theory and Dynamical Systems, 31(2), 351–367.
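As an aside, a minimal sketch of the "random onsets still cluster" point, assuming independent (Poisson) onsets with a purely illustrative 10-year mean gap; the correlated-system result from the cited paper, where clustering is stronger still, is not modeled here:

```python
import random

# Simulate global-catastrophe onsets as a Poisson process: fully independent
# events with exponentially distributed waiting times (mean gap: 10 years).
random.seed(0)
mean_gap_years = 10.0   # assumed mean time between catastrophes (illustrative)
n_events = 50

gaps = [random.expovariate(1.0 / mean_gap_years) for _ in range(n_events)]

# Even with no correlation at all, some gaps are far shorter than the mean,
# so catastrophes appear to come in "series" -- the clustering illusion.
short_gaps = sum(1 for g in gaps if g < mean_gap_years / 3)
print(f"{short_gaps} of {n_events} gaps are shorter than {mean_gap_years / 3:.1f} years")
print(f"shortest gap: {min(gaps):.2f} years, longest gap: {max(gaps):.2f} years")
```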
Is life worth living?

It's amazing how many people on FB answered this question, "Annihilation, no question." Really, I'm pretty shocked!

3turchin4yI have been working in the life extension industry for years, and the main problem is that people want to die. It is the root of all other problems with funding, research, regulation, etc. The technical problems are the smallest.
Is there a flaw in the simulation argument?

“I'm getting kind of despairing at breaking through here, but one more time.” Same here. Because you still haven’t addressed the relevant issue, and yet appear to be getting pissy, which is no bueno.

By analogy: in Scenario 2, everyone who has wandered through room Y but says that they’re in room X is wrong, yeah? The answer they give does not accurately represent reality. The “right” call for those who’ve passed through room Y is that they actually passed through room Y. I hope we can at least agree on this.

Yet, it remains 100% true that if everyone at any g... (read more)

Is there a flaw in the simulation argument?

"The fact that there are more 'real' at any given time isn't relevant to the fact of whether any of these mayfly sims are, themselves, real." You're right about this, because it's a metaphysical issue. The question, though, is epistemology: what does one have reason to believe at any given moment. If you want to say that one should bet on being a sim, then you should also say that one is in room Y in Scenario 2, which seems implausible.

0WalterL4yI'm not sure what you mean by 'it is a metaphysical issue', and I'm getting kind of despairing at breaking through here, but one more time. Just to be clear, every sim who says 'real' in this example is wrong, yeah? They have been deceived by the partial information they are being given, and the answer they give does not accurately represent reality. The 'right' call for the sims is that they are sims. In a future like you are positing, if our universe is analogous to a sim, the 'right' call is that we are a sim. If, unfortunately, our designers decide to mislead us into guessing wrong by giving us numbers instead of just telling us which we are...that still wouldn't make us real. This is my last on the subject, but I hope you get it at this point.
Is there a flaw in the simulation argument?

"Like, it seems perverse to make up an example where we turn on one sim at a time, a trillion trillion times in a row. ... Who cares? No reason to think that's our future." The point is to imagine a possible future -- and that's all it needs to be -- that instantiates none of the three disjuncts of the simulation argument. If one can show that, then the simulation argument is flawed. So far as I can tell, I've identified a possible future that is neither (i), (ii), nor (iii).

0WalterL4ySo, like, a thing we generally do in these kinds of deals is ignore trivial cases, yeah? Like, if we were talking about the trolley problem, no one brings up the possibility that you are too weak to pull the lever, or posits telepathy in a prisoner's dilemma. To simplify everything, let's stick with your first example. We (a thousand folks) make one sim. We tell him that there are a thousand and one humans in existence, one of which is a sim, the others are real. We ask him to guess. He guesses real. We delete him and do this again and again, millions of times. Every sim guesses real. Everyone is wrong. This isn't an example that proves that, if we are using our experience as analogous to the sim's, we should guess 'real'. It isn't a future that presents an argument against the simulation argument. It is just a weird special case of a universe where most things are sims. The fact that there are more 'real' at any given time isn't relevant to the fact of whether any of these mayfly sims are, themselves, real. If there are more simulated universes, then it is more likely that our universe is simulated.
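For what it's worth, a minimal sketch of the bookkeeping in this exchange: 1,000 reals is the comment's number, and one million mayfly sims is an assumed stand-in for "millions of times":

```python
# One-sim-at-a-time setup from the discussion above (counts are illustrative).
reals = 1000           # real people, present the whole time
batches = 1_000_000    # one short-lived ("mayfly") sim per batch

# At any single moment the population is 1000 reals + 1 sim,
# so a random *currently existing* observer is almost surely real:
p_real_now = reals / (reals + 1)

# But across the whole history, sims vastly outnumber the reals:
p_real_ever = reals / (reals + batches)

# The policy "guess real" is right for every real and wrong for every sim:
accuracy_ever = reals / (reals + batches)

print(f"P(real) among currently existing observers: {p_real_now:.4f}")
print(f"P(real) among all observers ever created:   {p_real_ever:.6f}")
print(f"'Guess real' accuracy over everyone ever:    {accuracy_ever:.6f}")
```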
Could the Maxipok rule have catastrophic consequences? (I argue yes.)

"My 5 dollars: maxipoc is mostly not about space colonisation, but prevention of total extinction." But the goal of avoiding an x-catastrophe is to reach technological maturity, and reaching technological maturity would require space colonization (to satisfy the requirement that we have "total control" over nature). Right?

0turchin4yI am not sure that the goal of x-risk prevention is reaching technological maturity; maybe it is about preserving Homo sapiens, our culture, and civilization indefinitely, which may be unreachable without some level of technology. But technology is not the terminal goal.
Could the Maxipok rule have catastrophic consequences? (I argue yes.)

Yes, good points. As for "As result, we only move risks from one side equation to another, and even replace known risks with unknown risks," another way to put the paper's thesis is this: insofar as the threat of unilateralism becomes widespread, thus requiring a centralized surveillance apparatus, solving the control problem is that much more important! I.e., it's an argument for why MIRI's work matters.

0turchin4yI think that unilateralist biological risks will soon be here. I modeled their development in my unpublished article about multipandemics, and compared their number with the historical number of computer viruses. There was about 1 virus a year in the beginning of the 1980s, 1,000 a year in 1990, millions in the 2000s, and millions of malware samples a day in the 2010s, according to a report on CNN. But the peak of damage was in the 1990s, as viruses were more destructive at the time, aimed at data deletion, and not many antivirus tools were available. Thus it takes around 10 years to move from the technical possibility of creating a virus at home to a global multipandemic.
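A rough back-of-the-envelope reading of the virus-growth figures above; the exact years standing in for "beginning of the 1980s" and "the 2000s" are assumptions, so the fitted rate is only illustrative:

```python
import math

# Rough interpretation of the comment's figures (years are assumptions):
# ~1 virus/year in the early 1980s, ~1,000/year in 1990, ~millions/year in the 2000s.
points = [(1982, 1), (1990, 1_000), (2005, 1_000_000)]

# Fit an exponential growth rate between the first and last point.
(t0, n0), (t1, n1) = points[0], points[-1]
rate = math.log(n1 / n0) / (t1 - t0)
doubling_time = math.log(2) / rate

print(f"implied growth rate: {rate:.2f}/year, doubling time ~{doubling_time:.1f} years")
```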
Agential Risks: A Topic that Almost No One is Talking About

What do you mean? How is mitigating climate change related to blackmail?

0SithLord135yThis discussion was about agential risks; the part I quoted was talking about extreme ecoterrorism as a result of environmental degradation. In other words, the main post was partially about stricter regulations on CO2 as a means of minimizing the risk of a potential doomsday scenario from an anti-global-warming group.
Agential Risks: A Topic that Almost No One is Talking About

I actually think most historical groups wanted to vanquish the enemy, but not destroy either themselves or the environment to the point at which it's no longer livable. This is one of the interesting things that shifts to the foreground when thinking about agents in the context of existential risks. As for people fighting to the death, often this was done for the sake of group survival, where the group is the relevant unit here. (Thoughts?)

Agential Risks: A Topic that Almost No One is Talking About

Totally agree that some x-risks are non-agential, such as (a) risks from nature, and (b) risks produced by coordination problems, resulting in e.g. climate change and biodiversity loss. As for superpowers, I would classify them as (7). Thoughts? Any further suggestions? :-)

0turchin5y"Rogue country" is an evaluative characterization applied from the outside. Let's try to define "rogue country" by its estimation-independent characteristics: 1) It is a country that fights for world domination. 2) It is a country that is interested in the worldwide promotion of its (crazy) ideology (USSR, communism). 3) It is a country whose survival is threatened by risks of aggression. 4) It is a country that is ruled by a crazy dictator. I would say that superpowers are a type of "rogue country", as they sometimes combine some of the properties listed above. The difference is mainly that we have always had two (or three) superpowers fighting for world domination. Sometimes one of them was in first place and another was challenging its position as world leader. The second superpower is more willing to create global risk, as doing so may raise its "status" or its chances of overpowering the "alpha superpower". The topic is interesting, and there is a lot that could be said about it, including the current political situation and even the war in Syria. I just read an article today that explained this war from this point of view.
0turchin5yI would also add Doomsday blackmailers: rational agents who would create a Doomsday Machine to blackmail the world with the goal of world domination. Another option worth considering is arrogant scientists who benefit personally from dangerous experiments. For example, CERN proceeded with the LHC before its safety was proven. Another group of bioscientists excavated the 1918 pandemic flu, sequenced it, and posted it on the internet. And another scientist deliberately created a new superflu by studying genetic variation that could make bird flu stronger. We could imagine a scientist who wants to increase his personal longevity via gene therapy, even if it poses a 1 percent pandemic risk. And if there are many of them... Also, there is a possible class of agents who try to create a smaller catastrophe in order to prevent a larger one. The recent movie "Inferno" is about this: a character creates a virus to kill half of humanity in order to save all of humanity later. I listed all my ideas in my agent map, which is here on Less Wrong: http://lesswrong.com/r/discussion/lw/o0m/the_map_of_agents_which_may_create_xrisks/
Agential Risks: A Topic that Almost No One is Talking About

(2) is quite different in that it isn't motivated by supernatural eschatologies. Thus, the ideological and psychological profiles of ecoterrorists are quite different from those of apocalyptic terrorists, who are bound together by certain common worldview-related threads.

Agential Risks: A Topic that Almost No One is Talking About

I think my language could have been more precise: it's not merely genocidal agents, but humanicidal or omnicidal ones, that we're talking about in the context of x-risks. Also, the Khmer Rouge wasn't suicidal to my knowledge. Am I less right?

A problem in anthropics with implications for the soundness of the simulation argument.

As for your first comment, imagine that everyone "wakes up" in a room with only the information provided and no prior memories. After 5 minutes, they're put back to sleep -- but before this occurs they're asked about which room they're in. (Does that make sense?)

2Gram_Stone5yI thought you might like to hear about some of the literature on this problem. Forgive me if you're already aware of this work and I've misunderstood you. Manfred writes: In Anthropic Bias: Observation Selection Effects in Science and Philosophy [https://www.amazon.com/Anthropic-Bias-Observation-Selection-Philosophy/dp/0415883946/ref=sr_1_1?ie=UTF8&qid=1477002400&sr=8-1&keywords=Anthropic+Bias] , Nick Bostrom describes a thought experiment known as 'Mr. Amnesiac' to illustrate the desirability of a theory of observation selection effects that takes this kind of temporal uncertainty into account: Not unlike Manfred's arguments in favor of betting on room B under imperfect recall, Bostrom's solution here is to propose observer-moments, time intervals of observers' experiences of arbitrary length, and reason as though you are a randomly selected observer-moment from your reference class, as opposed to just a randomly selected observer (in philosophy, Strong Self-Sampling Assumption vs. Self-Sampling Assumption). With this assumption and imperfect recall, you would conclude in Mr. Amnesiac that the probability of your being in Room 1 = 2/3 and of being in Room 2 = 1/3, and that you should bet on Room 1. But I don't think there's anything mysterious there. If I understand correctly, we are surreptitiously asking the room B people to bet 1000 more times per observer than the room A people. Yet again, the relevant consideration is "How many times is this experience occurring?" Nitpick: If we do include imperfect recall, doesn't this actually just make us indifferent between room A and room B, as opposed to making us prefer room B? Room A people collectively possess 100 trillion observer-moments that belong to 100 trillion observers, room B people collectively possess 1000 observer-moments per observer times 100 billion observers = 100 trillion observer-moments that belong to 100 billion observers. Our credence should be 50/50 and we're indifferent between bets. Or am
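A quick arithmetic check of the nitpick at the end of this comment, using its own numbers (which assumption is the right one to use is, of course, the substantive question):

```python
# Observer-moment bookkeeping for the nitpick above (numbers from the comment).
room_a_observers = 100 * 10**12      # 100 trillion observers in room A
room_a_moments_each = 1              # each asked once
room_b_observers = 100 * 10**9       # 100 billion observers in room B
room_b_moments_each = 1000           # each asked 1000 times under imperfect recall

room_a_moments = room_a_observers * room_a_moments_each
room_b_moments = room_b_observers * room_b_moments_each

# Self-Sampling Assumption (SSA): reason as a randomly selected observer.
p_a_ssa = room_a_observers / (room_a_observers + room_b_observers)

# Strong Self-Sampling Assumption (SSSA): reason as a random observer-moment.
p_a_sssa = room_a_moments / (room_a_moments + room_b_moments)

print(f"SSA  P(room A) = {p_a_ssa:.4f}")   # ~0.999 -> bet on room A
print(f"SSSA P(room A) = {p_a_sssa:.4f}")  # 0.5    -> indifferent between bets
```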
A problem in anthropics with implications for the soundness of the simulation argument.

Yes to both possibilities. But gbear605 is closer to what I was thinking.

Agential Risks: A Topic that Almost No One is Talking About

Great question. I think there are strong reasons to anticipate that the total number of apocalyptic terrorists and ecoterrorists will nontrivially increase in the future. I've written two papers on the former, linked below. There's weaker evidence to suggest that environmental instability will exacerbate conflicts in general, and consequently produce more malicious agents with idiosyncratic motives. As for the others -- not sure! I suspect we'll have at least one superintelligence around by the end of the century.

1turchin5yI think that the number of agents will also grow as technologies become more accessible to smaller organisations and even individuals. If a teenager could create a dangerous biovirus as easily as he is now able to write a computer virus to amuse his friends, we are certainly doomed.
Estimating the probability of human extinction

Thanks so much for these incredibly thoughtful responses. Very, very helpful.

Open thread, Jan. 19 - Jan. 25, 2015

Hello! I'm working on a couple of papers that may be published soon. Before this happens, I'd be extremely curious to know what people think about them -- in particular, what people think about my critique of Bostrom's definition of "existential risks." A very short write-up of the ideas can be found at the link below. (If posting links is in any way discouraged here, I'll take it down right away. Still trying to figure out what the norms of conversation are in this forum!)

A few key ideas are: Bostrom's definition is problematic for two reasons: ... (read more)

0Manfred7yThis is a nice paper, and is probably the sort of thing philosophers can really sink their teeth into. One thing I really wanted was some addressing of the basic "something that would cause much of what we value about the universe to be lost" definition of 'catastrophic', which you could probably even find Bostrom endorsing somewhere.
Open thread, October 2011

I'd love to know what the community here thinks of some critiques of Nick Bostrom's conception of existential risks, and his more general typology of risks. I'm new to the community, so a bit unsure whether I should completely dive in with a new article, or approach the subject some other way. Thoughts?

1Vaniver7yWelcome! The LW wiki page [http://wiki.lesswrong.com/wiki/Existential_risk] on Existential Risk is a good place to start. People here typically take the idea of existential risk seriously, but I don't see much discussion of his specific typology. AmandaEHouse made a visualization of Bostrom's Superintelligence here [http://lesswrong.com/lw/kkn/a_visualization_of_nick_bostroms_superintelligence/] , and Katja Grace runs a reading group for Superintelligence [http://lesswrong.com/lw/kw4/superintelligence_reading_group/], which is probably one of the best places to start looking for discussion. The open threads are also weekly now, and the current one is here [http://lesswrong.com/r/discussion/lw/lk6/open_thread_jan_19_jan_25_2015/]. (You can find a link to the most recent one on the right sidebar when you're in the discussion section.)