philosophytorres

Comments
“A Harmful Idea”
philosophytorres · 4y · -42 · 0
Possible worst outcomes of the coronavirus epidemic
philosophytorres · 5y · 1 · 0

Also worth noting: if the onset of global catastrophes is random, then global catastrophes will tend to cluster together, so we might expect another global catastrophe before this one is over. (See the "clustering illusion.")
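The claim above, that randomly timed events tend to look clustered, can be illustrated with a small simulation. Below is a minimal sketch assuming catastrophe onsets follow a Poisson process; the rate, time horizon, and 5-year window are hypothetical values chosen only for illustration, not taken from the comment.

```python
import random

RATE = 0.02      # hypothetical onset rate: catastrophes per year (illustrative)
YEARS = 10_000   # hypothetical observation window (illustrative)
TRIALS = 1_000   # number of simulated histories

def has_cluster(window=5.0):
    """Simulate one Poisson history; return True if any two onsets fall within `window` years."""
    t, prev = 0.0, None
    while True:
        t += random.expovariate(RATE)  # exponential inter-arrival times
        if t > YEARS:
            return False
        if prev is not None and t - prev <= window:
            return True
        prev = t

if __name__ == "__main__":
    frac = sum(has_cluster() for _ in range(TRIALS)) / TRIALS
    print(f"Histories with two onsets within 5 years: {frac:.0%}")
```

The point is only that, under a memoryless process, short gaps between successive events are common even though nothing causally links them.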

A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now (Part 1)
philosophytorres · 7y · -3 · 0

Part 2 can now be read here: https://www.lesswrong.com/posts/pbFGhMSWfccpW48wd/a-detailed-critique-of-one-section-of-steven-pinker-s

Is there a flaw in the simulation argument?
philosophytorres · 8y · 0 · 0

"The fact that there are more 'real' at any given time isn't relevant to the fact of whether any of these mayfly sims are, themselves, real." You're right about this, because it's a metaphysical issue. The question, though, is epistemology: what does one have reason to believe at any given moment. If you want to say that one should bet on being a sim, then you should also say that one is in room Y in Scenario 2, which seems implausible.

Is there a flaw in the simulation argument?
philosophytorres · 8y · 1 · 0

"Like, it seems perverse to make up an example where we turn on one sim at a time, a trillion trillion times in a row. ... Who cares? No reason to think that's our future." The point is to imagine a possible future -- and that's all it needs to be -- that instantiates none of the three disjuncts of the simulation argument. If one can show that, then the simulation argument is flawed. So far as I can tell, I've identified a possible future that is neither (i), (ii), nor (iii).

Could the Maxipok rule have catastrophic consequences? (I argue yes.)
philosophytorres · 8y · 2 · 0

"My 5 dollars: maxipoc is mostly not about space colonisation, but prevention of total extinction." But the goal of avoiding an x-catastrophe is to reach technological maturity, and reaching technological maturity would require space colonization (to satisfy the requirement that we have "total control" over nature). Right?

Could the Maxipok rule have catastrophic consequences? (I argue yes.)
philosophytorres · 8y · 0 · 0

Yes, good points. As for "As result, we only move risks from one side equation to another, and even replace known risks with unknown risks," another way to put the paper's thesis is this: insofar as the threat of unilateralism becomes widespread, thus requiring a centralized surveillance apparatus, solving the control problem is that much more important! I.e., it's an argument for why MIRI's work matters.

Posts

-42 · “A Harmful Idea” · 3y · 7
-24 · Were the Great Tragedies of History “Mere Ripples”? · 4y · 16
8 · A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now (Part 2) · 7y · 1
15 · A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now (Part 1) · 7y · 1
2 · Is there a flaw in the simulation argument? · 8y · 14
10 · Could the Maxipok rule have catastrophic consequences? (I argue yes.) · 8y · 32