Open Thread May 2 - May 8, 2016

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

There are a bunch of "comment score below threshold" comments on this thread. Those comments are reasonable, polite comments, mostly about the current difficulties with karma abuse here.

I hope to eventually prevent karma abuse, and finding out who's been downvoting discussion of karma abuse should be part of the process.

Though I enthusiastically endorse the concept of rationality, I often find myself coming to conclusions about Big Picture issues that are quite foreign to the standard LW conclusions. For example, I am not signed up for cryonics even though I accept the theoretical arguments in favor of it, and I am not worried about unfriendly AI even though I accept most of EY's arguments.

I think the main reason is that I am 10x more pessimistic about the health of human civilization than most other rationalists. I'm not a cryonicist because I don't think companies like Alcor can survive the long period of stagnation that humanity is headed towards. I don't worry about UFAI because I don't think our civilization has the capability to achieve AI. It's not that I think AI is spectacularly hard, I just don't think we can do Hard Things anymore.

Now, I don't know whether my pessimism is more rational than others' optimism. LessWrong, and rationalists in general, probably have a blind spot relative to questions of civilizational inadequacy because those questions relate to political issues, and we don't talk about politics. Is there a way we can discuss civilizational issues without becoming mind-killed? Or do we simply have to accept that civilizational issues are going to create a large error bar of uncertainty around our predictions?

It's not that I think AI is spectacularly hard, I just don't think we can do Hard Things anymore.

I'm sympathetic to the idea that we can't do Hard Things, at least in the US and much of the rest of the West. Unfortunately progress in AI seems like the kind of Hard Thing that still is possible. Stagnation has hit atoms, not bits. There does seem to be a consensus that AI is not a stagnant field at all, but rather one that is consistently progressing.

Part of my worldview is that progress, innovation and competence in all areas of science, technology, and other aspects of civilization are correlated. Societies that are dynamic and competent in one area, such as physics research, will also be dynamic and competent in other areas, such as infrastructure and good governance.

What would the world look like if that hypothesis were false? Well, we could find a country that is not particularly competent overall, but was very competent and innovative in one specific civilizational subfield. As a random example, imagine it turned out that Egypt actually had the world's best research and technology in the field of microbiology. Or we might observe that Indonesia had the best set of laws, courts, and legal knowledge. Such observations would falsify my hypothesis.

If the theory is true, then the fact that the US still seems innovative in CS-related fields is probably a transient anomaly. One obvious thing that could derail American innovation is catastrophic social turmoil.

Optimists could accept the civilizational competence correlation idea, but believe that US competence in areas like infotech is going to "pull up" our performance in other areas, at which we are presently failing abjectly.

Well, we could find a country that is not particularly competent overall, but was very competent and innovative in one specific civilizational subfield.

Soviet Russia did very well with space and nukes. On the other hand, one of the reasons it imploded was that it could not keep up doing very well with space and nukes.

I think the correlation you're talking about exists, but it's not that strong (or, to be more precise, its effects could be overridden by some factors).

There is also the issue of relative position. Brain drain is important, and at the moment the US is the preferred destination of energetic smart people from all over the world. If that changes, the US will lose much of its edge.

I used to think that the Soviet Union was worse at economics, but at least better at things like math. Then I read some books about math in the Soviet Union and realized that pretty much all mathematical progress there came from people who were not supported by the regime, because the regime preferred to support the ones good at playing political games, even if they were otherwise completely incompetent. (Imagine equivalents of Lysenko; e.g. people arguing that schools shouldn't teach vectors, because vectors are a "bourgeois pseudoscience". No, I am not making this one up.) There were many people who couldn't get a job in academia and had to work in factories, and who did a large part of the math research in their free time.

There were a few lucky exceptions; for example, Kolmogorov once invented something that was useful for WW2 warfare, so as a reward he became one of the few competent people in the Academy of Sciences. He quickly used his newly gained political power to create a few awesome projects, such as the international mathematical olympiad, the mathematical journal Kvant, and high schools specializing in mathematics. After a few years he lost his influence again, because he wasn't very good at playing political games, but his projects remained.

Seems like the lesson is that when insanity becomes the official ideology, it ruins everything, unless something like war provides a feedback from reality, and even then the islands of sanity are limited.

What were these books? I don't speak Russian, so I'll probably follow up with: who were a few important mathematicians who worked in factories?

I’ve heard a few stories of people being demoted from desk jobs to manual labor after applying for exit visas, but that’s not quite the same as never getting a desk job in the first place. I've heard a lot of stories of badly-connected pure mathematicians being sent to applied think tanks, but that's pretty cushy and there wasn't much obligation to do the nominal work, so they just kept doing pure math. I can't remember them, but I think I've heard stories of mathematicians getting non-research desk jobs, but doing math at work.

Masha Gessen: Perfect Rigour: A Genius and the Mathematical Breakthrough of the Century

This is a story about one person, but there is a lot of background information on doing math in the Soviet Union.

Thanks! Since that's in English, I will take at least a look at it.

Gessen does not strike me as a reliable source, so for now I am completely discounting everything you said about it, in favor of what I have heard directly from Russian mathematicians, which is a lot less extreme.

space and nukes

Many of the same people worked on both projects. In particular, Keldysh's Calculation Bureau.

Part of my worldview is that progress, innovation and competence in all areas of science, technology, and other aspects of civilization are correlated.

I'm sure they're correlated but not all that tightly.

What would the world look like if that hypothesis were false? Well, we could find a country that is not particularly competent overall, but was very competent and innovative in one specific civilizational subfield. As a random example, imagine it turned out that Egypt actually had the world's best research and technology in the field of microbiology.

I think there are some pretty good examples. The Soviets made great achievements in spaceflight and nuclear energy research in spite of having terrible economic and social policies. The Mayans had sophisticated astronomical calendars, but they also practiced human sacrifice and never invented the wheel.

If the theory is true, then the fact that the US still seems innovative in CS-related fields is probably a transient anomaly.

I doubt it, but even if true it doesn't save us, since plenty of other countries could develop AGI.

Is there a way we can discuss civilizational issues without becoming mind-killed?

A LWer created Omnilibrium for that.

Any results? (I am personally unimpressed by the few random links I have seen.)

Is there a way we can discuss civilizational issues without becoming mind-killed?

Sure there is. Start with the usual rationalist mantra: what do you believe? Why do you believe it?
How would you describe this Great Stagnation? Why do you believe we are headed towards this?
And let us pick up from there.

I just don't think we can do Hard Things anymore

Humanity, or just the West?

Is there a way we can discuss civilizational issues without becoming mind-killed?

I don't see why not.

do we simply have to accept that civilizational issues are going to create a large error bar of uncertainty around our predictions?

That, too. That large error bar of uncertainty isn't going to go away even if we talk about the issues :-)

LessWrong, and rationalists in general, probably have a blind spot relative to questions of civilizational inadequacy because those questions relate to political issues, and we don't talk about politics.

I don't think "we don't talk about politics" is true to the extend that people are going to have blind spots about it. Politics isn't completely banned from LW. There are many venues from facebook discussions with LW folks, Yvain's blog, various EA fora and omnilibrium that also are about politics.

I think we even had the question of whether people believe we are in a great stagnation in a past census.

I think the main reason is that I am 10x more pessimistic about the health of human civilization than most other rationalists.

How do you know? Did you actually look at the relevant census numbers to come to that conclusion? If so, quoting the numbers would make your post more data-driven and more substantial. If your goal is to have important discussions about civilizational issues, being more data-driven can be quite useful.

I am looking for sources of semi-technical reviews and expository weblog posts to add to my RSS reader; preferably 4-20 screenfuls of text on topics including or related to evolutionary game theory, mathematical modelling in the social sciences, theoretical computer science applied to non-computer things, microeconomics applied to unusual things (e.g. Hanson's Age of Em), psychometrics, the theory of machine learning, and so on. What I do not want: pure mathematics, computer science trivia, coding trivia, machine learning tutorials, etc.

Some examples that mostly match what I want, in roughly descending order:

How do I go about finding more feeds like that? I have already tried the obvious, such as googling "allintext: egtheory jeremykun" and found a couple OPML files (including gwern's), but they didn't contain anything close. The obvious blogrolls weren't helpful either (most of them were endless lists of conference announcements and calls for papers). Also, I've grepped a few relevant subreddits for *.wordpress.*, *.blogspot.* and *.github.io submissions (only finding what I already have in my RSS feeds — I suspect the less established blogs just haven't gotten enough upvotes).
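For concreteness, the grepping step looks roughly like this in Python, using Reddit's public JSON listings (the subreddit name and domain pattern here are placeholders to adapt, not my exact setup):

    import json
    import re
    import urllib.request

    SUBREDDIT = "DecisionTheory"  # placeholder; substitute any relevant subreddit
    PATTERN = re.compile(r"\.(wordpress|blogspot)\.|\.github\.io")

    req = urllib.request.Request(
        "https://www.reddit.com/r/%s/new.json?limit=100" % SUBREDDIT,
        headers={"User-Agent": "feed-finder/0.1"},  # Reddit rejects blank agents
    )
    with urllib.request.urlopen(req) as resp:
        listing = json.load(resp)

    # Collect submitted URLs that look like personal blogs
    blogs = set()
    for child in listing["data"]["children"]:
        url = child["data"].get("url", "")
        if PATTERN.search(url):
            blogs.add(url)

    for url in sorted(blogs):
        print(url)

Each candidate blog usually exposes a /feed or /atom.xml endpoint that can then go straight into the reader.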

Probably saying the obvious, but anyway:

What is the advantage of nice communication in a rationalist forum? Isn't the content of the message the only important thing?

Imagine a situation where many people, even highly intelligent, make the same mistake talking about some topic, because... well, I guess I shouldn't have to explain on this website what "cognitive bias" means... everyone here has read the Sequences, right? ;)

But one person happens to be a domain expert in an unusual domain, or happened to talk with a domain expert, or happened to read a book by a domain expert... and something clicked and they realized the mistake.

I think that at this moment the communication style on the website has a big impact on whether the person will come and share their insight with the rest of the website. Because it predicts the response they get. On a forum with a "snarky" debating culture, the predictable reaction is everyone making fun and not even considering the issue seriously, because that's simply how the debate is done there. Of course, predicting this reaction, the person is more likely to just avoid the whole topic, and discuss something else.

Of course -- yes, I can already predict the reactions this comment will inevitably get -- this has to be balanced against people saying stupid things, etc. Of course. I know already, okay? Thanks.

Speaking as somebody who frequently engages in non-nice methodologies:

Niceness is more convincing. Way more convincing. And if you can get somebody to be mean enough to you, while you're being nice, that an onlooker feels like they should defend you, cognitive dissonance will push them to believe in your beliefs a little bit more.

If some people are nice, and some people are mean, we're injecting some very subtle irrationality into people reading our discourse.

So there is an advantage in picking one and sticking to it. (Or my policy, which is to match the tone of my opponent as well as I can.) And niceness is probably an easier Schelling point than meanness.

Yep. I'll try to make a short summary of some arguments in the article and comments:

Why people want to be mean:

  • it signals strength (in the ancient environment it shows you are not afraid of being hit in return);
  • it signals intellectual superiority e.g. in the form of sarcasm;
  • if you already have a reputation, you can win debates quickly;
  • it helps you put distance towards people you want to avoid.

What are the negative impacts of meanness:

  • you may be wrong, but you have already proposed a solution ("the other person is stupid");
  • if there is a misunderstanding, hostile reaction lowers the chance of explaining or increases the time needed, compared with a polite request for clarification;
  • people with different experience will seem especially wrong to you, so this effect will be even stronger there;
  • you spread bad mood, which harms curiosity and exploration;
  • you signal that you are bad at cooperation, bad at managing your emotions, and uncaring about other people;
  • people stop listening to you and start avoiding you;
  • you lose possible allies.

Isn't the content of the message the only important thing?

Content is multi-level. A chunk of text often means more than the literal reading of the words.

People use forums for many things. Sometimes it's to inform, sometimes it's to set out a position, sometimes it's to vent and bitch, sometimes it's to just wave a dick around, sometimes it's to play social games, etc. It helps to figure out quickly to which category a message belongs and the style or tone of the message (here: nice or mean) is important. Think of it as a fuzzy tag, an email header line, a hint at how this message should be interpreted.

It's not simple, of course, and there is a lot of misdirection and false flags and signaling and counter signaling... basically, it's humans communicating :-)

Of course. I know already, okay? Thanks.

Is there anything in your post where you think that a likely reader doesn't already know what you are arguing?

On a forum with a "snarky" debating culture, the predictable reaction is everyone making fun and not even considering the issue seriously

That seems like arguing against a strawman.

What skills are overwhelmingly easier to learn in institutionalized context?

(E.g. math wouldn't count, because even if institutions solve the motivation issue, you should theoretically be able to study everything at home. Neither, necessarily, would the handling of some kinds of lab equipment, if clear documentation was available to you and (assuming you took the effort to remember it) the transfer to practice was straightforward (pushing buttons and changing settings would be straightforward, while the precise motions of carving a specific kind of motif into wood would be less so).)

What skills are overwhelmingly easier to learn in institutionalized context?

Heh.

Adjective: institutionalized

  1. of or relating to something that has been established as an institution: "It is very difficult to get bureaucracies to abandon their institutionalized practices."

  2. of or relating to someone who has been committed to an institution, such as a prison or an insane asylum: "Once a potential employer learns that you've been institutionalized, you can forget about getting the job."

Neither would necessarily the handling of some kind of lab equipment, if there was some clear documentation available for you

In practice, learning to handle certain lab equipment outside of an institutional context is sometimes hard because it's much easier to break expensive stuff if you don't have someone looking over your work the first few times you do something. Of course, you qualified your above statement quite well, so you haven't said anything incorrect. :)

Lesswrong.com and the Facebook group were very quiet this week. (The Slack doubled in volume, to around 18k messages this week.)

Any ideas why?

More and more people have drifted away and not been replaced by active posters. There are still a few topics around more traditional LW topics, but they are not attracting much discussion. The most active discussions seem to be around a single member whose attempts at disruption have been entirely successful at multiple levels. Some of the more prolific remaining posters are judged, via downvotes and commentary, to be of low quality, and little contentful discussion ensues. There are still a few debates or arguments outside meta topics but they are mostly covering familiar ground.

LW is not a well kept garden any longer, one may wonder whether it is even a garden. LW2.0 is often mentioned as a glorious future but it's looking pretty bleak around LW1.0 in the present.

As with many areas: the future could not come soon enough.

Maybe students in some universities have midterms?

Possibly just random? There's a feedback effect where if LW is quiet one day, there's less to respond to the next day so it is likely to remain quiet -- so I think smallish random fluctuations can easily produce week-long droughts or gluts.
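A toy simulation of that feedback (all parameters invented): each day's expected activity leans on the previous day's plus small noise, and week-long runs fall out on their own.

    import random

    random.seed(0)
    activity = [30]  # comments on day 0
    for day in range(1, 60):
        expected = 10 + 0.7 * activity[-1]  # the feedback term
        activity.append(max(0, round(expected + random.gauss(0, 5))))

    print(activity)  # quiet runs and busy runs emerge from noise alone

With persistence like that 0.7 factor, a single quiet day tends to echo for several more.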

I realize if I Pomodoro most things, instead of some things, I feel more motivated to go through my to-do list. Sorry if this is already obvious. I tend to do Pomodoros on repetitive, long-term, open-ended tasks like studying, practicing or working.

I'd refrained from doing any Poms on short-term goals that are uncertain in how long they take: longer than an hour but less than 8 hours, for example researching health insurance. I feel unmotivated to start such a task because I know it's going to take a long time, but not too long, and I don't know how long, so I procrastinate. Putting "do 2 poms of research on health insurance, then reassess if I need more poms" on my list feels more motivating.

If I had to guess why I had a tendency to leave smallish tasks off my pom list, I would guess I was being arrogant in thinking I had the willpower to just outright do these tasks without resorting to poms.

Mainstream discussion of existential risk is becoming more of a thing. A recent example is this article in The Atlantic. They do mention a variety of risks but focus on nuclear war and worst-case global warming.

That seems like an accurate analysis.

I'm actually more concerned about an error in logic. If one estimates a probability of, say, k that climate change will cause an extinction event in a given year, then the probability of it occurring over any given string of years is not the obvious one, since part of what goes into estimating k is the chance that climate change can in fact cause such an event at all.
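To make the non-independence concrete, a quick sketch with made-up numbers: suppose the annual estimate k = 0.001 really bundles a 10% credence that climate change can cause extinction at all with a 1% per-year hazard given that it can.

    p_possible = 0.1   # credence that climate change can cause extinction at all
    hazard = 0.01      # per-year extinction probability, given that it can
    k = p_possible * hazard  # the single-year estimate: 0.001

    N = 100
    naive = 1 - (1 - k) ** N                        # treats years as independent
    mixture = p_possible * (1 - (1 - hazard) ** N)  # conditions on the model first

    print(round(naive, 4))    # ~0.0952
    print(round(mixture, 4))  # ~0.0634

The naive compounding overstates the long-run risk, because in the 90% branch no number of years ever produces the event.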

BBC News is running a story claiming that the creator of Bitcoin known as Satoshi Nakamoto is an Australian named Craig Wright.

Meta: I got the dates wrong on the last OT, modified it to say the 25th-1st, and this thread runs the 2nd-8th.

I apologize in advance for asking an off-topic question, but my Google-fu has failed me.

My girlfriend's niece is a Small Child who likes to turn the volume on her Android tablet all the way up, making it too loud for everyone else. How can we make it so that when she tries to make the tablet louder, nothing happens? (I know how to do this on an iOS device but not an Android one.)

I use Volume Locker to keep myself from changing volume by accidentally pressing buttons when picking up my phone.

Have you looked for apps that will do this? Something that does for volume what "Twilight" does for screens. Have you checked the parental control tools? Have you considered getting the kid a hearing test?

Seconding getting the kid a hearing test. Alternatively, speech therapy, if the issue is that she cannot understand what's being said.

Look for kid headphones with maximum child safe volume levels.

If she's smart enough to understand words then just tell her not to do it. Take away the tablet whenever she disobeys.

If she's too young for that, tape over the part of the thing that she could press, or just hang it out of reach playing something happy.

Tried the first thing. She doesn't listen - the result would be that she never keeps the tablet for very long.

How long did you try? It took me like 2 weeks to teach my nephews to do what I said in a similar case (keeping the TV turned down instead of a tablet). You need the parents' cooperation too.

It's sometimes difficult to take the tablet away immediately. A typical scenario is that my girlfriend and I are in the front seat of the car while the Small Child sits in a booster seat in the back and wants to use the tablet; she'll fight to keep it and it's hard to reach around the chair to take it out of her hand. Also there's the fact that the Small Child consistently breaks promises - she'll agree not to make it loud to get the tablet back, but immediately turn up the volume anyway when I give it to her. A technical solution is easier than playing dog trainer to a child with a developmental disability...

The "simulation argument" by Bostrom is flawed. It is wrong. I don't understand why a lot of people seem to believe in it. I might do a write up of this if anyone agrees with me, but basically, you cannot reason about without our universe from within our universe. It doesn't make sense to do so. The simulation argument is about using observations from within our own reality to describe something outside our reality. For example, simulations are or will be common in this universe, therefore most agents will be simulated agents, therefore we are simulated agents. However, the observation that most agents will eventually be or already are simulated only applies in this reality/universe. If we are in a simulation, all of our logic will not be universal but instead will be a reaction to the perverted rules set up by the simulation's creators. If we're not in a simulation, we're not in a simulation. Either way, the simulation argument is flawed.

First, Bostrom is very explicit that the conclusion of his argument is not "We are probably living in a simulation". The conclusion of his argument is that at least one of the following three claims is very likely to be true -- (1) humans won't reach the post-human stage of technological development, (2) post-human civilizations will not run a significant number of simulations of their ancestral history, or (3) we are living in a simulation.

Second, Bostrom has addressed the objection you raise here (in his Simulation Argument FAQ, among other places). He essentially flips your disjunctive reasoning around. He argues that we are either in a simulation or we are not. If we are in a simulation, then claim 3 is obviously true, by hypothesis. If we are not in a simulation, then our ordinary empirical evidence is a veridical guide to the universe (our universe, not some other universe). This means the evidence and assumptions used as the basis for the simulation argument are sound in our universe. It follows that since claim 3 is false by hypothesis, either claim 1 or claim 2 is very likely to be true. It's worth noting that these two claims are about our universe, not about some parent universe.

In other words, your objection is based on the argument that if we are in a simulation, there is no good reason to trust the assumptions of the simulation argument (such as assumptions about how our simulators will behave). Bostrom's reply is that if we are in a simulation, then his conclusion is true anyway, even if the specific reasoning he uses doesn't apply. If we are not in a simulation, then the reasoning he uses does apply, so his conclusion is still true.

There does seem to be some sort of sleight-of-mind going on here, if you want my opinion. I generally feel that way about most non-trivial uses of anthropic reasoning. But the exact source of the sleight is not easy for me to detect. At the very least, Bostrom has a prima facie response to your objection, so you need to say something about why his response is flawed. Making your objection and Bostrom's response mathematically precise would be a good way to track down the flaw (if any).

I am taking issue with the conclusion that we are living in a simulation even given premise (1) and (2) being true.

So I am struggling to understand his reply to my argument. In some ways it simply looks like he's saying either we are in a simulation or we are not, which is obviously true. The claim that we are probably living in a simulation (given a couple of assumptions) relies on observations of the current universe, which either are not reliable if we are in a simulation, or obviously are wrong if we aren't in a simulation.

If I conclude that there are more simulated minds than real minds in the universe, I simply do not think that implies that I am probably a simulated mind.

If we are not in a simulation, then the reasoning he uses does apply, so his conclusion is still true.

He's saying that (3) doesn't hold if we are not in a simulation, so either (1) or (2) is true. He's not saying that if we're not in a simulation, we somehow are actually in a simulation given this logic.

either we are in a simulation or we are not, which is obviously true

Just wanted to point out that this is not necessarily true; in a large enough multiverse, there would be many identical copies of a mind, some of which would probably be "real minds" dwelling in "real brains", and some would be simulated.

I am taking issue with the conclusion that we are living in a simulation even given premise (1) and (2) being true.

(1) and (2) are not premises. The conclusion of his argument is that either (1), (2) or (3) is very likely true. The argument is not supposed to show that we are living in a simulation.

He's saying that (3) doesn't hold if we are not in a simulation, so either (1) or (2) is true. He's not saying that if we're not in a simulation, we somehow are actually in a simulation given this logic.

Right. When I say "his conclusion is still true", I mean the conclusion that at least one of (1), (2) or (3) is true. That is the conclusion of the simulation argument, not "we are living in a simulation".

If I conclude that there are more simulated minds than real minds in the universe, I simply do not think that implies that I am probably a simulated mind.

This, I think, is a possible difference between your position and Bostrom's. You might be denying the Self-Sampling Assumption, which he accepts, or you might be arguing that simulated and unsimulated minds should not be considered part of the same reference class for the purposes of the SSA, no matter how similar they may be (this is similar to a point I made a while ago about Boltzmann brains in this rather unpopular post).

I actually suspect that you are doing neither of these things, though. You seem to be simply denying that the minds our post-human descendants will simulate (if any) will be similar to our own minds. This is what your game AI comparisons suggest. In that case, your argument is not incompatible with Bostrom's conclusion. Remember, the conclusion of the simulation argument is that either (1), (2), or (3) is true. You seem to be saying that (2) is true -- that it is very unlikely that our post-human descendants will create a significant number of highly accurate simulations of their descendants. If that's all you're claiming, then you're not disagreeing with the simulation argument.

(1) and (2) are not premises. The conclusion of his argument is that either (1), (2) or (3) is very likely true. The argument is not supposed to show that we are living in a simulation.

The negations of (1) and (2) are premises if the conclusion is (3). So when I say they are "true" I mean, for example, in the first case, that humans WILL reach an advanced level of technological development. Probably a bit confusing; my mistake.

You seem to be saying that (2) is true -- that it is very unlikely that our post-human descendants will create a significant number of highly accurate simulations of their descendants.

I think Bostrom's argument applies even if they aren't "highly accurate". If they are simulated at all, you can apply his argument. I think the core of his argument is that if simulated minds outnumber "real" minds, then it's likely we are all simulated. I'm not really sure how us being "accurately simulated" minds changes things. It does make it easier to reason outside of our little box - if we are highly accurate simulations then we can actually know a lot about the real universe, and in fact studying our little box is pretty much akin to studying the real universe.

This, I think, is a possible difference between your position and Bostrom's. You might be denying the Self-Sampling Assumption, which he accepts, or you might be arguing that simulated and unsimulated minds should not be considered part of the same reference class for the purposes of the SSA, no matter how similar they may be (this is similar to a point I made a while ago about Boltzmann brains in this rather unpopular post).

Let's assume I'm trying to make conclusions about the universe. I could be a brain in a vat, but there's not really anything to be gained in assuming that. Whether it's true or not, I may as well act as if the universe can be understood. Let's say I conclude, from my observations about the universe, that there are many more simulated minds than non-simulated minds. Does it then follow that I am probably a simulated mind? Bostrom says yes. I say no, because my reasoning about the universe that led me to the conclusion that there are more simulated minds than non-simulated ones is predicated on me not being a simulated mind. I would almost say it's impossible to reason your way into believing you're in a simulation. It's self-referential.

I'm going to have to think about this harder, but try and criticise what I'm saying as you have been doing because it certainly helps flesh things out in my mind.

I think Bostrom's argument applies even if they aren't "highly accurate". If they are simulated at all, you can apply his argument.

I don't think that's true. The SSA will have different consequences if the simulated minds are expected to be very different from ours.

If we suppose that simulated minds will have very different observations, experiences and memories from our own, and we consider the hypothesis that the vast majority of minds in our universe will be simulated, then SSA simply disconfirms the hypothesis. If I should reason as if I am a random sample from the pool of all observers, then any theory which renders my observations highly atypical will be heavily disconfirmed. SSA will simply tell us it is unlikely that the vast majority of minds are simulated. Which means that either civilizations don't get to the point of simulating minds or they choose not to run a significant number of simulations.

If, on the other hand, we suppose that a significant proportion of simulated minds will be quite similar to our own, with similar thoughts, memories and experiences, and we further assume that the vast majority of minds in the universe are simulated, then SSA tells us that we are likely simulated minds. It is only under those conditions that SSA delivers this verdict.
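A toy Bayes calculation (all numbers invented) of that contrast:

    prior = {"most minds are weird simulations": 0.5,
             "few or no simulations": 0.5}
    # Probability that a randomly sampled observer has ordinary human-like
    # observations: atypical under the first hypothesis, typical under the second.
    likelihood = {"most minds are weird simulations": 0.01,
                  "few or no simulations": 1.0}

    evidence = sum(prior[h] * likelihood[h] for h in prior)
    posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
    print(posterior)
    # ~{'most minds are weird simulations': 0.0099, 'few or no simulations': 0.9901}

If the simulations are instead ancestor-simulations whose observers see just what we see, both likelihoods go to 1, the update vanishes, and conditional on most minds being simulated you should expect to be one of them.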

This is why, when Bostrom describes the Simulation Argument, he focuses on "ancestor-simulations". In other words, he focuses on post-human civilizations running detailed simulations of their evolutionary history, not just simulations of any arbitrary mind. It is only under the assumption that post-human civilizations run ancestor-simulations that the SSA can be used to conclude that we are probably simulations (assuming that the other two possible conclusions of the argument are rejected).

So I think it matters very much to the argument that the simulated minds are a lot like the actual minds of the simulators' ancestors. If not, the argument does not go through. This is why I said you seem to simply be accepting (2), the conclusion that post-human civilizations will not run a significant number of ancestor-simulations. Your position seems to be that the simulations will probably be radically dissimilar to the simulators (or their ancestors). That is equivalent to accepting (2), and does not conflict with the simulation argument.

You seem to consider the Simulation Argument similar to the Boltzmann brain paradox, which would raise the same worries about empirical incoherence that arise in that paradox, worries you summarize in the parent post. The reliability of the evidence that seems to point to me being a Boltzmann brain is itself predicated on me not being a Boltzmann brain. But the restriction to ancestor-simulations makes the Simulation Argument importantly different from the Boltzmann brain paradox.

If we are in a simulation, all of our logic will not be universal but instead will be a reaction to the perverted rules set up by the simulation's creators.

While I do not agree with the conclusion of the simulation argument, I think your rebuttal is flawed: we can safely reason about the reality outside the simulation if we presume that we are inside a realistic simulation, that is, a simulation whose purpose is to mimic as closely as possible the reality outside. I don't know if it's made explicit in the exposition you read, but I've always assumed the argument was about a realistic simulation. Indeed, if the laws of physics are computable, you can even have an emulation argument.

Hm. Let me try to restate that to make sure I follow you.

Consider three categories of environments: (Er) real environments, (Esa) simulated environments that closely resemble Er, aka "ancestral simulations", and (Esw) simulated environments that don't closely resemble Er, aka "weird simulations."

The question is, is my current environment E in Er or not?

Bostrom's argument as I understand it is that if post-human civilizations exist and create many Esa-type environments, then for most E, (E in Esa) and not (E in Er). Therefore, given that premise I should assume (E in Esa).

Your counterargument as I understand it is that if (E in Esw) then I can draw no sensible conclusions about Er or Esa, because the logic I use might not apply to those domains, so given that premise I should assume nothing.

Have I understood you?

you cannot reason about what is outside our universe from within our universe. It doesn't make sense to do so.

Of course you can. Anyone who talks about any sort of 'multiverse' - or even causally disconnected regions of 'our own universe' - is doing precisely this, whether they realize it or not.

No. Think about what sort of conclusions an AI in a game we make would come to about reality. Pretty twisted, right?

It sounds like you expect it to be obvious, but nothing springs to mind. Perhaps you should actually describe the insane reasoning or conclusion that you believe follows from the premise.

We could have random number generators that choose the geometry an agent in our simulation finds itself in every time it steps into a new room. We could make the agent believe that when you put two things together and group them, you get three things. We could add random bits to an agent's memory.

There is no limit to how perverted a view of the world a simulated agent could have.
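A toy illustration (details obviously invented) of the kind of world I mean:

    import random

    def room_geometry():
        # geometry is re-rolled every time the agent steps into a new room
        return random.choice(["euclidean", "hyperbolic", "spherical"])

    def combine(group_a, group_b):
        # the world's "arithmetic": grouping two collections adds an extra item
        return len(group_a) + len(group_b) + 1

    def corrupt_memory(bits):
        # random bits injected into the agent's memory
        bits[random.randrange(len(bits))] ^= 1
        return bits

    print(room_geometry())            # e.g. 'hyperbolic'
    print(combine([1], [2]))          # 1 thing plus 1 thing makes 3
    print(corrupt_memory([0, 1, 0]))  # one bit flipped at random

An agent inside this would infer its rules perfectly well, and every one of those inferences would be useless outside it.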

I did an exercise in generating my values.

A value is like a direction - you go north, or south. You may hit goal mountains and hang a right past that tree, but you still want to be going north. Specifically, you may want to lose weight on the way to being healthy, but being healthy is what you value. This was from a 5-10 minute brainstorm pen+paper session (with a timer) in one of our dojos. I kinda don't want it to be for just my benefit, so I figured I would share it here; they are in no order.

My values, rot13'd (a decoding sketch follows the list):

  • Haqrefgnaq ubj guvatf jbex

  • Yvir ybat, Urnygul

  • Unir rabhtu jrnygu gb yvir jvgubhg jbeel sbe zl shgher (naq zl snzvyl'f shgher)

  • Perngr guvatf bs inyhr gb zr be bguref - Neg, Jevgvat, Pbafgehpgvbaf, jbbq/ryrpgebavp, Prenzvpf

  • Uryc jvgu gur gbbyf bs gur shgher

  • Haqrefgnaq ubj V jbex rabhtu fb gung V pna jbex gbjneqf nyy bs zl inyhrf

  • qb guvatf V rawbl

  • unir gur cbjre gb or serr gb qb nf V cyrnfr

  • yrnir n yrtnpl (ivn perngvat guvatf)

  • uryc crbcyr ol oevatvat gurz gbtrgure

  • unir vasyhrapr jura V jnag vg

  • or npxabjyrqtrq (fhotbny gb yrtnpl)

  • Nibvq nqqvpgvba, fgntangvba, cnva, ybaryvarff, wnvy, qrog, qehtf.

  • Xabjyrqtr - (fhotbny gb qbvat gur guvatf V jnag gb qb)

  • Or ebznagvpnyyl unccl/shyshyyrq
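(They're encoded so you aren't anchored before trying the exercise yourself. To peek afterwards, rot13 is trivially reversible, e.g. in Python:)

    import codecs
    print(codecs.decode("Haqrefgnaq ubj guvatf jbex", "rot13"))
    # -> Understand how things work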

Hopefully this gets you thinking about doing the exercise once for yourself. Also some ideas came from The list of common human goals that I wrote a while back.

  • NZ epidemiologist Pearson A.L. appears to have predicted the Trans-Pacific Partnership in 2014: "Although such a case may have no strong grounds in existing New Zealand law, it is possible that New Zealand may in the future sign international trade agreements where such legal action became more plausible." - British Medical Journal

  • Why do I, as a desperate male, lonely-and-horny-level desperate, stave off the attention of females when I'm not the one leading the charge? One of my peak experiences was visiting Torquay on an undergrad uni field trip, walking with the sexiest girl I'd ever met. A busker was playing 'I'm a Believer' at a market. It was magical. After the field trip she invited me to a coffee date - I agreed. I never took the initiative from there, and nothing happened. I had spent a week fantasising about her and enjoying her company, but her sexual aggression was somewhat intimidating. The same happened recently with someone I struggle to appreciate, a girl who flirts with me on an ongoing basis.

  • My housemate said having a strong feeling of "I don't want to be like my parents" will make me more like them. I wonder if that's true? Is trying to be less neurotic self-defeating?
  • CFMEU has a new slogan: 'every battle makes us stronger'. Looks like smart advertising from a group that's under constant fire.

Reframe log

  • Instead of seeing people moving through crowds as antagonistic, see them as having compatible wants (not wanting to collide with you)
  • Instead of seeing strangers around me as potentially violent threats, see them as potential defenders

behavioural insight, modification

Stop doing those sloppy back-slap drum-roll hugs, Carlos!

  • I used topsy-turvy photo icons in my science presentation. I thought it looked kooky and kitsch. It looked dumb. As they say: ironic shitposting is still shitposting.

NZ epidemiologist Pearson A.L. appears to have predicted the Trans-Pacific Partnership in 2014

Given that the Trans-Pacific Partnership negotiations started in 2008 and were first scheduled to end in 2012, predicting it in 2014 seems like a feat that doesn't have much to do with prediction, just with being up to date about what's currently being negotiated.

Well, what are your beliefs and feelings about intimacy and sex? If you imagine yourself accepting the offers, what would it mean about you? Imagine it like a movie, and then what your parents (or other important people) would say about that.

(I suspect there is something negative, either directly about you e.g. "if you don't lead, then you are weak", or about the girl and then indirectly about you e.g. "if she initiates, she is a slut; and you are a loser if you date a slut".)

beliefs and feelings about intimacy and sex?

This is a complicated clusterfuck and I don't know where to begin

Imagine it like a movie, and then what your parents (or other important people) would say about that.

I would feel kinda ashamed

(I suspect there is something negative, either directly about you e.g. "if you don't lead, then you are weak", or about the girl and then indirectly about you e.g. "if she initiates, she is a slut; and you are a loser if you date a slut".)

I feel I can totally identify with this suggestion. But I'm not sure if that's just because I'm suggestible.

Thank you so much for your insight.

Why do I stave off the attention of women?

I've had similar reactions in the past. There are a couple reasons, I think. Fear or rejection of the unknown, of jumping into new social situations. Nearsightedness in wanting everything to go perfectly the first time so much that you don't get practice at making things go well. Fear of exposing myself to rejection, coupled with harder to describe feelings of low romantic or sexual worth. The feeling that you don't really know for absolutely sure that you want to spend a ton of time with the person you're flirting with, so you shouldn't follow through.

Two things have helped me with this. The first is increasing my self-worth a little. You can probably think of men less physically attractive than you who have had perfectly happy relationships. Try to understand what makes them attractive people (I tend to think of this as "falling in love" in miniature). In fact, I've found this exercise of trying to see the lovable in other people is a pretty good one in general. Anyhow, you can do this on yourself too. You have plenty of good points, I guarantee it.

The second thing was just jumping into those novel social situations. I have a mantra for it, even: "I would regret not doing it, therefore I will do it."

Fear or rejection of the unknown, of jumping into new social situations.

I suppose so

Nearsightedness in wanting everything to go perfectly the first time so much that you don't get practice at making things go well.

Other experiences support this hypothesis in my case

Fear of exposing myself to rejection, coupled with harder to describe feelings of low romantic or sexual worth.

yep

The feeling that you don't really know for absolutely sure that you want to spend a ton of time with the person you're flirting with, so you shouldn't follow through.

I don't want to get attached to someone that's gonna burn me! :(

Two things have helped me with this. The first is increasing my self-worth a little. You can probably think of men less physically attractive than you who have had perfectly happy relationships. Try to understand what makes them attractive people (I tend to think of this as "falling in love" in miniature). In fact, I've found this exercise of trying to see the lovable in other people is a pretty good one in general. Anyhow, you can do this on yourself too. You have plenty of good points, I guarantee it.

That's a very compelling case. Thank you. And, I feel more positive about other people now too :)

I have a mantra for it, even: "I would regret not doing it, therefore I will do it."

I guess it's time to pull up that backlog of people I have a vague interest in... ;)

On Fox News, Trump said that regarding Muslims in the US, he would do "unthinkable" things, "and certain things will be done that we never thought would happen in this country". He also said it's impossible to tell with absolute certainty whether a Syrian was Christian or Muslim, so he'd have to assume they're all Muslims. This suggests that telling US officials that I'm a LW transhumanist might not convince them that I have no connection with ISIS. I'm not from Syria, but I have an Arabic name and my family is Muslim.

I've read Cory Doctorow's Little Brother, and this might be a generalization from fictional evidence, but I can't help asking: As a foreign student in the US, how likely is Trump to have me tortured for no reason? Should I drop everything and make a break for it before it's too late? Initially, many Germans didn't take Hitler's extremist rhetoric seriously either, right? (If I get deported in a civilized manner, well, no harm done to me as far as I'm concerned.)

I normally assume, as a rule of thumb, that politicians intend to fulfill all their promises. If a politician says he wants to invade Mars, that could be pure rhetoric, but I'd typically assume that he might try it in the worst case scenario. I have observed it is often the case that when we think other people are joking, they are in fact exaggerating their true desires and presenting them in an ironic/humorous light.

Seems like you're just falling for partisan media histrionics and conflating a lot of different things out of context.

On Fox News, Trump said that regarding Muslims in the US, he would do "unthinkable" things, "and certain things will be done that we never thought would happen in this country".

In context, Trump is giving a tough-sounding but vague and non-committal response to questions about whether there should be a digital database of Muslims in the country. He later partially walked this back, saying it was a leading question from a reporter and he meant we should have terrorism watch lists. Which obviously already exist.

I've read Cory Doctorow's Little Brother, and this might be a generalization from fictional evidence, but I can't help asking: As a foreign student in the US, how likely is Trump to have me tortured for no reason?

I'd say it's about as likely as you giving yourself a heart attack reading political outrage porn.

Thanks, I guess. I knew he was talking about a digital database, but I was wondering if it could have been a dogwhistle for something else. I don't have a favorable opinion of human decency in general.

FWIW, that wasn't a political comment. I hardly ever read or watch anything political. Some TV clips were shown to me by an acquaintance and I wanted an honest assessment of what he had told me it was about. I don't have any opinions on the subject myself.

assume, as a rule of thumb, that politicians intend to fulfill all their promises.

This is a horrible rule of thumb. It's not anywhere close to true, and even if it were, their ability lags their intent by orders of magnitude. Instead, assume that politicians will very slightly alter existing trends in order to encourage their constituents.

I suspect you are at somewhat higher risk of being targeted by officials for your foreign-ness than you were last year. Trump becoming president will increase that risk somewhat as well, but more because it'll be a sign that the general populace is more racist than we thought than because of any actual policy change.

I think it's really unlikely you'd be imprisoned or tortured, with or without Trump, unless there are stronger ties to enemy groups than just your nationality.

I assume that because I read on the SEP that strategic voting skews results in democracies. The rule of thumb is more like a Schelling point than a lower order rational principle. I said that's what I usually do because I'm aware it's not very applicable in this context since I'm not voting in these elections, but it's a habit I've indulged in for years, unfortunately.

If I were in a pedantic mood, I'd say that the results skew because of bad voting mechanisms (state-level electors and first-past-the-post decisions) that encourage strategic voting, rather than directly from strategic voting.

Still, the electoral skew isn't what you should fear, nor the actual election outcome. The signalling of the populace that such ideas are acceptable to a significant degree is very scary. It's up to you just how personally to take the fear, and how to react to a risk increase from a small fraction of a percent to a less-small fraction of a percent.

I can imagine that if you're an activist, or particularly stand out as a target group, or are just a nervous person, it might be justified to maintain an exit plan you can execute over the course of a few days if something changes your estimate of personal danger to the measurable range.

that such ideas are acceptable to a significant degree is very scary.

Which ideas? After John Yoo's memos on torture, Snowden, assassination-by-drone as an entirely routine matter, Guantanamo, etc. what exactly is new and scary to you?

New and scary is the degree to which it's become normal and accepted in mainstream press and the general populace. People with power have always been horrible, but until recently they've had to do it in secret and say they're sorry when they get caught.

New and scary is the degree to which it's become normal and accepted in mainstream press and the general populace.

So... if we're talking presidents, this goes straight to Bush and Obama. I would say Obama in particular because he was supposed to be a bulwark against such things.

However, we are discussing why Trump is scary. Why is he more scary than the status quo or, say, Hillary? There is a pronounced trend towards a police state; Trump isn't going to stop it, but then I don't see anyone who would and who has a chance at getting to a position where he could.

"As a foreign student in the US, how likely is Trump to have me tortured for no reason?"

It's hard to judge, but I think having a pro-torture president will make use of torture by the police more likely. My feeling is that you aren't in clear and present danger, and institutional changes take time.

You are not as safe as someone with a non-Arabic name.

My feeling is that you don't need a go bag, but you might as well start researching other places which would be good for you to live.

Hitler had a huge party of supporters behind him that he spent a decade gathering around him. Trump, on the other hand, is much more of a one-man show. One of the biggest roles of the president is making personnel choices, and there is simply no comparable pool of talent. Under a Trump administration, someone like Chris Christie, who's a long-term friend of the Trump family, is likely going to get a post.

When it comes to totalitarianism, it's a mistake to assume that the past will repeat in exactly the same way. It's hard to believe a US government would intentionally torture random people just because they have Arabic names. It's more likely that privacy will get completely eroded. Today we have face recognition that's strong enough to hook up all the cameras on the streets to it and get general movement profiles. Forbidding encryption would also be on the table.

Thanks, I'm basically ignorant about contemporary American politics. (But I've read Tocqueville. This is probably not a desirable state of affairs.)

Effective careers

One-line summary of What is a fulfilling career? (part 1) @ Cambridge University:

  • autonomy, clear tasks, feedback and variety --> engaging;
  • meaningful;
  • do work that helps others;
  • good relationships with colleagues and social support;
  • fair pay; not long commute; not excessive hours;
  • fits with the rest of your life

--> job satisfaction

The subject matter of the work (e.g. that your passion is sports, or that something is a non-neutral cause choice) is actually irrelevant!

Systematic reviews

Pubmed recently spat out:

Wildcard search for 'spa*' used only the first 600 variations. Lengthen the root word to search for all endings

So, next time I will search PubMed before other databases, to identify if my wildcards are overly general for my search strategy. Reckon that's a good approach?

Lobbying

World Coal Association says: 'The power of high efficiency coal - The most cost-effective way to mitigate CO2 emissions' here

The World Coal Association is non-profit, so perhaps we shouldn't fetishise the term non-profit, or the term coal?

The World Coal Association is non-profit, so perhaps we shouldn't fetishise the term non-profit, or the term coal?

I couldn't parse this. What do you mean?