Looking forward to the results.
Somewhere I read that a big reason IQ tests aren't all that popular is because when they were first introduced, lots of intellectuals took them and didn't score all that high. I'm hoping prediction markets don't meet a similar fate.
It's fiction ¯\_(ツ)_/¯
I guess I'll say a few words in defense of doing something like this... Suppose we take a consequentialist ethical stance. In that case, the only purpose of punishment is to serve as a deterrent. But in our glorious posthuman future, nanobots will step in before anyone is allowed to get hurt, and crimes will be impossible to commit. So deterrence is no longer necessary, and the only remaining reason to punish people is spite. But if people are feeling spiteful towards one another on Eudaimonia, that would kill the vibe. Being able to forgive one person you disagree with seems like a pretty low bar where being non-spiteful is concerned. (Other moral views might consider punishment a moral imperative even if it isn't achieving anything from a consequentialist point of view. But consequentialism is easily the most popular moral view on LW, according to this survey.)
A more realistic scheme might involve multiple continents for people with value systems that are strongly incompatible, perhaps allowing people to engage in duels on a voluntary basis if they're really sure that is what they want to do.
In any case, the name of the site is "Less Wrong", not "Always Right", so I feel pretty comfortable posting something I suspect may be flawed and letting commenters find the flaws. (In fact, that was part of why I made this post: to see what complaints people would have, beyond the fun of sharing a whimsical story. But overall the post was more optimized for whimsy.)
For some thoughts on how climate change stacks up against other world-scale issues, see this.
Yep. Good thing a real AI would come up with a much better idea! :)
It seems to me that under ideal circumstances, once we think we've invented FAI, before we turn it on, we share the design with a lot of trustworthy people we think might be able to identify problems. I think it's good to have the design be as secret as possible at that point, because that allows the trustworthy people to scrutinize it at their leisure. I do think the people involved in the design are liable to attract attention--keeping this "FAI review project" secret will be harder than keeping the design itself secret. (It's easier to keep the design for the bomb secret than hide the fact that top physicists keep mysteriously disappearing.) And any purported FAI will likely come after a series of lesser systems with lucrative commercial applications used to fund the project, and those lucrative commercial applications are also liable to attract attention. So I think it's strategically valuable to have the distance between published material and a possible FAI design be as large as possible. To me, the story of nuclear weapons is a story of how this is actually pretty hard even when well-resourced state actors try to do it.
Of course, that has to be weighed against the benefit of openness. How is openness helpful? Openness lets other researchers tell you if they think you're pursuing a dangerous research direction, or if there are serious issues with the direction you're pursuing which you are neglecting. Openness helps attract collaborators. Openness helps gain prestige. (I would argue that prestige is actually harmful because it's better to keep a low profile, but I guess prestige is useful for obtaining required funding.) How else is openness helpful?
My suspicion is that those papers on arXiv with 5 citations are mostly getting cited by people who already know the author, and that the arXiv publication isn't actually doing much to attract collaboration. It feels to me like if our goal is to help researchers get feedback on their research direction or find collaborators, there are better ways to do this than encouraging them to publish their work. So if we could put mechanisms in place to achieve those goals, that could remove much of the motivation for openness, which would be a good thing in my view.
Dating is a project that can easily suck up a lot of time and attention, and the benefits seem really dubious (I know someone who had their life ruined by a bad divorce).
I would be interested in the opposite question: why *would* an EA try to find someone to marry? I'm not trying to be snarky; I genuinely want to hear why, in case I should change my strategy. The only reason I can think of is if you're a patient longtermist and you think your kids are more likely to be EAs.
I spent some time reading about the situation in Venezuela, and from what I remember, a big reason people are stuck there is simply that the bureaucracy for processing passports is extremely slow and dysfunctional (and lack of a passport presents a barrier to achieving legal immigration status in any other country). So it might be worthwhile to renew your passport more regularly than strictly necessary, say so that you always have at least a five-year buffer on it, in case we see the same kind of institutional dysfunction. (Much less effort than acquiring a second passport.)
Side note: I once talked to someone who became stuck in a country he was not a citizen of because he let his passport expire and couldn't travel back home to renew it. (He was from a small country. My guess is that the US offers passport services abroad without needing to travel back home. But I could be wrong.)
Worth noting that we have at least one high-karma user who is liable to troll us with any privileges granted to high-karma users.
I was always nice and considerate, and it didn’t work until I figured out how to filter for women who are themselves lovely and kind.
Does anyone have practical tips on finding lonely single women who are lovely and kind? I've always assumed that these were universally attractive attributes, and thus there would be much more competition for such women.