If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Open thread, Nov. 24 - Nov. 30, 2014

The header for this page says "You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet.". It's inaccurate because Discussion doesn't include the posts which were started in Main.

Stuart Russell contributes a response to the Edge.org article from earlier this month.

Of Myths And Moonshine

"We switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief."

So wrote Leo Szilard, describing the events of March 3, 1939, when he demonstrated a neutron-induced uranium fission reaction. According to the historian Richard Rhodes, Szilard had the idea for a neutron-induced chain reaction on September 12, 1933, while crossing the road next to Russell Square in London. The previous day, Ernest Rutherford, a world authority on radioactivity, had given a "warning…to those who seek a source of power in the transmutation of atoms – such expectations are the merest moonshine."

Thus, the gap between authoritative statements of technological impossibility and the "miracle of understanding" (to borrow a phrase from Nathan Myhrvold) that renders the impossible possible may sometimes be measured not in centuries, as Rod Brooks suggests, but in hours.

None of this proves that AI, or gray goo, or strangelets, will be the end of the world. But there is no need for a proof, just a convincing a

... (read more)
6artemium
Finally some common sense. I was seriously disappointed in statements made by people I usually admire (Pinker, Shermer). It just shows how far we still have to go in communicating AI risk to the general public when even the smartest intellectuals dismiss this idea before any rational analysis. I'm really looking forward to Elon Musk's comment.
1Brillyant
ELI5... * Why can't we program hard stops into AI, where it is required to pause and ask for further instruction? * Why is "spontaneous emergence of consciousness and evil intent" not a risk?
8Viliam_Bur
If the AI is aware of the pauses, it can try to eliminate them (if the pauses are triggered by a circumstance X, it can find a clever way to technically avoid X), or to make itself receive the "instruction" it wants to receive (e.g. by threatening or hypnotising a human, or by doing something that technically counts as human input).
-3Brillyant
I see. This is the gist of the AI Box experiment, no?
4Viliam_Bur
The important aspect is that there are many different things the AI could try. (Maybe including those that can't be "ELI5". It is supposed to have superhuman intelligence.) Focusing on specific things is missing the point. As a metaphor, imagine that a group of retarded people is trying to imprison MacGyver in a garden shed. Later MacGyver creates an explosive from his chewing gum, destroys a wall, and leaves. The moral of this story is not: "To imprison MacGyver reliably, you must take all the chewing gum from him." The moral is: "If you are retarded, and your enemy is MacGyver, you almost certainly cannot imprison him in the garden shed." If you get this concept, then similar debates will feel like: "Let's suppose we make really really sure he has no chewing gum. We will even check his shoes, although, realistically, no one keeps chewing gum in their shoes. But we will be extra careful, and will check his shoes anyway. What could possibly go wrong?"
2wedrifid
No. Bribes and rational persuasion are fair game too.
1[anonymous]
Because instructions are words, and "ask for instructions" implies an ability to understand and a desire to follow. The desire to follow instructions according to their givers' intentions is more-or-less a restatement of the Hard Problem of FAI itself: how do we formally specify a utility function that converges to our own in the limit of increasing optimization power and autonomy?
-3TheAncientGeek
If you are worrying about the dangers of human level or greater AI, you are tacitly taking the problem of natural language interpretation to have been solved, so the above is an appeal to Mysterious Selective Stupidity.
1[anonymous]
No, I am not. Just because an AGI can solve the natural-language interpretation problem does not mean the natural-language interpretation problem was solved separately from the AGI problem, in terms of narrow NLP models. In fact, more or less the entire point of AGI is to have a single piece of software to which we can feed any and all learning problems without having to figure out how to model them formally ourselves.
0TheAncientGeek
In responding to Brillyant, you were tacitly assuming that the AI has been given instructions in some higher-level language that is subject to differing interpretations, and is not therefore just machine code, which is tacitly assuming it has already got NL abilities. Yes, it would probably need a motivation to interpret such sentences correctly. But that is an easier problem to solve than coding in the whole of human value. An AI would need to understand human value in order to understand NL, but would not need to be preloaded with all human value, since discovering it would be a subsidiary goal of interpreting NL correctly. And interpreting instructions correctly is a subgoal of getting things in general right. Building AIs that are epistemic rationalists could be a further simplification of the problem of AI safety. Epistemic rationality is difficult for humans because humans are evolutionary hacks whose goals are spreading their genes, achieving status, etc. It may be excessively anthropomorphic to assume human levels of deviousness in AIs.
1[anonymous]
No, I'm insisting that no realistic AGI at all is a Magic Genie which can be instructed in high-level English. If it were, all I would have to say is, "Do what I mean!" and Bob's your uncle. But since that cannot happen without solving Natural Language Processing as a separate problem before constructing an AGI, the AGI agent has a utility function coded as program code in a programming language -- which makes desirable behavior quite improbable. Again: knowing is quite different from caring. What we could do in this domain is solve natural-language learning and processing separately from AGI, and then couple that to a well-worked-out infrastructure of normative uncertainty, and then, after making absolutely sure that the AI's concept-learning via the hard-wired natural-language processing library matches the way human minds represent concepts computationally, use a large corpus of natural-language text to try to teach the AI what sort of things human beings want. Unfortunately, this approach rarely works with actual humans, since our concept machinery is horrifically prone to non-natural hypotheses about value, to the point that most of the human race refuses as a matter of principle to consider ethical naturalism a coherent meta-ethical stance, let alone the correct one. We have some idea of a safe goal function for the AGI (it's essentially a longer-winded version of "Do what I mean, but taking the interests of all into account equally, and considering what I really mean even under reflection as more knowledge and intelligence are added"), the question is how to actually program that. Which is actually an instance of the more general problem: how do we program goals for intelligent agents in terms of any real-world concepts about which there might be incomplete or unformalized knowledge? Without solving that we can basically only build reinforcement learners. The whole cognitive-scientific lens towards problems is to treat them as learning and inference prob
0TheAncientGeek
I was actually agreeing with you that NLP needs to be solved separately if you want to instruct it in English. The rhetoric about magic isn't helpful. I don't see why that would follow, and in fact I argued against it. I know. That's not what I was saying. I was saying an AI with a motivation to understand NL correctly would research whatever human value was relevant. That's kind of what I was saying. Non sequitur. In general, what is an instrumental goal will vary with final goals, and epistemic rationality is a matter of final goals. Omohundro drives are unusual in not having the property of varying with final goals.
[-]NikiT170

I've been trying to decide whether or not to pursue an opportunity to spread rationalist memes to an audience that wouldn't ordinarily be exposed to them. I happen to be friends with the CEO and editor of an online magazine/community blog that caters to queer women, and I'm reasonably confident that with the right pitch I could convince them to let me do a column dedicated to rationality as it relates to the specific interests of queer women. I think there might be value in tailoring rationality material for specific demographics.

The issue is that, in order to make it relevant to the website and the demographic, I would need to talk about politics while trying to teach rationality, which seems highly risky. As one might imagine from the demographic, the website and associated community is heavily influenced by social justice memes, many of which I wholeheartedly endorse and many others of which I'm highly critical. The strategy I've been formulating to avoid getting everybody mindkilled is to talk about the ways biases contribute to sexism and homophobia, and then also talk about how those same biases can manifest in feminist/social justice ideas, while emphasising to death how i... (read more)

There's a good strategy against publishing something stupid: peer review before publication.

Something that's missing from a lot of social justice talk is quoting cognitive science papers. Talking about actual experiments and what the audience can learn from them could make people care more about empiricism.

4NikiT
I was planning to have one of my friends from the community around that website test read the articles for me, though I might also benefit from having a rationalist test read them, if anybody wants to volunteer. Discussing cognitive science experiments is part of the plan. I actually performed a version of the 2-4-6 experiment on a group of people associated with the website (while dressed as a court jester! It was during a renaissance fair) and, as predicted, only 20% of them got it right. I think knowing that members of their own ingroup are just as susceptible to bias as faceless experimental subjects will help get the point across.
2ChristianKl
I volunteer for giving you feedback on a few articles.

Suddenly, I know the relative sizes of the planets!

HT Andrew Gelman.

ETA: Pluto isn't in the picture, but it would be a coriander seed, half the diameter of Mercury. For the Sun, imagine a spherical elephant.

5philh
The radius of the Sun is only about ten times the radius of Jupiter. I feel like a spherical elephant has considerably more than ten times the radius of a watermelon. ...is what I was about to say until I did some research, and apparently it's pretty accurate. A watermelon can exceed 60 cm diameter, and Wolfram Alpha gives an elephant's length as between 5.4 and 7.5 metres.
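A quick numeric check of that comparison, as a minimal Python sketch; the figures are the ones quoted above, and the ~10x Sun/Jupiter radius ratio is a rounded value:

    # Rough sizes from the comment above, in metres.
    watermelon_diameter = 0.60            # a large watermelon
    elephant_length_range = (5.4, 7.5)    # Wolfram Alpha's range for an elephant

    ratios = [length / watermelon_diameter for length in elephant_length_range]
    print(ratios)  # [9.0, 12.5]

    # That range brackets the ~10x ratio of the Sun's radius to Jupiter's,
    # so "Jupiter : watermelon :: Sun : spherical elephant" roughly checks out.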
4Brillyant
That's either one huge grapefruit...or one tiny watermelon.
[-]Torgo140

I've long been convinced that donating all the income I can is the morally right thing to do. However, so far this has only taken the form of reduced consumption to save for donations down the road. Now that I have a level of savings I feel comfortable with and expect to start making more money next year, I no longer feel I have any excuse; I aim to start donating by the end of this year.

I’m increasingly convinced that existential risk reduction carries the largest expected value; however, I don’t feel like I have a good sense of where my donations would have the greatest impact. From what I have read, I am leaning towards movement building as the best instrumental goal, but I am far from sure. I’ll also mention that at this point I’m a bit skeptical that human ethics can be solved and then programmed into an FAI, but I also may be misunderstanding MIRI’s approach. I would hope that by increasing the focus on the existential risks of AI in elite/academic circles, more researchers could eventually begin pursuing a variety of possibilities for reducing AI risk.

At this point, I am primarily considering donating to FHI, CSER, MIRI or FLI, since they are ER focused. However, I am open to alternatives. What are others’ thoughts? Thanks a lot for the advice.

3Gurkenglas
An upper bound on the loss incurred by waiting another year before you donate your savings to an organization is the interest they would have to pay on a loan of your savings' size over that time. If you estimate a high enough chance that you will regret your choice of donation target within a year, waiting may be prudent. Just a thought. (The cost might be increased by their reduced capacity for planning with the budget provided by you in mind; but with enough people acting like you, the impact of this factor should disappear in the law of large numbers.)
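A minimal sketch of that bound in Python, with made-up numbers (the savings amount and loan rate below are illustrative assumptions, not figures from the thread):

    # Hypothetical figures, for illustration only.
    savings = 10_000     # dollars held back for a year
    loan_rate = 0.08     # annual interest rate the charity would plausibly pay on a loan

    # Upper bound on what the charity loses by getting the money a year late:
    # at worst, they could have borrowed the same amount and paid this much interest.
    upper_bound_on_loss = savings * loan_rate
    print(f"Waiting a year costs the recipient at most roughly ${upper_bound_on_loss:,.0f}")

On this rough model, if the probability of regretting (and improving) your choice of target within the year, times the value of the improvement, exceeds a number of that size, waiting comes out ahead.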
4Torgo
Certainly that is an important point to consider. I could always place funds in a donor advised fund for now. However, if an organization that I donated to thought the funds would be best spent later, they could invest the funds. Considering this, my current thinking is that I should donate to an organization if they share the goal of reducing existential risk and I think they would be better at deciding on the best course of action than I would. Considering I am not currently an expert in areas which would prove useful to reducing existential risk, I'm leaning towards donating. Does this seem like a sensible course of action?
3jefftk
In practice, charities don't really invest excess money or take out loans to spend money sooner. I'm not sure why. Possible explanations: * No one will lend much to charities, because they don't have much collateral and their income expectations are so uncertain. Or this leads to very high interest rates. * Investing money instead of spending it looks bad and is visible externally through things like the US Form 990. * You're required to spend at least X% of the money that comes in each year. * If you take a loan, having already spent the money makes it harder to fundraise. People want to pay for things to happen. * Investing extra money signals that you don't have room for more funding and so should get less money in the future. Regardless, if you're thinking that your decision doesn't matter because the recipient can just do X or Y, and it turns out X and Y aren't really options for them, then your decision does still matter.
0[anonymous]
So I pressed the icon that looked like "Delete" and it just struck the text through. Great.
1jefftk
If you think general EA movement building is what makes the most sense currently, then funding the Centre for Effective Altruism (the people who run GWWC and 80k) is probably best. If you think X-risk specific movement building is better, then CSER and FLI seem like they make the most sense to me: they're both very new, and spreading the ideas into new communities is very valuable. (And congratulations on getting to where you're ready to start donating!)
0Torgo
Thanks. At this point, I'm leaning towards CSER. Do you happen to know how it compares to other X-risk organizations in terms of room for more funding?
1jefftk
I don't know, sorry! Without someone like GiveWell looking into these groups individuals need to be doing a lot of research on their own. Write to them and ask? And then share back what you learn? (Lack of vetting and the general difficulty of evaluating X-risk charities is part of why I'm currently not giving to any.)

This week's writing lesson: If your motivation for writing is almost entirely internal, then you should write what you enjoy writing, not what you think you should write.

(I lost a few days' worth of productivity getting that one knocked into my skull, though hopefully I'm back to snuff.)

A song about self-awareness:

Yielding to Temptation by Mark Mandel, to the tune of Bin There, Dun That by Cat Faber

Something called me from the bookcase
and I answered quick and dumb
And I guess I'd still be reading there
if rescue hadn't come.
Well, I must have jumped six inches
and I answered "Coming, dear!"
Now the sf's in the basement
and it doesn't call so clear.

Chorus: 'Cause I've bin there, dun that,
learned what I should know.
Had the hours* go like nothing
and had nothing good to show.
Yes, I've bin there, dun that,
learned to recognize
When I'm yielding to temptation
by the haze behind my eyes.

  * changes with each chorus

I was filling up the ice cube tray
last night at half past ten
When I heard a voice entreating
"Won't you dance with me again?"
It's the caramel fudge ripple,
sweet as love and thick as sin.
I'm not dumb, I'm not expAndable,
and I'm not digging in!

Chorus: 'Cause I've bin there, dun that,
learned what I should know.
Had the calories* go like nothing
and had nothing good to show.
Yes, I've bin there, dun that,
learned to recognize
When I'm yielding to temptation
by the haze behind my eyes.

As I stroll around the dealers' room
I'm only there to look.
No, I d

... (read more)

Has anyone been prompted to study or read anything thanks to MIRI's new research guide?

Development aid is really hard.

A project that works well in one place or for a little while may not scale. Focus on administrative costs may make charities less competent.

Nonetheless, some useful help does happen, it's just important to not chase after the Big Ideas.

[-][anonymous]140

One of the charities mentioned in the article, Deworm the World, is actually a Givewell top charity, due to "the strong evidence for deworming having lasting impact on childhood development". The article, on the other hand, claims that the evidence is weak, citing three studies in the British Medical Journal, which Givewell doesn't appear to mention in their review of the effectiveness of deworming.

Givewell's review of deworming

Might be worth looking into more.

4NancyLebovitz
Something that should have occurred to me-- the deworming experiment was done in the late 90s, which means that the effect on lifetime income is an estimate.

What does your inner Quirrellmort tell you?

Has your internal model of the most competent person you can imagine ever given you an insight you wouldn't have thought of with more traditional methods?

Do you have more than one such useful sub-personality?

Does your main mode of thinking bring anything to the table that your useful mental models of others don't? If so, what?

4MathiasZaman
He mostly tells me to kill annoying people. No, but I'm working on them. I've found my inner Hufflepuff to be particularly helpful in actually getting things done. Incidentally, is there a name for the "sub-personality technique?"
6DataPacRat
'Deliberately induced dissociative identity disorder'? 'Cultivation of tulpas'? 'Acting'?
8somnicule
Internal Family Systems is the analogous therapy technique, I think.
7[anonymous]
What would Jesus do?
6Richard_Kennaway
Adopting a hero. Short Duration Personal Saviour. Method acting.
0Vulture
This already refers to a similar, but much dicier, technique.
3Sjcs
I unfortunately haven't developed a Quirrellmort yet (the concept is on my to-do list though, along with a number of other personifications). I do have two loose internal models though, for very specific tasks. The first is called "The Alien" or just "Alien". I created it in my mid-teens after reading The Last Samurai (not the movie), although my use of The Alien is not the same as the book's. The Alien is the voice in my head that says the pointlessly stupid or cruel things (generally about people) for no reason other than being able to. They aren't things I actually believe or feel, so I just tell The Alien to shut up. By doing this, I can create a divide between myself and these thoughts, not feel guilty about them occurring, and more quickly put them out of my mind. The second I created very recently, based on this thread. It is for the prevention of ego depletion when it comes to either starting big tasks or taking care of long lists of little tasks. Rather than think "OK, time to (make myself) do this", I defer the choice to an internal, slightly more rational model of myself that doesn't suffer from decision fatigue. The outcome is very predictable ("Do the goddarn task already"), but it does seem to work very well for me. It's still quite new, and I probably don't use it as much as I should. I have plans to make a number of other internal models to create an internal 'parliament' that can discuss and debate major decisions, or act on their own for specific required benefits. Other models might include a cynic/pessimist (to help me be more pessimistic in my planning), an altruist (to consider if my actions are actually beneficial), a highly motivated being (to help renew my resolve), and some kind of Quirrellmort. These are probably very liable to change as I try to implement them.
0RowanE
I've often considered producing such a personality, after observing a previous LW discussion about tulpas, but never even got past the stage of which character to use - I don't know who the "most competent person I can imagine" would be.
-5maxikov

I have been playing the card game Hanabi one hell of a lot recently, and I strongly recommend it to the LW community.

Hanabi is an abstract, cooperative game with limited information. And it's practically a tutorial in rational thinking in a group. Extrapolating unstated facts from other players' belief states is essential: "X did something that doesn't make sense given what I know; what is it that X knows but I don't, under which that action makes sense?" So, for that matter, is a consequentialist view of communication: "If I tell X the fact... (read more)

0MrMind
Seconding too. I've played in very small groups (~3), and the game usually stabilizes into predictable strategies (1 discards, 2 gives information, 3 puts down, and after a while players switch between 2 and 3). Larger groups are probably messier and more fun, but nonetheless very instructive.
0drethelin
Seconding this recommendation.
[-]Shmi80

From a comment on SSC:

Attempts to get the LW community to borrow some of the risk analysis tools that are used to make split second judgments in such communities effectively has been met with a crushing wall of failure and arrogance. Suggestion that LW-ers should take a simple training course at their local volunteer fire department so they can understand low probability high cost risk on an emotional level has been met with outright derision.

Does anyone close to CFAR know the specifics?

As someone who has taken the NIMS/ICS 100 course (online through FEMA), and gone to my local fire station and taken their equivalent of NIMS/ICS 100/200/70 -- I was not very impressed.

I can clearly see that there are valuable things in NIMS/ICS, and I can even believe that the movement which gave rise to the whole thing had valuable, interesting, and novel insights. But you're not going to get much of that by taking the course. It's got about one important concept -- which basically boils down to "it's good for different agencies to cooperate effectively, and here's one structure under which that empirically seems to happen well, therefore let's all use it" -- and the rest is a lot of details and terminology which are critically important to people actually working in said agencies, and mostly irrelevant otherwise.

EDIT: Boromir's big thing seems to be that HRO is about risk analysis, updating based on evidence, and dealing with low probabilities as mentioned in the excerpt. I can tell you that the basic ICS course covers exactly none of that. So I wonder what 'training course at the local volunteer fire department' he thinks we should all take. (I admit I have not taken the FEMA-official ICS 200 and 70 classes, which are online. But given the style of the 100 class, I cannot imagine them being dense with the kind of knowledge he thinks we should be gaining from them.)

6bogus
Interesting, though apparently this person made his suggestions to Salamon and Yudkowsky in person, not to the LW community itself - thus, his reference to "outright derision" is somewhat misleading. CFAR has indeed adopted some ideas that originally came from LW itself - the whole "goal factoring" theme of recent CFAR workshops seems to be a significant example.
4Nornagest
I'm not particularly close to the CFAR wing of that crowd, but: on the one hand, that sounds at least potentially valuable, and I'd look into it if I had anything more specific to go on than "a simple training course". (Poking around my local fire department's webpage turned up only something called "Community Emergency Response Training", which seems to consist of first aid, disaster prep, and basic firefighting -- too narrow and skill-based to be what Boromir's comment is talking about.) On the other hand, though, I don't think we're getting the full story here. The fact that Boromir devotes most of his comment to flogging the organization he's (judging from his username's link) either a member or a fanboy of, in particular, is a very bad sign.

An idea I've been toying with in my head, and discussed slightly at LW London yesterday: a sort of Snopes for "has person X professed opinion Y?"

Has Scott Alexander endorsed GamerGate? Did Eric Raymond say that hackers tend to be libertarian (or neoconservative, depending who you ask)? Did Eliezer say the singularity was too close to bother getting a degree?

I'll put further thoughts in replies to this comment.

[-]Baughn340

I'd be wary of making a thing like that. Even ignoring the EU's bizarre "Right to be forgotten" law, people should be allowed to change their opinion, and such a website would incentivise consistency only. Not truth; consistency.

Are you sure that's what you want?

[-]philh120

Mm, good point.

One of the things which inspired this idea was this thread: "okay, yes, it seems that Eliezer might well have said something like that, back in 2001". Eliezer already doesn't get to be forgotten. But if people are attacking him for things he said back in 2001, it seems like an improvement if we make it obvious that he said them back in 2001.

But for other people, I can see how this could be a bad thing to have. I'd like to be able to write "they said this in 2001, but in 2010 they said the opposite" and have people accept "okay, they changed their mind", but that doesn't seem entirely realistic.

I've updated from "probably good idea, unsure how valuable" to "possibly good idea, high variance".

0DanielLC
Ideally it would have "he said it", "he did not say it", and "he has since retracted it". As is, you could find where someone originally said something, and have no way of knowing if it has ever been retracted.
0NancyLebovitz
My ideal version of the wiki would include a history of the person's ideas. There still might be problems with people (I'm thinking of Moldbug) whose ideas are hard to parse.
0Baughn
That wouldn't prevent selective quoting, and all the other typical human behaviour which would, still, incentivise consistency.
8philh
The answers to questions like this aren't necessarily "yes" or "no". But it could still be valuable to say things like "the source for this seems to be this article from 2004, in which he is quoted as saying ...." Or, "he was quoted as saying this in this article. He encouraged people to read the article, but years later, he said that that line was a misquote."
8bogus
That's pretty much how TakeOnIt works already.
2philh
That seems pretty similar to what I'm envisioning, but transposed. They want to look at positions, and ask "whose opinions on this position are notable?" where notability is based on whether they're likely to have a clue. I'm going for looking at people, and asking "which of this person's positions are notable?" where notability is based on (something like) whether people are talking about it being their position.
6bogus
That's just the default view. You can click on the name of any "expert" and bring up a nice report where all of their positions are listed and compared with other experts'. And "notability" is viewed quite generally anyway. As long as the person has something genuinely worthwhile to say, you can add their opinion on all sorts of stuff.
4ChristianKl
The fact that I recommend that people read an article in which I'm cited doesn't imply that I believe that the article is 100% factually correct. In general, journalists do simplify the positions of the people they quote. Depending on the context I might be okay with a slight alteration of my position in the article, as long as the main points I want to make appear in the article. If the quote then gets lifted into another context, I might have a problem.
7[anonymous]
I assume you're talking about internet figures in the greater LW-memeplex. If so, I think this is a bad idea. Tidy reasons this may have low-to-moderate value: * It's already easy to find the public positions of an internet figure. * Reasons are more important than conclusions. Unless you think you can present the arguments better than the original source, you'll just end up simply linking to the original source, which is, again, easy to find. Messy reasons this might have negative value: * As a rule, no online community has ever suffered from a lack of introspection. I'm so very sick of hearing groups talk about themselves. In particular, talking about prominent group figures is extremely off-putting to newcomers. * It will become a source of emotional stress for those quoted. "Popular online writer" is a world apart from being a real public figure. Empirically, the latter handle third-party discussion of themselves poorly. * Realistically, this will not guard against drama involving the unfair attribution of positions. If somebody wants to pattern-match so-and-so to a particular archetype, there's nothing you can do to stop them. * I love my favorite blogs, but gaining an audience is a quality-quantity game, with an emphasis on quantity. Why give particular attention to the conclusions of a figure who has been selected in this way?
5philh
I'm not intending it to be LW-focused at all (except perhaps by accident of userbase). Other public figures I recall seeing misrepresented include Eric S Raymond, Orson Scott Card and Larry Summers. I've read enough ESR that when RationalWiki says I know that the blog post in question suggests that they really did perform a ritual for that purpose, and that the ritual had a significant effect on the mental state of the participants, but ESR does not believe that the ritual was effective in summoning any kind of god. The blog post doesn't make that last part explicit, but if pressed I could find a slashdot comment where he does say so explicitly. I don't think it's easy to do this. (The RW line could be considered not-completely-false, because one can summon a god without the god answering. And it might even be honest, if the writer didn't understand where ESR was coming from. But to the extent that people read it and think that ESR believes that Ogun was successfully summoned, that line isn't true.) I'm also not interested in arguing over whether or not that ritual ever took place. I don't think anyone's particularly interested in that. I think some people are interested in making fun of ESR, and I'm interested in making it as easy as possible to debunk those people when they say things that aren't true. So I don't need to present ESR's arguments, I just want to say "no, you're misrepresenting his conclusions".
3Lumifer
The list of misrepresented public figures is the list of public figures.
6philh
There are a lot of true claims of the form "person X said thing Y". It would be a mistake to only include false claims, because then a claim which isn't listed may be considered true by default. But including every claim would make it impossible to find the one someone is interested in. I'm not sure what notability guidelines would look like.
5ChristianKl
As far as famous/notable people go, skeptics.stackexchange works perfectly well for those questions. In general, however, focusing on "he said, she said" is bad. I might argue a wide range of positions depending on the context. Sometimes I play devil's advocate to make points. Focusing on actual content instead of focusing on what someone said in a single instance is often better.
3philh
I'm envisioning this as a mediawiki, where a given person will have a page, and that page lists claims about things they have said. Edit wars can hopefully be fixed by having a number of editors who know how to be impartial, and being trigger-happy on locking pages so that only they can edit. The talk page can be used for discussion, and for the person themselves to weigh in.
0Artaxerxes
I like this idea a lot. I honestly think it would be a useful resource, should it be well researched and accurate.
0Gunnar_Zarncke
What is your intention? If you hope to espouse truth then I doubt it helps. People have lots of opinions - many of them uninformed or guesswork. And such a site has the risk of additionally weighing the prominent voices too much. But assuming there is a sensible purpose, I think care must be taken to balance against prominence. User pages are prone to become hubs and mouthpieces of prominent people. Same for popular topics. I think Wikipedia's approach of mentioning popular backers for claims is a good balance. Maybe this could be realized as an add-on to existing sites like Wikipedia: "What did X say about Wikipedia page Y?"
4philh
I'm not hoping to espouse truth in general - I don't think this is a good way to give people correct opinions about, say, neoreaction. I'm hoping to espouse truth about what people actually think, and I'm hoping that this will help to quell bullshit rumours. So if someone starts a rumour that Eliezer is neoreactionary, someone else could add a section "Eliezer on neoreaction" saying things like: this rumour might be triggered by Eliezer's associations with Mike Anissimov and LW; Eliezer has never publicly endorsed neoreaction; in fact he has publicly disclaimed it in a comment on this article, and hasn't said much else on the subject. (A lot of this has the implied qualification "as far as the editor knows". I'm not sure how explicit this should be.) And then anyone who sees the rumour will have an easy way to find out whether or not it's true, instead of googling for "Eliezer Yudkowsky neoreaction" which by then could be a self-citing tumblr-storm, and will not show up anything by Eliezer on neoreaction because he hasn't actually said all that much about it.
8sixes_and_sevens
There's an unavoidable disconnect between "what people actually think" and "what people report about what they think". As a matter of good faith, I think people should be taken at their word and deed for what they say they think. Others disagree, and will ascribe all manner of beliefs to a person, regardless of that person's protestations. Eliezer might not say he's neoreactionary, but they can read between the lines. They can probably put together a plausible post-hoc justification for it as well. If someone's motivated enough to believe Eliezer is a neoreactionary, I don't think your site stops that. I don't think Eliezer getting a "Seriously, Fuck NRx" tattoo stops that. It just gives them a new venue to try and make their case.
[-]philh100

There are also people who would believe that Eliezer is a neoreactionary if they were told it, but would also believe that Eliezer is not a neoreactionary if they were told that.

I guess I'm hoping that if this question comes up on a public forum, most people won't really know or care about Eliezer. The narrative in my head is along the lines of: someone says Eliezer is NRx, and someone else looks it up and says, no, Eliezer is not NRx, it says so right here. Then if the first person wants to convince anyone, their arguments become complicated and boring and nobody reads them.

This may be a naive question, which has a simple answer, but I haven't seen it. Please enlighten me.

I'm not clear on why an AI should have a utility function at all.

The computer I'm typing this on doesn't. It simply has input-output behavior. When I hit certain keys it reacts in certain, very complex ways, but it doesn't decide. It optimizes, but only when I specifically tell it to do so, and only on the parameters that I give it.

We tend to think of world-shaping GAI as an agent with its own goals, which it seeks to implement. Why can't it be more like a... (read more)

9JStewart
This has been proposed before, and on LW is usually referred to as "Oracle AI". There's an entry for it on the LessWrong wiki, including some interesting links to various discussions of the idea. Eliezer has addressed it as well. See also Tool AI, from the discussions between Holden Karnofsky and LW.
1Capla
I was just reading though the Eliezer article. I'm not sure I understand. Is he saying that my computer actually does have goals? Isn't there a difference between simple cause and effect and an optimization process that aims at some specific state?
3Viliam_Bur
Maybe it would help to "taboo" the word "goal". A process can progress towards some end state even without having any representation of that state. Imagine a program that takes a positive number at the beginning, and at each step replaces the current number "x" with value "x/2 + 1/x". Regardless of the original number, the values will gradually move towards a constant. Can we say that this process has a "goal" or achieving the given number? It feels wrong to use this word here, because the constant is nowhere in the process, it just happens. Typically, when we speak about having a "goal" X, we mean that somewhere (e.g. in human brain, or in the company's mission statement) there is a representation of X, and then the reality is compared with X, various paths from here to X are evaluated, and then one of those paths is followed. I am saying this to make more obvious that there is a difference between "having a representation of X" and "progressing towards X". Humans typically create representations of their desired end states, and then try finding a way to achieve them. Your computer doesn't have this, and neither does "Tool AI" at the beginning. Whether it can create representations later, that depends on technical details, how specifically such "Tool AI" is programmed. Maybe there is a way to allow superhuman thinking even without creating representations corresponding to things normally perceived in our world. (For example AIXI.) But even in such case, there is a risk of having a pseudo-goal of the "x/2 + 1/x" kind, where the process progresses towards an outcome even without having a representation of it. AI can "escape from the box" even without having a representation of "box" and "escape", if there exists a way to escape from it.
0torekp
I don't get this explanation. Sure, a process can tend toward a certain result, without having an explicit representation of that result. But such tendencies often seem to be fragile. For example, a car engine homeostatically tends toward a certain idle speed. But take out one or all spark plugs, and the previously stable performance evaporates. Goals-as-we-know-them, by contrast, tend to be very robust. When a human being loses a leg, they will obtain a synthetic one, or use a wheelchair. That kind of robustness is part of what makes a very powerful agent scary, because it is intimately related to the agent's seeing many things as potential resources to use toward its ends.
7Wes_W
First, there's the political problem: if you can build agent AI and just choose not to, this doesn't help very much when someone else builds their UFAI (which they want to do, because agent AI is very powerful and therefore very useful). So you have to get everyone on board with the plan first. Also, having your superintelligent oracle makes it much easier for someone else to build an agent: just ask the oracle how. If you don't solve Friendliness, you have to solve the incentives instead, and "solve politics" doesn't look much easier than "solve metaethics." Second, the distinction between agents and oracles gets fuzzy when the AI is much smarter than you. Suppose you ask the AI how to reduce gun violence: it spits out a bunch of complex policy changes, which are hard for you to predict the effects of. But you implement them, and it turns out that they result in drastically reduced willingness to have children. The population plummets, and gun violence deaths do too. "Okay, how do I reduce per capita gun violence?", you ask. More complex policy changes; this time they result in increased pollution which disproportionately depopulates the demographics most likely to commit gun violence. "How do I reduce per capita gun violence without altering the size or demographic ratios of the population?" Its recommendations cause a worldwide collapse of the firearms manufacturing industry, and gun violence plummets, along with most metrics of human welfare. If you have to blindly implement policies you can't understand, you're not really much better off than letting the AI implement them directly. There are some things you can do to mitigate this, but ultimately the AI is smarter than you. If you could fully understand all its ideas, you wouldn't have needed to ask it. Does this sound familiar? It's the untrustworthy genie problem again. We need a trustworthy genie, one that will answer the questions we mean to ask, not just the questions we actually ask. So we need an orac
0gedymin
This is actually one of the standard counterarguments against the need for friendly AI, at least against the notion that it should be an agent / be capable of acting as an agent. I'll try to quickly summarize the counter-counter arguments Nick Bostrom gives in Superintelligence. (In the book, AI that is not an agent at all is called tool AI. AI that is an agent but cannot act as one (has no executive power in the real world) is called oracle AI.) Some arguments have already been mentioned: * Tool AI or friendly AI without executive power cannot stop the world from building UFAI. Its abilities to prevent this and other existential risks are greatly diminished. It especially cannot guard us against the "unknown unknowns" (an oracle is not going to give answers to questions we are not asking.) * The decisions of an oracle or tool AI might look good, but actually be bad for us in ways we cannot recognize. There is also a possibility of what Bostrom calls mind crime. If a tool or oracle AI is not inherently friendly, it might simulate sentient minds in order to give the answers to the questions that we ask, and then kill or possibly even torture these minds. The probability that these simulations have moral rights is low, but there can be trillions of them, so even a low probability cannot be ignored. Finally, it might be that the best strategy for a tool AI to give an answer is to internally develop an agent-type AI that is capable of self-improvement. If the default outcome of creating a self-improving AI is doom, then the tool AI scenario might in fact be less safe.
0ChristianKl
If you use a spell-checking engine while you are typing, that likely has a utility function buried in its code.

This is a disturbing talk from Schmidhuber (who worked with Hutter and with one of the founders of DeepMind at the Swiss AI lab).
I say disturbing because of the last minute, where he basically says we should be thankful for being the stepping stone to the next step in an evolution towards a world run by AIs.
This is the nonsense we see repeated almost everywhere (outside LessWrong): that we should be happy to have humanity supplanted by more intelligent AI, and here it is coming from a pretty well-known AI researcher... https://www.youtube.com/watch?v=KQ35zNlyG-o

[-][anonymous]60

Today I read a post by Bryan Caplan aimed toward effective altruists:

Question: How hard would it be to set up a cost-effective charity to help sponsor the global poor for immigration to Argentina? Responses from GiveWell, the broader Effective Altruism community, and Argentina experts are especially welcome.

For context, Argentina essentially allows immigration by anybody who can get an employer to sponsor them.

9bramflakes
what could a faltering, medium-trust country like argentina need more than millions of poor, low-trust immigrants

It's a common framing, and so I don't intend to pick on you, but I think the key issue isn't levels of trust, but levels of trustworthiness. Yes, there can be feedback effects in both directions between trust and trustworthiness, but fundamentally, it is possible for people and institutions with high trustworthiness to thrive in an otherwise low-trust/trustworthiness society. Indeed, lacking competitors, they may find it particularly easy to do so, and through gradual growth and expansion, lead to a high-trust/trustworthiness society over time. It is not possible for people and institutions with high trust to thrive in an otherwise low-trust/trustworthiness society, as they will be taken advantage of.

You can't bootstrap a society to a high-trust equilibrium by encouraging people to trust more. You need to encourage them to keep their promises.

2[anonymous]
I think this line of thinking is productive. Other thoughts: For cooperative agents to thrive among non-cooperators, they must be able to identify other cooperators. Of course you can wait for the non-cooperators to identify themselves (via an act of non-cooperation in tit-for-tat, or a costly signal), but other agents are inevitably going to rely on other heuristics and information to predict the hidden strategies of others, and, when the agents are human, they will do this in a risk-averse way. Accordingly, a low-trust society (one in which no single entity is able or willing to enforce cooperative behavior over all individuals) is seldom homogeneously low-trust (or low-trustworthiness), but rather an amalgamation of subgroups, each of which is relatively more trusting and trustworthy, but only within the subgroup. Because of the need to guess at the hidden strategies of others, these subgroups don't necessarily split the society into "levels of trustworthiness". The task of moving to a high-trust/trustworthiness society becomes the task of getting cooperative subgroups to identify other potentially cooperative subgroups, and for those two subgroups to figure out a way to share the duty of enforcing cooperative behavior, or of allowing more true information about the cooperative behavior of individuals to flow between groups. Since evolution produces a special cooperation in close-kinship relations, the simplest artificial grounds for merging two previously uncooperative subgroups is to stretch the kinship relation as far as possible (as in clans, or any society where third- and fourth-cousin relationships are considered relevant). Some other examples related to this process: * The spread of shared religious identity (when this involves submitting to a punitive religious law). * Trade unions, cartels and guilds. * Language boundaries (which impede information about trustworthiness from flowing across groups). * Race, (as an amalgam of language, religion,

Anyone want to comment on a pilot episode of a podcast "Rationalists in Tech"? Please PM or email me. I'll ask for your feedback and suggestions for improvement on a 30-minute audio interview with a leading technologist from the LW community. This will allow me to plan an even better series of further interviews with senior professionals, consultants, founders, and executives in technology, mostly in software.

  • Discussion topics will include the relevance of CfAR-style techniques to the career and daily work of a tech professional; tips on

... (read more)

Many Interacting Worlds: Boffo or Bunk?

From my blogfeed: http://theness.com/neurologicablog/index.php/the-many-interacting-worlds-hypothesis/ , which links to http://www.nature.com/news/a-quantum-world-arising-from-many-ordinary-ones-1.16213 , which links to http://journals.aps.org/prx/abstract/10.1103/PhysRevX.4.041013 .

Does anyone with a better understanding of Schrödinger's equation(s) than mine know if any of the above is worth paying attention to?

4MrMind
It's interesting, but I wouldn't be much concerned with models that "reproduce some generic quantum phenomena". Thanks to categorical quantum mechanics, we already know that many finite toy models do that: heck, you can have quantum phenomena in databases.
0Slider
I had a similar prompt for knowledge-seeking in wanting to figure out how the math supports or doesn't support "converging worlds" or "mangled worlds". The notion of a converging world is probably also a noteworthy intuitive reference point in thought-space. You could have a system in a quantum indeterministic state where each state has a different interaction, such that the futures of the states are identical. At that point you can drop the distinguishing of the worlds and just say that two worlds have become one. Now there is a possibility that a state left alone first splits and then converges, or that it does both at the same time. There would be a middle part that could not be "classified", which in these theories would be represented by two worlds in different configurations (and by waves in more traditional models). Sometimes I have stumbled upon an argument about whether, if many worlds keeps creating extra worlds, that forms a kind of growing-block ontology (such as the flat splitters in the sequence post). Well, if the worlds also converge, that could keep the amount of "ontology stuff" constant, or let it vary in both directions. I stumbled upon the fact that |psi(x)|^2 was how you calculated the evolution of a quantum state, which was like taking a second power and then essentially taking a square root by only caring about the magnitude and not the phase of the complex value. For a double slit with L being left and R being right, it resulted in P(L+R) = ⟨L|psi⟩^2 + C + ⟨R|psi⟩^2 (where the factor on C was either 1, 2 or sqrt(2); I don't remember and didn't understand which). The squarings in the sum, I found, were claimed to be the classical equivalent of the two options. The interference fringes would be great and appear where the middle term was strong. I also read that you could read ⟨X|y⟩ as something like "obtain X if situation was/is y". Getting L when the particle went L is thus very ordinary and all. You can also note that the squarings have the same form as the evolution of a pure state. However I didn
0Viliam_Bur
I don't quite understand this topic, but maybe this could be useful: The problem with "converging / mangled worlds" is statistical. To make two worlds interact (and become the same world, or erase each other, depending on mutual orientation of their amplitudes), those worlds must have all their particles in the same position. In usual circumstances, this seems unlikely. Imagine the experiment with the cat, where in one world the cat is dead, and in other world the cat happily walks away. How likely is it that at some moment in the future, both universes will have all particles in the same positions? So, in usual circumstances two worlds interact only if a moment ago they were the same world, and the only difference was one particle going two different paths. (Yes, there are also all the other particles in the universe, also splitting all the time. But this happens the same way in both branches, so it cancels out.) My intuition is that this "single state" was never literally one point, but always a small interval (wave? hump?). An interval can break into two parts, and those can travel in different directions. There is no such thing as a single point in quantum physics. (Disclaimer: I don't really understand quantum physics; I am just interpreting the impression I got from looking at Eliezer's drawings. If you have better knowledge, feel free to ignore this.)
0Slider
What forces the worlds to be the same in order to interact? You could also have merely adjacent worlds where the "collision angle" could compensate for small differences. It is just a little harder to imagine how worlds of unrelated state would interact. Maybe dark energy is the sum total of gravity from other worlds? It's also that two worlds won't long stay singular, but branch all the time into subworlds. The probability of some of the pairwise worlds being close enough is higher. edit: Also there are settings where splitting doesn't mean lack of structure. For example, in the mirror experiments the two paths will systematically intersect, and this is a pretty stable result of the mirror positionings.
0[anonymous]
If something branches in a limited space, soon the branches will touch each other. The question is, how soon is "soon". If we imagine a real 3D tree in a 3D world, the branches will touch before a dozen splits. But if the tree were extremely large (a few kilometers) and the branches extremely tiny (a few millimeters), there could be more splits. If we imagine the history of the whole universe as a branching tree in a many-dimensional world, we have to realize there are many dimensions (I guess approximately six dimensions for each particle: position and momentum), and compared with the size of the dimension, the branches are really tiny (take two random particles in the whole universe: what is the probability of them hitting each other?). So there is a lot of time for the tree to grow. Eventually, the branches will run out of space and start hitting each other all the time. But I think this will happen at the "heat death" of the universe. Then the branches will hit each other so much that the whole concept of time or even reality may become meaningless. But I think this is not happening now, yet. There is still a lot of space for the universe to grow without intersecting with other branches. This seems to me like a new hypothesis, outside of quantum physics as we know it, not supported by experimental results. Maybe it is so; maybe it isn't. Without good evidence for it the prior probability seems small (there are many possible new hypotheses we could make to explain dark energy; this is just one of them; why should it be preferred to the alternatives?).
0JoshuaZ
This doesn't seem to give a straightforward explanation for whether it could reproduce the expected Bell-type experiments, especially a CHSH experiment, and from a glance I don't see how they'll get that correct without forcing some sort of completely ad-hoc rule for how the universes interact.
0Manfred
Sure, it's doable. It may even be trivial - one can recast partial time derivatives of a wave function as total time derivatives of a distribution of particles with velocities. Unfortunately this seems doable an infinite number of ways, and in general probably isn't useful.
[-]tog50

It's an appealing and easy enough hack that I'll plug my recent LessWrong discussion post Shop for Charity: how to earn proven charities 5% of your Amazon spending in commission. Especially now that Black Friday week has started on Amazon.

3tog
On the same topic, Gunnar_Zarncke recently started a LessWrong Financial Effectiveness Repository
1Drayin
That is a neat hack - who said there's no such thing as a free lunch?
3Sysice
This isn't necessarily so: if you have to think about using that link as charity while shopping, it could decrease your likelihood of doing other charitable things (which is why you should set up a redirect so you don't have to think about it, and always use it every time!).
2faul_sname
Amazon already does that for you -- if you go to buy something without using that link, it'll ask you if you want to.

Calico, the aging research company founded by Google, is hiring.

TLDR: Requesting articles/papers/books that feature detailed/explicit "how-to" sections for bio-feedback/visualization/mental training for improving performance (mostly mental, but perhaps cognitive as well)

Years ago I saw an interview with Michael Phelps' (the Olympic swimmer) coach, in which he claims that most Olympic-finalist caliber swimmers have nearly indistinguishable physical capabilities; Phelps' ability to focus and visualize success is what set him apart.

I also saw a program about free divers (staying underwater for minutes) who slow ... (read more)

4Sjcs
The book On Combat by Dave Grossman discusses some of these things. I haven't read it yet, but I have read reviews and listened to a podcast by two people I consider highly evidence-based and reputable (here). In particular, the book describes a method of physiologically lowering your heart rate that he calls "Combat Breathing". It entails four phases, each for the duration of a count of 4 (no unit specified; I do approximately 4 seconds):

1. Breathe in
2. Hold in
3. Breathe out
4. Hold out

It sounds very simple, but I have heard multiple recommendations of it from both the armed-forces and medical worlds. I can also add a data point confirming it works well for me (mostly only for reducing heart rate to below 100, not all the way down to resting rate).
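If anyone wants to try this without counting in their head, here is a minimal pacing sketch in Python, assuming one count is roughly one second (the book apparently leaves the unit unspecified, so adjust SECONDS_PER_COUNT to taste):

```python
import time

PHASES = ["Breathe in", "Hold in", "Breathe out", "Hold out"]
COUNTS_PER_PHASE = 4
SECONDS_PER_COUNT = 1.0  # assumed pace; not specified in the book

def combat_breathing(cycles: int = 5) -> None:
    """Print prompts for the four-phase breathing cycle described above."""
    for cycle in range(1, cycles + 1):
        print(f"Cycle {cycle} of {cycles}")
        for phase in PHASES:
            print(f"  {phase} for {COUNTS_PER_PHASE} counts")
            time.sleep(COUNTS_PER_PHASE * SECONDS_PER_COUNT)

if __name__ == "__main__":
    combat_breathing()
```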
3Brillyant
I'm skeptical of this. No doubt it is relatively true that professional/elite athletes have similar physical capabilities, but even very small differences in athletic ability can be very consequential over the course of XXX meters in a swimming race or, say, an entire season of football. We are talking about very small margins of victory in many (or most) cases.
0Torello
I agree that small physical differences can be very consequential--wouldn't small mental differences be similarly consequential? http://www.radiolab.org/story/91618-lying-to-ourselves/ This Radiolab episode discusses how swimmers who engage in more self-deception win more frequently, controlling for other factors (i.e., self-deceivers on Division 3, 2, and 1 teams are more likely to beat their opponents, so mentality is predictive at different levels of physical skill). I'm not sure what you're getting at here--that the victory of a particular person is attributable to noise because the margin of victory is small?
0Brillyant
Great points. In Phelps' case, I think he is physically superior to the competition, though perhaps only slightly. Same with Usain Bolt. I'd agree that confidence, even to the extent it is self-deception, can make a significant difference in sports performance. However, when an athlete like Phelps or Bolt routinely wins over the course of several races spanning years, I think differences in physical capability are the main reason. In team sports, or really any sport that requires more than straight-line speed, I think psychological differences are very important. But swimming and sprinting are largely physical contests; unless you have problems with false starts, I'm not seeing where the mental edge figures in. (Obviously longer races that require endurance and pacing considerations are more open to psychological influence.)
1ChristianKl
The first step of any biofeedback how-to is getting a biofeedback device. Direct heart rate is not a good target; doing biofeedback on heart rate variability (HRV) is better (a minimal sketch of how HRV is quantified follows at the end of this comment). I'm also not sure you want a bomb squad to have a heart rate that's lower than normal. Step-by-step instructions are not how you achieve the kind of results Phelps or a bomb squad gets; both are achieved under the guidance of coaches. To the extent that the main way I meditate has steps, it has three:

1. Listen to the silence.
2. Be still.
3. Close your eyes.

Among those, (3) is obvious in meaning. (1) takes getting used to and is probably not accessible by mere reading. Understanding the meaning of (2) takes months.
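To make "heart rate variability" concrete: one common way to quantify it is a time-domain statistic such as RMSSD, computed from the intervals between successive heartbeats. A minimal sketch, assuming you can export those RR intervals in milliseconds from whatever sensor you use (the sample numbers below are made up):

```python
import math

def rmssd(rr_intervals_ms: list[float]) -> float:
    """Root mean square of successive differences between RR intervals (ms).

    A standard time-domain HRV measure: larger values mean more
    beat-to-beat variability.
    """
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR intervals such as a chest strap might report.
sample_rr = [812.0, 845.0, 790.0, 860.0, 805.0, 838.0]
print(f"RMSSD: {rmssd(sample_rr):.1f} ms")
```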
0Torello
Thanks for your reply. Can you point me to any articles/sites about biofeedback devices? Have you done biofeedback yourself? Perhaps you're right about the bomb-squad heart rate; maybe a moderately raised rate would be a proxy for optimal/peak arousal levels. However, I'd guess that a little too much calm is better than overwhelming panic, which would probably be a more typical reaction to approaching a bomb that's about to explode. I agree that a coach would be better, but a book is a more practical option at the moment. (This may sound snarky, but it isn't:) Did you learn meditation from a teacher, or from a step-by-step book? The steps you give seem simple (not easy), and a good starting point. I think a meditation coach would help you flesh these out, but those kinds of precise instructions are what I'm looking for.
2ChristianKl
Yes, and people at LW are generally very bad at simple. People here have the skills for dealing with complex intellectual subjects. The problem with "be still" is that it leaves you with questions like: "Four minutes into the meditation I feel the desire to adjust my position; what do I do?" It doesn't give you an easy criterion for deciding when moving to change your position violates "be still" and when it doesn't.

Doing biofeedback is still on my to-do list, so my device knowledge might be 1-2 years out of date. As of then, emWave2 and Wild Divine were the good non-EEG-based solutions; good EEG-based solutions are more expensive. See also a QS-forum article on neurofeedback. Even though the QS forum is very low in terms of posts, posting a question there on topics like this is still a good idea (bias disclosure: I'm a mod at the QS forum). Of those two, emWave2 basically only tracks heart rate variability (HRV), while Wild Divine also measures skin conductance level (SCL), which is a proxy for how much you sweat. Wild Divine also has a patent for doing biofeedback with HRV + SCL. At $149, emWave2 is AFAIK currently the cheapest choice for a good device that comes with a good explanation of how to train with it and that you can just use as is.

I started learning meditation from a book by Aikido master Koichi Tohei ten years ago. I have roughly three years of in-person training, and I have also had NLP/hypnosis training since that time. If I were to switch out an emotional response for the bomb squad, then hypnosis would probably be the tool of choice. With biofeedback I would see no reason for overcompensation; switching out an emotional response via hypnosis, on the other hand, can lead to such effects. Hearing an ambulance siren might then also lower my heart rate ;)

There are also safety issues. I don't like the idea of people messing themselves up and being faced with experiences that they can't handle because they don't have proper supervision.

We're considering Meetup.com for the Tel Aviv LW group. (Also, the question was asked here.) It costs money, but we'd pay if it's worthwhile. I note that there are only 5 LessWrong groups on Meetup, of which 2-3 are active. I'd appreciate feedback on the usefulness of Meetup.

A nice blog post about AI and existential risk by a friend of mine and occasional LW poster, who was inspired by the disappointingly bad debate on Edge.org. Feel free to share it if you like it. I think it is quite a good introduction to Bostrom's and MIRI's arguments.

"The problem is harder than it looks, we don’t know how to solve it, and if we don’t solve it we will go extinct."

http://nthlook.wordpress.com/2014/11/26/why-fear-ai/

1Viliam_Bur
Seems very good, but this is coming from a person familiar with the topic. I wonder how good it would seem to someone who hasn't heard about the topic yet.

I'm looking for an old post. Something about an extinct species of primate that may once have been nearly as smart as humans, but evolved over time to be much dumber, apparently because the energy costs of intelligence were maladaptive in its environment.

Can anyone point me in the right direction?

[-][anonymous]20

This site drains my energy. Too many topics seem interesting on the surface but are really just depressing and not actionable, with the big example being a bad singularity.

I have also found in my life that general, useful advice is rare. Most advice here seems either too vague or too specific to the poster. I did find at least one helpful book (by Scott Adams) and a couple of good posts, but I think other sources could help at less cost. There are many smart people here, but if you look you can find something much more useful: smart people who have already achieved the particular goals you seek.

Bye.

[-][anonymous]20

The year is 1800. You want to reduce existential risk. What do you do?

Are you a time-traveler or a native?

3[anonymous]
A native (but optionally a very insightful and visionary native). EDIT: I said native, but all that I really want to avoid is an answer like "I would use all my detailed 21st-century scientific knowledge to do something that a native couldn't possibly do".
7Lumifer
How about "I would use all my detailed 21-st century scientific knowledge to be concerned about something that a native couldn't possibly be concerned about"?
0[anonymous]
Sure, if it leads to an interesting point. For example, if you were trying to avoid suffering: "I would kill 12-year-old Hitler" isn't very interesting, but "I would do BLAH to improve European relations" or "There's nothing I could do" are interesting.
2polymathwannabe
Did you mean 1800 or 1900?
5[anonymous]
I didn't mean that example to refer to the original question; I just wanted to demonstrate a vague but somewhat intuitive difference between "fair" and "unfair" uses of future knowledge.
6Lumifer
Well, being concerned about existential risk in 1800 probably means you were very much impressed by Thomas Malthus' An Essay on the Principle of Population (published in 1798) and were focused on population issues. Of course, if you were a proper Christian you wouldn't worry too much about X-risk anyway -- first, it's God's will, and second, God already promised an end to this whole life: the Judgement Day.
1Brillyant
Still true today.
5Lumifer
Sure, but the percentage of fully believing Christians was much higher in 1800.
0lmm
I give Napoleon a hand, on the basis that he was one of the more scientifically minded world leaders and on the theory that a strong France makes our future more multipolar. For the same reason I try to spread the notion of the limited-liability corporation in the Islamic world (no idea how to do that, though). I might also try to convince nations of the (AIUI genuine) non-profitability of colonialism.
4TimS
If you want multipolar, Napoleon is the last person you should help. He was clearly acting to reduce the number of Great Powers to 1. He even succeeded for a bit re: Prussia & Austria. Alternatively, if he wins, how do you prevent France v. USA instead of Russia v. USA?
0lmm