I am a first-year CS PhD student at Cornell, and I'm interested in this (though not currently working on it). I will DM you.
The brain may also be excessively complicated in order to defend against parasites.
Which random factors caused the frostwing snippers to die out? Their migrating out? Competitors or predators migrating in? Or is there some chance of not getting the seed even if they're the only species left? I didn't get a good look at the source code, but I thought things were fairly deterministic once only one species was left.
In most formulations, the five people are on the track ahead, not in the trolley.
I took a look at the course you mentioned:
It looks like I got some of the answers wrong.
The problem was not as poorly specified as you implied it to be.

Where am I?
In the trolley, near a trolley track. You, personally, are not in immediate danger.

Who am I?
A trolley driver.

Who's in the trolley?
You are. No one in the trolley is in danger.

Who's on the tracks?
Five workers ahead, one to the right.

What year is it? Who designed the trolley? Who is responsible for the brake failure?
You don't know.

Do I work for the trolley company?
Assume that you're the only person who can pull the lever in time, and that it wouldn't be difficult or costly for you to do so. If your answer still depends on whether or not you work for the trolley company, you are different from most (WEIRD) people, and should explain both cases explicitly.

If so, what are its standard operating procedures for this situation?
Either there are none, or you're actually not in the situation above, but creating those procedures right now.

What would my family think?
I don't know; maybe you have an idea.

Would either decision affect my future job prospects?

Is there a way for me to fix the systemic problem of trolleys crashing in thought experiments?
Maybe, but not before the trolley crashes.

Can I film the crash and post the video online?
Note: it's probably not a good idea to post a photo of your vaccine card online.
If Scarlet pressed the PANIC button then she would receive psychiatric counseling, three months mandatory vacation, optional retirement at full salary and disqualification for life from the most elite investigative force in the system.
This sounds familiar, but some quick searching didn't bring anything up. Is it a reference to something?
From the old wiki discussion page:
I'm thinking we can leave most of the discussion of probability to Wikipedia. There might be more to say about Bayes as it applies to rationality but that might be best shoved in a separate article, like Bayesian or something. Also, I couldn't actually find any OB or LW articles directly about Bayes' theorem, as opposed to Bayesian rationality--if anyone can think of one, please add it. --A soulless automaton 19:31, 10 April 2009 (UTC)
For wiki pages which are now tags, should we remove linked LessWrong posts, since they are likely listed below?
What should the convention be for linking to people's names? For example, I have seen the following:
Finally, should the "see also" section be a comma-separated list after the first paragraph, or a bulleted list at the end of the page?
Thanks. I had skimmed that paper before, but my impression was that it only briefly acknowledged my main objection regarding computational complexity, on page 4. Most of the paper involves analogies with evolution and civilization, which I don't think are very useful: my argument is that the difficulty of designing intelligence should grow exponentially at high levels, so the difficulty of relatively easy tasks like designing human-level intelligence doesn't seem that relevant.
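To make the objection concrete, here is a toy model (my own illustration with assumed functional forms D, C, k, and α, not anything from the paper): suppose the design effort required to build an agent of intelligence x grows exponentially in x, while the design effort an agent of intelligence x can bring to bear grows only linearly.

\[ D(x) = e^{kx}, \qquad C(x) = \alpha x, \qquad k, \alpha > 0 \]
\[ x_{n+1} = D^{-1}\bigl(C(x_n)\bigr) = \frac{1}{k}\ln(\alpha x_n) \]

Each generation builds the best successor its capacity allows. Because ln(αx)/k grows sublinearly, the iteration settles at a finite fixed point x* satisfying kx* = ln(αx*) (when one exists) rather than diverging: successive self-improvements shrink, which is the "fizzle" outcome. An intelligence explosion instead requires the capacity function to outpace the difficulty function.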
On page 35, Eliezer writes:
I am not aware of anyone who has defended an “intelligence fizzle” seriously and at great length.
I will read it again more thoroughly and see if there's anything I missed.