DSherron

No, because "We live in an infinite universe and you can have this chocolate bar" is trivially better. And "We live in an infinite universe and everyone not on Earth is in hell" isn't really good news.
You're conflating responsibility/accountability with things that don't naturally fall out of them. And I think you know that last line was clearly B.S. (given that the entire original post was about something which is not identical to accountability - you should have known that the most reasonable answer to that question is "agentiness"). Considering their work higher, or considering them smarter, is alleged by the post not to be the entirety of the distinction between the hierarchies; after all, if the only difference were brains or status, then there would be no need for TWO distinct hierarchies. There is a continuous distribution of status and brains throughout both hierarchies (as...
Doesn't that follow from the agenty/non-agenty distinction? An agenty actor is understood to make choices where there is no clear right answer. It makes sense that mistakes within that scope would be punished much less severely; it's hard to formally punish someone if you can't point to a strictly superior decision they should have made but didn't. Especially considering that even if you can think of a better way to handle that situation, you would still have to show not only that they had enough information at the time to know that that decision would have been better ("hindsight is 20/20"), but also that such an insight would have been strictly within the...
...Or you could notice that requiring that order be preserved when you add another member is outright assuming that you care about the total and not about the average. You assume the conclusion as one of your premises, making the argument trivial.
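To make the arithmetic concrete, here is a toy illustration (the numbers are made up for the example): adding a member whose welfare is positive but below the current average raises the total while lowering the average, so demanding that the ranking be preserved under such an addition only makes sense if you already care about the total.

```python
# Toy numbers, purely illustrative.
population = [10, 10]      # two members, welfare 10 each
newcomer = 4               # positive welfare, but below the current average

before_total = sum(population)
before_avg = before_total / len(population)

grown = population + [newcomer]
after_total = sum(grown)
after_avg = after_total / len(grown)

print(before_total, before_avg)   # 20 10.0
print(after_total, after_avg)     # 24 8.0
# The total rises (20 -> 24) while the average falls (10.0 -> 8.0): a total
# view ranks the larger population higher, an average view ranks it lower.
# Requiring that the ranking survive the addition presupposes the total view.
```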
Better: randomly select a group of users (within some minimal activity criteria) and offer the test directly to that group. Publicly state the names of those selected (make it a short list, so that people actually read it, maybe 10-20) and then after a certain amount of time give another public list of those who did or didn't take it, along with the results (although don't associate results with names). That will get you better participation, and the fact that you have taken a group of known size makes it much easier to give outer bounds on the size of the selection effect caused by people not participating.
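As a rough sketch of what such an outer bound looks like (hypothetical numbers, not part of the original proposal): because the selected group has a known size, the non-participants can simply be counted at both extremes.

```python
# Hypothetical sketch: worst-case bounds on a pass rate when some of a
# known-size selected group never take the test.
def nonresponse_bounds(responses, n_selected):
    """Bounds on the mean of a 0/1 outcome over everyone selected,
    assuming the non-responders could all sit at either extreme."""
    n_missing = n_selected - len(responses)
    total = sum(responses)
    lower = total / n_selected                  # every non-responder counted as 0
    upper = (total + n_missing) / n_selected    # every non-responder counted as 1
    return lower, upper

# e.g. 15 users publicly selected, 12 took the test, 9 of those passed
low, high = nonresponse_bounds([1] * 9 + [0] * 3, 15)
print(low, high)   # 0.6 0.8 -> the rate over all 15 lies somewhere in [0.6, 0.8]
```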
You can also improve participation by giving those users an easily accessible icon on Less Wrong itself which takes them directly to the test, and maybe a popup reminder once a day or so when they log on to the site if they've been selected but haven't done it yet. Requires moderate coding.
She responds "I'm sorry, but while I am a highly skilled mathematician, I'm actually from an alternate universe which is identical to yours except that in mine 'subjective probability' is the name of a particularly delicious ice cream flavor. Please precisely define what you mean by 'subjective probability', preferably by describing in detail a payoff structure such that my winnings will be maximized by selecting the correct answer to your query."
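One standard payoff structure with that property (my example; the comment itself leaves it unspecified) is a strictly proper scoring rule such as the logarithmic score: your expected winnings are maximized exactly when the probability you report matches your actual degree of belief.

```python
import math

# Logarithmic scoring rule: report a probability p that the statement is true;
# receive log(p) if it turns out true and log(1 - p) if it turns out false.
def expected_log_score(report, true_belief):
    return true_belief * math.log(report) + (1 - true_belief) * math.log(1 - report)

# If my actual degree of belief is 0.7, the report that maximizes my expected
# payoff is 0.7 itself - shading it up or down only costs me in expectation.
belief = 0.7
candidates = [i / 100 for i in range(1, 100)]
best_report = max(candidates, key=lambda p: expected_log_score(p, belief))
print(best_report)   # 0.7
```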
Written before reading the comments; the answer was decided within, or close to, the 2-minute window.
I take both boxes. I am uncertain of three things in this scenario: 1) whether the number is prime; 2) whether Omega predicted I would take one box or two; and 3) whether I am the type of agent that will take one box or two. If I take one box, it is highly likely that Omega predicted this correctly, and it is also highly likely that the number is prime. If I take two boxes, it is highly likely that Omega predicted this correctly and that the number is composite. I prefer the number to be composite,...
It seems that mild threats, introduced relatively late while immersion is strong, might be effective against some people. Strong threats, in particular threats which pattern-match to the sorts of threats which might be discussed on LW (and thus probably get the gatekeeper to break some immersion), are generally going to be bad ideas. But I could see some sort of (possibly veiled/implied?) threat working against the right sort of person in the game. Some people can probably be drawn into the narrative sufficiently to get them to actually react in some respects as though the threat were real. This would definitely not apply to most people, though, and I would not be shocked to discover that getting to the required level of immersion isn't humanly feasible except in very rare edge cases.
His answer isn't random. It's based on his knowledge of apple trees in bloom (he states later that he assumed the tree was an apple tree in bloom). If you knew nothing about apple trees, or knew less than he did, or knew different but no more reliable information than he did, or were less able to correctly interpret what information you did have, then you would have learned something from him. If you had all the information he did, and believed that he was a rationalist and at the least not worse at coming to the right answer than you, and you had a different estimate than he did, then you...
After considering this for quite some time, I came to a conclusion (imprecise though it is) that my definition of "myself" is something along the lines of:
- In short form, a "future evolution of the algorithm which produces my conscious experience, which is implemented in some manner that actually gives rise to that conscious experience"
- In order for a thing to count as me, it must have conscious experience; anything which appears to act like it has conscious experience will count, unless we somehow figure out a better test.
- It also must have memory, and that memory must include a stream of consciousness which leads back to the stream of consciousness I am experiencing right...