After considering this for quite some time, I came to a conclusion (imprecise though it is) that my definition of "myself" is something along the lines of:

  • In short form, a "future evolution of the algorithm which produces my conscious experience, which is implemented in some manner that actually gives rise to that conscious experience"
  • In order for a thing to count as me, it must have conscious experience; anything which appears to act like it has conscious experience will count, unless we somehow figure out a better test.
  • It also must have memory, and that memory must include a stream of consciousness leading back to the stream of consciousness I am experiencing right now, with roughly the same fidelity as my current memory of a continuous stream of consciousness reaching back to adolescence.

Essentially, the idea is that in order for something to count as being me, it must be the sort of thing which I can imagine becoming in the future (future relative to my conscious experience; I feel like I am progressing through time), while still believing myself to be me the whole time. For example, imagine that, through some freak accident, there existed a human living in the year 1050 AD who passed out and experienced an extremely vivid dream which just so happened to be identical to my life up until the present moment. I can imagine waking up and discovering that to be the case; I would still feel like me, even as I incorporated whatever memories and knowledge he had, so that I would also feel like I was him. That situation contains a "future evolution" of present-me, which just means "a thing which I can become in the future without breaking my stream of consciousness, at least not any more than normal sleep does today".

This also implies that anything which diverged from me at some point in the past does not count as "me", unless it is close enough that it eventually converges back. That convergence should happen within hours or days for minor divergences, like placing a pen in a drawer rather than on a desk, and will never happen for divergences with cascading effects, particularly those which significantly alter the world around me in addition to me.

Obviously I'm still confused too. But I'm less confused than I used to be, and hopefully after reading this you're a little less confused too. Or at least, hopefully you will be after reflecting a bit, if anything resonated at all.

No, because "we live in an infinite universe and you can have this chocolate bar" is trivially better. And "we live in an infinite universe and everyone not on Earth is in hell" isn't really good news.

You're conflating responsibility/accountability with things they don't naturally fall out of. And I think you know that last line was clearly B.S.; given that the entire original post was about something which is not identical to accountability, you should have known that the most reasonable answer to that question is "agentiness". Considering their work more important, or considering them to be smarter, is alleged by the post not to be the entirety of the distinction between the hierarchies; after all, if the only difference were brains or status, then there would be no need for TWO distinct hierarchies. There is a continuous distribution of status and brains throughout both hierarchies (as opposed to a sharp break where even the lowest officer is significantly smarter or higher status than the highest soldier), so if brains and status were all that distinguished them, it would seem reasonable to just merge them into a single hierarchy.

One thing which might help to explain the difference is the concept of "agentiness", linked not to the difficulty of a role but to the type of actions it requires. If true, then the distinguishing feature between an officer and a soldier is that officers have to be prepared to solve new problems which they may not be familiar with, while soldiers are only expected to perform actions they have been trained on. For example, an officer may have the task of "deal with those machine gunners", while a soldier would be told "sweep and clear these houses". The officer has to creatively devise a solution to a new problem, while the soldier merely has to execute a known decision tree. Note that this has nothing to do with the difficulty of the problem. There may be an easy solution to the first problem, while the second may be complex and require very fast decision-making on the local scale (in addition to the physical challenge).

But given the full scope of the situation, it is easy to look at the officer and say "I think you would have been better off going further around and choosing a different flank, to reduce your squad's casualties; but apparently you just don't have that level of tactical insight. No promotion for you, maybe next time." To the soldier, it would be more along the lines of "You failed to properly check a room before calling it clear, and missed an enemy combatant hiding behind a desk. This resulted in several casualties as he ambushed your squadmates. You're grounded for a week." The difference is that an officer is understood to need special insight to do his job well, while a soldier is understood to just need to follow orders without making mistakes. It's much easier to punish someone for failing to fulfill the basic requirements of their job than it is to punish them for failing to achieve an optimal result given vague instructions.

EDIT: You've provided good reason to expect that officers should get harsher punishments than soldiers, given the dual hierarchy. I claim that the theory of "agentiness" as the distinguishing feature between these hierarchies predicts that officers will receive punishments much less severe than your model would suggest, while soldiers will be punished more harshly. In reality, it seems that officers don't get held accountable to the degree your model predicts they would, based on their status, while soldiers get held more accountable. This is evidence in favor of the "agentiness" model, not against it, as you originally suggested.

The core steps of my logic are: the "agentiness" model predicts that officers are not punished as severely as you'd otherwise expect, while soldiers are punished more severely; therefore, the fact that in the real world officers are not punished as harshly as you'd otherwise expect is evidence for the "agentiness" model at the expense of any models which don't predict that. If you disagree with those steps, please specify where and how. If you disagree with an unstated or implied assumption outside of these steps, please specify which. If I'm not making sense, or if I seem exceedingly stupid, there's probably been a miscommunication; try to point at the parts that don't make sense so I can try again.

Doesn't that follow from the agenty/non-agenty distinction? An agenty actor is understood to make choices where there is no clear right answer. It makes sense that mistakes within that scope would be punished much less severely; it's hard to formally punish someone if you can't point to a strictly superior decision they should have made but didn't. Even if you can think of a better way to handle the situation, you still have to show not only that they had enough information at the time to know that that decision would have been better ("hindsight is 20/20"), but also that such an insight was strictly within the requirements of their duties (rather than requiring an abnormally high degree of intelligence, foresight, clarity, etc.).

Meanwhile, a non-agenty actor is merely expected to carry out a clear set of actions. If a non-agenty actor makes a mistake, it is easy to point to the exact action they should have taken instead. When a cog in the machine doesn't work right, it's simple to point it out and punish it. Therefore it makes a lot of sense that they get harsher punishments, because their job is supposed to be easier. Anyone can imagine a "perfect" non-agenty actor doing their job as a point of comparison, while imagining a perfect "agenty" actor requires that you yourself be as good at performing that exact role, with all the relevant knowledge and intelligence, as the perfect actor would be.

Ultimately, observing that agenty actors suffer less severe punishments ought to support the notion that agentiness is at least believed to be a cluster in thingspace. Of course, this will result in some unfair situations: "agenty" actors do sometimes get off the hook easily in situations where there was actually a very clear right decision and they chose wrong, while "non-agenty" actors will sometimes be held to an impossible standard when presented with situations where they have to make meaningful choices between unclear outcomes. This serves as evidence that "agentiness" is not really a binary switch, marking this theory as an approximation, although not necessarily a bad one in practice.

Better: randomly select a group of users (meeting some minimal activity criteria) and offer the test directly to that group. Publicly state the names of those selected (make it a short list, so that people actually read it, maybe 10-20) and then, after a certain amount of time, give another public list of those who did or didn't take it, along with the results (although don't associate results with names). That will get you better participation, and the fact that you have sampled a group of known size makes it much easier to give outer bounds on the size of the selection effect caused by people not participating (see the sketch below).

You can also improve participation by giving those users an easily accessible icon on Less Wrong itself which takes them directly to the test, and maybe a popup reminder once a day or so when they log on to the site if they've been selected but haven't done it yet. Requires moderate coding.
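
Here is a minimal sketch of the "outer bounds" point; the scoring scale and numbers are hypothetical, chosen only for illustration. Because the invited group has a known size, the non-responders can be bounded by pretending they would all have scored at either extreme:

```python
# Worst-case bounds on the whole invited group's mean score, given that
# only some of the invited users responded. Scores and sample size below
# are made up for illustration.

def selection_effect_bounds(n_invited, scores, score_min, score_max):
    """Bounds on the invited group's mean if non-responders scored at the extremes."""
    n_missing = n_invited - len(scores)
    total = sum(scores)
    low = (total + n_missing * score_min) / n_invited
    high = (total + n_missing * score_max) / n_invited
    return low, high

responses = [62, 71, 55, 80, 68, 74, 59, 66, 70, 77, 63, 72]  # 12 of 15 responded
print(selection_effect_bounds(15, responses, 0, 100))  # ~(54.5, 74.5)
```

With an opt-in test instead of a known invited sample, there is no n_invited to plug in, so no comparable bound can be stated.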

She responds "I'm sorry, but while I am a highly skilled mathematician, I'm actually from an alternate universe which is identical to yours except that in mine 'subjective probability' is the name of a particularly delicious ice cream flavor. Please precisely define what you mean by 'subjective probability', preferably by describing in detail a payoff structure such that my winnings will be maximized by selecting the correct answer to your query."
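
One example of such a payoff structure (my illustration, not part of the original comment) is a proper scoring rule such as the logarithmic score, under which expected winnings are maximized by reporting the probability you actually believe:

```python
import math

def log_score_payoff(reported_p, outcome_true):
    """Payoff for reporting probability `reported_p` that a claim turns out true."""
    return math.log(reported_p) if outcome_true else math.log(1 - reported_p)

def expected_payoff(true_belief, reported_p):
    """Expected payoff if the claim is actually true with probability `true_belief`."""
    return (true_belief * log_score_payoff(reported_p, True)
            + (1 - true_belief) * log_score_payoff(reported_p, False))

# If my true belief is 0.7, honestly reporting 0.7 beats any other report:
reports = [0.5, 0.6, 0.7, 0.8, 0.9]
print(max(reports, key=lambda r: expected_payoff(0.7, r)))  # 0.7
```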

Written before reading the comments; the answer was decided within, or close to, the two-minute window.

I take both boxes. I am uncertain of three things in this scenario: 1) whether the number is prime; 2) whether Omega predicted I would take one box or two; and 3) whether I am the type of agent that will take one box or two. If I take one box, it is highly likely that Omega predicted this correctly, and it is also highly likely that the number is prime. If I take two boxes, it is highly likely that Omega predicted this correctly and that the number is composite. I prefer the number to be composite; therefore I take both boxes, anticipating that when I do so I will (correctly) be able to update to a 99.9% probability that the number is composite.

Thinking this through actually led me to a bit of insight on the original Newcomb's problem, namely that last part about updating my beliefs based on which action I choose to take, even when that action has no causal effect on the subject of my beliefs. Taking an action allows you to strongly update your belief about which action you would take in that situation; in cases where that fact is causally connected to others (in this case Omega's prediction), you can then update through those connections.
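
To make the numbers concrete, here is a minimal sketch of that update. The specifics are my assumptions for illustration: Omega's predictions are correct 99.9% of the time, and the number is composite exactly when Omega predicted two-boxing.

```python
# Observing my own choice is evidence about which prediction Omega made,
# and (in this variant) Omega's prediction determines prime vs. composite.

def p_composite_given_my_choice(choice, omega_accuracy=0.999):
    """Probability the number is composite, after observing my own choice."""
    # Seeing myself choose tells me which prediction Omega most likely made,
    # since Omega predicts that choice with probability `omega_accuracy`.
    p_predicted_two_box = omega_accuracy if choice == "two-box" else 1 - omega_accuracy
    # Prediction "two-box" corresponds to a composite number in this setup.
    return p_predicted_two_box

print(p_composite_given_my_choice("two-box"))  # 0.999
print(p_composite_given_my_choice("one-box"))  # ~0.001
```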

It seems that mild threats, introduced relatively late while immersion is strong, might be effective against some people. Strong threats, in particular threats which pattern-match to the sorts of threats that might be discussed on LW (and thus probably cause the gatekeeper to break some immersion), are generally going to be bad ideas. But I could see some sort of (possibly veiled or implied?) threat working against the right sort of person in the game. Some people can probably be drawn into the narrative sufficiently to get them to actually react, in some respects, as though the threat were real. This would definitely not apply to most people, though, and I would not be shocked to discover that getting to the required level of immersion isn't humanly feasible except in very rare edge cases.

His answer isn't random. It's based on his knowledge of apple trees in bloom (he states later that he assumed the tree was an apple tree in bloom). If you knew nothing about apple trees, or knew less than he did, or had different but no more reliable information than he did, or were less able to correctly interpret what information you did have, then you would have learned something from him. If you had all the information he did, believed that he was a rationalist and at least no worse than you at coming to the right answer, and you had a different estimate than he did, then you still ought to update towards his estimate (Aumann's Agreement Theorem).

This does illustrate the point that simply stating your final probability distribution isn't really sufficient to tell everything you know. Not surprisingly, you can't compress much past the actual original evidence without suffering at least some amount of information loss. How important this loss is depends on the domain in question. It is difficult to come up with a general algorithm for useful information transfer even just between rationalists, and you cannot really do it at all with someone who doesn't know probability theory.
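
As a toy illustration of that information loss (my example, not from the original exchange): two observers can report exactly the same probability while holding very different amounts of evidence behind it, and the difference only shows up once they update on new data. A minimal sketch, assuming beliefs about a coin modeled as Beta distributions:

```python
# Two observers both report "P(heads) = 0.5", but from different evidence.
# Observer A has a uniform prior (no flips seen); observer B has already
# seen 100 flips split evenly. The summary number is identical, yet their
# responses to the same new evidence diverge.

def posterior_mean(prior_heads, prior_tails, new_heads, new_tails):
    """Mean of a Beta posterior after observing additional coin flips."""
    a = prior_heads + new_heads
    b = prior_tails + new_tails
    return a / (a + b)

print(posterior_mean(1, 1, 0, 0))    # A's summary: 0.5
print(posterior_mean(50, 50, 0, 0))  # B's summary: 0.5

# Both now observe the same 8 heads out of 10 new flips:
print(posterior_mean(1, 1, 8, 2))    # A moves to 0.75
print(posterior_mean(50, 50, 8, 2))  # B barely moves, to ~0.527
```

The reported 0.5 alone could not have told you which of those two updates to expect; that information lived in the underlying evidence.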
