I asked about these differences in my second post in this post tree, where I explained how I understood these counterfactuals to work. I explained as clearly as I could that, for example, calculators should work as they do in the real world. I spelled this out in the hope that someone would voice disagreement if I had misunderstood how these logical counterfactuals work.
However, modifying any calculator would mean that there cannot be, even in principle, any AI or agent "smart" enough to detect that it was in the counterfactual. Our mental hardware that checks ...
Well, to be exact, your formulation of this problem has left this counterfactual pretty much entirely undefined. The naive approximation, that the world is just like ours and Omega simply lies in the counterfactual, would not contain such weird calculators that give you wrong answers. If you want to complicate the problem by saying that some specific class of agents has a special class of calculators that one would usually expect to work a certain way, but that actually work a different way, well, so be it. That, however, is just a free-floating parameter you have left unspecified, and one that, unless stated otherwise, should be assumed not to be the case.
Yes, those agents you termed "stupid" in your post, right?
After asking about this on the #LW IRC channel, I take back my initial objection, but I still find this entire concept of logical uncertainty kinda suspicious.
Basically, if I'm understanding this correctly, Omega is simulating an alternate reality which is exactly like ours, where the only difference is that Omega says something like "I just checked whether 0=0, and it turns out it's not. If it were, I would've given you moneyzzz (iff you would give me moneyzzz in this kind of situation), but now that 0!=0, I must ask you for $100." Then the agent notices,...
You lost me at the part:
In Counterfactual Mugging with a logical coin, a "stupid" agent that can't compute the outcome of the coinflip should agree to pay, and a "smart" agent that considers the coinflip as obvious as 1=1 should refuse to pay.
The problem is that I see no reason why the smart agent should refuse to pay. Both the stupid and the smart agent know with logical certainty that they just lost. There's no meaningful difference between being smart and stupid in this case that I can see. Both, however, like to be offered such bets, where lo...
This actually was one of the things that inspired me to write this post. I was wondering if I could make use of the LW community to run such tests, because it would be interesting to get to practice these skills with consent, but trying to devise such tests stumped me. It's surprisingly difficult to come up with a goal that's actually hard to achieve in any not-overtly-hostile social context. Laborious, maybe, but that's not the same thing. From this I kinda generalized that it should actually be pretty easy to run with any consciously named goal and...
That's a nice heuristic, but unfortunately, it's easy to come up with cases where it's wrong. Say people want to play a game; I'll use chess because it's a readily available example, not because it best exemplifies the problem. If you want to have a fun game of chess, ideally you'd want roughly equal matches. If 9 out of 10 players are pretty weak, just learning the rules, and want to play and have fun with it, then you, the 10th player, a strong club player and an outlier, cannot partake because you are too good (with chess, you could maybe try giving yo...
In general, the very skilled player would have gotten that way by being smart AND smashing a ton of less skilled players. Trying to say "I can't go to the chess club because I would just defeat everyone and it wouldn't be fair" is ridiculous, and even more so when you've never actually won a tournament. You never hear the story "I was a social butterfly, the most popular person in school, but then I decided that was abusing my powers and now I'm alone. Yay!" On the other hand, "I was alone and sad and nerdy, but then I practiced social skills and now I have a ton of friends and am the most popular person in school. Yay!" is, if not very common, a story that I've heard way more than once.
Oh yes, that is basically my understanding: we engage in social manipulation to the extent it is deemed "fair", that is, up to the point where it doesn't result in retaliation. But at some point it starts to result in retaliation, and we have this "fairness" sensor that tells us when to retaliate or to watch out for retaliation.
I don't particularly care about manipulation that results in obtaining a salt shaker or a tennis partner. What I'm interested in is the manipulation you can use to form alliances, make someone liable to help you with stuff you want...
This I agree with completely. However, its sounding like a power fantasy doesn't mean it's wrong or mistaken.
True. However, it's difficult to construct culturally neutral examples that are not obvious. The ones that pop into my mind are of the kind "it's wrong to be nice to an old, really simple-minded lady, because that way you can make her rewrite her will in your favor", or "it's alright to try to make your roommate do the dishes as many times as you possibly can, as long as you're both on an equal footing in this 'competition' of 'who can do the fewest dishes'".
I'm not sure how helpful that kind of example is.
This strikes me as massively confused.
Keeping track of cancelled values is not required as long as you're working with a group, that is, a set (like the reals) together with an operation (like addition) that follows the kind of rules that addition on the integers and multiplication on the non-zero reals do. If you are working in a group, there's no sense in which those cancelled-out values are left dangling. Once you cancel them out, they are gone.
http://en.wikipedia.org/wiki/Group_%28mathematics%29 <- you can check the group axioms there; I won't list them here.
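As a sketch of what I mean, the cancellation law falls straight out of the axioms (associativity, an identity element e, and inverses), one axiom per step:

```latex
ab = ac
\;\Rightarrow\; a^{-1}(ab) = a^{-1}(ac)
\;\Rightarrow\; (a^{-1}a)\,b = (a^{-1}a)\,c
\;\Rightarrow\; eb = ec
\;\Rightarrow\; b = c .
```

The end result mentions a nowhere, which is the precise sense in which the cancelled value is simply gone.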
Are you sure it wouldn't be rational to pay up? I mean, if the guy looks like he could do that for $5, I'd rather not take chances. If you pay, and it turns out he didn't have all that torture equipment, you could just sue him and get the $5 back, since he defrauded you. If he starts making up rules about how you can never ever tell anyone else about this, or later check the validity of his claim, or else he'll kidnap you, then for game-theoretic reasons you should not comply, since being the kind of agent that accepts those terms makes you a valid target for such frauds. The reasons for not complying are the same as those for one-boxing.
Actually, there is such a law. When you are born into this world, naked and without any sensory experiences, you cannot reasonably start out expecting that the next bit you experience is much more likely to be 1 rather than 0. And even if you encounter one hundred zillion bits and they are all 1s, you still wouldn't assign a probability of 1/3^^^3 to the next bit you see being 0, if you're rational enough.
Of course, this is muddied by the fact that you're not born into this world without priors and all kinds of other stuff that weighs on your shoulders. Evolution has done billions of ye...
I don't think you need to change the domain name. For marketability, you might wanna name the parts so that the stuff within your site becomes a brand in itself, so that greatplay.net becomes associated with " utilitarianism", " design", etc. Say, I read a blog by a chemist who has a series of blog posts titled "stuff i won't work with: ". I can't remember the domain name, but I know that whenever I want to read about a nasty chemical, I google that phrase.
Yes. Yes. I remember thinking "x + 0 =". After that it gets a bit fuzzy.
Qiaochu_Yuan already answered your question, but because he was pretty technical with his answer, I thought I should try to simplify the point a bit. The problem with division by zero is that division is essentially defined through multiplication and the existence of certain inverse elements. In group theory, the existence of inverse elements is an axiom in itself: for each a, there is an x such that ax = 1. Our notation for that x is 1/a, and it's easy to see why a · 1/a = 1. Division is defined through these inverse elements: a/b is calculated by a...
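To sketch why 0 in particular can have no such inverse (using only distributivity and the fact that 0 + 0 = 0): for any x,

```latex
0 \cdot x = (0 + 0) \cdot x = 0 \cdot x + 0 \cdot x
\;\Rightarrow\; 0 \cdot x = 0 \neq 1 ,
```

so no x satisfies 0x = 1, and the notation 1/0 simply fails to refer to anything.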
My friend told me he wanted to see http://en.wikipedia.org/wiki/Andrei_Sakharov on this list. I must say I didn't know of the guy, but based on the Wikipedia article, he was a brilliant Soviet nuclear physicist behind a few of the largest man-made explosions ever to happen, and somewhere around the 1960s he turned to political activism regarding the dangers posed by the nuclear arms race. In the political climate of 1960s Soviet Union, that was a brave move, too, and the powers that be took much from him because of that choice.
The Sequences contain a rational worldview. Not a comprehensive one, but still, they give some idea of how to avoid thinking stupidly and how to communicate with other people who are also trying to find out what's true and what's not. They give you words with which you can refer to problems in your worldview, meta-standards to evaluate whether whatever you're doing is working, etc. I think of them as an unofficial manual to my brain and the world that surrounds me. You can just go ahead and figure out for yourself what works, without reading manuals, but reading the manual before you go makes you better prepared.
The interaction between this simulated TDT agent and you is so complicated that I don't think many of the commenters here actually did the math to see how they should expect the simulated TDT agent to react in these situations. I know I didn't. I tried, and failed.
I got similar results when I tried the more nondescript "focus on your breathing; if you get lost in your thoughts, go back to breathing; try to observe what happens in your mind" style of meditation. I also got an intense feeling of euphoria on my third try, and a feeling of almost passing out under the storm of weird thoughts flowing in and out. That made me a bit scared of meditation, but this post series managed to scare me a whole lot more.
This probably doesn't interest many of you, but I'd be curious to hear any suggestions for inspiring works of fiction with hypercompetent characters in them. I watched the Bourne trilogy in the middle of reading this post, and now I want more! :)
My own ideas:

Live action:
- James Bond: Casino Royale / Quantum of Solace / Skyfall
- House MD
- Sherlock

Anime:
- Death Note
- Golden Boy
I do think it is good to have some inspirational posts here that don't rely that much on actual argumentation but rather paint an example picture of where you could be when using rationality, of what rationality could look like. There are dangers in that, but still, I like these.
I had missed this. The original post read as really weird and hostile, but I only read it after having heard about this thread indirectly for days, mostly about how she later seemed pretty intelligent, so I dismissed what I saw and substituted what I thought I ought to have seen. Thanks for pointing this out.
Is there any data supporting the idea that Dvorak, Colemak, or some other new keyboard layout is actually better than QWERTY? Like, actual data collected by doing research on real people who type, looking at how their layout of choice affects their health and typing speed? I do know you get figures like "on average your fingers travel twice the distance if you type on QWERTY compared to some other layout", but is there actual data from actual typists?
I've been practicing Dvorak for about a month. Not much practice since I got above 10 wpm (an hour a day for a week), but I've used it whenever there has been typing to be done. I've gotten to 40 wpm, having started with a 70 wpm QWERTY speed. Incidentally, I've also forgotten how to type with QWERTY.
I'd suggest you find a week when you are free to use about an hour of your time every day to practice Dvorak and don't need to type anything for real, and then maybe another week when you are not under any stress about your typing speed. After that, you should be able to type well...
Welcome, it's fun to have you here.
So, the next thing: I think you should avoid this religion topic here. I mean, you are allowed to continue with it, but I fear you are going to wear yourself out by doing so. I think there are better topics to discuss, ones where both you and LW have a chance to learn something new and change their opinions. Learning something new is refreshing; discussions about religion rarely are.
Admittedly, I think there is no god, but I also don't think anyone here will convince you of that. I think you actually have a higher chance of converting someone...
"Ylioppilasaukio 5"? I can't find a Cafe Picnic at an address like that.
If you make a copy, then inform both the original and the copy of their states ("You're the original", "You're the first copy"), and then proceed to make a new copy of the original, information equivalence exists only between copy number 2 and the original, bringing it back to 1/2, 1/4, 1/4.
Even if a majority of readers participated in these meetups every time, it wouldn't matter. Quoting the About post: "'Promoted' posts (appearing on the front page) are chosen by the editors on the basis of substantive new content, clear argument, good writing, popularity, and importance."
Meetup posts do not contain new, important, argumentative content. They are meta-level discussion, meta that is bit by bit trying to take over the whole of LW. I don't want an LW that exists for posts about LW. Meetup posts are not the only thing driving LW towards uselessness, but as far as I can tell, having them on the front page is by far the most visible and obvious warning sign.
So you can avoid being punished by not predicting potential punishers well enough, or by deciding to do something regardless of the punishments you're about to receive? I'm not sure that's good.
Oh, thanks to more discussion today, I figured out why the dangerous idea is dangerous, and now I understand why people shouldn't seek it out. Or rather, the actual idea is not dangerous, but it can potentially lead to dangerous ones. At least, that's if I understood the whole thing correctly. So I understand that it is harmful for us to seek out that idea, and that, if possible, it shouldn't be discussed.
I sought out the dangerous idea right after I heard about the commotion, and I was disappointed. I discussed the idea and thought about it hard. I'm still a bit unsure whether I figured out why people think of the idea as dangerous, but to me it seems just plain silly.
I don't regret knowing it. I figured right from the start that the probability of it actually being dangerous was low enough that I didn't need to care, and it seems my initial guess was right on the spot. And I really do dislike not knowing about things that everybody says are really dangerous and can cause me and my loved ones much agony, for reasons no one is allowed to tell.
Yes, but that incompleteness means his power can't override the powers others have. Even if he could, after paying attention to Allirea, understand her power, it doesn't follow from what we know of his powers so far that he could pay attention to her any more than any other person there could. Even some sort of power-detection field would reveal nothing more than "there is a vampire that diverts attention paid to it, somewhere in that general direction", if we assume it overrides her ability, and that would leave Eleazar severely handicapped in a fight anyway.
Yeah, and I wanted to say that you're treating the characters you create in an awful and cruel way. Stop that. They should be happy at least once in a while :p
Is the Allirea + Eleazar thing canon? It sure doesn't seem to follow from what we've seen before, unless Eleazar lied to Bella.
Mind explaining why? I don't see any reason it's any more true than it is false.
This isn't translatable as a function. 'Meaningful' and 'meaningless' aren't values that bivalent functions return, so they shouldn't be values in our logic.
So the sentence "The sentence 'Everything written on the board in Room 33 is either false or meaningless.' is meaningless" is not true?
Yes, humans performing outstandingly well on this sort of problem was my inspiration for this. I am not sure how far it is possible to generalize this sort of winning. Humans themselves are kinda complex machines, so if we start with a perfectly rational LW reader and a paperclip maximizer in a one-shot PD with a randomized payoff matrix, what's the least amount of handicap we need to give them to reach this super-optimal solution? At first, I thought we could even remove the randomization altogether, but I think that makes the whole problem more ambiguous.
Becoming a person doesn't seem like something that you can do free of cost. There seems to be a lot of complexity hidden in that "Become a person" part.
Those properties that we think make happy humans better than totally artificial smiling humans mimicking happy humans. You'd need to find them in order to grasp what it means to have a being that lacks moral value, and "both ideas" refers to the two distinct ways of explaining what sort of paperclip maximizer we're talking about.
But I'd think that if I only said "It doesn't have moral value in itself", you'd still have to go back through similar steps to find the property cluster to which we assign value. I tried to convey both ideas by using the word "soul" and claiming a lack of moral value.
It requires us to know what sort of utility function the other player has, at the very least, and even then the result might be, at best, mutual defection or, between superrational players, mutual cooperation.
And? If you have multiple contradictory wishes about what to do next, some of them are bound to go unfulfilled. CEV and negotiation are just ways of deciding which ones.
Why do you think I lose?
Because there are a lot more people with values totally different from yours, which made the CEV optimize for a future that you didn't like at all. If you're negotiating with all those people, why would they give in to you any more than the CEV would optimize for you?
So you're bound to end up losing this game anyway, right? Negotiation in itself won't give you any additional power over the coherent extrapolated volition of humanity to change the future of the universe. If others think very differently from you, you need to overpower them to bring your values back into the game, or perish in the attempt.
The above is a caricature of 'coherence' as presented in the May 2004 document. If someone else can provide a better interpretation, that would be welcome.
It seemed accurate to me. Also, I didn't find any problems in it that seemed frightening or anything. Was it supposed to be problematic in some way?
Just an attempt to make it clear that we're dealing with something like an intelligent calculator here, with nothing in it that we'd find interesting or valuable in itself. This sets it up as the true PD.
Try as I might, I cannot find any reference to a canonical way of building such counterfactual scenarios. The closest I could get was http://lesswrong.com/lw/179/counterfactual_mugging_and_logical_uncertainty/ , where Vladimir Nesov seems to simply reduce logical uncertainty to ordinary uncertainty, but that does not seem to have anything to do with building formal theories and proving actions or any such thing.
To me, it seems largely arbitrary how the agent should act when faced with such a dilemma, all depending on actually specifying what it means to tes...