All of mwaser's Comments + Replies

Thoughts on the Singularity Institute (SI)

If it is true (i.e. if a proof can be found) that "Any sufficiently advanced tool is indistinguishable from agent", then any RPOP will automatically become indistinguishable from an agent once it has self-improved past our comprehension point.

This would seem to argue against Yudkowsky's contention that the term RPOP is more accurate than "Artificial Intelligence" or "superintelligence".

Alejandro1 (4 points, 10y): I don't understand; isn't Holden's point precisely that a tool AI is not properly described as an optimization process? Google Maps isn't optimizing anything in a non-trivial sense, any more than a shovel is.
shminux (3 points, 10y): First, I am not fond of the term RPOP, because it constrains the space of possible intelligences to optimizers. Humans are reasonably intelligent, yet we are not consistent optimizers. Neither are current domain AIs (they have bugs that often prevent them from optimizing consistently and predictably). That aside, I don't see how your second premise follows from the first. Just because RPOP is a subset of AI, and so would be subject to such a theorem, does not affect in any way the (in)validity of EY's contention.
Babies and Bunnies: A Caution About Evo-Psych

Actually, eating a baby bunny is a really bad idea when viewed from a long-term perspective. Sure, it's a tender, tasty little morsel -- but the operative word is little. Far better, from a long-term view, to let it grow up and reproduce, and then eat it. And large, competent bunnies aren't nearly as cute as baby bunnies, are they? So maybe evo-psych does have it correct . . . and maybe the short-sighted rationality of tearing apart a whole field by implication, because you don't understand how something works, isn't as brilliant as it seems.

DavidAgain (3 points, 11y): It's only a bad idea if there's a decent chance of you getting to eat that bunny or its offspring AND if there would otherwise be a shortage. Otherwise a small bunny in the hand is worth dozens of big ones in the bush. As a tribe, or better still a species, there might be benefits to not eating what you catch, but there's unlikely to be real benefit to the individual, so you'd need group selection here. Even in modern society we can see this: look at the problem of over-fishing, for instance. 'Fishermen' and indeed 'humankind' would benefit from more careful fishing, but you need strong international enforcement to make individuals follow this route. As an individual, the food you get from a sprat is worth more on average than the minuscule chance of getting a bigger fish later because you released it.
BOOK DRAFT: 'Ethics and Superintelligence' (part 1)

My "objection" to CEV is exactly the opposite of what you're expecting and asking for. CEV as described is not descriptive enough to allow the hypothesis "CEV is an acceptably good solution" to be falsified. Since it is "our wish if we knew more", etc., any failure scenario that we could possibly put forth can immediately be answered by altering the potential "CEV space" to answer the objection.

I have radically different ideas about where CEV is going to converge than most people here. Yet, the lack of distincti…

BOOK DRAFT: 'Ethics and Superintelligence' (part 1)

I know the individuals involved. They are not biased against non-academics and would welcome a well-thought-out contribution from anyone. You could easily have a suitable abstract ready by March 1st (two weeks early) if you believed that it was important enough -- and I would strongly urge you to do so.

lukeprog (4 points, 11y): Thanks for this input. I'm currently devoting all my spare time to research on a paper for this volume so that I can hopefully have an extended abstract ready by March 15th.
"Nahh, that wouldn't work"

Threats are certainly a data point that I factor in when making a decision. I, too, have been known to apply altruistic punishment to people making unwarranted threats. But I also consider whether the person feels so threatened that the threat may actually be just a sign of their insecurity. And there are always times when going along with the threat is simply easier than bothering to fight that particular issue.

Do you really always buck threats? Even when justified -- such as "threatened consequences" for stupid actions on your part? Even from, say, police officers?

prase (1 point, 11y): I didn't say that I always bucked threats, but that in most cases I am less likely to do what the person wants if a threat is added. I have never been threatened by police officers.
"Nahh, that wouldn't work"

I much prefer the word "consequence" -- as in, "that action will have the following consequences . . ."

I don't threaten, I point out what consequences their actions will cause.

wedrifid (1 point, 11y): This works so long as the consequences being 'pointed out' are not, well, threats. There is a difference in more than the word, even if the line is blurry.
Imperfect Levers

For-profit corporations, as a matter of law, have the goal of making money, and their boards face legal consequences and other unpleasantness if they don't pursue that goal as a primary objective (unless some other goal is explicitly written into the corporate bylaws as more important than profit -- and even then, there are profit requirements that must be met to avoid corporate dissolution or conversion to a non-profit -- and very few corporations have such provisions).

Translation

Corporations = powerful, intelligent entities with the primary goal of accumulating power (in the form of money).

[anonymous] (0 points, 11y): I hear this point a lot, but I've never looked into it. Are there any famous cases of companies being sued or prosecuted for being insufficiently evil in the pursuit of profits?
timtyler (1 point, 11y): It's "maximise profits under the constraint of obeying the law." The latter part is where some moral constraints are explicitly encoded.
An Xtranormal Intelligence Explosion

As you get closer to the core of friendliness, you get all sorts of weird AGIs that want to do something that twistedly resembles something good, but is somehow missing something or is somehow altered so that the end result is not at all what you wanted.

Is this true or is this a useful assumption to protect us from doing something stupid?

Is it true that Friendliness is not an attractor or is it that we cannot count on such a property unless it is absolutely proven to be the case?

Jonii (1 point, 11y): My idea there was that if it's not Friendly, then it's not Friendly; ergo it is doing something that you would not want an AI to be doing (if you thought faster and knew more, and all that). That's the core of the quote you had there. A random intelligent agent would simply transform us into something of value to it, so we would most likely die very quickly. However, as you get closer to Friendliness, the AI is no longer totally indifferent to us, but rather is maximizing something that could involve living humans. Now, if you take an AI that wants there to be living humans around, but is not known for sure to be Friendly, what could go wrong? My answer: many things, as what humans prefer to be doing is a rather complex set of stuff, and even quite small changes could make us really, really unsatisfied with the end result. At least, that's the idea I've gotten from posts here like Value is Fragile. When you ask if Friendliness is an attractor, do you mean to ask whether intelligences near Friendly ones in the design space tend to transform into Friendly ones? This seems rather unlikely, as AIs of that sort are most likely capable of preserving their utility functions, and the direction of this transformation is not "natural". For these reasons, arriving at Friendliness is not easy, and thus I'd say you've got to have some way to ascertain Friendliness before you can trust an AI to be just that.
Yet Another "Rational Approach To Morality & Friendly AI Sequence"

I meant lurking is slow, lurking is inefficient, and a higher probability that it gets worse results for the newbie. I'm not sure which objective is being referred to in that clause. I retract those evaluations as flawed.

Yeah, I made the same mistake twice in a row. First, I didn't get that I didn't get it. Then I "got it" and figured out some obvious stuff -- and didn't even consider that there was probably even more below that which I still didn't get and that I should start looking for (and was an ass about it to boot). What a concept…

Yet Another "Rational Approach To Morality & Friendly AI Sequence"

Umph! I am really not used to interacting with people mentally skilled enough that I have a really bad case of not knowing what I don't know. I need to fix that.

Good one with the tags. I'm still recalibrating from it/working through all its implications.

I'm going off to work on one of the questions now.

An Xtranormal Intelligence Explosion

AI correctly predicts that programmer will not approve of its plan. AI fully aware of programmer-held fallacies that cause lack of approval. AI wishes to lead programmer through thought process to eliminate said fallacies. AI determines that the most effective way to initiate this process is to say "I recognize that even with all of my intelligence I'm still fallible, so if you object to my plans I will rethink them." Said statement is even logically true, because the statement "I will rethink them" is always true.

Yet Another "Rational Approach To Morality & Friendly AI Sequence"

Yes. If you have actually learned something then your comments will reflect this and earn karma. You'll be into the positives before you know it.

OK. Got it.

If this is so, you have been doing it very badly.

I've already acknowledged that. But I've clearly been doing better, with the "What I missed" explanation at +5 and this post garnering only -2 over two days as opposed to -6 in a few hours, so I must have learned something.

I've also learned that we've reached the point where some people are tired enough of this thread that they will…

Yet Another "Rational Approach To Morality & Friendly AI Sequence"

Okay. So the comment is unclear and incomplete, but not unwelcome (at +5 karma). Clearly, I need to slow down and expand, expand, expand. I'm willing to keep fighting with it and do that and learn. Where is an appropriate place to do so?

Alicorn (6 points, 11y): How about you answer any one of the questions I posed, right here? Take your pick. There's plenty.
Yet Another "Rational Approach To Morality & Friendly AI Sequence"

Is "effective" a cousin? I suspect so, since the easiest way to rewrite it would be to simply replace "rational" with "effective". If not, assume that my rewrite simply does that. If so, can I get a motivation for the request? I'm not sure where you're going or why "cousins" are disallowed.

Alicorn (5 points, 11y): By "cousins" I meant "rational", "irrational", "rationality", "irrationality", etcetera. "Effective" is not technically a cousin, but any form of search-and-replace would not be in keeping with the spirit of the exercise [http://lesswrong.com/lw/nu/taboo_your_words/]. Since you are confused, I will go into more detail, but I am nearing the last straw in trying to deal with you and won't extend the courtesy again.

Do you mean: lurking is slow compared to other strategies, lurking gets worse results for the newbie, lurking is worse for the rest of the community, lurking is inefficient, lurking fails altogether at achieving the objective, or something else? This is meaningless until you explain the assertion you offer to support.

Nope. That doesn't sound appealing at all. I would rather have zero subpar newbies, and instead of peace and quiet I want lively and productive signal with minimal noise. Also, "such irrationality" is presumptuous. Weren't you going on about how LW is actually governed by structures and rules that you now understand, and that only look irrational? Where did that go?

Interestingly, your "option" is not so obviously and blindingly brilliant that I could only reject it as the solution to all my problems through sheer bloody-mindedness. I don't actually want LW to be attached to a rock-bottom-standards blog with a similar color scheme that purports to funnel newbies into the real deal. I think that would be bad. Yes, even if I never have to look directly at it without a pinhole camera, and even if it's minded by volunteers.

If you were demonstrating actual understanding of any relevant concepts... or if you were offering to personally do some work for the site instead of just throwing around vague plans for its expansion and calling it the provision of "access"... or if your proposal were actually good or "rational"... or, I'll admit it, if you weren't so annoying... then you'd be getting a better reception.
This is, of course, a counterfactual.
Yet Another "Rational Approach To Morality & Friendly AI Sequence"

If you want to be safe, you lurk until you truly get what's going on around you. People can in fact learn things that way.

I never said I wanted to be safe. Please reread what I said.

Lurking until you truly get what's going on around you is not the most effective (rational) way to learn. I can provide you a boatload of references supporting that if you wish.

Do you really want subpar newbies who will accept such irrationality just to maintain your peace and quiet? Particularly when a playground option is suggested? You could even get volunteers and never deal with the hassle.

Vladimir_Nesov (3 points, 11y): Please taboo "rational". It's generally a good idea for this word. Edit: Interestingly, exactly the same thing irked Alicorn, apparently independently.
Alicorn (3 points, 11y): I invite you to try to rewrite this comment without the word "rationality" or its cousins.
Yet Another "Rational Approach To Morality & Friendly AI Sequence"

claiming repeatedly to have learned some unspecified thing which makes you above disapproval.

Could you point to an example please so I can try to evaluate how I implied something so thoroughly against my intent? I certainly don't believe myself above disapproval.

jmmcd (5 points, 11y): Better to reply to the person you're replying to, not yourself.
mwaser (0 points, 11y): I never said I wanted to be safe. Please reread what I said. Lurking until you truly get what's going on around you is not the most effective (rational) way to learn; I can provide you a boatload of references supporting that if you wish. Do you really want subpar newbies who will accept such irrationality just to maintain your peace and quiet? Particularly when a playground option is suggested? You could even get volunteers and never deal with the hassle. Premise: It's more rational, for your goals, to just ignore a good rational proposal from an erring, annoying newbie who is trying to provide access to new resources for you (both newbies and structures for their care and feeding). I just don't get that.
Yet Another "Rational Approach To Morality & Friendly AI Sequence"

However, it doesn't say anywhere what it is that you claim to have suddenly understood

here

[anonymous] (0 points, 11y): All right, I'll dissect that comment. "Some of it was mistaken assumptions about karma." Okay: what mistaken assumptions about karma? What false beliefs did you have about karma, and how did they mislead your actions? "A huge amount of underlying structure which is necessary to explain what looks like seemingly irrational behavior (to someone who doesn't have that structure)." Okay: how does what underlying structure explain what apparently irrational behavior? "(Until you catch the underlying regularities and make the right assumptions.)" Okay: and those regularities and assumptions are...? Okay: and I can find your list of these, and how you misunderstood them, where? Which takes what form, please? Such as? And the process is? The expectations are? The terms mean? The structure/understanding is? What is the mystery you have unraveled here, please show the class. Do tell. How does it make sense? And the rules are...? Ending the dissection here because comments can't be arbitrarily long, and because it's all the same. You throw around words labeling things you supposedly understand without ever describing those things. Over and over and over.


mwaser (0 points, 11y): Could you point to an example please so I can try to evaluate how I implied something so thoroughly against my intent? I certainly don't believe myself above disapproval.
Yet Another "Rational Approach To Morality & Friendly AI Sequence"

Could someone give me a hint as to why this particular comment which was specifically in answer to a question is being downvoted? I don't get it.

jmmcd (0 points, 11y): I didn't downvote, because you were right that the hypothesis I provided (there are some rational options) was not equivalent to the question (which are the rational options?). This is quite a fundamental point, so extra black marks to me for being careless. However, Einstein's Arrogance doesn't deal with this fundamental point, so I disagree with "would have given you all of the above reply" and still dispute its relevance to Misha's original comment. ETA: also, you didn't address "what is a short-term rational answer?". Maybe these are possible reasons for downvoting?
Yet Another "Rational Approach To Morality & Friendly AI Sequence"

Thank you. As I said below, I didn't clearly understand the need for the explicit inclusion of motivation before. I now see that I need to massively overhaul the question and include motivation (as well as make a lot of other recommended changes).

The post has a ton of errors but I don't understand why you think it was in error. Given that your premise about my intentions is correct, doesn't your argument mean that posting was correct? Or, are you saying that it was in error due to the frequency of posting?

nhamann (0 points, 11y): Tricky words. I meant simply that it had errors. Of course I agree that even a flawed post is useful (in that it helps to expose buggy thinking), but here it seems like you're attempting to argue about what it means for a post to be "in error." Taboo the word "error" and I don't think we disagree.
Yet Another "Rational Approach To Morality & Friendly AI Sequence"

Ah. Now I see your point.

The actions of a nation are those which are caused by its governance structure, just as your actions are those caused by your brain. A fever or your stomach growling is not your action, in the same sense that actions by lower-level officials and large companies are not the actions of a nation -- particularly when those officials and companies are subsequently censured or there is some later attempt to rein them in. Actions of the duly recognized head of state acting in a national capacity are actions of the nation unless…

Yet Another "Rational Approach To Morality & Friendly AI Sequence"

And this is still too abstract. Depending on detail of the situation, either decision might be right. For example, I might like to remain where I am, thank you very much.

So I take it that you are heavily supporting the initial post's "Premise: The only rational answer given the current information is the last one."

Worse, so far I've seen no motivation for the questions of this post, and what discussion happened around it was fueled by making equally unmotivated arbitrary implicit assumptions not following from the problem statement in the post.

Thank you. I didn't clearly understand the need for the explicit inclusion of motivation before.

Yet Another "Rational Approach To Morality & Friendly AI Sequence"

Seconding Carl changes your argument to "this is the first substantive posting I've made in four days." Now it's one in five days.

Other than not posting on a new given topic (while you have no active or live posts), what would you suggest? Personally, I would suggest a separate area (a playpen, if you will) where newbies are allowed to post and learn. You can't truly learn anything of value just by watching. Insisting that a first attempt be done correctly on the first try under safe circumstances is counter-productive.

My last substantive post before thi…

WrongBot (8 points, 11y): You misunderstand. Your posts are not being downvoted specifically because people dislike you. Neither are draq's. A downvote means, approximately, "I would like to see less of this." Yes. If you have actually learned something then your comments will reflect this and earn karma. You'll be into the positives before you know it. If this is so, you have been doing it very badly. I'm sorry I have to be so blunt, but I have yet to see any indication that you have actually learned something.

Personally, I would suggest a separate area (a playpen, if you will) where newbies are allowed to post and learn. You can't truly learn anything of value just by watching. Insisting that a first attempt be done correctly on the first try under safe circumstances is counter-productive.

mwaser, every person on this board (possibly excepting some transfers from Overcoming Bias) was once a newbie. I was once a newbie. My first top-level post was downvoted too. If you want to be safe, you lurk until you truly get what's going on around you. People can in fact learn things that way…

Yet Another "Rational Approach To Morality & Friendly AI Sequence"

Acquiring citizenship is joining a nation. People who are not only allowed to acquire citizenship but encouraged to do so are "invited to join". To choose whether to do so or not is to file the necessary papers and perform the necessary acts. I think that these answers should be obvious.

A nation has a top-most goal if all of its goals do not conflict with that goal. This is more specific than a top-level goal.

A nation is rational to the extent that its actions promote its goals. Did you really have to ask this?

How does a nation identify the…

jmmcd (4 points, 11y): The reason I ask questions which you think have obvious answers is that I think the easily-stated obvious answers make large, blurry assumptions. For example: what are the actions of a nation? The aggregate actions of the population? Those of the head of state? What about lower-level officials in government? Large companies based in the nation? OK, I should have started with a more basic question, then: what does it mean for a nation to have any goal? I agree that nations are not a great example. After all, acquiring citizenship usually means emigration, new rights of travel, a change in economic circumstances, and often loss of previous citizenship. All of these overwhelm any considerations about the rationality of the new nation.
Vladimir_Nesov (7 points, 11y): And this is still too abstract. Depending on the details of the situation, either decision might be right. For example, I might like to remain where I am, thank you very much. Worse, so far I've seen no motivation for the questions of this post, and what discussion happened around it was fueled by equally unmotivated, arbitrary implicit assumptions not following from the problem statement in the post. It's the worst kind of confusion when people start talking about the topic as if understanding each other, when in fact the direction of their conversation is guided by any reasons but the content of the topic in question. Cargo-cult conversation (or maybe small talk).
Yet Another "Rational Approach To Morality & Friendly AI Sequence"

Upvote from me! Yes, you are understanding me correctly.

One could indeed come up with my list of options without having done any prior investigation. But would one share it with others? My pointing at that particular post is meant to be a signal that I grok that it is not rational to share it with others until I believe that I have strong evidence that it is a strong hypothesis and have pretty much run out of experiments that I can conduct by myself that could possibly disprove the hypothesis.

Skepticism is desired as long as it doesn't interfere with th…

Yet Another "Rational Approach To Morality & Friendly AI Sequence"

The question "which options are long-term rational answers?" corresponds immediately to the hypothesis "among the options are some long-term rational answers" and can be investigated in the same way.

Incorrect. Prove that one option is a long-term rational answer and you have proved the hypothesis "among the options are some long-term rational answers." That is nowhere near completely answering the question "which options are long-term rational answers?"

My hypothesis was much, much more limited than "among th…

mwaser (0 points, 11y): Could someone give me a hint as to why this particular comment, which was specifically in answer to a question, is being downvoted? I don't get it.
WrongBot (6 points, 11y): The piece of the sequences relevant here is probably Science as Attire [http://lesswrong.com/lw/ir/science_as_attire/]. You are not in a position to tell other people to go read the Sequences.
Yet Another "Rational Approach To Morality & Friendly AI Sequence"

Thank you very much.

Premise: Most developed nations are such communities, although the goal is certainly not explicit.

Do you believe that premise is flawed?

jmmcd (2 points, 11y): I think that premise is very wrong. If "developed nations" is the model you had in mind while writing, I can understand why most commenters find this post confusing. I guessed you meant something like an internet community like LW. Attempting to abstract over these things seems problematic, as pointed out by Vladimir_Nesov. What does it mean to "join" a nation? To be "invited to join"? To choose whether to do so or not? In what sense does a nation have a top-level goal (explicit or otherwise)? In what sense is a nation rational or otherwise? How does a nation identify the goals of its members?
Yet Another "Rational Approach To Morality & Friendly AI Sequence"

"Option 2 is the only long-term rational answer" is a clear hypothesis. It is disproved if any of the other options is also a long-term rational answer. "Which options are long-term rational answers?" is a question, not a hypothesis.

Reread Einstein's Arrogance

jmmcd (-1 points, 11y): The question "which options are long-term rational answers?" corresponds immediately to the hypothesis "among the options are some long-term rational answers" and can be investigated in the same way. Mind you, "long-term rational answer" is not well-defined; I guess you mean something influenced by ideas like Nash equilibrium and evolutionarily stable strategies. What is a "short-term rational answer"? The post you link to is irrelevant to Misha's reasonable question, except insofar as it contains discussion of hypotheses. If you really think that people here need to be educated as to what a hypothesis is, then a) it'd be better to link to a Wikipedia definition and b) why are you bothering to post here?
[anonymous] (4 points, 11y): In other words, you're still investigating the same things (possibly with different stopping criteria -- e.g. you'd be done if you disproved your hypothesis), but you have substantial evidence in favor of your hypothesis already. Am I understanding you correctly? I'm not sure the blog post you're linking to is helpful, though. One could come up with your list of options without having done any prior investigation. In other words, unlike Einstein, it's entirely plausible to be at the stage where you're considering Option 2 without having evidence favoring Option 2 over the others. And even if you have 50% certainty in Option 2, that only implies 3-4 bits of evidence. And I think the mistrust you see in the comments is due precisely to the absence of evidence from your post. Which is weak evidence of absence [http://lesswrong.com/lw/ih/absence_of_evidence_is_evidence_of_absence/]. Granted, I don't think your post is intended to present all your evidence, but seeing some of it first would help frame your discussion.
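As an aside, the bit count in the reply above can be checked with standard log-odds bookkeeping. Assuming (this is an assumption, not stated in the thread) a uniform prior over n mutually exclusive options, the evidence needed to reach posterior probability P in one option is

```latex
% bits of evidence = shift in log-odds from prior to posterior
\text{bits} \;=\; \log_2\!\frac{P/(1-P)}{P_0/(1-P_0)}, \qquad P_0 = \tfrac{1}{n}.
```

At a 50% posterior the posterior odds are 1:1, so this reduces to log2(n − 1): with n = 5 options that is 2 bits, and the quoted "3-4 bits" would correspond to roughly 9 to 17 options. Treat this as a sketch of the arithmetic, not a correction of the comment.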
Yet Another "Rational Approach To Morality & Friendly AI Sequence"

Too ambiguous. It's not clear which elements aren't clear to you, so it's not possible to fix the problem.

Vladimir_Nesov (3 points, 11y): Pretty much everything. To fix the problem, give an example.
Relsqui (0 points, 11y): I proposed a specific source of ambiguity elsewhere in the thread [http://lesswrong.com/r/discussion/lw/31c/yet_another_rational_approach_to_morality/2wyl?c=1].
Yet Another "Rational Approach To Morality & Friendly AI Sequence"

The original statement said nothing about how much work each step was. In fact, the original statement was refuting a statement that was even more simplistic and strongly implied the process was limited to just data and conclusions.

I agree with your second sentence.

Relsqui (6 points, 11y): Strictly speaking, if the data clearly supports a conclusion, why does it matter whether you predicted the conclusion or not? Assuming your goal was to learn about the data/conclusion, not to assess your own predictive power.
NihilCredo (0 points, 11y): That consequence should factor into your reasoning about "what is in my best interest?", in the same way that you don't usually want to perform petty crimes IRL. I read 2 as "I will perform the most rational actions among the subset of those that lead to a net positive for the community", whereas 5 takes away everything from 'among' onwards.
WrongBot (0 points, 11y): Please take a look through the list of recently posted discussion topics, and note how often various authors post. At the moment, the only one approaching your frequency is draq, who is also heavily downvoted. While there are LW users who would be celebrated if they posted new material every day or two, you can mostly identify them by looking at the "Top Contributors" list on the bottom right of this page. Also, I second Carl Shulman [http://lesswrong.com/lw/31c/yet_another_rational_approach_to_morality/2x10?c=1].
CarlShulman (8 points, 11y): That upvoted post was an apology for your substantive posting, not a substantive post itself.
Relsqui (5 points, 11y): I think the mean is a more meaningful statistic than the net, in this case.
Yet Another "Rational Approach To Morality & Friendly AI Sequence"

Merely knowing that a group is rational and utilitarian (or at least, that it claims to be) doesn't narrow down what it is very much.

Interesting. Those were sidelights for me as well. What was definitive for me was the statement of the community's top-most goal.

What you've stated instead is that you're trying to prove your hypothesis, which is, I hope, wrong--rather, you're investigating whether your hypothesis is true, without the specific goal of proving or disproving it.

Agreed.

Relsqui (2 points, 11y): It's definitive, in that it's a definition, but it's not very descriptive.
An apology

It is a definition, not an explanation. I misunderstood his post to be questioning what the quoted word "structures" meant, so I provided a definition.

I am editing it to provide examples. It was certainly not intended as a curiosity-stopper.

As a definition, it had meaning -- but none that was new to you.

Vladimir_Nesov (1 point, 11y): Sorry, I still have plenty of prejudice; will try to be more careful.
Relsqui (2 points, 11y): Here's an even clearer phrasing that refutes that: "What is day-to-day life like for a member of this community? How does it differ from what I'm accustomed to?" Really, I'm just trying to resolve the ambiguity that Vladimir_Nesov observed. Merely knowing that a group is rational and utilitarian (or at least that it claims to be) doesn't narrow down what it is very much. Also, I would find the statement of hypothesis in the original post much clearer if you said "my hypothesis is that ..." What you've stated instead is that you're trying to prove your hypothesis, which is, I hope, wrong -- rather, you're investigating whether your hypothesis is true, without the specific goal of proving or disproving it.
7JGWeissman11yThis is the simplification taught in science class, that perhaps scientists even tell themselves. Really though, most of the work is forming the hypothesis [http://lesswrong.com/lw/jo/einsteins_arrogance/], and the social process of science only manages to protect itself from the dangers of forming hypotheses without that work [http://lesswrong.com/lw/19m/privileging_the_hypothesis/] by having norms of collecting lots of redundant data.
POLL: Realism and reductionism

1A

2A - if your morality is rationally defined by your goals OR 2C - if you insist on all the confusing noise of human morality

3A or 3B - I don't know how to test to determine

4 is confused

An apology

By "structures" I mean "interlocking sets of values, virtues, norms, practices, identities, technologies, and psychological mechanisms that work together to fulfill the goal of stabilization (of something)".

Examples: The "terms of art" like "confused" (different from common use in that it can imply sins of omission as well as commission), the use of karma, the nearly uniform behavior when performing certain tasks, the nearly uniform reactions to certain things, etc. are all part of the "structures" supporting the community.

1Relsqui11yNo, I know. But I don't know what they are in any way that is unusual to this place. It's possible that we come from such different backgrounds that what seems extraodinary to you is unnoticeable to me; it's also possible you've figured out something about LW that I haven't. Either way I'm curious what it is.
2Vladimir_Nesov11yNow this is pretty meaningless, but in many words purporting to explain something. Beware curiosity-stoppers. Edit: I misinterpreted mwaser's comment, correction [http://lesswrong.com/lw/30v/an_apology/2wxo?c=1].
indexical uncertainty and the Axiom of Independence

We seem to have differing assumptions:

My default is to assume that B utility cannot be produced in a different world UNLESS it is of utility in B's world to produce the utility in another world. One method by which this is possible is trade between the two worlds (which was the source of my initial response).

Your assumption seems to be that B utility will always have value in a different world.

My default assumption is explicitly overridden for the case where I feel good (have utility in the world where I am present) when I care about the world where I am ... (read more)

1Vladimir_Nesov11y(Just to be sure, I expect this is exactly the point you've changed your mind about, so there is no need for me to argue.) Does not compute. Utility can't be "in given world" or "useful" or "useful from a given world". Utility is a measure of stuff, not stuff itself. Measure has no location. Not if we interpret "utility" as meaning "valuable stuff". It's not generally correct that the same stuff is equally valuable in all possible worlds. If in worlds of both agents A and B we can produce stuff X and Y, it might well be that producing X is world A has more B-utility than producing Y in world A, but producing X in world B has less B-utility than producing Y in world B. At the same time, given amount of B-utility is equally valuable, no matter where the stuff measured so got produced.
indexical uncertainty and the Axiom of Independence

Fair enough. I'm willing to rephrase my argument as A can't produce B utility because there is no B present in the world.

Yes, I do want to pre-commit to a counter-factual trade in the mugging because that is the cost of obtaining access to an offer of high expected utility (see my real-world rephrasing here for a more intuitive example case).

In the current world-splitting case, I see no utility for me since the opposing fork cannot produce it so there is no point to me pre-committing.

3Vladimir_Nesov11yWhy do you believe that the counterfactual isn't valuable? You wrote: That B is not present is a given possible world is not in itself a valid reason to morally ignore that possible world (there could be valid reasons, but B's absence is not one of them for most preferences that are not specifically designed to make this condition hold, and for human-like morality in particular). For example, people clearly care about the (actual) world where they've died (not present): you won't trade a penny a day while you live for eternal torture to everyone after you die (while you should, if you don't care about the world where you are not present).
indexical uncertainty and the Axiom of Independence

Strategy one has U1/2 in both A-utility and B-utility with the additional property that the utility is in the correct fork where it can be used (i.e. it truly exists).

Strategy two has U2/2 in both A-utility and B-utility but with the additional property that the utility produced is not going to be usable in the fork where it is produced (i.e. the actual utility is really U0/2 unless the utility can be traded for the opposite utility which is actually usable in the same fork).

Assuming that there is no possibility of trade (since you describe no method by which it... (read more)

4Vladimir_Nesov11yUtility is not instrumental, not used for something else, utility is the (abstract) thing you try to maximize, caring of nothing else. It's the measure of success, all consequences taken into account (and is not itself "physical"). As such, it doesn't matter in what way (or "where") utility gets "produced". Knowing that might be useful for the purpose of computing utility, but not for the purpose of interpreting the resulting amount, since utility is the final interpretation of the situation, the only one that matters. Now, it might be that you consider events in the counterfactual worlds not valuable, but then it interrupts my argument a step earlier than you did, it makes incorrect the statement that A's actions can produce B-utility. It could be that A can't produce B-utility, but it can't be that A produces B-utility but it doesn't matter for B. Hence the second paragraph about counterfactual mugging: if you accept that events in the counterfactual world can confer value, then you should take this deal as well. And no matter whether you accept CM or not, if you consider the problem in advance, you want to precommit to counterfactual trade. And hence, it's a reflectively consistent thing to do to accept counterfactual trade later as well.
An apology

Wow! Evil. Effective. Not to mention a great demonstration of the criticality of context.

Definitely deserves a link or mention in a newbie's guide.

An apology

What I didn't get?

Some of it was mistaken assumptions about karma. Much more of it was the lack of recognition of the presence of a huge amount of underlying structure which is necessary to explain what looks like seemingly irrational behavior (to someone who doesn't have that structure). I also didn't recognize most of the offered help because I didn't understand it. (Even just saying to a newbie, "I know that you don't recognize this as help because you don't get it yet but could you please trust me that it is intended as help" would probabl... (read more)

4Vladimir_Nesov11yNot helping. I still have no idea whether you actually changed your mind about anything. You say you did, but you didn't give any specific detail (explicit statements about the beliefs you changed; I don't expect you should've changed your mind so soon, for that matter). The change that's obvious is that you snapped out of adversarial mode, which is great (and in long term sufficient to start learning), but is generally unrelated to changes in what you believe. For example, people in crackpot hubs can well agree with each other on all the contradicting and meaningless woo they generate, thus starting to agree with the community is generally not a sure sign of changing your mind.
8Relsqui11yI confess that this reply doesn't clarify the matter at all for me. I haven't the faintest what "structures" you're referring to that are so particular to this specific community. I'm looking forward to seeing more detail if you do write the top-level post that has been suggested.
6wedrifid11yThat's a really good point. Humans are particularly bad at communicating over that kind of inferential distance (see post by that name for jargon or infer from context - kinda means what you are saying). People who have been thinking from within one culture for a long time will often not even understand what their words will mean to someone who has not. This applies to university courses too. Looking back it doesn't seem like I learned anything that wasn't bloody obvious... until I look at the people who didn't have equivalent training. You, having recently understood the culture in question, are in a perfect place to inform others. And it is best to do it now, before you forget why what you now know wasn't always obvious. And you will forget, given time. This reply, with a little tinkering to target it specifically to the desired effect, would make a good post and I can imagine people referring to it frequently. People may still take it as an insult and leave in disgust.... but some may not.
2Spurlock11yThanks for the long reply. My first experience with the site was to make a couple of comments which seemed "rational" to me at the time but were painfully (although not surprisingly) stupid in hindsight. The community told me, in a way that I now recognize as being as polite as possible, that I was being an idiot. At first I resisted, but after I started to dig into the site a little, it became clear to me that they were 100% right, and I deleted those early comments out of tremendous embarrassment. I say all this because it sounds like it might basically be the same experience you had, except that I came around more quickly. I agree with most of your thoughts about the abundance of confusing local jargon, acceptance of strawmen, etc. It also might as well be explicitly laid out that everyone is expected to read all the sequences before they'll be taken seriously. Which understandably seems really stupid and unfair the first time one bumps into it, but it might as well be stated. I guess what confused me about your post was that the karma system and the way it's used here has always made sense to me, so I'm not sure what about it you weren't expecting. But then, that's exactly why it would be great for you to write up the quickstart guide: Most of us can't see the flaws and hurdles in the system (and therefore, can't guide others around them) because we're already used to it. So good luck with this. I'd put it on the wiki, and when it becomes mature enough see to it that it gets linked to in any future "Welcome to Less Wrong" posts (which seem to be where newcomers making boneheaded comments like I did inevitably get directed).
Harry Potter and the Methods of Rationality discussion thread, part 4

Using "because" on evolution is tricky -- particularly when co-evolution is involved -- and society and humans are definitely co-evolving. Which evolved first -- the chicken or the chicken egg (i.e. dinosaur-egg-type arguments explicitly excluded)?

Rationality Quotes: November 2010

I can think of several reasons:

  1. Your post appears to be a dominance game. Your bible will obliterate their bible.
  2. While beauty is in the eye of the beholder, I would guess that the initial quote probably strikes many here as elegant poetry that is well worth sharing (and upvotes effectively equal sharing).
  3. Your post isn't particularly interesting so I would guess that it wouldn't attract any upvotes and point 1 means that it is nearly certain to attract at least two or three downvotes.
An apology

If you were the first person to see such a post (where Yvain made such a stupid comment that you believed that it deserved to attract 26527 downvotes), would you, personally, downvote it for stupidity or would you upvote it for interestingness?

EDIT: I'd be interested in answers from others as well.

3Relsqui11yGiven his track record--which is roughly estimated by his karma score--I would assume a lot of things before I assumed that Yvain had genuinely posted something that impressively stupid. (Hacker, friend at keys, misunderstanding, etc.) So I probably wouldn't vote on it, but if followup comments made it clear that yes, it was Yvain, and yes, it was really that stupid, I'd downvote the followup.
6[anonymous]11yGiven that I've never encountered a comment that stupid, I'm not sure my intuition is correct here. I mean, we're approaching "huge number of dust specks" level here. For all I know, the post would be so horrifying that I would be physically unable to avoid downvoting it.
5sixes_and_sevens11yI suspect I'd downvote it, but reply with "downvoted, but bravo!"
Irrational Upvotes

From Wikipedia

A straw man argument is an informal fallacy based on misrepresentation of an opponent's position.[1] To "attack a straw man" is to create the illusion of having refuted a proposition by substituting it with a superficially similar yet unequivalent proposition (the "straw man"), and refuting it, without ever having actually refuted the original position.[1][2]

My position never included any claims about the value of the statement as an argument. To imply that my position was that it was a "bad" argument i... (read more)

Waser's 3 Goals of Morality

Now that I've got it, this is clear, concise, and helpful. Thank you.

I also owe you (personally) an apology for previous behavior.

An apology

Yes, that is precisely what I wish to do -- but, as I said, that is also going to take some patience and help from others and I have certainly, if unintentionally, abused my welcome.

There is also (obviously) still a lot that I don't understand -- for example, this post quickly acquired a downvote in addition to your comment and I don't get why.

5wedrifid11yI wouldn't worry about having abused your welcome. In fact, given the biases of the kind of people who are attracted to this place it would be damn near impossible to do so irredeemably. They tend to have a character trait that makes them absolute suckers for humility and apology. They also suck at holding grudges. And I do mean suck. As in, probably to a fault. If someone with a grudge against you saw you arguing with someone else and the other guy was using fallacies all around they would come vigorously to your defense and may accidentally forget their grudge in the process. This is a negative only in as much as it sucks balls as a social-politics strategy. It did? Crazy. Some people just vote poorly. ;) If in doubt wait till the post/comment has been up for a day and see if it stays negative. Sometimes these things reverse themselves. If the comment/post stays negative then it does mean something (for better or for worse).
5jmmcd11yI think that talking about karma causes one to get less of it. On reddit (where the commenting/karma algorithms come from) it's quite different: "I know this'll be downvoted, but..." is guaranteed a few upvotes.
4David_Allen11yPerhaps because you say here [http://lesswrong.com/lw/30i/wasers_3_goals_of_morality/2w1j?c=1]: Then you posted this a day later.

It helps to think of the karma scores on individual comments as having a 2 point margin of error, especially if they're less than a day old.

8Sniffnoy11yA post's score shortly after posting isn't necessarily very meaningful. I'm not sure anyone but the downvoter can answer why the one person who voted first happened to vote it down.