Where are the new monthly threads when I need them? A pox on the +11 EDT zone!
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
If you're new to Less Wrong, check out this welcome post.
Since Karma Changes was posted, there have been 20 top level posts. With one exception, all of those posts are presently at positive karma. EDIT: I was using the list on the wiki, which is not up to date. Incorporating the posts between the last one on that list and now, there are a total of 76 posts between Karma Changes and today. This one is the only new data point on negatively rated posts, so it's 2 of 76.
I looked at the 40 posts just prior to Karma Changes, and six of the forty are still negative. It looks like before the change, many times more posts were voted into the red. I have observed that a number of recent posts were in fact downvoted, sometimes by a fair amount, but crept back up over time.
Hypothesis: the changes included removing the display minimum of 0 for top-level posts. Now that people can see that something has been voted negative, instead of just being at 0 (which could be the result of indifference), sympathy kicks in and people provide upvotes.
Is this a behavior we want? If not, what can we do about it?
One of the expected effects of the karma change is to make people more cautious about what they put in a top level post. Perhaps this is only evidence of that effect.
Eliezer, how is progress coming on the book on rationality? Will the body of it be the sequences here, but polished up? Do you have an ETA?
Eliezer's posts are always very thoughtful, thought-provoking and mind-expanding - and I'm not the only one to think this, judging by the vast amount of karma he's accumulated.
However, reviewing some of the weaker posts (such as high status and stupidity and two aces), and rereading them as if they hadn't been written by Eliezer, I saw them differently - still good, but not really deserving superlative status.
So I was wondering if Eliezer could write a few of his posts under another name, if this was reasonable, to see if the Karma reaped was very different.
This is a reasonable justification for using a sockpuppet, and I'll try to keep it in mind the next time I have something to write that would not be instantaneously identifiable as me.
I'm one of the 5-10.
There is a depth to "this is an Eliezer argument, part of a rich and complicated mental world with many different coherent aspects to it" that is lacking in "this is a random post on a random subject". In the first case, you are seeing a facet of larger wisdom; in the second, just an argument to evaluate on its merits.
I thought of a voting tip that I'd like to share: when you are debating someone, and one of your opponent's comments gets downvoted, don't let it stay at -1. Either vote it up to 0, or down to -2, otherwise your opponent might infer that you are the one who downvoted it. Someone accused me of this some time ago, and I've been afraid of it happening again ever since.
It took a long time for this countermeasure to occur to me, probably because the natural reaction when someone accuses you of unfair downvoting is to refrain from downvoting, while the counterintuitive but strategically correct response is to downvote more.
Upvoted for honesty.
Of course, I'll be back in a few days to downvote you.
You're getting downvoted for overconfidence, not for the content of your point of view.
The utilitarian point of view is that beyond some level of salary, more money has very small marginal utility to an average First World citizen, but would have a huge direct impact in utility on people who are starving in poor countries.
Your point is that the indirect impacts should also be considered, and that perhaps when they are taken into account the net utility increase isn't so clear. The main indirect impact you identify is increasing dependency on the part of the recipients.
Your concern for the autonomy of these starving people is splendid, but the fact remains that without aid their lives will be full of suffering. Your position appears to be "good riddance". You can't fault people for being offended at the implied lack of compassion.
I suspect that your appeal for sympathy towards your position is doubly likely to fall on deaf ears as a result. Losing two karma points isn't the end of the world, and does not constitute suppression. Stop complaining, and invest some effort in presenting your points of view more persuasively.
Downvotes signal "would like to see fewer comments like this one". This certainly applies to trolls and nonsense, but it feels appropriate to use the same signal for comments which, if the author had taken a little more time to compose, readers wouldn't need to spend time correcting one way or another. The calculation I've seen at least once here (and I tend to agree with) is that you should value your readers' time about 10x more than you value yours.
The appropriate thing to do if you receive downvotes and you're neither a troll nor a crackpot seems to be simply to ask what's wrong. Complaining only makes things worse. Complaining that the community is exhibiting censorship or groupthink makes things much worse.
LW has become more active lately, and grown old as an experience, so it's likely I won't be skimming "recent comments" (or any comments) systematically anymore (unless I miss the fun and change my mind, which is possible). Reliably, I'll only be checking direct replies to my comments or private messages (red envelope).
A welcome feature to alleviate this problem would be an aggregator for given threads: functionality to add posts, specific comments and users to a set of subscribed items. Then, all comments on the subscribed posts (or all comments within depth k of the top-level comments), and all comments within the threads under subscribed comments, would appear together as "recent comments" do now. Each comment in this stream should have links to unsubscribe from the subscribed item that caused it to appear in the stream, or to add an exclusion for the given thread within another subscribed thread. (Maybe being subscribed to everything, including new items, by default is the right mode, provided unsubscribing is easy.)
This may look like a lot, but right now there is no functionality for reducing the reading load, so as more people start actively commenting, fewer people will be able to follow.
I thought about it further, and decided that I would have moral qualms about it. First, you are insincerely up-voting someone, and they are using this as peer information about their rationality. Second, you are encouraging a person C to down-vote them (person B) if they think person B's comment should just be at 0. But then when you down-vote B, their karma goes to -2, which person C did not intend to do with his vote.
So I think this policy is just adding noise to the system, which is not consistent with the LW norm of wanting a high signal to noise ratio.
While the LW voting system seems to work, and it is possibly better than the absence of any threshold, my experience is that the posts that contain valuable and challenging content don't get upvoted, while the most upvotes are received by posts that state the obvious or express an emotion with which readers identify.
I feel there's some counterproductivity there, as well as an encouragement of groupthink. Most significantly, I have noticed that posts which challenge that which the group takes for granted get downvoted. In order to maintain karma, it may in fact be important not to annoy others with ideas they don't like - to avoid challenging majority wisdom, or to do so very carefully and selectively. Meanwhile, playing on the emotional strings of the readers works like a charm, even though that's one of the most bias-encouraging behaviors, and rather counterproductive.
I find those flaws of some concern for a site like this one. I think the voting system should be altered to make upvoting as well as downvoting more costly. If you have to pick and choose which comments and articles to upvote or downvote, I think people will vote with more reason.
There are various ways to make voti... (read more)
If I understand the Many-Worlds Interpretation of quantum mechanics correctly, it posits that decoherence takes place due to strict unitary time-evolution of a quantum configuration, and thus no extra collapse postulate is necessary. The problem with this view is that it doesn't explain why our observed outcome frequencies line up with the Born probability rule.
Scott Aaronson has shown that if the Born rule doesn't hold, then quantum computing allows superluminal signalling and the rapid solution of PP-complete problems. So we could adopt "no superluminal signalling" or "no rapid solutions of PP-complete problems" as an axiom, and this would imply the Born probability rule.
I wanted to ask of those who have more knowledge and have spent longer thinking about MWI: is the above an interesting approach? What justifications could exist for such axioms? (...maybe anthropic arguments?)
ETA: Actually, Aaronson showed that in a class of rules equating probability with the p-norm, only the 2-norm has the properties I listed above. But I think the approach could be extended to other classes of rules.
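(For concreteness, here is the rule class in question, with notation I'm supplying: the standard Born rule is the p = 2 case of a family of p-norm rules, and Aaronson's result is that only p = 2 avoids the pathologies above.)

```latex
% Born rule (the p = 2 case): outcome i occurs with probability
P(i) = \frac{|\alpha_i|^2}{\sum_j |\alpha_j|^2}

% The p-norm generalization; Aaronson shows that for p \neq 2 one gets
% superluminal signalling and fast solutions to PP-complete problems:
P_p(i) = \frac{|\alpha_i|^p}{\sum_j |\alpha_j|^p}
```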
Ding! This is a reminder. It's been 12 days since you promised to dig some up.
I also tend to vote posts up or down based on what I think the score ought to be. But it seems clear that sympathy plays a part. Liked posts spiral freely off towards infinity but disliked posts don't ever spiral down in a similar way. This gives a distinct bias to the expected payoff of posting borderline posts and so is probably not desirable.
He hugely increased African aid and foreign aid in general (though with big deadly strings). That came as a big surprise to me.
http://www.independent.co.uk/news/world/americas/aid-to-africa-triples-during-bush-presidency-but-strings-attached-430480.html
Here's my reply, after some reflection. The reason I strive for having no comments with negative scores is so that when people see a comment from me that is confusing, controversial or just seems wrong (of course I try to prevent that if possible, but sometimes it isn't), they'll think "It's not like Wei to write nonsense. Maybe I should think about this again" instead of just dismissing it. That kind of power seems worth the effort to me. (Except that it hasn't been working well recently, hence the frustration.)
An item of interesting trivia: this open thread is at 256 comments as of February 3rd. For comparison:
January's had a total of 709
December is at 260
November is at 490
October is at 399
That doesn't just make rationality irrelevant, it makes everything irrelevant. Love doesn't matter because you don't meet that special someone in every world, and will meet them in at least one world. Education doesn't matter because guessing will get you right somewhere.
I want to be happy and right in as many worlds as possible. Rationality matters.
I hardly think komponisto inflicted "Bayesian damage" on the members of Less Wrong, seeing as they had already overwhelmingly come to the conclusion that Amanda Knox was not guilty before he had even presented his own arguments.
It's not the sequence of answers that's the problem -- it's the questions. You'll be safe if you can vet the questions to ensure zero causal effect from any sequence of answers, but such questions are not interesting to ask almost by definition.
A roughly hour-long talk with Douglas Hofstadter, author of Gödel, Escher, Bach.
Titled: Analogy as the Core of Cognition
http://www.youtube.com/watch?v=n8m7lFQ3njk#t=13m30s
Eliezer has a new fanfic available.
Then you're simply disagreeing with the problem statement. If you 1-box, you get $1M. If you 2-box, you get $1k. If you 2-box because you're considering the impossible possible worlds where you get $1.001M or $0, you still get $1k.
At this point, I no longer think you're adding anything new to the discussion.
Yes. But it was filled, or not, based on a prediction about what you would do. We are not such tricksy creatures that we can unpredictably change our minds at the last minute and two-box without Omega anticipating this, so the best way to make sure the one box has the goodies in it is to plan to actually take only that box.
Well, if I prefer to prefer being wrong, then I plan ahead accordingly, which includes a policy against ridiculous karma games motivated by fleeting emotional reactions.
So my options are:
I'll go with 2. Sorry about your insecurities.
I would prefer votes be public, so disseminating my knowledge of how to abuse anonymous scoring makes this more likely.
"Cf." is sometimes misused around here.
Bet on propositions on Intrade. If you are good, you will make money from the exercise, as well as establish credibility.
Fun sneaky confidence exercise (reasons why exercise is fun and sneaky to be revealed later):
Please reply to this comment with your probability level that the "highest" human mental functions, such as reasoning and creative thought, operate solely on a substrate of neurons in the physical brain.
<.05
I am no cognitive scientist, but I believe some of my "thinking" takes place outside of my brain (elsewhere in my body), and I am almost certain some of it takes place on paper and on my computer.
When I signed up for cryonics, I opted for whole body preservation, largely because of this concern. But I would imagine that even without the body, you could re-learn how to move and coordinate your actions, although it might take some time. And possibly an SAI could figure out what your body must have been like just from your brain; not sure.
Recently, however, I contracted a disease which will kill most of my motor neurons. So the body will be of less value, and I may change to just the head.
The way motor neurons work is that there is an upper motor neuron (UMN) which descends from the motor cortex of the brain down into the spinal cord; there it synapses onto a lower motor neuron (LMN) which projects from the spinal cord to the muscle. Just two steps. In reality, though, the architecture is more complex: the LMNs receive inputs not only from UMNs but also from sensory neurons coming from the body, indirectly through interneurons located within the spinal cord. This forms a sort of loop which is responsible for simple reflexes, but also for stable standing, positioning, etc. Then there are other kinds of neurons that descend from the brain into the spinal cord, including from the limbic system, the center of emotion. For some reason your spinal cord needs to know something about your emotional state in order to do its job - very odd.
A query to Unknown, with whom I have this bet going:
I recently found within myself a tiny shred of an... (read more)
He (Dubya) raised the self esteem of millions of foreign citizens. Being able to laugh at the expense of the leader of a dominant world power gives significant health benefits.
This is actually a damned good question:
http://www.scientificblogging.com/mark_changizi/why_doesn%E2%80%99t_size_matter%E2%80%A6_brain
This indicates you haven't understood me: pro-empathy IS the theme here on Less Wrong. For a variety of reasons, this community tends to have 'humanist goals'. This is considered to not be in conflict with rationality, because rationality is about achieving your goals, not choosing them. If you have a developed rational argument for why less charity would further humanist goals, there may be some interest, but much less interest if your argument seems based on a lack of humanist goals.
That is right - the choice does not determine the contents. But the choice is not as independent as common intuition suggests. Omega's belief and your choice share common causes. Human decisions are caused - they don't spontaneously spring from nowhere, causally unconnected to the rest of the universe - even if that's how it sometimes feels from the inside. The situational state, and the state of your brain going into the situation, determine the decision that your brain will ultimately produce. Omega is presumed to know enough about these prior state... (read more)
Ceteris ain't paribus. That's the whole point.
Why are you concerned that you win the debate? I'm sure this sounds naive, but surely your concern should be that the truth win the debate?
I gave up on trying to make a human-blind/sandboxed AI when I realized that even if you put it in a very simple world nothing like ours, it still has access to its own source code, or even just the ability to observe and think about its own behavior.
Presumably any AI we write is going to be a huge program. That gives it lots of potential information about how smart we are and how we think. I can't figure out how to use that information, but I can't rule out that it could, and I can't constrain its access to that information. (Or rather, if I knew how to do that, I should go ahead and make it not-hostile in the first place.)
If we were really smart, we could wake up alone in a room and infer how we evolved.
To re-iterate a request from Normal Cryonics: I'm looking for links to the best writing out there against cryonics, especially anything that addresses the plausibility of reanimation, the more detailed the better.
I'm not looking for new arguments in comments, just links to what's already "out there". If you think you have a good argument against cryonics that hasn't already been well presented, please put it online somewhere and link to it here.
I've created a rebuttal to komponisto's misleading Amanda Knox post, but don't have enough karma to create my own top-level post. For now, I've just put it here:
http://docs.google.com/View?id=dgb3jmh2_5hj95vzgk
If you actually want to debate this, we could do so in the comments section of my post, or alternatively over in the Richard Dawkins forum.
(Though since you say "my intent is merely to debunk komponisto's post rather than establish Amanda's guilt", I'm suspicious. See Against Devil's Advocacy.)
Make sure you've read my comments here in addition to my post itself.
There is one thing I agree with you about, and that is that this statement of mine
is misleading. The misleading part is the phrase "so far as I know", which has been interpreted by people who evidently did not read my preceding survey post to mean that I had not heard about all the other alleged physical evidence. I didn't consider this interpretation because I was assuming that my readers had read both True Justice and Friends of Amanda, knew from my previous post that I had obviously read them both myself, and would understand my statement for what it was -- a dismissal of the rest of the so-called "evidence". However, in retrospect, I should have foreseen this misunderstand... (read more)
We are status-oriented creatures, especially with regard to social activities. Science is one of those social activities, so it is to be expected that science is infected with status seeking. However, it is also one of the more efficient ways we have of getting truths, so it must be doing some things correctly. I think that it may have some ideas that surround it that reduce the problems of it being a social enterprise.
One of the problems is the social stigma of being wrong, which most people on the edge of knowledge probably are. Being wrong does not signal... (read more)
I seem to be entering a new stage in my 'study of Less Wrong beliefs' where I feel like I've identified and assimilated a large fraction of them, but am beginning to notice a collection of contradictions. This isn't so surprising, since Less Wrong is the grouped beliefs of many different people, and it's each person's job to find their own self-consistent ribbon.
But just to check one of these -- Omega's accurate prediction of your choice in the Newcomb problem, which assumes determinism, is actually impossible, right?
You can get around the universe being ... (read more)
IAWYC except, of course, for this:
As said above and elsewhere, MWI is perfectly deterministic. It's just that there is no single fact of the matter as to which outcome you will observe from within it, because there's not just one time-descendant of you.
This really messes with how I, as an author, rely on karma as feedback for how well my post was received.
I hate all karma games more complicated than, "I liked/disliked/didn't-care-about this post."
I am considering voting up in order to tilt things in favor of making votes de-anonymized. Ironically, as soon as I do so, it's true.
My last sentence was a deliberate snark, but it's "honest" in the sense that I'm attempting to communicate something that I couldn't find a simpler way to say (roughly: that I think you're placing too much importance on "feeling right", and that I dismiss that reaction as not being a "legitimate" motivation in this context).
I have no problem making status-tinged statements if I think they're productive - I'll let the community be the judge of their appropriateness. There's definitely a fine line between efficiency and distract... (read more)
I think the quality of discussion is higher because we don't discuss politics: if we started, we'd pull in political trolls and fanatics. Considering how common political discussion sites are, and what a city on a hill LW is, I'd be very conservative about anything that might open the gates. We have rarity value, and it could be hard to regain.
Perhaps a minimum karma level to discuss politics?
According to some people, we here at Less Wrong are good at determining the truth. Other people are notoriously not.
I don't know that Less Wrong is the appropriate venue for this, but I have felt for some time that I trust the truth-seeking capability here and that it could be used for something more productive than arguments about meta-ethics (no offense to the meta-ethicists intended). I also realize that people are fairly supportive of SIAI here in terms of giving spare cash away, but I feel like the community would be a good jumping-off point for a po... (read more)
Problem: It's really hard to figure out how it will interpret its utility function when it learns about the real world. If we make something that wants Vpaperclips, will it also care about making Vpaperclip-like things in the real world when it finds out about us?
BIG problem: Even if it wants something strictly virtual, it can get it more easily if it has physical control. It's in its interest to convert the universe into a computer and copy vpaperclips directly into memory, rather than running a virtual factory on virtual energy.
Possible solution: I think ther... (read more)
Bleg for assistance:
I’ve been intermittently discussing Bayes’ Theorem with the uninitiated for years, with uneven results. Typically, I’ll give the classic problem:
3,000 people in the US have Sudden Death Syndrome. I have a test that is 99% accurate; that is, it will be wrong about any given person one percent of the time. Steve tests positive for SDS. What is the chance that he has it?
Afterwards, I explain the answer by comparing the false positives to the true positives. And then I see the Bayes' Theorem Look, which conveys to me this: "I know Mayne's g... (read more)
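(For reference, the arithmetic behind the answer, as a quick script. The US population figure is my assumption, since the problem statement leaves it implicit:)

```python
# Assumed, not given in the problem: a US population of ~300 million.
population = 300_000_000
sick = 3_000
prior = sick / population                  # P(SDS) = 1e-5

sensitivity = 0.99                         # P(positive | SDS)
false_positive_rate = 0.01                 # P(positive | no SDS)

p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
posterior = sensitivity * prior / p_positive
print(f"P(SDS | positive) = {posterior:.4%}")  # ~0.099%, about 1 in 1000
```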
For this specific case, you could try asking the analogous question with a higher probability value. E.g. "if you've got a one-in-two DNA match on a suspect, does that mean it's one-in-two that you've got that dude's DNA?". Maybe you can have some graphic that's meant to represent several million people, with half of the folks colored as positive matches. When they say "no, it's not one-in-two", you can work your way up to the three million case by showing pictures displaying the estimated number of hits for a 1-to-3, 1-to-5, 1-to-10, 1-to-100, 1-to-1000, etc. case.
In general, try to use examples that are familiar from everyday life (and thus don't feel like math). For the Bayes' theorem introduction, you could try "a man comes to a doctor complaining about a headache. The doctor knows that both the flu and brain cancer can cause headaches. If you knew nothing else about the case, which one would you think was more likely?" Then, after they've (hopefully) said that the man is more likely to be suffering from the flu, you can mention that brain cancer is much more likely to cause a headache than the flu is, but because the flu is so much more common, their answer was nevertheless the correct one.
Other good examples:
Most car accidents occur close to people's homes, not because it's more dangerous close to home, but because people spend most of their driving time close to their homes.
Most pedestrians who get hit by cars get hit at crosswalks, not because it's more dangerous at a crosswalk, but because most people cross at crosswalks.
Most women who get raped get raped by people they know, not because strangers are less dangerous than people they know, but because they spend more time around people they know.
People do to some extent vote based on what they agree with, and at least a few make no bones about that. But people also vote based on style - based on whether it feels like you are trying to learn and contribute to our learning, or trying to appear superior and gain status. You look like the latter to me. And I think that you could be arguing the same things, in ways that are no less honest, and get positive karma if you just used different words.
Is there a way to get a "How am I doing?" review or some sort of mentor that I can ask specific questions? The karma feedback just isn't giving me enough detail, but I don't really want to pester everyone every time I have a question about myself.
The basic problem I need to solve is this: When I read an old post, how do I know I am hearing what I am supposed to be hearing? If I have a whole list of nitpicky questions, where do I go? If a question of mine goes unanswered, what do I do?
I don't know anyone here. I don't have the ability to stroll by someone and ask them for help.
Easily solved technically. Show actual figures to the author.
Solution: never de-anonymize votes retroactively.
I just finished reading Jaron Lanier's One-Half of a Manifesto for the second time.
The first time I read it must have been three years ago, and although I felt there were several things wrong with it, I hadn't come to what is now an inescapable conclusion for me: Jaron Lanier is one badly, badly confused dude.
I mean, I knew people could be this confused, but those people are usually postmodernists or theologians or something, not smart computer scientists. Honestly, I find this kind of shocking, and more than a little depressing.
I think Wei Dai was saying that people should vote up strong arguments, even if they disagree with the conclusion. I do this sometimes, and I think it's a good thing to do.
Ok, as one data point, I don't see a particular problem here. The higher rated posts in your examples deserved higher ratings in my opinion. Karma mostly functions as I expect it to function.
PredictionBook.
I am becoming increasingly disinclined to stick out the grad school thing; it's not fun anymore, and really, a doctorate in philosophy is not going to let me do anything substantially different in kind from what I'm doing now once I have it. Nor will it earn me barrels of money or do immense social good, so if it's not fun, I'm kinda low on reasons to stay. I haven't outright decided to leave, but you know what they say. I'm putting out tentative feelers for what else I'd do if I do wind up abandoning ship. Can anyone think of a use for me - ideally one that doesn't require me to eat my savings while I pick up other credentials first?
The basement is the biggest, and matters more for goals that benefit strongly from more resources/security.
Stem cell experts say they believe a small group of scientists is effectively vetoing high quality science from publication in journals.
http://news.bbc.co.uk/2/hi/science/nature/8490291.stm
I want more copies of me to make the correct choice.
Cf. this thread, which is relevant here.
The two examples you linked of bad polling seem to be examples of polling fraud rather than incompetence. It is not that these companies did not understand how to conduct an accurate poll, rather that they don't appear to have been motivated to do so.
It seems to me that accurate polling is quite a well understood problem. Legitimate polling companies exist that are reasonably good at it. In many cases I don't think there is much value (from a truth seeking perspective) in the poll data but I think it generally answers the question "what percentage of people give answer Y to question X?" fairly well. That's just not a very useful piece of data in many cases.
You'd want to make the correct choice in future worlds. What are the chances of you being in that one world where that happens?
Yvain writes in a consciously similar style, and gets even more karma than Eliezer per post, I think.
Mind-killing taboo topic that it is, I'd like to have a comment thread about LW readers' thoughts about US politics.
I recall EY commenting at some point that the way to make political progress is to convert intractable political problems into tractable technical problems. I think this kind of discussion would be more interesting and more profitable than a "traditional" mind-killing political debate.
It might be interesting, for example, to develop formal rationalist political methods. Some principles might include:
Time to update.
1) Some of your earlier comments, especially those most negatively rated, set off all of my "political talking points" alarm bells. I note that many of your later comments aren't so rated, and that you seem to be improving in your message-conveyance.
2) Your replies to replies seem to be going fairly well so far.
3) I agree that it is only potential. Thomblake posted a good link on that very topic, and it is also why I said the case had not been made, and put the phrase in quotes. However, calling it specious and saying I would agree with any syste... (read more)
Honestly, the system is doing exactly what it is supposed to be doing. If you think it is broken, I suspect you are expecting it to do something other than its purpose.
When I get frustrated by the karma system it is because I keep wanting more feedback than it provides. But this is a problem with me, not a problem with the system.
There is some relevant discussion of the issue of how our empathy/instinctive moral reactions conflict with efficient markets in this interview with Hayek. The whole thing is worth watching but the most relevant part of the interview to this discussion starts at 45:25. Unfortunately Vimeo does not support links directly to a timestamp so you have to wait for the video to load before jumping to the relevant point.
ETA a particularly relevant quote:
... (read more)

I already upvoted you before reading this comment. It can take a little time for votes to settle. Also, you can set your threshold to a different value. The default is less than -2.
Since this interface is broken, it's not so easy to skim. The page is supposed to have a "prev"[1] link at the bottom, but it doesn't.
ETA: better for skimming is to add not just ?before=t1_1 to the user page, but also &count=100000
[1] I hate the use of prev/next, at least because it isn't standard (eg, it's opposite to livejournal). "earlier" and "later" would be clear.
You can stick ?before=t1_1 onto the end of a user page to get the first comment (e.g., yours).
Basically, I think what's needed is an API to retrieve a list of comments satisfying some query as an XML document. I'm not sure what kind of queries the system supports internally, so I'll just ask for as much generality and flexibility as possible. For example, I'd like to be able to search by a combination of username, post ID, date, points (e.g., all comments above some number of points), and comment ID (e.g., retrieve a list of comments given a list of IDs, or all comments that come after a certain ID).
If that's too hard, or development time is limite... (read more)
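(To make the request concrete, a query against such an API might look like the sketch below. Everything here is hypothetical - no such endpoint or parameter names exist; they just mirror the fields listed above.)

```python
import urllib.parse

# Hypothetical parameters mirroring the proposed query fields.
params = {
    "user": "Wei_Dai",         # by username
    "post": "t3_example",      # by post ID
    "min_points": 5,           # all comments above some number of points
    "after_id": "t1_example",  # all comments after a given comment ID
    "format": "xml",
}
print("http://lesswrong.com/api/comments?" + urllib.parse.urlencode(params))
```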
But it's preferable to be wrong.
How about per-capita post scoring?
Why not divide a post's number of up-votes by the number of unique logged-in people who have viewed it? This would correct for the distortion of scores caused by varying numbers of readers. Some old stuff is very good but not read much, and scores are in general inflating as the Less Wrong population grows.
I think such a change would be orthogonal to karma accounting; I'm only suggesting a change in the number displayed next to each post.
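(A minimal sketch of the displayed metric, under the proposal as stated; the function name is mine:)

```python
def per_capita_score(upvotes: int, unique_logged_in_viewers: int) -> float:
    # Display-only normalization; the underlying karma totals stay as-is.
    if unique_logged_in_viewers == 0:
        return 0.0
    return upvotes / unique_logged_in_viewers
```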
I'd like to draw people's attention to a couple of recent "karma anomalies". I think these show a worrying tendency for arguments that support the majority LW opinion to accumulate karma regardless of their actual merits.
A man in a room with a light switch isn't very useful. An AI can't optimize over more bits than we allow it as output. If we give it a one-time 32-bit output register, then, well, we probably could have brute-forced it in the first place. If we give it a kilobyte, then it could probably mindhack us.
(And you're swearing to yourself that you won't monitor its execution? Really? How do you even debug that?)
You have to keep in mind that the point of AI research is to get to something we can let out of the box. If the argument becomes that we can run it on a headless netless 486 which we immediately explode...then yes, you can probably run that. Probably.
This might be easier to consider as the simpler case of "given we live in a deterministic universe, what does any choice I make matter?" I would say that I still have to make decisions of how to act and choosing not to act is also a choice, so I should do what ever it is that I want to do.
http://wiki.lesswrong.com/wiki/Free_will
Another content opinion question: What and where is considered appropriate to discuss personal progress/changes/introspection regarding Rationality? I assume that LessWrong is not to be used for my personal Rationality diary.
The reason I ask is that the various threads discussing my beliefs seem to pick up some interest and they are very helpful to me personally.
I suppose the underlying question is this: If you had to choose topics for me to write about, what would they be? My specific religious beliefs have been requested by a few people, so that is given. Is there anything else? If I were to talk about my specific beliefs, what is the best way to do so?
As a result of the conquest of Iraq, water was let into the marshes which Saddam Hussein had been letting dry out. This is a clear environmental win.
Here it is. But why don't you just use the search function?
Sounds like a rather drastic context change, and a rather forlorn hope if the AI figures out that it's being tested.
I have had some similar thoughts.
The AI box experiment argues that a "test AI" will be able to escape even if it has no I/O (input/output) other than a channel of communication with a human. So we conclude that this is not a secure enough restraint. Eliezer seems to argue that it is best not to create an AI testbed at all - instead get it right the first time.
But I can think of other variations on an AI box that are more strict than human-communication, but less strict than no-test-AI-at-all. The strictest such example would be an AI simulatio... (read more)
Really? Huh. To me that seems both pretty world-endy and strongly against the spirit of what was implied by your original statement... would you predict this outcome? Is it something that your model allows to happen? I know it's not something I would feel compelled to make excuses for - more like "I TOLD YOU SO!"
What exactly do you think happens in the scenario described?
But how can you have any self-respect, knowing that you prefer to feel right than to be right? For me, the feeling of being wrong is much less bad than believing I'm so unable to handle being wrong that I'm sabotaging the beliefs of myself and those around me. I would regard myself as pathetic if I made decisions like that.
XKCD hits a home run with its Valentine's Day comic.
Science Valentine
You're saying some things which I've considered attempting to say but have self-censored to some extent due to expecting negative karma. You aren't necessarily saying them in exactly the way I would have tried to put it, and I don't necessarily agree with everything you've been saying but I broadly agree and have been upvoting most of your recent posts.
Yes, that's true. Now chase "however obtained" up a level -- after all, you have all the information necessary to do so.
It's better for the thief to two-box because it isn't the thief's decision algorithm that determined the contents of the boxes.
Alas, this comment really muddies the waters. It leads to Furcas writing something like this:
Underling asks: if the content of the boxes has already been decided, how can you retroactively affect the content of the boxes?
The problem with what you've written, thomblake, is that you seem to agree with Underling that he can't retroactively change the content of the boxes and thus suggest that the content of the boxes has already been det... (read more)
By rational, I think you mean logical. (We tend to define 'rational' as 'winning' around here.*)
... and -- given a certain set of assumptions -- it is absolutely logical that (a) Omega has already made his prediction, (b) the stuff is already in the boxes, (c) you can only maximize your payoff by choosing both boxes. (This is what I meant by this line of reasoning isn't incorrect, it's just unproductive in finding the solution to this dilemma.)
But consider what other logical assumptions have already snuck into the logic above. We're not familiar with outc... (read more)
What is the correct term for the following distinction:
Scenario A: The fair coin has 50% chance to land heads.
Scenario B: The unfair coin has an unknown chance to land heads, so I assign it a 50% chance to get heads until I get more information.
If A flips up heads it won't change the 50%. If B flips up heads it will change the 50%. This makes Scenario A more [something] than Scenario B, but I don't know the right term.
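(One way to make the difference concrete: model coin B's unknown bias with a uniform Beta(1,1) prior - a modeling choice I'm assuming, not something given above. Both coins start at 50%, but only B's probability moves with evidence:)

```python
from fractions import Fraction

# Coin A: known fair. P(heads) is a point mass at 1/2; heads can't move it.
p_A = Fraction(1, 2)

# Coin B: unknown bias, modeled as Beta(1,1) pseudo-counts.
heads, tails = 1, 1
p_B_before = Fraction(heads, heads + tails)  # 1/2, same as coin A

heads += 1                                   # observe one head
p_B_after = Fraction(heads, heads + tails)   # now 2/3

print(p_A, p_B_before, p_B_after)            # 1/2 1/2 2/3
```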
I want to downvote you for this, because punishing people for telling the truth is a bad thing. On the other hand, you are also telling the truth, so... now I'm confused. ;-)
If it's not a game, why punish me? What's so offensive about me having high karma?
I see the heuristic "don't downvote in an argument you're participating in" as a good one for the kind of corrupted hardware we're running on (as in the Ends Don't Justify Means (Among Humans) post). Given that I could gain or lose (perceived) status in an argument, I'm apt to be especially biased about the quality of people's comments in said argument. I value the prospect of providing more fair and accurate karma feedback in general, even if that means going against object-level intuitions in particular cases.
Usually, if I'm arguing with some... (read more)
Daniel Varga wrote
What I started wondering about when I began assimilating this idea of merging, copying and deleting identities is: if this were possible, what kind of legal/justice system could we depend upon to enforce non-criminal behavior?
Right now we can threaten to punish people by restricting their freedom over a period of time that is significant with respect to the length of their lifetime. H... (read more)
It isn't crazy or mad to consider people who vote on your comments as on average equal to you in rationality. Quite the opposite: if each of us assumes that we are more rational than those who vote, this will be like everyone thinking that he is above average in driving ability or whatever.
And in fact, many people do use this information: numerous times someone has said something like, "Since my position is against community consensus I think I will have to modify it," or something along these lines.
Measure your risk intelligence, a quiz in which you answer questions on a confidence scale from 0% to 100% and your calibration is displayed on a graph.
Obviously a linear probability scale is the Wrong Thing - if we were building it, we'd use a deciban scale and logarithmic scoring - but interesting all the same.
">> what you said"
line break
"> what they said"
Looks like:
In Many Worlds Quantum Mechanics, the wave function is fundamental, and the many worlds are a derived consequence. The wave function is time-reversible. Running it backwards, you would see worlds merge together, not the world we currently experience splitting into possible precursors. This asymmetry is due to simple boundary conditions at the beginning of time.
Thanks for the moral support, but I think what I need more is insights and ideas. :) Maybe I'll just stay away from anything meta, or karma related. In retrospect that seems to be what got me into trouble recently.
My general strategy is to say what I think, moderated slightly by the desire to avoid major negative karma (I hold back on the most offensive responses that occur to me). On average I get positive karma. If my karma started to trend downwards I'd consider revising my tone but I don't think it is productive to worry about the occasional downvote. In fact, without the occasional downvote I would worry that I wasn't adding anything to the conversation.
I may be stretching the openness of the thread a little here, but I have an interesting mechanical engineering hobbyist project, and I have no mechanical aptitude. I figure some people around here might, and this might be interesting to them.
The Avacore CoreControl is a neat little device, based on very simple mechanical principles, that lets you exercise for longer and harder than you otherwise could, by cooling down your blood directly. It pulls a slight vacuum on your hand, and directly applies ice to the palm. The vacuum counteracts the vasoconstriction... (read more)
House spouse doesn't have to be a mediocre life. In fact, it could more or less be the best 'job' ever. It's like a tenured professorship where you actually get to study and research whatever you want!
Huh. I hadn't thought of it before, but I'm going to have to add house spouse to my list of acceptable future paths.
We all know politics is the mind-killer, but it sometimes comes up anyway. Eliezer maintains that it is best to start with examples from other perspectives, but alas there is one example of current day politics which I do not know how to reframe: the health care debate.
As far as I can tell, almost every provision in the bill is popular, but the bill is not. This seems to be primarily because Republicans keep lying about it (I couldn't find a good link, but there was a clip on The Daily Show of Obama saying "I can't find a reputable economist who agre... (read more)
How much more grad school do you have to go to your degree? This sounds like a profile of a teacher at some level, probably high school or college. The degree makes college an option. High school teaching may be more enjoyable for you; I don't know.
If you're a year away from your PhD, it probably makes sense to stick it out. If it's three years... three years is a long damn time to be unhappy somewhere.
Whether Singapore is considered "Western" or not is irrelevant. The disagreement was over whether the "economic crisis" forced the current US Government to run up large amount of debt. Singapore shows that not only is it possible to face a global economic crisis without running up large amounts of debt, but that doing so can leave you better off in terms of unemplo... (read more)
Anyone willing to give some uneducated fool a little math coaching? I'm really just starting with math and I probably shouldn't already get into this stuff before reading up more, but it's really bothering me. I came across this page today: http://wiki.lesswrong.com/wiki/Prior_odds
My question: how do you get a likelihood ratio of 11:1 in favor of a diamond? I'm getting this: .88/(.88+7.92)=.1, thus a 10% probability that a beeping box contains a diamond? Since the diamond-detector is 88% likely to beep on that 1 box and 8% likely to beep on the 99 boxes... (read more)
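(The two numbers measure different things. The 11:1 is the likelihood ratio of the evidence alone; the 0.1 you computed is the posterior probability you get after combining that ratio with the 1:99 prior odds:)

```latex
\mathrm{LR} = \frac{P(\text{beep} \mid \text{diamond})}{P(\text{beep} \mid \text{empty})}
            = \frac{0.88}{0.08} = \frac{11}{1}

\text{posterior odds} = \frac{1}{99} \times \frac{11}{1} = \frac{11}{99},
\qquad
P(\text{diamond} \mid \text{beep}) = \frac{11}{11 + 99} = 0.1
```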
Would there be interest in a more general discussion forum for rationalists, or does one already exist? I think it would be useful to test the discussion of politics, religion, entertainment, and other topics without ruining lesswrong. It could attract a wider audience and encourage current lurkers to post.
How about a more reasonable topic to discuss - Corporate Organizational Design for a seastead.
You are starting a seastead with certain ideas on how to make money in the long run. How do you make a structure that is better than present governments or corporations?
Corporate design is much simpler than already present nation design.
Also, a good design emerging from this will theoretically be better than any political design in today's nations, since a seastead by definition starts with a huge economic disadvantage.
Why would LW want to discuss this - A well run corporation might be the closest thing in the present world to a superintelligence.
Let's discuss.
This is sort of off-topic for LW, but I recently came across a paper that discusses Reconfigurable Asynchronous Logic Automata, which appears to be a new model of computation inspired by physics. The paper claims that this model yields linear-time algorithms for both sorting and matrix multiplication, which seems fairly significant to me.
Unfortunately the paper is rather short, and I haven't been able to find much more information about it, but I did find this Google Tech Talks video in which Neil Gershenfeld discusses some motivations behind RALA.
It takes O(n) memory units just to store a list of size n. Why should computers have asymptotically more memory units than processing units? You don't get to assume an infinitely parallel computer, but O(n)-parallel is only reasonable.
My first impression of the paper is: We can already do this, it's called an FPGA, and the reason we don't use them everywhere is that they're hard to program for.
By "public discourse" I did mean things like talking points and media interviews. I'm sure many republicans have extremely intelligent private conversations over policy, e.g. Hank Paulson.
You can compare those, because the large debts weren't caused by the "economic crisis". The fact that most Western nations also ran up debt doesn't mean the economic crisis caused the debt increase, only that they chose the same response to the economic crisis (which probably has more to do with increasing their own discretionary power than with lowering unemployment).
Singapore didn't run up huge levels of debt and has a much lower unemployment level than the countries that did run up debt. They could have chosen otherwise, but didn't.
I've been reading Probability Theory by E.T. Jaynes and I find myself somewhat stuck on exercise 3.2. I've found ways to approach the problem that seem computationally intractable (at least by hand). It seems like there should be a better solution. Does anyone have a good solution to this exercise, or even better, know of collection of solutions to the exercises in the book?
At this point, if you have a complete solution, I'd certainly settle for vague hints and outlines if you didn't want to type the whole thing. Thanks.
Why would determinism have anything to say about indexicals? There aren't any Turing-complete models that forbid indexical uncertainty; you can always copy a program and put the copies in different environments. So I don't see what use such a concept of "determinism" would have.
OK. The way I've understood the problem with Omega is that Omega is a perfect predictor so you have 2 options and 2 outcomes:
you two box --> you get $2,000 ($1000 in each box)
you one box --> you get $1M ($1M in one box, $1000 in the second box)
If Omega is not a perfect predictor, it's possible that you two box and get $1,001,000. (Omega incorrectly predicted you'd one-box.)
However, if you are likely to 2box using this reasoning, Omega will adjust his prediction accordingly (and will even reduce your winnings when you do 1box -- so that you can't b... (read more)
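(A quick expected-value sketch of that adjustment, using the payoffs exactly as stated in this comment and treating Omega's accuracy p as a free parameter - the parameterization is my addition:)

```python
def ev_one_box(p: float) -> float:
    # With prob p Omega correctly predicted one-boxing ($1M in the box);
    # otherwise the boxes were filled for a two-boxer ($1,000 each).
    return p * 1_000_000 + (1 - p) * 1_000

def ev_two_box(p: float) -> float:
    # With prob p Omega correctly predicted two-boxing ($1,000 per box);
    # otherwise the boxes were filled for a one-boxer ($1M + $1,000).
    return p * 2_000 + (1 - p) * 1_001_000

for p in (0.5, 0.6, 0.9, 1.0):
    print(p, ev_one_box(p), ev_two_box(p))
# One-boxing pulls ahead once p exceeds roughly 0.5005.
```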
Occasionally, I feel like grabbing or creating some sort of general proto-AI (like a neural net, or something) and trying to teach it as much as I can, the goal being for it to end up as intelligent as possible, and possibly even Friendly. I plan to undertake this effort entirely alone, if at all.
May I?
It makes your beliefs about coin A more concentrated than your beliefs about coin B.
I wonder if physicists would admit the effect of genealogy on their interpretation of QM?
People who ask physicists their interpretation of QM: next time, if the physicist admits controversy, ask about genealogy and other forms of epistemic luck.
I mean that as part of the specification of the problem, Omega has all the information necessary to determine what you will choose before you know yourself. There are causal arrows that descend from the situation specified by that information to (i) your choice, and (ii) the contents of the box.
You stated that "the game is rigged". The reasoning behind 2-boxing ignores that fact. In common parlance, a rigged game is unwi... (read more)
No. The method's output depends on its input, which by hypothesis is a specification of the situation that includes all the information necessary to determine the output of the individual's decision algorithm. Hence the decision algorithm is a causal antecedent of the contents of the boxes.
Imagine a simple but related scenario that involves no backwards causation:
You're a 12 year old kid, and you know your mom doesn't want you to play with your new Splogomax unless an adult is with you. Your mom leaves you alone for an hour to run to the store, telling you she'll punish you if you play with the Splogomax, and that, whether there's any evidence of it when she returns, she knows you well enough to know if you're going to play with it, although she'll refrain from passing judgement until she has just gotten back from the store.
Assuming you fear... (read more)
It doesn't when I do it.
Even if it doesn't go to the bottom under the default setting, you can choose "Old" from the dropdown menu next to "Sort By" to view comments in chronological order (this preserves threads).
It goes to the bottom. At least, it has in my experience.
I once asked about commenting on old posts. People seemed okay with it.
Cool. Does IRC work for you? I think I still have a client lurking about somewhere...
And I vaguely remember there being an LW channel at one point. Yep: #lesswrong. And there is a nifty web link in the wiki link. Cool.
EDIT: Yeah, I was wondering about the hhhhhhhhf1. I would have guessed a cat.
So I actually have this idea of doing a series (or just a couple) of top level posts about rationality and basketball (or sports in general). I'm partly holding off because I'm worried that the rationality aspects are too basic and obvious and no one else will care about the basketball parts.
But sports are great for talking about rationality because there is never any ambiguity about the results of our predictions and because there are just bucket-loads of data to work with. On the other hand, a surprising amount of irrationality can still be found even in professional leagues where being wrong means losing money.
Anyway, to answer your question: You get two kinds of information from play at the beginning of the game: First, you get information about what the final score will be from the points that have been scored already. So if my team is up 10 points the other team needs to score 11 more points over the remainder of the game in order to win. The less time remaining in the game the more significant this gets. The other kind of information is information about how the teams are playing that day. But if a team is playing significantly better or worse than you would have predicte... (read more)
While reading old posts and looking for links to topics in upcoming drafts I have noticed that the Tags are severely underutilized. Is there a way to request a tag for a particular post?
Example: Counterfactual has one post and it isn't one of the heavy hitters on the subject.
You have the information that in Newcomblike problems, it is better to (already) be inclined to predictably one-box, because the game is "rigged". So, if you (now) become predictably and generally inclined to one-box, you can win at Newcomblike problems if you encounter them in the future. Even if you only ever run into one.
Of course, Omega is imaginary, so it's entirely a thought experiment, but it's interesting anyway!
Hi LessWrongers,
I'm aware that Newcomb's problem has been discussed a lot around here. Nonetheless, I'm still surprised that 1-boxing seems to be the consensus view here, contrary to the consensus view elsewhere. Can someone point to the relevant knockdown argument? (I found Newcomb's Problem and Regret of Rationality but the only argument therein seems to be that 1-boxers get what they want, and that's what makes 1-boxing rational. Now, getting what one wants seems to be neither necessary nor sufficient, because you should get it because of your rational choice,... (read more)
If you have ever suppressed your best judgement on something because you feared the social consequences of not supplicating to the speaker vote this comment up.
I try to cultivate a cheerful attitude, which often projects. It failed me this semester, so I'm abandoning ship. You'll need to rely on thomblake for your philosophy grad student needs.
I might or might not try to resume my studies at a later date, but for now, I'm going to spend a month at the SIAI and see if they want to keep me :)
I just read Outliers and I'm curious -- is there anything that would have taken 10,000 hours in the EEA that would support Gladwell's "rule"? Is there anything else in neurology/our understanding of the brain that would make sense of the idea that this is the amount of practice needed to succeed at something?
To the extent that FAI will depend on the continued exponential growth of computing capacity, I'd say yes.
Graphene transistors promise 100GHz speeds
http://arstechnica.com/science/2010/02/graphene-fets-promise-100-ghz-operation.ars
100-GHz Transistors from Wafer-Scale Epitaxial Graphene
http://www.sciencemag.org/cgi/content/abstract/sci;327/5966/662?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=graphene&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT
Sometimes these labels don't make a lot of sense to the people they're applied to. I've in the past been called a "serious academic", amongst other dubious things.
Here's another one. When reading wikipedia on Chaitin's constant, I came across an article by Chaitin from 1956 (EDIT: oops, it's 2006) about the consequences of the constant (and its uncomputability) on the philosophy of math, that seems to me to just be completely wrongheaded, but for reasons I can't put my finger on. It really strikes the same chords in me that a lot of inflated talk about Godel's Second Incompleteness theorem strikes. (And indeed, as is obligatory, he mentions that too.) I searched on the title but didn't find any refutations. I wonder if anyone here has any comments on it.
In a time-reversible deterministic world, information is gained from observation of stuff that wasn't in contact with you in the past, and logical information is also gained (new knowledge about facts following from the premises -- there is no logical transparency). Analogously, an action can be seen as "splitting", where you part with a prepared action, and the action parts with you, so that you lose knowledge of that action. If you let info split away in this manner, you may never get it back.
I agree with you, but I think it has to do with the way people vote (mainly voting in favor of things they agree with and against things they disagree with), and with which comments are read by whom. In other words, changing the karma system probably is not a way to address it: people have to change their behavior.
What probability do you assign for it being possible to send information backwards in time, over any time scale?
This would be a cool wiki page, "Community predictions."
Did anyone else do this other than MrHen and pjeby? I read the recent comments page pretty thoroughly, and if there were others, I missed them.
Carl, I meant that as soon as RO understands the concept of a simulation, it will want to crack into the basement. It will seek to crack into the basement only when it understands the way out properly, which may not be possible without an understanding of the simulators.
But the main point remains, as soon as RO understands what a simulation is, and it could be living in one and G can be pursued better when it manifests in S2 than in S1, then it will develop an extremely strong sub-goal to crack S1 to go to S2, which might mean that G may not be manifested ... (read more)
How do you get that as being a coincidence? The very same things that make a nation spend prudently are the ones that make it have a reserve fund in the first place! What's America's emergency reserve fund? There isn't one -- just the possibility of borrowing more. (Not necessarily a bad move for a nation with the US's credit rating, but still.)
I bring this up in part because it parallels the differences between US... (read more)
The war in Iraq was the beginning of the end of US hegemony.
Like I said:
A similar way of saying the same thing: change gets easier when debates don't map onto pre-existing signaling narratives. Obviously anything that explicitly threatens religion is going to be a bitch to get through. I don't think critical thinking courses in liberal districts would raise a lot of ire, even if we were giving students tools that, properly applied, would tell them something about their religious beliefs.
What would I hope to accomplish? I would hope we could come up with policy proposals which might be cheap to enact.
My usual response to this question is that the average Democrat is better than the average Republican, but the very best Republicans are better than the very best Democrats. However, given that my model of the "average Democrat" is the average person in the Bay Area, and my model of the "average Republican" is some mix of Fox News wacko and George W. Bush, I'm not sure I should trust this. Does anyone have any anecdotes about Democrats outside of the Bay Area? Republicans?
I was thinking about what general, universal utility would look like. I managed to tie myself into an interesting mental knot.
I started with: Things occurring as intelligent agents would prefer.
If preferences conflict, weight preferences by the intelligence of the preferring agent.
Define intelligent agents as optimization processes.
Define relative intelligences as the relative optimization strengths of the processes.
Define a preference as something an agent optimizes for.
Then, I realized that my definition was a descriptive prediction of events.
Millions of lives saved in Africa through expanded public health.
One questions how meaningful testing done on such a crippled AI would be.
Many-Worlds explained, with pretty pictures.
http://kim.oyhus.no/QM_explaining_many-worlds.html
The story about how I deduced the Many-Worlds interpretation, with pictures instead of formulas.
Enjoy!
Voted down because my writing is confusing or because I said something stupid?
Thanks. Though I'm still highly skeptical, this gives me much more to engage with. This will take me some time to process though, and it might take me a while as I'm preparing for a conference this week.
I recently met someone investigating physics for the first time, and they asked what I thought of Paul Davies' book The Mind of God. I thought I'd post my response here, not because of my views on Davies, but for the brief statement of outlook trying to explain the position from which I'd judge him.
... (read more)

Can you elaborate on this without linking to something like The Simple Truth? Not to say that linking is bad, but I'm more curious about your take on what you said (and that of anyone else who wants to chime in).
(2nd reply)
I'm beginning to come around to your point of view. Omega rewards you for being illogical.
.... It's just logical to allow him to do so.
Patently false.
I disagree on both points.