All of kurige's Comments + Replies

A Less Wrong singularity article?

1) You can summarize arguments voiced by EY.
2) You cannot write a book that will be published under EY's name.
3) Writing a book takes a great deal of time and effort.

You're reading into connotation a bit too much.

0StefanPernar12yIt's called ghostwriting :-) but then again the true value add lies in the work and not in the identity of the author (discarding marketing value in the case of celebrities). I do not think so - am just being German :-) about it: very precise and thorough.
Computer bugs and evolution

10,000-line Perl program.

Ouch.

It's nice to see some programming related content on LW. Thanks.

Deciding on our rationality focus

I would prefer a variation of bullet point number 3:

  • Allow i-rationality discussion, but promote only when it is an application of a related, tightly coupled e-rationality concept.

I am here for e-rationality discussion. It's "cool" to know that deodorant is most effective when applied at night, before I go to bed, but that doesn't do anything to fundamentally change the way I think.

The Strangest Thing An AI Could Tell You

There is a soul. It resides in the appendix. Anybody who has undergone an appendectomy is effectively a p-zombie.

A totalitarian dystopia. Two uniformed officers dragging away a screaming man. “No, you don't understand! I have qualia! I swear!” An older officer tells the younger one, who hesitates for a moment: “Don't pay attention. He had his appendix removed. He's just programmed to say all that stuff as if he's human.”

How to come up with verbal probabilities

Thanks for the examples of how to apply OB/LW techniques to everyday life.

Definitely more articles in this vein would be greatly appreciated.

5Scott Alexander13yAgreed. I especially like technique 1. A related technique is to imagine you're going to be called on it. For example, if I predict there's a 90% chance the economic crisis will be over by 2011, I imagine that the second I say the estimate, Omega will come down from the sky and say whether it is or it isn't. Quite often I find that I'm worrying a bit more than 10% that Omega will announce that I was wrong.
Less Meta

Just to avoid confusing Nominull... This post has now been "promoted", so it does now appear on the front-page, and in RSS feeds.

2JulianMorrison13yYou can get an RSS feed off the "new" page if you want to skip the whole promotion thing entirely.
Escaping Your Past

Epistemic rationality alone might be well enough for those of us who simply love truth (who love truthseeking, I mean; the truth itself is usually an abomination)

What motivation is there to seek out an abomination? I read the linked comment and I disagree strongly... The curious, persistent rationalist should find the truth-seeking process rewarding, but shouldn't it be rewarding because you're working toward something wonderful? Worded another way - of what value is truth-seeking if you hold the very object you seek in contempt?

If you take the strictly c... (read more)

2Z_M_Davis13yBut if you expect the truth to be wonderful, then what do you do when you come across strong evidence for some horrifying hypothesis that makes you want to cry? And if there is no hypothesis that horrifies you, then you really must be a Vulcan ... This is not how I understand the term rationality [http://www.overcomingbias.com/2007/04/feeling_rationa.html]. I find it helpful to keep a strict type distinction: you cut everything untrue out of your beliefs, and fold everything beautiful into your utility function.
1pangloss13yPresumably the position mentioned is simply that one can value truth without valuing particular truths in the sense that you want them to be true. It might be true that an earthquake will kill hundreds, but I don't love that an earthquake will kill hundreds.
1steven046113yYou cannot fix, or kill, what you haven't found. The phrase "truth hunting" might be appropriate. Though if the point is that contempt of the territory does not imply contempt of the map, then I agree.
Proposal: Use the Wiki for Concepts

Eliezer, I don't know if you're familiar with the CIA's Intellipedia, but you seem to have hit the nail on the head.

The CIA has had huge success doing exactly what you describe here. You can read more about it in the paper here. The basic idea is that the intelligence community should harness the synergy of the blog/wiki combo.

From the paper:

The Wiki and the Blog are complimentary companion technologies that together form the core workspace that will allow intelligence officers to share, innovate, adapt, respond, and be—on occasion—brilliant. Blogs will

... (read more)

Thanks for this reference. This concept is what I was getting at at the IRC meetup. The main disagreement with Eliezer's model seems to be that he thinks that the blog posts still have to hold the majority of content, with the wiki only referencing them with very short introductions, whereas I think that the wiki should grow into a thing in itself over time, converting the content of blog posts into wiki articles. Naturally, articles should be organized in a zoom-in manner, with a few-sentence summary, then a couple-paragraph introduction, and only then full-lengt... (read more)

Well-Kept Gardens Die By Pacifism

The karma system is an integral part of the Reddit base code that this site is built on top of. It's designed to do one thing - increase the visibility of good content - and it does that one thing very well.

I agree, though, that there is untapped potential in the karma system. Personally I would love to see - if not by whom - at least when my comments are up/down voted.

3MrHen13yAh, that is good to remember. This seems to tilt my problem further toward fitting a square peg into the round hole. I guess that would be my own fault. :(
Individual Rationality Is a Matter of Life and Death

Also, for it to be an unbiased comparison, the two statements "smart cars for all" and "cryopreservation for only the people who actually died that year" should be limited to the same domain.

If you compare different sets, one substantially larger than the other, then of course cryo is going to be cheaper!

A more balanced statement would be: "buying smart cars to save the lives of only the people who would have otherwise died by car accident in any given year would probably cost less than cryo-surance for the same set of people."

Plus you don't die. Which, for me, is preferable.

Cached Selves

Great post.

Here's some additional reading that supports your argument:

Distract yourself. You're more honest about your actions when you can't exert the mental energies necessary to rationalize your actions.

And the (subconscious) desire to avoid appearing hypocritical is a huge motivator.

I've noticed this in myself often. I faithfully watched LOST through the third season, explaining to my friends who had lost interest around the first season that it was, in fact, an awesome show. And then I realized it kind of sucked.

You're Calling *Who* A Cult Leader?

Picture of Eliezer in monk's robes (That is you, right?), stories about freemason-esque rituals, specific vocabulary with terms like, "the Bayesian conspiracy".

It's all tongue in cheek, and I enjoy it. But if you're trying to not look like a cult, then you're doing it wrong.

0PrometheanFaun8yI disagree. I think it's so easy for a community with widespread, genuine conviction as to their shared radicles to look like a cult, that, well, anyone willing to go through the rather extreme rigors of preventing anyone from seeing you as cult-like.. methinks they protest too much. I say we are- though far from being a cult- cultlike. We are weird, and passionate, and that's all it takes.
Individual Rationality Is a Matter of Life and Death

In the modern world, karate is unlikely to save your life. But rationality can.

The term "bayesian black-belt" has been thrown around a number of times on OB and LW... this, in my mind, seems misleading. As far as I can tell there are two ways in which bayesian reasoning can be applied directly: introspection and academia. Within those domains, sure, the metaphor makes sense... in meatspace life-and-death situations? Not so much.

"Being rational" doesn't develop your quick-twitch muscle fibers or give you a sixth sense.

Perhaps, where ... (read more)

8patrissimo13yBy rationality I am not referring to bayesian reasoning. I simply mean making correct decisions even when (especially when) one's hardwired instincts give the wrong answer. In the first case, I should not have driven. In the second case, I should have told the driver to be more careful. In both cases, I made serious mistakes in life-or-death situations. I call that irrational, and I seek to not replicate such mistakes in the future. You are welcome to call it "common sense" if you prefer. "Common sense" is rather a misnomer, in my opinion, considering how uncommon a quality it is. But I really don't care what it is called. I simply mean, making better decisions, screwing up less, being less of a monkey and more of a human. I find it baffling that people don't find it blindingly obvious that this is one of the most important skills to develop in life.
Counterfactual Mugging

Thank you. Now I grok.

So, if this scenario is logically inconsistent for all values of 'me' then there really is nothing that I can learn about 'me' from this problem. I wish I hadn't thought about it so hard.

0[anonymous]13yLogically inconsistent for all values of 'me' that would hand over the $100. For all values of 'me' that would keep the $100, it is logically consistent but rather obfuscated. It is difficult to answer a multiple choice question when considering the correct answer throws null.
Counterfactual Mugging

That's not the situation in question. The scenario laid out by Vladimir_Nesov does not allow for an equal probability of getting $10000 and paying $100. Omega has already flipped the coin, and it's already been decided that I'm on the "losing" side. Join that with the fact that me giving $100 now does not increase the chance of me getting $10000 in the future because there is no repetition.

Perhaps there's something fundamental I'm missing here, but the linearity of events seems pretty clear. If Omega really did calculate that I would give him the... (read more)

5Nebu13yI don't see this situation is impossible, but I think it's because I've interpreted it differently from you. First of all, I'll assume that everyone agrees that given a 50/50 bet to win $10'000 versus losing $100, everyone would take the bet. That's a straightforward application of utilitarianism + probability theory = expected utility, right? So Omega correctly predicts that you would have taken the bet if he had offered it to you (a real no brainer; I too can predict that you would have taken the bet had he offered it). But he didn't offer it to you. He comes up now, telling you that he predicted that you would accept the bet, and then carried out the bet without asking you (since he already knew you would accept the bet), and it turns out you lost. Now he's asking you to give him $100. He's not predicting that you will give him that number, nor is he demanding or commanding you to give it. He's merely asking. So the question is, do you do it? I don't think there's any inconsistency in this scenario regardless of whether you decide to give him the money or not, since Omega hasn't told you what his prediction would be (though if we accept that Omega is infallible, then his prediction is obviously exactly whatever you would actually do in that situation).
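Nebu's premise above - "utilitarianism + probability theory = expected utility" - can be checked with a quick sketch. The $10,000/$100 stakes come from the scenario in the post; the helper function and the fair-coin probability of 0.5 are just a minimal illustration of why any expected-utility maximizer accepts the bet before the coin is tossed:

```python
def expected_value(p_win, win, lose):
    """Expected utility of a simple two-outcome bet."""
    return p_win * win + (1 - p_win) * lose

# Omega's bet, evaluated before the coin toss: a fair coin,
# win $10,000 on one side, lose $100 on the other.
bet = expected_value(0.5, 10_000, -100)
print(bet)  # 4950.0
```

Since the expected value is positive ($4,950), the pre-toss agent takes the bet; the whole disagreement in this thread is about what that implies once you already know the coin came up against you.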
5MBlume13yOmega hasn't told you his predictions in the given scenario.

I feel like a man in an Escher painting, with all these recursive hypothetical mes, hypothetical kuriges, and hypothetical omegas.

I'm saying, go ahead and start by imagining a situation like the one in the problem, except it's all happening in the future -- you don't yet know how the coin will land.

You would want to decide in advance that if the coin came up against you, you would cough up $100.

The ability to precommit in this way gives you an advantage. It gives you half a chance at $10000 you would not otherwise have had.

So it's a shame that in the prob... (read more)

1[anonymous]13yThat's absolutely true. In exactly the same way, if the Omega really did calculate that I wouldn't give him the $100 then either he miscalculated, or this situation cannot actually occur. The difference between your counterfactual instance and my counterfactual instance is that yours just has a weird guy hassling you with a deal you want to reject, while my counterfactual is logically inconsistent for all values of 'me' that I identify as 'me'.
Counterfactual Mugging

Can you please explain the reasoning behind this? Given all of the restrictions mentioned (no iterations, no possible benefit to this self) I can't see any reason to part with my hard earned cash. My "gut" says "Hell no!" but I'm curious to see if I'm missing something.

I work on AI. In particular, on decision systems stable under self-modification. Any agent who does not give the $100 in situations like this will self-modify to give $100 in situations like this. I don't spend a whole lot of time thinking about decision theories that are unstable under reflection. QED.

7[anonymous]13yHere is one intuitive way of looking at it: Before tossing the coin, the Omega perfectly emulates my decision making process. In this emulation he tells me that I lost the coin toss, explains the deal and asks me to give him $100. If this emulated me gives up the $100 then he has a good chance of getting $10,000. I have absolutely no way of knowing whether I am the 'emulated me' or the real me. Vladimir's specification is quite unambiguous. I, me, the one doing the deciding right now in this real world, am the same me as the one inside the Omega's head. If the emulation is in any way different to me then the Omega isn't the Omega. The guy in the Omega's head has been offered a deal that any rational man would accept, and I am that man. So, it may sound stupid that I'm giving up $100 with no hope of getting anything back. But that's because the counterfactual is stupid, not me.

There are various intuition pumps to explain the answer.

The simplest is to imagine that a moment from now, Omega walks up to you and says "I'm sorry, I would have given you $10000, except I simulated what would happen if I asked you for $100 and you refused". In that case, you would certainly wish you had been the sort of person to give up the $100.

Which means that right now, with both scenarios equally probable, you should want to be the sort of person who will give up the $100, since if you are that sort of person, there's half a chance you'll get $10000.

If you want to be the sort of person who'll do X given Y, then when Y turns up, you'd better bloody well do X.

How to Not Lose an Argument

This post goes hand in hand with Crisis of Faith. Eliezer's post is all about creating an internal crisis and your post is all about applying that to a real world debate. Like peanut-butter and jelly.

If you want to correct and not just refute then you cannot bring to the table evidence that can only be seen as evidence from your perspective. I.e., you cannot directly use evolution as evidence when the opposing party has no working knowledge of evolution. Likewise, a Christian cannot convince an atheist of the existence of God by talking about the wonders of ... (read more)

5[anonymous]13yThis is a critical point. One of the reasons arguments seem to exist at all - from what I can understand - is that people look at the same things in different ways, effectively seeing two different things. A Christian might look at the world and see the wonder of God's creation, but a physicist might see nothing but billions and billions of tiny particles interacting. Someone pro-life might see an abortion as a murder, while someone pro-choice might see it as part of a woman's right to her own body. You need to frame the argument so both parties are looking at the same thing for any progress to be made. Otherwise, people just become more and more entrenched in their position, while getting more and more frustrated that the other person doesn't see it their way.
8thomblake13yFor our European readers, I would like to note that what kurige meant by 'Like peanut-butter and jelly' was something like 'they go really well together, and in fact one would probably not put one on a sandwich without the other'. Just try not to picture it; you'll be fine.
Never Leave Your Room

There is an excellent example of "priming" the mind here.

The idea is that specific prior knowledge drastically changes the way we process new information. You listen to a sine-wave modulated recording that is initially unintelligible. You then listen to the original recording. You are now primed. Listen again to the modulated recording and suddenly the previously unintelligible recording is clear as day.

I first listened to all of the samples on December 8th, when the link was posted on kottke.org. If I'm not mistaken that means it's been exactly ... (read more)

0Regex6yOf the five recordings on that page I was able to figure out three without listening to the clear speech.
2gjm13yHmm. I found that with all of those I could make out at least some, and in some cases all, of the words in the sine-wave versions without the "priming" original recording. The very first example on that page was pretty much perfectly clear to me on the first listening; others were more work. Then, for the ones where I hadn't been able to make out all the words in the sine-wave version, I listened to the "primer" and tried again with the sine waves. In each case, I found that I could then recognize the words I'd found unclear before, but I wouldn't say they were "clear as day" or anything like; more like "well, OK, I suppose it could be interpreted that way". The effect of priming, for me, therefore appears to be very small. I tried the first pair on my wife, and her response appears to be the canonical one. Obviously I'm just strange. Anyone else have the same experience as me?
3MichaelGR13yThanks for the link, very interesting indeed. In my case, though, I could hear a few words the first time I listened to the sine-wave modulated version. It became much clearer after listening to the primer, though. Sadly, once you've heard the primer, you can't really go back to hearing it the way you heard it the first time, so you can't compare back to back. It's a bit like "hidden" messages in songs; once you hear them, it's very hard to revert back to hearing the original lyrics.
The Most Important Thing You Learned

Just did a quick search of this page and it didn't turn up... so, by far, the most memorable and referred-to post I've read on OB is Crisis of Faith.

3AnnaSalamon13yDid practicing the Crisis of Faith technique cause you to change your mind about anything?
Don't Believe You'll Self-Deceive

His attribution of Orwellian doublethink to himself is far more confusing. I have no idea what to make of that. Maybe your advice in this post is on point there. But the "absolutely zero effect" quote seems unobjectionable.

From the original comment:

One thing I've come to realize that helps to explain the disparity I feel when I talk with most other Christians is the fact that somewhere along the way my world-view took a major shift away from blind faith and landed somewhere in the vicinity of Orwellian double-think.

I don't have the origina... (read more)

1Amanojack12yIn other words, it seems you meant "doublethink" in the collective sense based on traditional sentiment, rather than in the actual sense of a logical contradiction between any one specific religious tenet A and any one specific scientific theory B. If there are no actual contradictions, "doublethink" was just an (unfortunate) turn of phrase and there is nothing to be reconciled.
4Tyrrell_McAllister13yOkay, so, when you say that you engage in "doublethink", do you mean that you simultaneously hold two beliefs that are currently "unreconciled", and which you don't yet know how to reconcile, but which you believe can yet be reconciled? If that's right, then I would be curious to know more about this "unreconciled" relation. Can you give other example of pairs of "unreconciled" beliefs that you hold?
Don't Believe You'll Self-Deceive

I don't mean to seem like I'm picking on Kurige, but I think you have to expect a certain amount of questioning if you show up on Less Wrong and say:

One thing I've come to realize that helps to explain the disparity I feel when I talk with most other Christians is the fact that somewhere along the way my world-view took a major shift away from blind faith and landed somewhere in the vicinity of Orwellian double-think.

I realize that my views do not agree with the large majority of those who frequent LW and OB - but I'd just like to take a moment to... (read more)

The Mystery of the Haunted Rationalist

No, my experience with alone/together situations is quite different.

I usually don't laugh when I'm watching a funny movie by myself and, although I might flinch during jump scenes, I don't normally find scary movies to be all that scary when I watch them by myself.

There are hotels that tout themselves as "haunted hotels" and even bring in teams of "ghost experts" to get an official certificate proudly declaring the amount and type of "haunting" taking place at that location.

If it's known to be a joke, then sure, it's all fun a... (read more)

2NancyLebovitz11yThat's pretty much how I see it-- if you've spent your life in a culture where it's common to believe in dangerous ghosts, your default reactions will be affected, and likewise if you've spent your life in a culture where it's common to believe in fan death. I bet it doesn't even take a lifetime-- I'm expecting something like five or ten years. I think a lot of emotional reactions are picked up from images and body language.

You have to be careful when dismissing subconscious fears as irrational. They were put there for a reason, and they may still be relevant. If I were staying in a "haunted house" in a city where it was not isolated or abandoned or anything, I don't think it'd scare me one bit. A secluded/abandoned haunted house might be scary, and for good reasons. It would be unwise to assume that your fear is entirely irrational.

I went to a local park with some friends one night to hang out. Both I and another friend were uneasy about it, but dismissed our fears ... (read more)

Moore's Paradox

If you're reading this, Kurige, you should very quickly say the above out loud, so you can notice that it seems at least slightly harder to swallow - notice the subjective difference - before you go to the trouble of rerationalizing.

There seems to be some confusion here concerning authority. I have the authority to say "I like the color green." It would not make sense for me to say "I believe I like the color green" because I have first-hand knowledge concerning my own likes and dislikes and I'm sufficiently confident in my own menta... (read more)

"I chose to believe in the existence of God - deliberately and consciously."

I cannot conceive of how it is possible to deliberately and consciously choose to believe in something.

I grew up in a religious family. I served as a missionary for my church. I married a woman of the same religion. For most of my first 28 years I believed not only that there was a God but that he had established the church of which I and my family were all members.

But once I started examining my beliefs more closely, I realized that I was engaging in the most dishonest sort of special pleading in their favor. And then it was no longer possible to continue believing.

3mark_spottswood13yYou do not cause yourself to like the color green merely by saying that you do. You are describing yourself, but the act of description does not make the description correct. You could speak falsely, but doing so would not change your preferences as to color. There are some sentence-types that correspond to your concept of "authority." If I accept your offer to create a contract by saying, "we have a contract," I have in fact made it so by speaking. Likewise, "I now pronounce you man and wife." See J.L. Austin's "How to Do Things With Words" for more examples of this. The philosophy of language term for talking like this is that you are making a "performative utterance," because by speaking you are in fact performing an act, rather than merely describing the world. But our speech conventions do not require us to speak performatively in order to make flat assertions. If it is raining outside, I can say, "it is raining," even though my saying so doesn't make it so. I think the mistake you are making is in assuming that we cannot assert that something is true unless we are 100% confident in our assertion.
1topynate13yI presume that you use the Higgs boson example because the boson hasn't been experimentally observed? In other words, the Higgs boson is an example where the evidence for existence is from reasoning to the most likely hypothesis, i.e. abduction. If your belief in God is similar, that means you adopt the hypothesis that God exists because it better explains the available data. The physicist, of course, has access to much stronger evidence than abduction, for instance the LHC experiments, and will give much more weight to such evidence. That's an example of induction, which is key to hypothesis confirmation. Once the LHC results are in, the physicist fully expects to be saying either "the Higgs boson exists" or "the Higgs boson doesn't exist, or if it does it isn't the same thing we thought it was". However, he may well expect with 95% probability to be saying the former and not the latter. I propose that you hesitate to say X when you have no inductive evidence that X. I also venture that in the case of the proposition "God exists", your belief is qualitatively different from that of pre-modern Christians, in that you are less likely to accept 'tests' of God's existence as valid. The medieval church, for instance, thought heliocentrism was heretical, in that it explicitly contradicted Christianity. This amounts to saying that a proof that the Earth orbits the Sun would be a disproof of Christianity, whereas I don't believe that you would see any particular material fact as evidence against God's existence.

Correct me if I'm wrong, but from a Bayesian perspective the only difference between first-hand knowledge and N-th hand knowledge (where N>1) is the numbers. There is nothing special about first-hand.

Suppose you see a dog in the street, and formulate this knowledge to yourself. What just happened? Photons from the sun (or other light sources) hit the dog, bounced, hit your eye, initiated a chemical reaction, etc. Your knowledge is neither special nor trivial, but is a chain of physical events.

Now, what happens when your friend tells you he sees a dog? ... (read more)
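A minimal sketch of the point above, with made-up reliability numbers for the two channels (your own eyes vs. your friend's report) - both cases run through exactly the same Bayes' rule, and only the likelihoods differ:

```python
def posterior(prior, p_report_given_dog, p_report_given_no_dog):
    """P(dog | report) via Bayes' theorem for a two-hypothesis case."""
    numerator = p_report_given_dog * prior
    return numerator / (numerator + p_report_given_no_dog * (1 - prior))

prior = 0.1  # illustrative base rate of a dog being in the street

# First-hand: your visual system is a very reliable channel (assumed numbers).
seen = posterior(prior, 0.99, 0.01)   # ~0.917

# Second-hand: your friend's report is a noisier channel (assumed numbers).
told = posterior(prior, 0.90, 0.10)   # 0.5

print(round(seen, 3), round(told, 3))
```

The structure of the update is identical in both cases; the first-hand channel just carries a stronger likelihood ratio, which is the "nothing special about first-hand, only the numbers" claim made concrete.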

Does anyone have a good model of what people in fact do, when they talk about "choosing" a particular belief? At least two possibilities come to mind:

(1) Choosing to act and speak in accordance with a particular belief.

(2) Choosing to "lie" to other parts of one's mind -- to act and speak in accordance with a particular belief internally, so that one's emotional centers, etc., get at least some of their inputs "as though" one held that belief.

Is "choosing to trust someone" any more compatible with lack of self-dec... (read more)

6Erik13yIs it harder for you to say "Evidence indicates that God exists" than for you to say "I believe God exists"? Just curious, it's a bit of a pet theory of mine. If you don't want to expend energy just to provide another data point for me, no hard feelings. If you would be really kind, you could try to indicate how comfortable you are with different qualifiers jimrandomh gave.

The scientist who says "according to our model M, the Higgs boson should exist" has, as his actual beliefs, a wider distribution of hypotheses than model M. He thinks model M could be right, but he is not sure -- his actual beliefs are that there's a certain probability of {M and Higgs bosons}, and another probability of {not M}.

Is something analogous true for your belief in God? I mean, are you saying "There's this framework I believe in, and, if it's true, then God is true... but that framework may or may not be true?"

Moore's Paradox

Weasel words, as you call them, are a necessary part of any rational discussion. The scientific equivalent would be, "evidence indicates" or "statistics show".

9mark_spottswood13yOn this we agree. If we have 60% confidence that a statement is correct, we would be misleading others if we asserted that it was true in a way that signalled a much higher confidence. Our own beliefs are evidence for others, and we should be careful not to communicate false evidence. Stripped down to essentials, Eliezer is asking you to assert that God exists with more confidence than it sounds like you have. You are not willing to say it without weasel words because to do so would be to express more certainty than you actually have. Is that right?
3Johnicholas13yCan you offer any evidence that weasel words are necessary to rational discussion? I can imagine that weasel words are common to scientific discussions, as well as discussions regarding faith. However, I don't see any barriers to people eschewing them.

I'm afraid I must disagree kurige, for two reasons. The first is that they smack of false modesty, a way of insuring yourself against the social consequences of failure without actually taking care not to fail. The second is that the use of such terms doesn't really convey any new information, and requires the use of the passive voice, which is bad style.

"Evidence indicates an increase in ice cream sales" really isn't good science writing, because the immediate question is "What evidence?". It's much better to say "ice cream sales have increased by 15%" and point to the relevant statistics.

Teaching the Unteachable

I was once told that half of Nobel laureates were the students of other Nobel laureates. ... Even after discounting for cherry-picking of students and political pull, this suggests to me that you can learn things by apprenticeship - close supervision, free-form discussion, ongoing error correction over a long period of time - that no Nobel laureate has yet succeeded in putting into any of their many books.

What is it that the students of Nobel laureates learn, but can't put into words?

You can't put mentorship in a book. When I face a problem that may... (read more)

Information cascades

In other words, be aware that popularity breeds popularity.

No, Really, I've Deceived Myself

I just looked it up, and it looks like you were correct about the bonobos. Should have said "Pan prior".

No, Really, I've Deceived Myself

This I can understand.

I am a Protestant Christian and your friend's experience with "belief" is similar to mine. Or seems to be, from what I gather in your post.

One thing I've come to realize that helps to explain the disparity I feel when I talk with most other Christians is the fact that somewhere along the way my world-view took a major shift away from blind faith and landed somewhere in the vicinity of Orwellian double-think.

The double-think comes into play when you're faced with non-axiomatic concepts such as morality. I believe that there i... (read more)

3Estarlio9yDo you have a list?
-3Dojan9yUp-voted for honesty.
-6less_schlong13y
1[anonymous]13yMeta-comment: I up-voted this comment and James Andrix's comment because they're good data, I'm glad they shared it, and it looks like stuff more eyes should look at within the thread. But I wish "up-voting" didn't give the appearance of agreement. (I'm hoping practical discussion of what to do with votes is okay to keep in relevant threads, in the early stages of LW's community formation?)