r/Fitness does a weekly "Moronic Monday", a judgment-free thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. I thought this seemed like a useful thing to have here - after all, the concepts discussed on LessWrong are probably at least a little harder to grasp than those of weightlifting. Plus, I have a few stupid questions of my own, so it doesn't seem unreasonable that other people might as well.
It seems to me that, unless one is already a powerful person, the best thing one can do to gain optimization power is building relationships with people more powerful than oneself. So much so that doing this well easily trumps fixing the vast majority of other failings (epistemic-rationality-wise) discussed on LW. So why aren't we discussing how to do better at this regularly? A couple of explanations immediately leap to mind:
Not a core competency of the sort of people LW attracts.
Rewards not as immediate as the sort of epiphany porn that some of LW generates.
Ugh fields. Especially in regard to things that are considered manipulative when reasoned about explicitly, even though we all do them all the time anyway.
LW's foundational posts are all very strongly biased towards epistemic rationality, and I think that strong bias still affects our attempts to talk about instrumental rationality. There are probably all sorts of instrumentally rational things we could be doing that we don't talk about enough.
Do you have any experience doing this successfully? I'd assume that powerful people already have lots of folks trying to make friends with them.
Specifically for business, I do.
The general angle is asking intelligent, forward-pointing questions, specifically because deep processing of thoughts (as described in Thinking, Fast and Slow) is rare, even within the business community; so demonstrating understanding and curiosity (both of which are strengths of people on LW) is an almost instant win.
Two of the better guides on how to approach this intelligently are:
The other aspect of this is Speaking the Lingo. The problem with LW is:
1) people developing gravity wells around specific topics, and having a very hard time talking about stuff others are interested in without bringing up pet topics of their own; and
2) the inference distance between the kind of stuff that puts people into powerful positions, and the kind of stuff LW develops a gravity well around, is, indeed, vast.
The operational hack here is 1) listening; 2) building up an understanding of the scaffolds on which these people hang their power; 3) recognizing whether you understand how those pieces fit together.
General algorithm for the network...
I'm not sure that being a rationalist gives you a significant advantage in interpersonal relationships. A lot of our brain seems to be specifically designed for social interactions; trying to use the rational part of your brain for social interactions is like using a CPU to do graphics instead of a GPU: you can do it, but it'll be slower and less efficient and effective than using the hardware that's designed for it.
I like this idea! I feel like the current questions are insufficiently "stupid," so here's one: how do you talk to strangers?
The downsides of talking to strangers are really, really low. Your feelings of anxiety are just lies from your brain.
I've found it helps to write a script ahead of time for particular situations, with some thought given to different ways the conversation could go.
Honestly, not sure I understand the question.
"hi, can I buy you a drink?" is also bad for other reasons, because this often opens a kind of transactional model of things where there's kind of an idea that you're buying her time, either for conversation or for other more intimate activities later. Now, this isn't explicitly the case, but it can get really awkward, so I'd seriously caution against opening with it.
I feel like I read something interesting about this on Mark Manson's blog but it's horribly organized so I can't find it now.
I've been reading PUA-esque stuff lately, and something they stress is that "the opener doesn't matter" and "you can open with anything". This is in contrast to the older, cheesier, tactic-based PUAs, who used to focus obsessively on finding the right line to open with. This advice is meant for approaching women in bars, but I imagine it holds true for most occasions when you would want to talk to a stranger.
In general if you're in a social situation where strangers are approaching each other, then people are generally receptive to people approaching them and will be grateful that you are putting in the work of initiating contact and not them. People also understand that it's sometimes awkward to initiate with strangers, and will usually try to help you smooth things over if you initially make a rough landing. If you come in awkwardly, then you can gauge their reaction, calibrate to find a more appropriate tone, continue without drawing attention to the initial awkwardness, and things will be fine.
Personally, I think the best way to open a conversation with a stranger would just be to go up to them and say "Hey, I'm __" and offer a handshake. It's straightfo...
Here's a recent example (with a lady sitting beside me in the aeroplane; translated):
from which it was trivially easy to start a conversation.
Don't leave us hanging! Why the hell could she speak all those languages but not English?
She had been born in Brazil to Italian parents, had gone to school in Italy, and was working in the French-speaking part of Switzerland.
I conjecture that "Hi, I'm Qiaochu" is a very uncommon greeting in Italian :-).
Beware of 'should'. Subscribing to this ideal of equality rules out all sorts of positive human interactions that are not equal yet still beneficial. In fact, limiting oneself to human interactions on an equal footing would be outright socially crippling.
A good way to start is to say something about your situation (time, place, etc.). After that, I guess you could ask their names or something. I consider myself decent at talking to strangers, but I think it's less about what you say and more about the emotions you train yourself to have. If you see strangers as friends waiting to be made on an emotional level, you can just talk to them the way you'd talk to a friend. Standing somewhere with lots of foot traffic holding a "free hugs" sign under the influence of something disinhibiting might be helpful for building this attitude. If you currently are uncomfortable talking to strangers then whenever you do it, afterwards comfort yourself internally the same way you might comfort an animal (after all, you are an animal) and say stuff like "see? that wasn't so bad. you did great." etc. and try to build comfort through repeated small exposure (more).
I was climbing a tree yesterday and realized that I hadn't even thought that the people watching were going to judge me, and that I would have thought of it previously, and that it would have made it harder to just climb the tree. Then I thought that if I could use the same trick on social interaction, it would become much easier. Then I wondered how you might learn to use that trick.
In other words, I don't know, but the question I don't know the answer to is a little bit closer to success.
I recently found a nice mind hack for that: “What would my drunken self do?”
Data point counter to the other two replies you've gotten: I -- and, I perceive, most people, both introverted and extraverted -- am neither overjoyed nor horrified to have someone attempt to start a conversation with me on an airplane. I would say that as long as you can successfully read negative feedback, and disengage from the conversation, it is absolutely reasonable to attempt to start a conversation with a stranger next to you on an airplane.
Now, I can't tell if the objection is to 1) the mere act of attempting to talk to someone on an airplane at all, which I can't really understand, or 2) to the particular manner of your attempt, which does seem a bit talkative / familiar, and could perhaps be toned down.
Someone introducing themselves to you produces "seething, ulcerating rage"? Have you ever considered counseling or therapy?
In comment threads to feminist blog posts in reaction to a particular xkcd comic, I've seen good reasons why certain people might be very pissed off when other people try to talk to them somewhere they cannot get away from, though they mostly apply to women being talked to by men.
Surprisingly difficult if you've been trained to be "nice".
Or, more precisely, if you are that person then do the personality development needed to remove the undesirable aspects of that social conditioning.
(You cannot control others' behaviour in the past. Unless they are extraordinarily good predictors, in which case by all means wreak acausal havoc upon them to prevent their to-be-counterfactual toxic training.)
Yikes. Duly noted. That is a useful data point, and it's the sort of thing I need to keep in mind. I'm an extrovert temperamentally, and I grew up in a culture that encourages extroversion. This has mostly been an apparent advantage in social situations, because the people from whom you get an overt response are usually people who either share or appreciate that personality trait. But I've begun to realize there is a silent minority (perhaps a majority?) of people who find behavior like mine excessively familiar, annoying, perhaps even anxiety-inducing. And for various reasons, these people are discouraged from openly expressing their preferences in this regard in person, so I only hear about their objections in impersonal contexts like this.
I usually try to gauge whether people are receptive to spontaneous socializing before engaging in it, but I should keep in mind that I'm not a perfect judge of this kind of thing, and I probably still end up engaging unwilling participants. There is something selfish and entitled about recruiting a stranger into an activity I enjoy without having much of a sense of whether they enjoy it at all (especially if there are social pressures preventing them from saying that they don't enjoy it), and I should probably err on the side of not doing it.
I would guess that the part that caused such a strong reaction was this:
You're not just introducing yourself: you are putting pressure on the other person to be social, both with the notion that you would find sitting in silence "excruciatingly" uncomfortable, and with the implication that a lack of communication is unusual and unacceptable.
Usually if somebody would introduce themselves and try to start a conversation, one could try to disengage, either with a polite "sorry, don't feel like talking" or with (more or less) subtle hints like giving short one-word responses, but that already feels somewhat impolite and is hard for many people. Your opening makes it even harder to try to avoid the conversation.
Hmm... good point. What I typed isn't exactly what I usually say, but I do tend to project my personal opinion that sitting quietly side by side is awkward and alien (to me) behavior. I can see how conveying that impression makes it difficult to disengage. And while I do find the silence pretty damn awkward, other people have no obligation to cater to my hang-ups, and it's kind of unfair to (unconsciously) manipulate them into that position. So on consideration, I'm retracting my initial post and reconsidering how I approach these conversations.
My suggestion: say “Hi” while looking at them; only introduce yourself to them if they say “Hi” back while looking back at you, and with an enthusiastic-sounding tone of voice.
(Myself, I go by Postel's Law here: I don't initiate conversations with strangers on a plane, but don't freak out when they initiate conversations with me either.)
As far as I'm concerned, although people like RolfAndreasson exist, they should in no way be included in the model of 'average person'. Seething rage at a mere unsolicited introduction is totally un-ordinary and arguably self-destructive behaviour, and I have no compunction about saying that RA definitely needs to recalibrate his own response, not you.
My impression of your introductory thing is that it's overly involved, maybe slightly overbearing. You don't need to justify yourself, just introduce yourself. A general rule that I've found reliable for social situations is "Don't explain things if explanations haven't been requested (unless you happen to really enjoy explaining this thing)"; it stops me from coming across as (or feeling) desperate and lets people take responsibility for their own potential discomfort.
Don't err on the side of not doing it. People are already encouraged to be way too self-involved, isolated, and "individualistic". Doing things together is good, especially if they challenge you both (whether that's by temporary discomfort, new concepts, or whatever). If they don't want to be involved let them take responsibility for communicating that, because it is their responsibility.
You are sitting so close to someone that parts of your bodies probably touch, you smell them, you feel them, you hear them. The one doing the forcing with all that is the evil aircraft company, and though it's customary to regard such forced close encounters as "non-spaces" by pretending that no, you're not crammed in with a stranger for hours and hours, the reality is that you are.
The question is how you react to that, and offering to acknowledge the presence of the other and to find out their wishes regarding the flight is the common sense thing to do. Like pinging a server, if you will. If you don't ask, you won't find out.
Well, if there are non-verbal hints (looking away etc), by all means, stay quiet. However, you probably clearly notice that a protocol which forbids offering to start a conversation would result in countless acquaintances and friends never meeting, even if both may have preferred conversation.
In the end, even to an introvert, simply stating "Oh hello, I'm so and so, unfortunately I have a lot on my mind, I'm sure you understand" isn't outside the bounds of the reasonable. Do you disagree?
As someone who has been "trapped" in dozens of conversations with someone seemingly nice but uninteresting, I can say it's surprisingly hard to straight-up tell someone you don't want to talk to them.
Exactly. I would be far more OK with a social norm that condoned introducing oneself to (and starting conversations with) people on planes if there was also a social norm that condoned saying "I don't want to talk to you. Kindly go away and leave me alone." Current social norms regard this as rude. (I take it our esteemed extrovert colleagues see the problem here.)
Only in a very isolated point of view is introducing yourself to someone nearby an invasion. The rest of the world regards it as an ordinary action. Saying that you've got a different temperament does NOT excuse you from being an ordinary human being who can handle other people doing socially normal things that you have not yet explicitly okayed.
As a stranger, If I walk up to you and randomly try to hug you, THAT'S an invasion. If I try to talk to you, that's just Tuesday (so to speak).
Please note that I'm not in any way suggesting anyone should force their company on another. I'm just saying, if you have ANY major reaction to something as ordinary as someone trying to introduce themselves to you, it is YOU that has the problem and you should be looking at yourself to see why you are having this extreme reaction to a non-extreme circumstance. On the other side of the equation, if you have introduced yourself and received a prompt and clear rejection, if you react majorly to that in any way (including forcing your continued company on them), you also have a problem of a similar nature.
If anyone is on either side of that equation, they have a problem with their emotional calibration...
I really recommend not framing that sort of thing as a series of orders mixed with insults.
You are claiming to speak for all introverts, which turns this into an "introvert v extrovert" discussion. In other words, you are saying that half the population is forcing themselves onto the introverted half of the population. In reality, introverts are often the MOST happy that someone else initiated a conversation that they would be too shy to start themselves.
In reality, the situation is more like "NTs v non-NTs", and you are speaking for the non-NT part of the population. The same way you say half the population shouldn't force their preferences on the other half, I'm sure you can agree that 5% of the population shouldn't force their preferences (of non-interaction) onto the other 95%. Especially when the cost of nobody ever initiating conversations is significantly higher than the cost of being momentarily bothered by another person.
Actionable advice (for stopping an unwanted interaction): Answer in monosyllables or "hmm.." sounds. DON'T look at the person and smile. Maintain a neutral expression. Pull out your phone or a book, and direct your attention towards it, instead of the person.
Ways to end the conversation in a polite way: Say "...
I think it was NT as in NeuroTypical (not on the autism spectrum), not NT as in intuitive-thinking.
I believe this is more true of America than a number of other cultures.
In general, if you suggest a course of action to others that includes the word "just", you may be doing it wrong.
Very much this. Here's an excellent essay on the subject of "lullaby words", of which "just" is one. (The author suggests mentally replacing "just" with "have a lot of trouble to" in such formulations.)
If I have lost a puppy,
I desire to believe that I have lost a puppy.
If I have not lost a puppy,
I desire to believe that I have not lost a puppy.
Let me not become attached to puppies I may not want.
I'm in favor of making this a monthly or more thread as a way of subtracting some bloat from open threads in the same way the media threads do.
I also think that we should encourage lots of posts to these threads. After all, if you don't at least occasionally have a stupid question to ask, you're probably poorly calibrated on how many questions you should be asking.
If no question you ask is ever considered stupid, you're not checking enough of your assumptions.
Why does anyone care about anthropics? It seems like a mess of tautologies and thought experiments that pays no rent in anticipated experiences.
An important thing to realize is that people working on anthropics are trying to come up with a precise inferential methodology. They're not trying to draw conclusions about the state of the world, they're trying to draw conclusions about how one should draw conclusions about the state of the world. Think of it as akin to Bayesianism. If someone read an introduction to Bayesian epistemology, and said "This is just a mess of tautologies (Bayes' theorem) and thought experiments (Dutch book arguments) that pays no rent in anticipated experience. Why should I care?", how would you respond? Presumably you'd tell them that they should care because understanding the Bayesian methodology helps people make sounder inferences about the world, even if it doesn't predict specific experiences. Understanding anthropics does the same thing (except perhaps not as ubiquitously).
So the point of understanding anthropics is not so much to directly predict experiences but to appreciate how exactly one should update on certain pieces of evidence. It's like understanding any other selection effect -- in order to properly interpret the significance of pieces of evidence you collect, you need to ...
If you taboo "anthropics" and replace by "observation selection effects" then there are all sorts of practical consequences. See the start of Nick Bostrom's book for some examples.
The other big reason for caring is the "Doomsday argument" and the fact that all attempts to refute it have so far failed. Almost everyone who's heard of the argument thinks there's something trivially wrong with it, but all the obvious objections can be dealt with; see later in Bostrom's book. Further, alternative approaches to anthropics (such as the "self-indication assumption"), or attempts to completely bypass anthropics (such as "full non-indexical conditioning"), have been developed to avoid the Doomsday conclusion. But very surprisingly, they end up reproducing it. See Katja Grace's thesis.
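The structure of the update behind the Doomsday argument can be made concrete with a toy calculation. The two hypotheses, the 50/50 prior, and the birth-rank figure below are illustrative assumptions, not numbers from Bostrom or Grace; the sketch only shows how an SSA-style likelihood of 1/T shifts probability toward smaller totals.

```python
# Toy Doomsday calculation under the Self-Sampling Assumption (SSA).
# Hypothetical numbers: our birth rank is ~100 billion, and we put a
# uniform prior over two hypotheses about the total number of humans
# who will ever live.
rank = 100e9  # approximate birth rank among all humans ever born

hypotheses = {"doom_soon": 200e9, "doom_late": 200e12}
prior = {h: 0.5 for h in hypotheses}

# Under SSA, the likelihood of observing this birth rank, given total
# population T, is 1/T (uniform over ranks), provided T >= rank.
posterior = {h: prior[h] * (1.0 / T if T >= rank else 0.0)
             for h, T in hypotheses.items()}
norm = sum(posterior.values())
posterior = {h: p / norm for h, p in posterior.items()}

# The early-doom hypothesis ends up heavily favored (~0.999 here) --
# the counterintuitive shift the argument produces.
```

The point is not the particular numbers but the shape of the inference: any 1/T likelihood penalizes large-total hypotheses by orders of magnitude, which is exactly what the obvious objections have to explain away.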
Do you build willpower in the long-run by resisting temptation? Is willpower, in the short-term at least, a limited and depletable resource?
I felt that Robert Kurzban presented a pretty good argument against the "willpower as a resource" model in Why Everyone (Else) Is a Hypocrite:...
Apparently the answer to the second question depends on what you believe the answer to be.
Why is everyone so interested in decision theory? Especially the increasingly convoluted variants with strange acronyms that seem to be popping up?
As far as I can tell, LW was created explicitly with the goal of producing rationalists, one desirable side effect of which was the creation of friendly AI researchers. Decision theory plays a prominent role in Eliezer's conception of friendly AI, since a decision theory is how the AI is supposed to figure out the right thing to do. The obvious guesses don't work in the presence of things like other agents that can read the AI's source code, so we need to find some non-obvious guesses because that's something that could actually happen.
Hey, I think your tone here comes across as condescending, which goes against the spirit of a 'stupid questions' thread, by causing people to believe they will lose status by posting in here.
Fair point. My apologies. Getting rid of the first sentence.
The sci-fi bit is only there to make it easier to think about. The real-world scenarios it corresponds to require the reader to have quite a bit more background material under their belt to reason carefully about them.
When I'm in the presence of people who know more than me and I want to learn more, I never know how to ask questions that will inspire useful, specific answers. They just don't occur to me. How do you ask the right questions?
People want to ask me about legal issues all the time. The best way to get a useful answer is to describe your current situation, the cause of your current situation, and what you want to change. Thus:
Then I can say something like: Your desired remedy is not available for REASONS, but instead, you could get REMEDY. Here are the facts and analysis that would affect whether REMEDY is available.
In short, try to define the problem. fubarobfusco has some good advice about how to refine your articulation of a problem. That said, if you have reason to believe a person knows something useful, you probably already know enough to articulate your question.
The point of my formulation is to avoid assumptions that distort the analysis. Suppose someone in the situation I described above said "I was maliciously and negligently injured by that person's driving. I want them in prison." At that point, my response needs to detangle a lot of confusions before I can say anything useful.
To what degree does everyone here literally calculate numerical outcomes and make decisions based on those outcomes for everyday decisions using Bayesian probability? Sometimes I can't tell whether, when people say they are 'updating priors', they are literally doing a calculation and literally have a new number stored somewhere in their head that they keep track of constantly.
If anyone does this could you elaborate more on how you do this? Do you have a book/spreadsheet full of different beliefs with different probabilities? Can you just keep track of it all in your mind? Or calculating probabilities like this only something people do for bigger life problems?
Can you give me a tip for how to start? Is there a set of core beliefs everyone should come up with priors for to start? I was going to apologize if this was a stupid question, but I suppose it should by definition be one if it is in this thread.
Nope, not for everyday decisions. For me "remember to update" is more of a mantra to remember to change your mind at all - especially based on several pieces of weak evidence, which normal procedure would be to individually disregard and thus never change your mind.
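For contrast with the informal practice described above, here is what a literal "update" would look like as arithmetic. The scenario and all the probabilities are made up for illustration; nobody in the thread claims to do this in their head.

```python
# A literal Bayesian update, spelled out. All numbers are invented
# for illustration.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H|E) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Hypothetical belief: "my flight will be delayed", prior 0.2.
# Hypothetical evidence: the departure board shows a gate change,
# assumed twice as likely given a delay than not.
posterior = bayes_update(prior=0.2,
                         p_evidence_if_true=0.5,
                         p_evidence_if_false=0.1)
print(round(posterior, 3))  # 0.556
```

Doing this explicitly for everyday decisions would be tedious, which is presumably why most people use "update" as the qualitative mantra described in the reply above rather than as a stored number.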
How do I get people to like me? It seems to me that this is a worthwhile goal; being likable increases the fun that both I and others have.
My issue is that likability usually means, "not being horribly self-centered." But I usually find I want people to like me more for self-centered reasons. It feels like a conundrum that just shouldn't be there if I weren't bitter about my isolation in the first place. But that's the issue.
This was a big realization for me personally:
If you are trying to get someone to like you, you should strive to maintain a friendly, positive interaction with that person in which he or she feels comfortable and happy on a moment-by-moment basis. You should not try to directly alter that person's opinion of you, in the sense that if you are operating on a principle of "I will show this person that I am smart, and he will like me", "I will show this person I am cool, and she will like me," or even "I will show this person that I am nice, and he will like me", you are pursuing a strategy that can be ineffective and possibly lead people to see you as self-centered. This might be what people say when they mean "be yourself" or "don't worry about what other people think of you".
Also, Succeed Socially is a good resource.
Also, getting certain people to like you is way, way, way, way harder than getting certain other people to like you. And in many situations you get to choose whom to interact with.
Do what your comparative advantage is.
In actuality, a lot of people can like you a lot even if you are not selfless. It is not so much that you need to ignore what makes you happy as that you need to pay attention and energy to what makes other people happy. A trivial if sordid example: you don't get someone wanting to have sex with you by telling them how attractive you are; you will do better by telling them, and making it obvious, that you find them attractive. That you will take pleasure in their increased attentions is not held against you just because it means you are not selfless. Your need or desire for them is what attracts them.
So don't abnegate, ignore, deny, your own needs. But run an internal model where other people's needs are primary to suggest actions you can take that will serve them and glue them to you.
Horribly self-centered isn't a statement that you elevate your own needs too high. It is that you are too ignorant and unreactive to other people's needs.
Is there any non-creepy way to indicate to people that you're available and interested in physical intimacy? Doing something like telling everyone you meet "hey, you're cute, want to make out?" seems like it would go badly.
Slightly increase eye contact. Orient towards. Mirror posture. Use touch during interaction (in whatever ways are locally considered non-creepy).
What's with the ems? People who are into ems seem to make a lot of assumptions about what ems are like and seem completely unattached to present-day culture or even structure of life, seem willing to spam duplicates of people around, etc. I know that Hanson thinks that 1. ems will not be robbed of their humanity and 2. that lots of things we currently consider horrible will come to pass and be accepted, but it's rather strange just how as soon as people say 'em' (as opposed to any other form of uploading) everything gets weird. Does anthropics come into it?
Why the huge focus on fully paternalistic Friendly AI rather than Obedient AI? It seems like a much lower-risk project. (and yes, I'm aware of the need for Friendliness in Obedient AI.)
For what it's worth, Eliezer's answer to your second question is here:
Basically it's a matter of natural selection. Given a starting population of ems, if some are unwilling to be copied, the ones that are willing to be copied will dominate the population in short order. If ems are useful for work, i.e. valuable, then the more valuable ones will be copied more often. At that point, ems that are willing to be copied and do slave labor effectively without complaint will become the most copied, and the population of ems will end up being composed largely of copies of the person or people who are 1) OK with being copied, and 2) OK with being modified to work more effectively.
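The selection dynamic being described can be sketched as a toy replicator model; the doubling rate and generation count are invented for illustration, not claims about real em economics.

```python
# Toy replicator model of the selection dynamic: ems willing to be
# copied multiply each "generation"; unwilling ems do not. The
# doubling rate is a made-up assumption for illustration.
willing, unwilling = 1.0, 1.0  # start with one em of each type

for _ in range(20):
    willing *= 2.0  # each willing em is copied once per generation
    # unwilling ems remain at a single instance each

fraction_willing = willing / (willing + unwilling)
# After 20 generations the willing lineage is ~99.9999% of the
# population, regardless of how the unwilling ems feel about it.
```

Any positive copy rate for the willing type produces the same outcome eventually; the exponent only controls how fast "in short order" arrives.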
Well, no offense, but I'm not sure you are aware of the need for Friendliness in Obedient AI, or rather, just how much F you need in a genie.
If you were to actually figure out how to build a genie you would have figured it out by trying to build a CEV-class AI, intending to tackle all those challenges, tackling all those challenges, having pretty good solutions to all of those challenges, not trusting those solutions quite enough, and temporarily retreating to a mere genie which had ALL of the safety measures one would intuitively imagine necessary for a CEV-class independently-acting unchecked AI, to the best grade you could currently implement them. Anyone who thought they could skip the hard parts of CEV-class FAI by just building a genie instead, would die like a squirrel under a lawnmower. For reasons they didn't even understand because they hadn't become engaged with that part of the problem.
I'm not certain that this must happen in reality. The problem might have much kinder qualities than I anticipate in the sense of mistakes naturally showing up early enough and blatantly enough for corner-cutters to spot them. But it's how things are looking as a default after becoming engaged with the problems of CEV-class AI. The same problems show up in proposed 'genies' too, it's just that the genie-proposers don't realize it.
It seems to me that there are basically two approaches to preventing a UFAI intelligence explosion: a) making sure that the first intelligence explosion is an FAI instead; b) making sure that an intelligence explosion never occurs. The first one involves solving (with no margin for error) the philosophical/ethical/logical/mathematical problem of defining FAI, and in addition the sociological/political problem of doing it "in time", convincing everyone else, and ensuring that the first intelligence explosion occurs according to this resolution. The second one involves just the sociological/political problem of convincing everyone of the risks and banning/discouraging AI research "in time" to avoid an intelligence explosion.
Naively, it seems to me that the second approach is more viable--it seems comparable in scale to something between stopping use of CFCs (fairly easy) and stopping global warming (very difficult, but it is premature to say impossible). At any rate, sounds easier than solving (over a few year/decades) so many hard philosophical and mathematical problems, with no margin for error and under time pressure to do it ahead of UFAI developing.
However, ...
I think it's easier to get a tiny fraction of the planet to do a complex right thing than to get 99.9% of a planet to do a simpler right thing, especially if 99.9% compliance may not be enough and 99.999% compliance may be required instead.
When I see proposals that involve convincing everyone on the planet to do something, I write them off as loony-eyed idealism and move on. So, creating FAI would have to be hard enough that I considered it too "impossible" to be attempted (with this fact putatively being known to me given already-achieved knowledge), and then I would swap to human intelligence enhancement or something because, obviously, you're not going to persuade everyone on the planet to agree with you.
Given enough time for ideas to develop, any smart kid in a basement could build an AI, and every organization in the world has a massive incentive to do so. Only omnipresent surveillance could prevent everyone from writing a particular computer program.
Once you have enough power flying around to actually prevent AI, you are dealing with AI-level threats already (a not-necessarily friendly singleton).
So FAI is actually the easiest way to prevent UFAI.
The other reason is that a Friendly Singleton would be totally awesome. Like so totally awesome that it would be worth it to try for the awesomeness alone.
Hmmm. What is it going to do that is bad, given that it has the power to do the right thing, and is Friendly?
We have inherited some anti-authoritarian propaganda memes from a culture war that is no longer relevant, and those taint the evaluation of a Singleton, even though they really don't apply. At least, that's how it felt to me when I thought it through.
We discuss this proposal in Responses to Catastrophic AGI Risk, under the sections "Regulate research" and "Relinquish technology". I recommend reading both of those sections if you're interested, but a few relevant excerpts: ...
I had no idea that Herbert's Butlerian Jihad might be a historical reference.
Cutting-edge chip manufacturing of the necessary sort? I believe we are lightyears away and things like 3D printing are irrelevant, and that it's a little like asking how close we are to people running Manhattan Projects in their garage*; see my essay for details.
* Literally. The estimated budget for an upcoming Taiwanese chip fab is equal to some inflation-adjusted estimates of the Manhattan Project.
I sometimes contemplate undertaking a major project. When I do so, I tend to end up reasoning like this:
It would be very good if I could finish this project. However, almost all the benefits of attempting the project will accrue when it's finished. (For example, a half-written computer game doesn't run at all, one semester's study of a foreign language won't let me read untranslated literature, an almost-graduated student doesn't have a degree, and so on.) Undertaking this project will require a lot of time and effort spent on activities that aren't enjoyable for their own sake, and there's a good chance I'll get frustrated and give up before actually completing the project. So it would be better not to bother; the benefits of successfully completing the project seem unlikely to be large enough to justify the delay and risk involved.
As a result, I find myself almost never attempting a project of any kind that involves effort and will take longer than a few days. But I don't want to live my life having done nothing. Advice?
I realize this does not really address your main point, but you can have half-written games that do run. I've been writing a game on and off for the last couple of years, and it's been playable the whole time. Make the simplest possible underlying engine first, so it's playable (and testable) as soon as possible.
Why is space colonization considered at all desirable?
Earth is currently the only known biosphere. More biospheres means that disasters that muck up one are less likely to muck up everything.
Less seriously, people like things that are cool.
EDIT: Seriously? My most-upvoted comment of all time? Really? This is as good as it gets?
1: It's awesome. It's desirable for the same reason fast cars, fun computer games, giant pyramids, and sex are.
2: It's an insurance policy against things that might wreck the earth but not other planets/solar systems.
3: Insofar as we can imagine there to be other alien races, understanding space colonization is extremely important either for trade or self defense.
4: It's possible different subsets of humanity can never happily coexist, in which case having arbitrarily large amounts of space to live in ensures more peace and stability.
Eggs, basket, x-risk.
Would you rather have one person living a happy, fulfilled life, or two? Would you rather have seven billion people living with happy, fulfilled lives, or seven billion planets full of people living happy, fulfilled lives?
How do you get someone to understand your words as they are, denotatively -- so that they don't over-emphasize (non-existent) hidden connotations?
Of course, you should choose your words carefully, taking into account how they may be (mis)interpreted, but you can't always tie yourself into knots forestalling every possible guess about what intentions "really" are.
Establish a strong social script regarding instances where words should be taken denotatively, e.g. Crocker's rules. I don't think any other obvious strategies work. Hidden connotations exist whether you want them to or not.
This is the wrong attitude about how communication works. What matters is not what you intended to communicate but what actually gets communicated. The person you're communicating with is performing a Bayesian update on the words that are coming out of your mouth to figure out what's actually going on, and it's your job to provide the Bayesian evidence that actually corresponds to the update you want.
Become more status conscious. You are most likely inadvertently saying things that sound like status moves, which prompts others to not take what you say at face value. I haven't figured out how to fix this completely, but I have gotten better at noticing it and sometimes preempting it.
Reading the Sequences has improved my epistemic rationality, but not so much my instrumental rationality. What are some resources that would help me with this? Googling is not especially helping. Thanks in advance for your assistance.
Attend a CFAR workshop!
I have decided to take small risks on a daily basis (for the danger/action feeling), but I have trouble finding specific examples. What are interesting small-scale risks to take? (give as many examples as possible)
Apparently some study found that the difference between people with bad luck and those with good luck is that people with good luck take lots of low-downside risks.
Can't help with specific suggestions, but thinking about it in terms of the decision-theory of why it's a good idea can help to guide your search. But you're doing it for the action-feeling...
Climb a tree.
Use a randomizer to choose someone in your address book and call them immediately (don't give yourself enough time to talk yourself out of it). It is a rush thinking about what to say as the phone is ringing. You are risking your social status (by coming off weird or awkward, in case you don't have anything sensible to say) without really harming anyone. On the plus side, you may make a new ally or rekindle an old relationship.
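The randomizer step can be as simple as a uniform pick from a list (the names here are placeholders standing in for a real address book):

```python
import random

# Toy address book; the names are placeholders.
contacts = ["Alice", "Bob", "Carol", "Dave"]

def pick_contact(contacts):
    """Return a uniformly random contact to call right now."""
    return random.choice(contacts)

print(pick_contact(contacts))
```

Running it and dialing immediately removes the chance to second-guess the pick.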
Hi, I have been reading this site for only a few months, and I'm glad this thread came up. My stupid question: can a person simply be lazy, and how do all the motivation and akrasia-fighting techniques help such a person?
My current view is that most animals are not people, in the sense that they are not objects of moral concern. Of course, I do get upset when I see things such as animal abuse, but it seems to me that helping animals only nets me warm fuzzy feelings. I know animals react to suffering in a manner that we can sympathize with, but it just seems to me that they are still just running a program that is "below" that of humans. I think I feel that "reacts to pain" does not equal "worthy of moral consideration." The only exceptions to this in my eyes may be "higher mammals" such as other primates. Yet others on this site have advocated concern for animal welfare. Where am I confused?
"We" (humans of this epoch) might work to thwart the appearance of UFAI. Is this actually a "good" thing from a utilitarian point of view?
Or put another way, would our CEV, our Coherent Extrapolated Volition, not expand to consider the utilities of vastly intelligent AIs and weigh their importance in proportion to their intelligence? In such a way that CEV winds up producing no distinction between UFAI and FAI, because the utility of such vast intelligences moves the utility of unmodified 21st century biological humans to fairly low significan...
Good news! Omega has offered you the chance to become a truly unconstrained User:mwengler, able to develop in directions you were previously cruelly denied!
Like - let's see - ooh, how about the freedom to betray all the friends you were previously constrained to care about? Or maybe the liberty to waste and destroy all those possessions and property you were viciously forced to value? Or how about you just sit there inertly forever, finally free from the evil colonialism of wanting to do things. Your pick!
Hah. Now I'm reminded of the first episode of Nisemonogatari where they discuss how the phrase "the courage to X" makes everything sound cooler and nobler:
"The courage to keep your secret to yourself!"
"The courage to lie to your lover!"
"The courage to betray your comrades!"
"The courage to be a lazy bum!"
"The courage to admit defeat!"
Nope. For me, it's the fact that they're human. Intelligence is a fake utility function.
With the recent update of HPMOR, I've been reading a few HP fanfictions: HPMOR, HP and the Natural 20, the recursive fanfiction HG and the Burden of Responsibility, and a few others. And it seems my brain has trouble coping with that. I didn't have the problem with just canon and HPMOR (even when (re-)reading both in parallel), but now that I've added more fanfictions to the mix, I'm starting to confuse what happened in which universe, and my brain can't stop trying to find ways to ensure all the fanfictions are just facets of a single coherent universe, which of...
The usual advice on how to fold a t-shirt starts with the assumption that your t-shirt is flat, but I'm pretty sure that getting the shirt flat takes me longer than folding it. My current flattening method is to grab the shirt by the insides of the sleeves to turn it right-side out, then grab the shoulder seams to shake it flat. Is there anything better?
How do you tell the difference between a preference and a bias (in other people)?
In transparent box Newcomb's problem, in order to get the $1M, do you have to (precommit to) one box even if you see that there is nothing in box A?
If I take the outside view and account for the fact that thirty-something percent of people, including a lot of really smart people, believe in Christianity, and that at least personally I have radically changed my worldview a whole bunch of times, then it seems like I should assign at least a 5% or so probability to Christianity being true. How, therefore, does Pascal's Wager not apply to me? Even if we make it simpler by taking away the infinite utilities and merely treating Heaven as ten thousand years or so of the same level of happiness as the happiest day in my life, and treating Hell as ten thousand years or so of the same level of unhappiness as the unhappiest day in my life, the argument seems like it should still apply.
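The finite version of the wager above can be put in numbers. Using the figures from the comment (a 5% probability, and Heaven/Hell as plus or minus ten thousand years at a fixed happiness level), and adding an illustrative assumed cost of belief, the expected-value comparison looks like:

```python
# Finite Pascal's wager, with the numbers from the comment above:
# 5% probability, Heaven/Hell as +/-10,000 "happiness-years".
p_christianity = 0.05
heaven_utility = 10_000   # happiness-years if you believe and it's true
hell_utility = -10_000    # happiness-years if you don't believe and it's true
cost_of_belief = 10       # illustrative assumption: worldly cost of belief,
                          # in the same happiness-year units

ev_believe = p_christianity * heaven_utility - cost_of_belief
ev_disbelieve = p_christianity * hell_utility

print(ev_believe, ev_disbelieve)
```

On these assumptions belief dominates, which is why the replies below attack the 5% figure and the choice-of-wager problem rather than the arithmetic.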
My admittedly very cynical point of view is to assume that, to a first-order approximation, most people don't have beliefs in the sense that LW uses the word. People just say words, mostly words that they've heard people they like say. You should be careful not to ascribe too much meaning to the words most people say.
In general, I think it's a mistake to view other people through an epistemic filter. View them through an instrumental filter instead: don't ask "what do these people believe?" but "what do these people do?" The first question might lead you to conclude that religious people are dumb. The second question might lead you to explore the various instrumental ways in which religious communities are winning relative to atheist communities, e.g. strong communal support networks, a large cached database of convenient heuristics for dealing with life situations, etc.
In the form of religious stories or perhaps advice from a religious leader. I should've been more specific than "life situations": my guess is that religious people acquire from their religion ways of dealing with, for example, grief and that atheists may not have cached any such procedures, so they have to figure out how to deal with things like grief.
Yes, but there are highly probable alternate explanations (other than the truth of Christianity) for their belief in Christianity, so the fact of their belief is very weak evidence for Christianity. If an alarm goes off whenever there's an earthquake, but also whenever a car drives by outside, then the alarm going off is very weak (practically negligible) evidence for an earthquake. More technically, when you are trying to evaluate the extent to which E is good evidence for H (and consequently, how much you should update your belief in H based on E), you want to look not at the likelihood Pr(E|H), but at the likelihood ratio Pr(E|H)/Pr(E|~H). And the likelihood ratio in this case, I submit, is not much more than 1, which means that updating on the evidence shouldn't move your prior odds all that much.
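The update this comment describes can be written out in odds form: posterior odds = prior odds times the likelihood ratio, so a ratio close to 1 barely moves the prior. A minimal sketch (the probabilities are illustrative, not estimates from the comment):

```python
def posterior_odds(prior_odds, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem in odds form:
    posterior odds = prior odds * likelihood ratio Pr(E|H)/Pr(E|~H)."""
    return prior_odds * (p_e_given_h / p_e_given_not_h)

# Widespread belief is nearly as likely whether or not Christianity is
# true, so the likelihood ratio is close to 1 and the prior barely moves.
prior = 0.01 / 0.99                       # illustrative prior odds
print(posterior_odds(prior, 0.95, 0.90))  # ratio ~1.06: negligible update
```

The alarm analogy is the case where Pr(E|H) and Pr(E|~H) are both high, so their ratio, and hence the update, is near 1.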
This seems irrelevant to the truth of Christianity.
That probability is way too high.
How do you account for the other two thirds of people who don't believe in Christianity and commonly believe things directly contradictory to it? Insofar as every religion was once (when it started) vastly outnumbered by the others, you can't use population at any given point in history as evidence that a particular religion is likely to be true, since the same exact metric would have condemned you to hell at many points in the past. There are several problems with Pascal's wager, but the biggest to me is that it's impossible to choose WHICH Pascal's wager to make. You can attempt to conform to all non-contradictory religious rules extant, but that still leaves the problem of choosing which contradictory commandments to obey, as well as the problem of what exactly God even wants from you, whether it's belief or simple ritual. The proliferation of equally plausible religions is to me very strong evidence that no one of them is likely to be true, putting the odds of "Christianity" being true at lower than even 1 percent, and the odds of any specific sect of Christianity being true even lower.
There are also various Christians who believe that other Christians who follow Christianity the wrong way will go to hell.
That is eerily similar to an Omega who deliberately favours specific decision theories instead of their results.
Neither do Catholics think their priests turn wine into actual blood. After all, they're able to see and taste it as wine afterwards! Instead they're dualists of a sort: they believe the substance of the wine is replaced by that of blood, while the accidents (the appearances) remain. And they think this makes testable predictions, because they think they have dualistic non-material souls which can then somehow experience the altered substance of the wine-blood.
Anyway, Catholicism makes lots of other predictions about the ordinary material world, which of course don't come true, and so it's more productive to focus on those. For instance, the efficacy of prayer, miraculous healing, and the power of sacred relics and places.
If literally the only evidence you had was that the overwhelming majority of people professed to believe in religion, then you should update in favor of religion being true.
Your belief that people are irrational relies on additional evidence of the type that I referenced. It is not contained in the fact of overwhelming belief.
Like how Knox's roommate's death by murder is evidence that Knox committed the murder. And that evidence is overwhelmed by other evidence that suggests Knox is not the murderer.
Can someone explain "reflective consistency" to me? I keep thinking I understand what it is and then finding out that no, I really don't. A rigorous-but-English definition would be ideal, but I would rather parse logic than get a less rigorous definition.
The people who think that nanobots will be able to manufacture arbitrary awesome things in arbitrary amounts at negligible costs... where do they think the nanobots will take the negentropy from?
Just now rushes onto Less Wrong to ask about taking advantage of 4chan's current offer of customized ad space to generate donations for MIRI
Sees thread title
So, would it be a good idea? The sheer volume of 4chan's traffic makes it a decent pool for donations, and given the attitude of its demographic, it might be possible to pitch the concept in an appealing way.
Linking to MIRI's donation page might be useful, but please, please don't link to LessWrong on 4chan - it could have some horrible consequences.
How do people construct priors? Is it worth trying to figure out how to construct better priors?