Open Thread April 16 - April 22, 2014

by Tenoke · 1 min read · 16th Apr 2014 · 192 comments

8

Open Threads
Personal Blog

You know the drill - If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Last week, after a lot of thought and help from LessWrong, I finally stopped believing in God and dropped the last remnants of my Catholicism. It turned out to be a huge relief, though coping with some of the consequences and realizations that come with atheism has been a little difficult.

Do any of you have tips, based on things you noticed about yourself or others just after leaving religion? I've noticed a few small habits I need to get rid of, but I'm worried I'm missing larger, more important ones.

Are there any particular posts I should skip ahead and read? I am currently at the beginning of Reductionism. Are there any beliefs you've noticed ex-Catholics holding that they don't realize come from their religion? I don't have anyone around me I can ask, so I'm very grateful for any input. Thank you!

1ErinFlight7yUnlike religion, here no one claims to be all-knowing or infallible. Which, from my point of view at least, is why LessWrong is so effective. Reading the arguments in the comments of the sequences was almost as important as reading the sequences themselves. I wouldn't mind the paradise part or the living forever part though.
0[anonymous]7yWe? That's generalizing a bit wouldn't you say? It's "LessWrong," not yudkowsky.net after all.
6B_For_Bandana7yYes, of course. I was mostly just trying to be funny. One could keep the joke going and compare the monthly meetups, Winter Solstice meetup, the Effective Altruist movement, the Singularity, and so on to their complements in Christianity.

Speaking from experience: don't kneejerk too hard. It's all too easy to react against everything even implicitly associated with a religion or philosophy whose truth-claims you now reject, and to distort parts of your personality, day-to-day life, emotions, or symbolic thought that have nothing to do with what you have rejected.

1ErinFlight7yThank you. Last week was full of "Is this religious? Yes? No? I can't tell!" My brain has thankfully returned to normal function, and I will avoid intently analyzing every thought for religious connotations. The lack of guilt is nice, and I don't want to bring it back by stressing about the opposite.

Don't forget that reversed stupidity is not intelligence; a belief doesn't become wrong simply because it's widely held by Catholics.

Similarly, there's no need to be scared of responding positively to art or other ideas because they originated from a religious perspective; if atheism required us to do that, it would be almost as bleak a worldview as it's accused of being. Adeste Fideles doesn't stop being a beautiful song when you realize its symbols don't have referents. I think of the Christian mythology as one of my primary fantasy influences—like The Lord of the Rings, Discworld, The Chronicles of Thomas Covenant or Doctor Who—so, if I find myself reacting emotionally to a Christian meme, I don't have to worry that I'm having a conversion experience (or that God exists and is sneakily trying to win me over!): it's perfectly normal, and lawful, for works of fiction to have emotional impact.

1ErinFlight7yThe religious allusions seem even more blatant now, but there is no way I'm getting rid of my copy of The Chronicles of Narnia. I still feel the urge to look in the back of wardrobes. Thank you. I had a religious song stuck in my head yesterday, but I remembered reading your comment and was able to bypass the feeling of guilt.
7Viliam_Bur7yWhat others already said: Don't try to reverse stupidity by avoiding everything connected to Catholicism. You are allowed to pick the good pieces and ignore the bad pieces, instead of buying or rejecting the whole package [http://en.wikipedia.org/wiki/Package-deal_fallacy]. Catholics also took some good parts from other traditions, which, by the way, means you don't even have to credit them for inventing the good pieces you decide to take.

If you talk with other religious people, they will probably try the following trick on you: give you a huge book, saying that it actually answers all your questions, and that you should at least read this one book and consider it seriously before you abandon religion completely. Of course, if you read the whole book and it doesn't convince you, they will give you another huge book. And another. And another. The whole strategy is to surround you with religious memes (even more thoroughly than most religious people are surrounded), hoping that sooner or later something will "trigger" your religious feelings. And no matter how many books you read, if at some moment you refuse to read yet another book, you will be accused of leaving the religion only out of ignorance and stubbornness, because this one specific book certainly contained all the answers to your questions and perfectly convincing counterarguments to your arguments; you just refused to even look at it. You cannot win this game: there is no "I have honestly considered all your arguments and found them unconvincing" exit node; the only options given to you are either to give up, or to do something that will allow your opponents to accuse you of being willfully ignorant. (So you might as well do the "ignorant" thing now and save yourself a lot of time.)

Don't try to convince other people, at least not during the first months after deconversion. First, you need to sort things out for yourself (you don't have a convincing success story yet).
Second, by the law of reciprocation, if the othe
2[anonymous]7yThis is known as cafeteria Catholicism [https://en.wikipedia.org/wiki/Cafeteria_Catholicism]. (I had only heard that used as an insult, but apparently there are people who self-identify as such.)

It reminds me of Transactional Analysis, which says the best way to keep people in mental traps is to provide them with two scripts: "this is what you should do if you are a good person", but also "this is what you will do if you become a bad person (i.e. if you refuse the former script)". So even if you decide to rebel, you usually rebel in the prescribed way, because you were taught to consider only these two options as opposites... while in reality there are many other options available.

The real challenge is to avoid both the "good script" and the "bad script".

4ErinFlight7y(Edited)
0ErinFlight7yThank you for the advice. I've started by rereading the scientific explanations of the big bang, evolution, and basically most general scientific principles. Looking at it without constant justification going on in my mind is quite refreshing. So far I've been able to avoid most of the arguments, though I was surprised by how genuinely sad some people were. I'm going to keep quiet about religion for a while, and figure out what other pieces of my worldview I need to take a rational, honest look at.
4polymathwannabe7yI recommend this list: http://rationalwiki.org/wiki/RationalWiki_Atheism_FAQ_for_the_Newly_Deconverted
0UmamiSalami7yI find myself to have a much clearer and cooler head when it comes to philosophy and debate around the subject. Previously I had a really hard time squaring utilitarianism with the teachings of religion, and I ended up being a total heretic. Now I feel like everything makes sense in a simpler way.

What are the most effective charities working towards reducing biotech or pandemic x-risk? I see those mentioned here occasionally as the second most important x-risk behind AI risk, but I haven't seen much discussion on the most effective ways to fund their prevention. Have I missed something?

1ThisSpaceAvailable7yBiotech x-risk is a tricky subject, since research into how to prevent it is also likely to provide more information on how to engineer biothreats. It's far from trivial to know which lines of research will decrease the risk and which will increase it. One doesn't want a 28 Days Later-type situation, where a lab doing research into viruses ends up being the source of a pandemic.
0Oscar_Cunningham7yNote that Friendly AI (if it works) will defeat all (or at least a lot of) x-risk. So AI has a good claim to being the most effective at reducing x-risks, even the ones that aren't AI risk. If you anticipate an intelligence explosion but aren't worried about UFAI then your favourite charity is probably some non-MIRI AI research lab (Google?).

So AI has a good claim to being the most effective at reducing x-risks, even the ones that aren't AI risk.

You're ignoring time. If you expect a sufficiently powerful FAI to arise, say, not earlier than a hundred years from now, and you think that the coming century has significant x-risks, focusing all the resources on the FAI might not be a good idea.

Not to mention that if your P(AI) isn't close to one, you probably want to be prepared for the situation in which an AI never materializes.

3ChristianKl7yAs far as I remember from the LW census data, the median date predicted for an AGI intelligence explosion didn't fall in this century, and more people considered bioengineered pandemics the most probable X-risk in this century than UFAI.
1satt7yClose [http://lesswrong.com/lw/jj0/2013_survey_results/]. Bioengineered pandemics were the GCR (global catastrophic risk — not necessarily as bad as a full-blown X-risk) most often (23% of responses) considered most likely. (Unfriendly AI came in third at 14%.) The median singularity year estimate on the survey was 2089 after outliers were removed.

From wikipedia article on rejection therapy:

"At the time of rejection, the player, not the respondent, should be in a position of vulnerability. The player should be sensitive to the feelings of the person being asked."

How does one implement this? One of my barriers to social interactions is the ethical aspect to it; I feel uncomfortable imposing on others or making them uncomfortable. Using other people for one's own therapy seems a bit questionable. Does anyone have anything to share about how to deal with guilt-type feelings and avoid imposing on others with rejection therapy?

6free_rip7yI used to have the same, to the extent that I wouldn't even ask teachers, people paid to help me, for help. I hated the feeling that I was somehow a burden. But I got over it in the space of a couple of months by getting into a position where people were asking me for help all the time - and that made me realize it wasn't an unpleasant or annoying experience. I actually liked it, and others were probably the same. In most cases you're doing people a favor by giving them a chance to get warm-fuzzies for fulfilling what's (usually, in the case of rejection therapy) a relatively simple request.

Of course, there are still certain requests that might be uncomfortable to reject, and my thought on those is that they're usually the ones where you feel like you've left out someone who really needed your help. So to get over this, don't ask for things that will go badly if you don't get them - for instance, asking for a ride when it's pouring out, or telling someone you need money to call your kids at home so they don't worry (instead of just 'I need to make a call'). As long as what you ask is casual and you don't seem desperate, people should have no problem rejecting it without feeling bad. To lessen any impact even more, you can smile and say 'no problem, thanks anyway' or something similar to show you're alright without it.

Also, use your sense: if you ask and they look uncomfortable, going 'oh, umm, well...', you should be the one to jump in and say 'hey, it's no problem, you look busy so I'll check with someone else' or something like that, rather than waiting for them to have to say 'no' outright. Some people don't mind just saying no outright, some people do, so be attuned to that and no one should be uncomfortable. Good luck!
0drethelin7yIn general, people in a public space are to an extent consenting to interact with other humans. If they aren't, we have a system of recognized signals for it: Walking fast, looking downward, listening to music, reading, etc. I don't think you should feel too guilty about imposing a brief few seconds of interaction on people out and about in public.

It's argued there's a risk that in the event of a global catastrophe, humanity would be unable to recover to our current level of capacity because all the easily accessible fossil fuels that we used to get here last time are already burned. Is there a standard, easily Googlable name for this risk/issue/debate?

4Kawoomba7yCan't help you out with an easy moniker, but I remember that problem being brought up as early as in Olaf Stapledon's novel Last and First Men [http://en.wikipedia.org/wiki/Last_and_First_Men], published 1930.
0bramflakes7yI remember a short story posted on LW a few years ago about this. It was told from the perspective of people in a society of pre-industrial tech, wondering how (or even if) their mythical ancestors did these magical feats like riding around in steel carriages faster than any horse and things like that. The moral being that society hadn't reached the required "escape velocity" to develop large-scale space travel and instead had declined once the fossil fuels ran out, never to return. I can't for the life of me find it though.
-2CellBioGuy7yIt's also argued that, fossil fuels being literally the most energy-dense per unit-of-infrastructure-applied energy source in the solar system, our societal complexity is likely to decrease in the future as the hard-to-get ones are themselves drawn down and there becomes no way to keep drawing upon the sheer levels of energy per capita we have become accustomed to over the last 200 years in the wealthier nations. I recommend Tom Murphy's "do the math" [http://physics.ucsd.edu/do-the-math/] blog for a frank discussion of energy densities and quantities and the inability of growth or likely even stasis in energy use to continue.
5Lumifer7yHuh? At which level of technology? And WTF is a "unit of infrastructure"?
-3CellBioGuy7yAt any level of technology. Where else in the solar system do you have that much highly reduced matter next to so much highly oxidized gas with a thin layer of rock between them, and something as simple as a drill and a furnace needed to extract the coal energy and a little fractional distillation to get at the oil? Everything else is more difficult. "Unit of infrastructure" ~= amount of energy and effort and capital needed to get at it.
0Lumifer7yI am not going to believe that. Both because at the caveman level the fossil fuels are pretty much useless and because your imagination with respect to future technology seems severely limited. This entirely depends on the technology level. And how are you applying concepts like "energy-dense" to, say, sunlight or geothermal?
0Nornagest7yEnergy density refers only to fuels and energy storage media and doesn't have much to do with grid-scale investment, although it's important for things like transport where you have to move your power source along with you. (Short version: hydrocarbons beat everything else, although batteries are getting better.) The usual framework for comparing things like solar or geothermal energy to fossil fuels, from a development or policy standpoint, is energy return on investment [http://en.wikipedia.org/wiki/Energy_Return_On_Investment]. (Short version: coal beats everything but hydroelectric, but nuclear and renewables are competitive with oil and gas. Also, ethanol and biodiesel suck.)
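The EROI metric mentioned above is just a ratio of energy delivered to energy invested. As a toy illustration (the numbers below are made-up placeholders, not real figures for any energy source):

```python
# EROI (energy return on investment) = usable energy delivered / energy invested to deliver it.
def eroi(energy_out_mj, energy_in_mj):
    """Ratio of energy delivered to energy invested; higher is better."""
    if energy_in_mj <= 0:
        raise ValueError("energy invested must be positive")
    return energy_out_mj / energy_in_mj

# Hypothetical sources (placeholder numbers, for illustration only):
sources = {"source_a": (8000, 100), "source_b": (500, 100)}
ratios = {name: eroi(out_, in_) for name, (out_, in_) in sources.items()}
print(ratios)  # {'source_a': 80.0, 'source_b': 5.0}
```

An EROI near 1 means a source barely pays back the energy spent extracting it, which is why the metric is used to compare sources independently of dollar prices.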
-3CellBioGuy7yCoal was used as fuel before the [http://www.ukcoal.com/mining-through-the-ages.html] Roman empire [http://en.wikipedia.org/wiki/Coal_mining_in_the_United_Kingdom]. It didn't lead to an industrial revolution until someone figured out a way to turn it into mechanical energy substituting for human labor, instead of just a heat source, in a society where that could be made profitable due to a scarcity of labor. That was the easiest, surface-exposed deposits, yes, but you hardly need any infrastructure at all to extract the energy, and even mechanical energy extraction just needs a boiler and some pistons and valves. This was also true of peat in what is now the Netherlands during the early second millennium.

What does 'technology level' even mean? There's just things people have figured out how to do and things people haven't. And technology is not energy, and you cannot just substitute technology for easy energy; it is not a question of technology level but instead of the energy gradients that can be fed into technology.

Mostly in terms of true costs and capital (not just dollars) needed to access it, combined with how much you can concentrate the energy at the point of extraction infrastructure. For coal or oil you can get fantastic wattages through small devices. For solar you can get high wattages per square meter in direct sunlight, which you don't get on much of the earth's surface for long and you never get for more than a few hours at a time. Incredibly useful, letting you run information technology and some lights at night and modest food refrigeration off a personal footprint, but not providing the constant torrent of cheap energy we have grown accustomed to. Geothermal energy flux is high in particular areas where it makes great sense (imagine Iceland as a future industrial powerhouse due to all that cheap thermal energy gradient), but over most of the earth not so much.
Sunlight is probably our best bet for large chunks the future of technological civilizati
1Lumifer7yYou don't need ANY infrastructure to gather dry sticks in the forest and burn them. Guess that makes the energy density per unit of infrastructure infinite, then... There are lots of energy gradients around. Imagine technology that allows you to sink a borehole into the mantle -- that's a nice energy gradient there, isn't it? Tides provide the energy gradient of megatons of ocean water moving. Or, let's say, technology provides a cheap and effective fusion reactor -- what's the energy gradient there? You've been reading too much environmentalist propaganda which loves to extrapolate trends far into the future while making the hidden assumption that the level of technology will stay the same forever and ever.
0CellBioGuy7yPretty much, until you need to invest in the societal costs to replant and regrow woods after you have cleared them, or you want more concentrated energy, at which point you use a different source, or unless you value your time. Yes. Some are easier to capture than others and some are denser than others. Fusion would be a great energy gradient if you could run it at rates massively exceeding those in stars, but everything I've seen suggests that the technology required for such a thing is either not forthcoming or so complicated that it's probably not worth the effort. It won't, but there are some things that technology doesn't change. To use the nuclear example, you always need to perform the same chemical and other steps on nuclear fuels, which requires an extremely complicated underlying infrastructure and supply chain and concentrated capital. Technology isn't a generic term for things-that-make-everything-easier: some things can be done and some things can't, and other things can be done but aren't worth the effort, and we will see what some of those boundaries are over time. I hope to at least make it to 2060, so I bet I will get to see the outcome of some of the experiments being performed!
2ChristianKl7ySolar energy used to halve in price every 7 years; in the last 7 years it more than halved [http://cleantechnica.com/2013/09/19/cost-solar-power-60-lower-early-2011-us/]. Battery performance also has a nice exponential improvement curve.
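The "halves every 7 years" claim is a simple compound-decline rate, and its implications can be sanity-checked with a short calculation (a sketch only; the 7-year halving figure is the commenter's, not an official statistic):

```python
# Back-of-envelope: relative price trajectory under "price halves every 7 years".
def price_after(years, p0=1.0, halving_period=7):
    """Price relative to p0 after `years`, assuming a constant halving period."""
    return p0 * 0.5 ** (years / halving_period)

# After 7 years the price is half; after 21 years, one eighth.
assert abs(price_after(7) - 0.5) < 1e-9
assert abs(price_after(21) - 0.125) < 1e-9

# Equivalent steady annual decline: roughly 9.4% per year.
annual_decline = 1 - 0.5 ** (1 / 7)
print(round(annual_decline, 3))  # 0.094
```

Sustained over decades, a roughly 9% annual decline is what turns a marginal energy source into a cheap one, which is why the halving period matters so much to this debate.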
2CellBioGuy7yVarious forms of solar are probably one of our better bets, though I'm not convinced that large chunks of the recent gains don't come from massive effective subsidy from China and eventually the cost of the materials themselves could become insignificant compared to complexity and maintenance and end-of-life-recycling cost which are not likely to decrease much. Though battery performance... I haven't seen anything about it that even looks vaguely exponential.
2ChristianKl7ySee http://qr.ae/rbMLh for the batteries.
0Douglas_Knight7yTo spell out a few things: the price of lithium batteries is decreasing. Since they are the most energy-dense batteries, this is great for the cost of electric cars, and maybe for the introduction of new portable devices, but it isn't relevant to much else. In particular, performance is not improving. Moreover, there is no reason to expect them to ever be cheaper than existing less dense batteries. In particular, there is no reason to expect that the cost of storing electricity in batteries will ever be cheaper than the cost of the electricity, so they are worthless for smoothing out erratic sources of power, like wind.
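The storage-cost claim above can be made concrete with a simple levelized-cost-of-cycling calculation (the pack price and cycle life below are hypothetical placeholders, not figures for any real battery):

```python
# Cost to pass 1 kWh through a battery ~ pack price / (usable capacity * cycle life).
def cost_per_kwh_cycled(pack_price_usd, capacity_kwh, cycle_life):
    """Amortized storage cost per kWh delivered over the battery's lifetime."""
    return pack_price_usd / (capacity_kwh * cycle_life)

# Hypothetical pack: $300 per kWh of capacity, 1000 full cycles
# -> $0.30 per stored kWh, before counting the electricity itself.
storage_cost = cost_per_kwh_cycled(pack_price_usd=300, capacity_kwh=1.0, cycle_life=1000)
print(storage_cost)  # 0.3
```

If that amortized figure exceeds the price of the electricity being stored, storing grid power in batteries costs more than the power itself, which is the comparison the comment is making.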
1Eugine_Nier7yI get the impression that most of the "recent gains" consist of forcing the utilities to take it and either subsidizing the price difference or passing the cost on to the customer. At least, the parties involved act like they believe this while attempting to deny it [http://econlog.econlib.org/archives/2014/04/krugmans_strang.html].
0ChristianKl7yBut even if some of the cost is subsidies and the real speed is only halving in price every 7 years that's still good enough. I don't see why there shouldn't be any way to optimise end of life costs and maintenance.
0Squark7yDoes the argument take nuclear energy into account?
3CellBioGuy7yYes. No nuclear power has ever been built without massive subsidies and insurance-guarantees, it only works right now because we externalize the costs of dealing with its waste to the future rather than actually paying the costs, and nuclear power is fantastically more complicated and prone to drastically expensive failures than simply burning things. Concentrating the fuel to the point that it is useful is an incredible chore as well.
0Squark7yAre you claiming nuclear energy has higher cost in $ per joule than burning fossil fuels? If so, can you back it up? If true, how do you know it's going to remain true in the future? What happens when we reach a level of technology in which energy production is completely automatic? What about nuclear fusion?
1CellBioGuy7yThe only reason the costs per joule in dollars are near each other (a true factor of about 1.5-3x in dollars between nuclear and the coal everyone knows and loves, according to the EIA [http://www.eia.gov/forecasts/capitalcost/]) is that a lot of the true costs of nuclear power plants are not borne in dollars and are instead externalized. Fifty years of waste have for the most part gone completely un-dealt-with in the hope that something will come along; nuclear power plants are almost literally uninsurable [http://www.theguardian.com/world/feedarticle/9608262] to sufficient levels in the market, such that governments have to guarantee them substandard insurance by legal fiat (this is also true of very large hydroelectric dams, which are probably also a very bad idea); and power plants that were supposed to be retired long ago have had their lifetimes extended threefold by regulators who don't want to incur the cost of their planned replacements and refurbishments. And the whole thing was rushed forward in the mid 20th century as a byproduct of the national desire for nuclear weapons, and remarkably little growth has occurred since that driver decreased.

How do you know it won't? More to the point, it's not a question of technology. It's a question of how much you have to concentrate rare radionuclides in expensive gas centrifuge equipment, how heavily you have to contain the reaction, and how long you have to isolate the resultant stuff. Technology does not trump thermodynamics and complexity and fragility.

What does this mean and why is it relevant?

Near as I can tell, all the research on it so far has shown that it is indeed possible without star-style gravitational confinement, very difficult, and completely uneconomic. We have all the materials you need for fusion readily available; if it were easy to do economically, we would have done it after fifty years of work.
It should be noted that the average energy output of the sun itself is about 1/3 of a watt per
3ChristianKl7yThat's quite an unfair comparison. The way we deal with coal waste kills tens of thousands or even hundreds of thousands of people per year. The way we deal with nuclear waste might cost more money, but it doesn't kill as many people. Simply dumping all nuclear waste in the ocean would probably be a safer way of disposing of waste than the way we deal with coal. Even the tunnels created in coal mining can collapse and do damage.
2CellBioGuy7yCoal isn't a picnic either and I have my own rants about it too. But dealing with coal waste (safely or unsafely) is a question of trucking it, not running complicated chemical and isotopic purification or locking it up so thoroughly.
0Douglas_Knight7yThe obvious explanation of the timing is Three Mile Island and Chernobyl. Do you believe that Japan and Germany built nuclear plants for the purpose of eventually building weapons?
2CellBioGuy7yJapan and Germany are interesting cases, both for the same reason: rich nations with little or declining fossil fuels. Germany's buildout of nuclear power corresponds to the timing of the beginning of the decline in the production of high-quality coal [http://www.bgr.bund.de/EN/Themen/Energie/Bilder/Kohle_Reserven_Bild1_g_en.html] in that country, and Japan has no fossil fuels of its own so nuclear was far more competitive. With plentiful fossil fuels around nobody does nuclear since it's harder, though even the nations which use nuclear invariably have quite a lot of fossil fuel use which I would wager 'subsidizes' it.
0Douglas_Knight7yWhat do you mean by "competitive"? Shipping coal adds very little to its cost, so the economic calculation is hardly different for countries that have it and countries that don't. Perhaps national governments view domestic industries very differently than economists, but you haven't said how to take this into account. I think Japan explicitly invoked "self-sufficiency" in its decision, perhaps meaning concerns about wartime.
0Squark7yWhat do you mean by "un-dealt-with"? What cost do you think it will incur in the future?

Interesting point. However, the correct cost of insurance has to take into account the probability of various failures, and I see no such probability assessment in the article. Also, what about Thorium power? Are you sure the problem is a lack of desire for nuclear weapons rather than anti-nuclear paranoia?

But the ratio between the physical requisites and dollars (i.e. labor) depends very strongly on technology. At some point we are likely to have sufficient automation that little human labor is required for most things, including energy production. In these conditions, energy (and most other things) will cost much less than today, with fossil fuels or without them.

Obviously it's not easy, but that doesn't mean it's impossible. We have ITER. So what? We can already create temperatures lower than anywhere in the universe [http://en.wikipedia.org/wiki/Absolute_zero#Very_low_temperatures] and nuclear species that don't exist anywhere in the universe; why not better fusion conditions? I don't think scientific and technological progress is "deus ex machina". Given the historical record and known physical limits, a lot of progress is still waiting to happen. Imagine the energy per capita available to a civilization that builds Dyson spheres.
2CellBioGuy7yMostly sitting around full of transuranic elements with half-lives in the tens of thousands of years, in facilities that were meant to be quite temporary, without much in the way of functional or economically competitive breeder reactors even where they have been tried. They will eventually incur one of three costs: reprocessing, geological storage, or release.

Near as I can tell it's a way to boost the amount of fertile fuel for breeder reactors by about a factor of five. The technology is similar, with advantages and disadvantages. No matter what, you have to run refined material through very complicated, capital-intensive, and energy-intensive things, keep things contained, and dispose of waste. These fuel cycles do work and they do produce energy, and if done right, some of the technologies promoted for the purpose might reduce the waste quite a bit. My gripe is that they work well (not to mention safely) only in stable civilizations with lots of capital and concentrated wealth to put towards them that isn't being applied to more basic infrastructure. Given the vagaries of history moving wealth and power around, and the massive cheap-energy-and-wealth subsidy from fossil fuels that will go away, I'm not convinced that they can be run for long periods of time at a level that can compensate for the torrents of cheap wealth you get from burning the black rocks. I wouldn't be terribly surprised at some nuclear power plants being around in a few thousand years, but I would be surprised at them providing anything like as much per capita as fossil fuels do now, due to the complexity and wealth-concentration issues.

I don't understand how automation changes the energy, material, or complexity costs (think supply chains or fuel flows) associated with a technology.

Yes, and fusion research is fascinating.
But the fact that while understanding of nuclear physics has been pretty well constant for decades more and more money goes into more and more expen
0Squark7yHow much does it cost to maintain the current facilities? By what factor does it make nuclear energy more expensive? The most important component of economic cost is human labor. We have plenty of energy and materials left in the universe. "Complexity" is not a limited resource, so I don't understand what a "complexity cost" is. Yes, but I think that current technology is very far from the limits of the possible. Sure, because we are the only intelligent life in the universe. What's so surprising about that?

To anyone out there embedded in a corporate environment, any tips and tricks to getting ahead? I'm a developer embedded within the business part of a tech organization. I've only been there a little while though. I'm wondering how I can foster medium-term career growth (and shorter-term, optimize performance reviews).

Of course "Do your job and do it well" tops the list, but I wouldn't be asking here if I wanted the advice I could read in WSJ.

From personal observations

"Do your job and do it well"

most emphatically does not top the list. Certainly you have to do an adequate job, but your success in a corporate environment depends on your interpersonal skills more than on anything else. You depend on other people to get noticed and promoted, so you need to be good at playing the game. If you haven't taken a Dale Carnegie course or similar, do so. Toastmasters is useful, too. In general, if you learn to project a bit more status and competence than you think you merit, people will likely go along with it.

Just to give an example, I have seen a few competent but unexceptional engineers become CEOs and CTOs over a few short years in a growing company, while other, better engineers never advanced beyond a team lead, if that.

If you are an above average engineer/programmer etc. but not a natural at playing politics, consider exploring your own projects. If you haven't read Patrick McKenzie's blog about it, do so. On the other hand, if striking out on your own is not your dream, and you already have enough drive, social skills and charisma to get noticed, you are not likely to benefit from whatever people on this site can tell you.

Perhaps we could be more specific about the social / political skills. I am probably not good at these skills, but here are a few things I have noticed:

Some of your colleagues have a connection between them unrelated to the work, usually preceding it. (Former classmates. Relatives; not necessarily having the same surname. Dating each other. Dating the other person's family member. Members of the same religious group. Etc.) This can be a strong emotional bond which may override their judgement of the other person's competence. So for example, if one of them is your superior, and the other is your incompetent colleague you have to cooperate with, that's a dangerous situation, and you may not even be aware of it. -- I wish I knew the recommended solution. My approach is to pay attention to company gossip, and to be careful around people who are clearly incompetent and yet not fired. And then I try to take roles where I don't need their outputs as inputs for my work (which can be difficult, because incompetent people are very likely to be in positions where they don't deliver the final product, as if either they or the company were aware of the situation on some level).

If someone compl... (read more)

I'd beware conflating "interpersonal skills" with "playing politics." For CEO at least (and probably CTO as well), there are other important factors in job performance than raw engineering talent. The subtext of your comment is that the companies you mention were somehow duped into promoting these bad engineers to executive roles, but they might have just decided that their CEO/CTO needed to be good at managing or recruiting or negotiating, and the star engineer team lead didn't have those skills.

Second, I think that the "playing politics" part is true at some organizations but not at others. Perhaps this is an instance of All Debates are Bravery Debates.

My model is something like: having passable interpersonal/communication skills is pretty much a no-brainer, but beyond that there are firms where it just doesn't make that much of a difference, because they're sufficiently good at figuring out who actually deserves credit for what that they can select harder for engineering ability than for politics. However, there are other organizations where this is definitely not the case.

4shminux7yCertainly there is a spectrum there. I did not mean it that way in general, but in one particular case both ran the company into the ground, one by picking a wrong (dying) market, the other by picking a poor acquisition target (the code base hiding behind a flashy facade sucked). I am not claiming that if the company promoted someone else they would have done a better job. If we define "playing politics" as "using interpersonal relationships to one's own advantage and others' detriment", then I have yet to see a company with more than a dozen employees where this wasn't commonplace. If we define "interpersonal skills" as "the art of presenting oneself in the best possible light", then some people are naturally more skilled at it than others and techies rarely top the list. As for trusting the management to accurately figure out who actually deserves credit, I am not as optimistic. Dilbert workplaces are contagious and so very common. I'm glad that you managed to avoid getting stuck in one.
2benkuhn7yYes, definitely agree that politicians can dupe people into hiring them. Just wanted to raise the point that it's very workplace-dependent. The takeaway is probably "investigate your own corporate environment and figure out whether doing your job well is actually rewarded, because it may not be".
0Lumifer7yI have a working hypothesis that it is, to a large degree, a function of size. Pretty much all huge companies are Dilbertian, very few tiny ones are. It's more complicated than just that because in large companies people often manage to create small semi-isolated islands or enclaves with culture different from the surroundings, but I think the general rule that the concentration of PHBs is correlated with company size holds.
5Viliam_Bur7yI worked mostly for small companies, and Dilbert resonates with me strongly. It probably depends on power differences and communication taboos, which in turn correlate with the company size. In a large company, having a power structure is almost unavoidable; but you can also have a dictator making stupid decisions in a small company.
1Lumifer7yBeing a manager is a radically different job from being an engineer. In fact, I think that (generalization warning!) good engineers make bad managers. Different attitudes, different personalities, different skill sets.
6niceguyanon7yOne particular simple and easy to follow tip, to add to the Toastmasters and taking leadership type courses advice, is that you should also signal your interest in these things to those around you. Some of the other advice here can take time and be hard to achieve; you don't just flip a switch and become charismatic or a great public speaker. So in the meantime while you work on all those awesome skills, don't forget to just simply let others know about your drive, ambitions, and competency. This is easier to pull off than the fake-it-till-you-make-it trick. It's more about show-your-ambition-till-you-make-it. It's easy to do because you don't have to fake anything. It reminds me of this seduction advice I read from Mystery's first book that went something along the lines of, you don't have to be already rich to seduce somebody, you just have to let them know you have ambition and desire to one day be rich/successful.
5[anonymous]7yI recently read this piece on meritocracy [http://michaelochurch.wordpress.com/2014/04/11/meritocracy-is-the-software-engineers-prince-charming-and-why-thats-harmful/] - rang quite true to me from personal experience. I work with a guy of similar ability to me, but I think I would beat him on most technical and simple people skills. However, he still gets ahead by being more ambitious and upfront than I am, and while he's a bit more qualified on paper it's used to far better effect. (No bitterness, he's still a good guy to work with and I know it's up to me to be better. Also I'm in kind of mid-level finance rather than coding.)
2Punoxysm7yI think that article is a bit bitter. It probably applies to some organizations, but I think most places at least manage to consider competence as a substantial part of the mix in promotion decisions. Which is not to say signaling ambition isn't valuable (I absolutely believe it is). Just that the article is bitter.
2ChristianKl7yhttp://lesswrong.com/lw/jsp/political_skills_which_increase_income/ [http://lesswrong.com/lw/jsp/political_skills_which_increase_income/] is an article by a LessWrong person that lists factors. Political abilities are important. That means signaling modesty, making apologies when necessary, and flattering people above you in the chain of command.

Here's an idea for enterprising web-devs with a lot more free time than me: an online service that manages a person's ongoing education with contemporary project management tools.

Once signed up to this service, I would like to be able to define educational projects with tasks, milestones, deliverables, etc. against which I can record and monitor my progress. If I specify dependencies and priorities, it can carry out whizzy critical path analysis and tell me what I should be working on and in what order. It can send me encouraging/harassing emails if I don't... (read more)
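The dependency-ordering part of this idea is easy to sketch. A minimal version using Python's standard-library `graphlib` (the task names below are invented for illustration):

```python
# Sketch of the dependency-ordering idea: topologically sort learning tasks so
# prerequisites come first. Task names are invented for illustration.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

deps = {  # task -> prerequisites
    "linear algebra": [],
    "probability": [],
    "machine learning": ["linear algebra", "probability"],
    "deep learning": ["machine learning"],
}
order = list(TopologicalSorter(deps).static_order())
print(order)  # every task appears after all of its prerequisites
```

A real service would layer priorities and time estimates on top of this ordering, but topological sort is the core of "tell me what to work on next."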

3[anonymous]7yWill add this to my list of ed-tech start-up ideas to validate.
2cursed7yi'm interested in your other ed-tech startup ideas, if you don't mind sharing.
1[anonymous]7yList of them are here: http://www.quantifiedstartup.net/startup/ [http://www.quantifiedstartup.net/startup/]
1Punoxysm7yStudent Relationship Management software? Sounds like a neat idea.
6cousin_it7yOuch, that made my mind come up with a different startup idea, Relationship Management software. Basically it would be a website where you can post updates about your relationship every day, like "Last night we argued for 30 minutes" or "I feel that he's unusually emotionally distant" or something like that. You would also input your partner's astrological sign, and so on. And the website would give you an overall prognosis and some sort of bullshit psychological advice, like "Try to be more conscious of your needs in the relationship" or "At this point it's likely that he's cheating on you". And it would show tons of ads for related products and services. I think some people would love it!
0mare-of-night7yFor a different sort of person, any sort of quantified self about relationships would be interesting. (I heard that an app exists where you record a happy face or a sad face after every time talking to a long distance partner, and it doesn't give you any advice. Unfortunately, I can't remember the name or where I heard of it.)
0somnicule7yFor a minimal product, perhaps just start with the dependencies and priorities side of things? That seems to be the core of such a product, and the rest is dressing it up for usability.

Does anyone have good resources on hypnosis, especially self-hypnosis? I'm mostly looking for how-tos but effectiveness research and theoretical grounding are also welcome.

8bramflakes7yhttp://cognitiveengineer.blogspot.com/ [http://cognitiveengineer.blogspot.com/] by jimmy, our resident evil hypnotist
5ChristianKl7y"Monsters & Magical Sticks: There's No Such Thing As Hypnosis?" is a fine book for explaining what hypnosis is. The recurring punchline is that there's no Hypnosis but there are hypnotic phenomena. Being a good hypnotist is basically about using a bunch of hypnotic phenomena to go where you want to go. Framing an interaction is something very important. A hypnosis therapist I know says that a hypnosis session for quitting smoking begins with the call. The patient calls to make an appointment. He answers and asks whether the person has made a decision to quit smoking. If the patient says "no" he tells the patient to call again once he has made the decision. Hypnotherapists do a lot of stuff like this.

In the spirit of Matthew McConaughey's Oscar acceptance speech, who is the you-in-ten-years that you are chasing?

The most important writer of Latin American science fiction.

3Gunnar_Zarncke7ySee http://lesswrong.com/lw/g94/link_your_elusive_future_self/ [http://lesswrong.com/lw/g94/link_your_elusive_future_self/] for a reason you can't know.
2Stabilizer7yI have no idea. (Is that a bad thing?)
[-][anonymous]7y 7

I am currently teaching myself basic Spanish. At the moment, I'm using my library's (highly limited) resources to refresh my memory of Spanish learned in high school and college. However, I know I won't go far without practice. To this end, I'd like to find a conversation partner.

Does anyone have any recommendation of resources for language learners? Particularly resources that enable conversation (written or spoken) so learners can improve and actually use what they are learning? The resource wouldn't have to be dedicated solely to Spanish learning. Eventually, I want to learn other languages as well (such as German and French).

ROI in learning a foreign language is low, unless it is English. But if you must, I would say the next best thing to immersive instruction would be to watch Spanish Hulu as an aid to learning. You'd get real conversations at conversational speeds.

3Metus7ySo after gwern pointed out that there is a transcript and I have read it I made a back of the envelope calculation. Assumptions: According to Wikipedia [https://en.wikipedia.org/wiki/Personal_income_in_the_United_States] people with a bachelor's degree or higher make $56078 per year, so about $27 per hour. Learning German increases yearly income by 4% and learning it takes about 750 class hours, according to the foreign service institute [http://web.archive.org/web/20071014005901/http://www.nvtc.gov/lotw/months/november/learningExpectations.html]. Learning Spanish increases income by 1.4% and takes 600 class hours to learn. If we assume that one class hour costs $11.25 (by glancing at various prices posted on different sites) we can make a calculation. Assuming the language is learnt instead of working and that the foregone hours have no impact on later earning and that the foregone hours are paid at average salary, the student incurs opportunity cost in addition to the pure class cost. Ignoring all other effects, learning German costs $28657 with a return of $2243 p.a. and learning Spanish costs $22926 with a return of $841 p.a. This works out to 7.8% and 3.6% on initial investment respectively. So after 13 years learning German pays off; after 28 years learning Spanish pays off. Assuming the language is learnt at a young age, at least learning German can be worthwhile. More benign assumptions, such as learning outside of class with some kind of program like Duolingo, will increase the return further, making learning even more worthwhile. Of course I did not consider learning something else in those hundreds of hours that could have an even greater effect on income, but for high income earners language learning is a very plausible way to increase their income. I assume this goes especially for people with a more obvious use for an additional language, like translators or investors.
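The arithmetic above can be reproduced in a few lines. Figures are taken as stated in the comment (including the $841 Spanish return, even though 1.4% of $56,078 is closer to $785):

```python
# Back-of-the-envelope ROI for language learning, using the comment's figures.
wage = 56078 / 2080      # hourly wage implied by the $56,078 salary (2080 work hours/yr)
class_cost = 11.25       # assumed price per class hour

def language_roi(class_hours, annual_return):
    cost = class_hours * (wage + class_cost)   # tuition plus foregone wages
    return cost, annual_return / cost, cost / annual_return

for name, hours, gain in [("German", 750, 2243), ("Spanish", 600, 841)]:
    cost, rate, payback = language_roi(hours, gain)
    print(f"{name}: cost ${cost:,.0f}, return {rate:.1%}, payback {payback:.1f} years")
```

This matches the comment's $28,657 / $22,926 costs and 7.8% / 3.7% returns, with payback periods of roughly 13 and 27-28 years.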

You're assuming that the correlation is purely causal and none of the increased income correlating with language learning is due to confounds; this is never true and so your ROI is going to be overstated.

This works out to 7.8% and 3.6% on initial investment respectively.

Most people have discount rates >4%, which excludes the latter. Throw in some sort of penalty (50% would not be amiss, given how many correlations crash and burn when treated as causal), and that gets rid of the former.

Language learning for Americans just doesn't work out unless one has a special reason.

(Unless, of course, it's a computer language.)

-1Metus7yWould be nice to know those. The paper states that people in managerial positions get substantially higher relative returns from learning a foreign language. That would be a special reason. Maybe the returns are much higher for low-income earners. That question is uninteresting for the average LW user but still. I further wonder what the return on learning a language is in the future. As an aside, I am surprised how hostile US Americans can be when it is suggested to learn another language.

As an aside, I am surprised how hostile US Americans can be when it is suggested to learn another language.

Personally, I find most suggestions and discussion of Americans learning other languages to be highly irritating. They have not considered all of the relevant factors (continent sized country with 310m+ people speaking English, another 500m+ English-speakers worldwide, standard language of all aviation / commerce / diplomacy / science / technology, & many other skills to learn with extremely high returns like programming), don't seem to care even when it is pointed out that the measured returns are razor-thin and near-zero and the true returns plausibly negative, and it serves as an excuse for classism, anti-Americanism, mood affiliation with cosmopolitanism/liberalism, and all-around snootiness.

It doesn't take too many encounters with someone who is convinced that learning another language is a good use of time which will make one wealthier, morally superior, and more open-minded to start to lose one's patience and become more than a little hostile.

It's a bit like people who justify video-gaming with respect to terrible studies about irrelevant cognitive benefits (FPSe... (read more)

2Metus7yTrue. I come from the other side, growing up in Germany and having met a lot of foreign knowledge workers unwilling to learn even a lick of German. I actually know of several people that are unable to say "No, thank you" or "One beer please" and unwilling to learn. Personally I see this as highly disrespectful of the host country. After stating this opinion the unwillingness is then justified with the international status of English. Anyhow, we are drifting into politics and thus I'd like to end this debate at this point, with your understanding. I hope the downvote is not from you, and more so that it is not because of that line only.
4gwern7yYes, living in a foreign country is a substantially different proposition (and I'd guess that the correlated increases in income would be much higher). But comparing to Germany highlights part of why it's such a bad idea for Americans: the population of America alone is 3.78x that of Germany, never mind the entire Anglophone world.
2Metus7yDisclaimer: I won't listen to the podcast, because I am boycotting any medium that is not text. Language learning may have extremely low ROI in general but extremely high ROI in special cases. E.g. I would not be surprised to find that learning the language of the foreign country one lives in increases subjective wellbeing. Or if people want to work as translators. Or if they are investors specialising in a region not speaking English as its main language. This almost seems like a fallacy. I might call it "homogeneity bias" or "mistaking the average for the whole" just to find out that it is already known under a different name and well documented.

Disclaimer: I won't listen to the podcast, because I am boycotting any medium that is not text.

Good news! Freakonomics is, along with Econtalk and patio11, one of the rare podcasts which (if you had clicked through) you can see provides transcripts for most or all of their podcasts.

2Metus7yWell fuck me. I saw the streaming bar and closed the tab, so entirely my fault. Thank you for notifying.
2Username7yGreat link. From the cited paper:
2[anonymous]7yThanks for the link. I hadn't actually considered language learning in an ROI-fashion, but it's obviously something I should think about before making heavy investments. I still think it worth the time since my field involves dealing with non-English parties often. Though I have no distinct need for bilingualism at the moment, it will make me more hireable. However, I do need to evaluate my time learning Spanish against, say, the gains of spending that same time learning programming languages.
6polymathwannabe7yI'd like to be your conversation partner. My Spanish is Colombian. PM me for contact details.
0polymathwannabe7yAlso, there's Papora.
5Torello7yIf you are willing/able, the best way is to go to a Spanish school in Mexico or Central America and live with a host family for a month or two. I learned more in two months doing that than in my first four university classes combined. This probably doesn't fall under "teaching yourself," but if you are serious the other things can't even touch the ROI of an immersive experience, in terms of Spanish acquired per unit of time and money. Fluenz is a great computer-based program, but it's expensive. I used Rosetta Stone a bit; this is way better. Pimsleur audio tapes for car rides or an MP3 player. Duolingo is free, but isn't for active conversation. Look on Meetup.com for a Spanish conversation meetup. italki is a good option for a conversation focus. http://markmanson.net/foreign-language [http://markmanson.net/foreign-language] http://www.andrewskotzko.com/how-to-unlock-foreign-languages/ [http://www.andrewskotzko.com/how-to-unlock-foreign-languages/] The last two links are about principles/suggestions. I agree with most of them. This is my advice: say anything that you can, whenever you can. Embarrassment is often the biggest obstacle. When you are beginning with conversations, say anything you can, even if it is a single word, grammatically incorrect, or irrelevant. With regard to what niceguyanon said about low ROI on languages aside from English, I think there are social capital benefits, self-confidence benefits, and cognitive functioning benefits that are valuable. Not to mention travel benefits--Spanish makes travel in many countries easy.
0[anonymous]7yThank you very much for those links! They're very helpful. As I mentioned in reply to niceguyanon, my field is one where language acquisition has a higher value than it might for other fields. And, I'll admit, I do feel that the confidence and cultural benefits are worth the investment, for me at least. Expression and communication are important to my work. Becoming a more efficient communicator means making myself more valuable. I know that audio tapes and text books are not how I'm going to learn a language. Like many of my peers, I spent two years in classes repeating over and over "tener, tener... tengo, tengo... tengas, tengas...." and gained nothing out of it except how to say "Me gusto chocolate." I know how language learning doesn't work, if nothing else.
0ThisSpaceAvailable7yYou could look into volunteering at a charity that serves Hispanics, or find an ESL conversation group and see whether they would interested in spending time speaking Spanish with you.

Recommendations for good collections of common Deep Wisdom? General or situation specific would be helpful (e.g. all the different standard advice you get while picking your college major, or going through a tough break up).

3niceguyanon7yCheck these out. http://lesswrong.com/lw/gx5/boring_advice_repository/ [http://lesswrong.com/lw/gx5/boring_advice_repository/] http://lesswrong.com/lw/i64/repository_repository/ [http://lesswrong.com/lw/i64/repository_repository/]

Yet another possible failure mode for naive anthropic reasoning.

[-][anonymous]7y 4

I am curious about whether Borderline Personality Disorder is overrepresented on LessWrong compared to the general population.

Is Wikipedia's article on BPD a good description of your personality any time in the past 5 years? For the sake of this poll, ignore the specific "F60.31 Borderline type" minimum criteria.

[pollid:678]

You are bound to 'find' that BPD is overrepresented here by surveying in this manner. (hint: medical student syndrome)

4[anonymous]7yI could repeat this poll in a venue where the people are similarly prone to medical student syndrome, but not as prone to filling some kind of void with rationality or other epiphanies. That would provide a baseline for comparison. But I don't yet know where exactly I would find such a venue.
-5RichardKennaway7y

There are probably checklists to diagnose Borderline Personality Disorder that are much better than simply reading a Wikipedia article and thinking about whether it applies to you.

0VAuroch7yI found one [http://psychcentral.com/quizzes/borderline.htm], which doesn't look enormously reputable but is probably better than wikipedia.
8brazil847yPeople with borderline personality disorder generally lack "insight," i.e. they are typically unaware that they have BPD; will deny having it; and will get extremely defensive at the suggestion they have it. One can contrast with, for example, obsessive/compulsive disorder sufferers who usually do have pretty good insight. So a survey based on self-reporting is not going to be very helpful. Anyway, I doubt that there are many people on this board with BPD. This is based on my interactions and observations. Also, this discussion board doesn't seem like it would be very attractive to someone with BPD since it doesn't offer a steady stream of validation. For example, it's common on this board for other posters, even those who agree with you on a lot of stuff, to challenge, question, or downvote your posts. For someone with BPD, that would be pretty difficult to handle. The main mental issue I sense on this board (possibly disproportionate to the general population) is Asperger's. There also seems to be a good deal of narcissism, though perhaps not to the point where it would qualify as a mental disorder.
2Viliam_Bur7ySo if a person with BPD would discover LW and decide they like the ideas, what would they most likely do? My model says they would write a lot of comments on LW just to prove how much they love rationality, expecting a lot of love and admiration in return. At first they would express a lot of admiration towards people important in the rationalist community; they would try to make friends by open flattery (by giving what they want to get most). Later they would start suggesting how to do rationality even better (either writing a new sequence, or writing hundreds of comments repeating the same few key ideas), trying to make themselves another important person, possibly the most important one. But they would obviously keep missing the point. After the first negative reactions they would backpedal and claim to be misunderstood. Later they would accuse some people of persecuting them. After seeing that the community does not reward this strategy, they would accuse the whole LW of persecution, and try to split apart their own rationalist subcommunity centered around them.
0NoSuchPlace7yI hate to point this out, but it is already easy enough to ridicule the proper spelling; it's spelled Asperger. Edit: Sorry, tried to delete this comment, but that doesn't seem to be possible for some reason.
0brazil847yFixed. FWIW thanks.
6ChristianKl7yAccording to the Wikipedia article: "People with BPD feel emotions more easily, more deeply and for longer than others do." To me that doesn't seem like the LW crowd, what would make you think that there's an overrepresentation?

Because it's a group of people who are excited for years about a rule for calculating conditional probability?

Yeah, I'm not serious here, but I will use this to illustrate the problem with self-diagnosis based on a description. Without hard facts, or without being aware of what exactly the distribution in the population looks like, it's like reading a horoscope.

Do I feel emotions? Uhm, yes. Easily? Uhm, sometimes. More deeply than others? Uhm, depends. For longer than others? I don't have good data, so, uhm, maybe. OMG, I'm a total psycho!!!

0ChristianKl7yNo, there are a lot of data points. One example: At the community we had one session where having empathy was a point. The person on stage, explaining to the rest what empathy is, talks about how it's having an accurate mental model of other people and not that empathy is about feeling emotions. I don't want to say that having an accurate mental model of other people isn't useful, but it's not what people mean by the word empathy in a lot of other communities. Empathy usually refers to a process that's about feeling emotions.
0hamnox7yI actually attributed this to a higher than normal base rate of Asperger Syndrome.
1mwengler7yI had an impulse to answer "very descriptive," but I controlled it.
-2Luke_A_Somers7yBetter: Alice had an impulse to answer "not at all descriptive" but she controlled it and said 'very descriptive'.
-4Lumifer7yClearly not :-D
1Gunnar_Zarncke7yI seem to feel emotions less strongly than the average person. Would have been interesting to add that as an option.

Is anyone going to be at the Eastercon this weekend in Glasgow? Or, in London later in the year, Nineworlds or the Worldcon?

ETA: In case it wasn't implied by my asking that, I will be at all of these. Anyone is free to say hello, but I'm not going to try to arrange any sort of organised meetup, given the fullness of the programmes of these events.

In the last open thread, someone suggested rationality lolcats, and then I made a few memes, but only put them on last minute. In case anyone would like to see them, they are here.

What's a good Bayesian alternative to statistical significance testing? For example, if I look over my company's email data to figure out what the best time of the week to send someone an email is, and I've got all possible hours of the week ordered by highest open rate to lowest open rate, how can I get a sense of whether I'm looking at a real effect or just noise?

2gwern7yIn that scenario, how much does it really matter? It's free to send email at one time of week rather than another, so your only cost is the opportunity cost of picking a bad time to email people, which doesn't seem likely to be too big.
0John_Maxwell7yOur email volume by hour would get far lumpier, so we would have to add more servers to handle a much higher peak of emails sent per minute. And it takes development effort to configure emails to send at an intelligent time based on the user's timezone.
0John_Maxwell7yOK, here's a proposed solution I came up with. Start with the overall open rate for all emails regardless of time of the week. Use that number, and your intuition for how much variation you are likely to see between different days and times (perhaps informed by studies on this subject that people have already done) to construct some prior distribution over the open probabilities you think you're likely to see. You'll want to choose a distribution over the interval (0, 1) only... I'm not sure if this one [http://en.wikipedia.org/wiki/Beta_distribution] or this one [http://en.wikipedia.org/wiki/Kumaraswamy_distribution] is better in this particular case. Then for each hour of the week, use maximum-a-posteriori estimation (this [https://www.cs.utah.edu/~suyash/Dissertation_html/node8.html] seems like a brief & good explanation) to determine the mode of the posterior distribution, after you've updated on all of the open data you've observed. ( This [https://engineering.purdue.edu/kak/Tutorials/Trinity.pdf] provides an explanation of how to do this.) The mode of an hour's distribution is your probability estimate that an email sent during that particular hour of the week will be opened. Given those probability estimates, you can figure out how many opens you'd get if emails were allocated optimally throughout the week vs how many opens you'd get if they were allocated randomly and figure out if optimal allocation would be worthwhile to set up.
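A pure-Python sketch of the posterior-mode step described above. A Beta prior is conjugate to the open/no-open (binomial) data, so the update is just addition; the prior strength (20 pseudo-emails at a 25% open rate) and the hourly counts below are invented for illustration:

```python
# MAP estimate of the open rate for one hour of the week: update a Beta prior
# with the observed opens/non-opens and take the posterior mode.
PRIOR_OPENS, PRIOR_MISSES = 5.0, 15.0   # Beta(5, 15): prior mean of 25%

def map_open_rate(opens, misses):
    """Mode of the Beta(PRIOR_OPENS + opens, PRIOR_MISSES + misses) posterior."""
    a, b = PRIOR_OPENS + opens, PRIOR_MISSES + misses
    return (a - 1) / (a + b - 2)

# Invented (opens, non-opens) counts for three hourly buckets
hours = {"Mon 9am": (120, 480), "Wed 2pm": (95, 505), "Sat 11pm": (40, 560)}
for hour, (opens, misses) in hours.items():
    print(f"{hour}: estimated open rate {map_open_rate(opens, misses):.3f}")
```

The prior shrinks estimates for hours with little data toward the overall rate, which is exactly what protects you from mistaking noise in a sparsely-used hour for a real effect.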
0[anonymous]7yNot Bayesian, but can't you just do ANOVA w/ the non-summarized time of day vs. open rate (using hourly buckets)? That seems like a good first-pass way of telling whether or not there's an actual difference there. I confess that my stats knowledge is really just from natural sciences experiment-design parts of lab classes, so I have a bias towards frequentist look-up-in-a-table techniques just because they're what I've used. Rant for a different day, but I think physics/engineering students really get screwed in terms of learning just enough stats/programming to be dangerous. (I.e., you're just sort of expected to know and use them one day in class, and get told just enough to get by -- especially numerical computing and C/Fortran/Matlab.)
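The ANOVA suggestion is a one-liner with SciPy. Here it is on simulated per-email 0/1 open outcomes for three hourly buckets whose true open rates differ (all data below is made up):

```python
# One-way ANOVA as a first-pass "is there any hourly effect at all?" test.
import random
from scipy.stats import f_oneway

random.seed(0)
# 500 simulated 0/1 open outcomes per bucket, with different true open rates
buckets = [[1 if random.random() < p else 0 for _ in range(500)]
           for p in (0.15, 0.25, 0.35)]
stat, pvalue = f_oneway(*buckets)
print(f"F = {stat:.2f}, p = {pvalue:.4g}")  # a small p suggests a real hourly effect
```

A small p-value only says *some* bucket differs, not which one; you'd follow up with per-bucket estimates (or the Bayesian approach above) to pick the best hour.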
-2ThisSpaceAvailable7ySuppose you have three hypotheses: (1) it's better to email in the morning, (2) it's better to email in the evening, (3) they're equally good. Why do you care about (3)? If you're just deciding whether to email in the morning or evening, (3) is irrelevant to ranking those two options. The full-fledged Bayesian approach would be to identify the hypotheses (I've simplified it by reducing it down to just three), decide what your priors are, calculate the probability of seeing the data under each of the hypotheses, and then combine that data according to the Bayesian formula to find the posterior probability. However, you don't have to run through the math to see that if your priors for (1) and (2) are equal, and the sample is skewed towards evening, then the posterior for (2) will be larger than the posterior for (1). The only time you'd actually have to run through the math is if your priors weren't equal, and you're trying to decide whether the additional data is enough to overcome the difference in the priors, or if you have some consideration other than just choosing between morning or evening (for instance, you might find it more convenient to just email when you first have something to email about, in which case you're choosing between "email in morning", "email in evening" and "email whenever it's convenient to me"). "Statistical significance" is just a shorthand to avoid actually having to do a Bayesian calculation. For instance, suppose we're trying to decide whether a study showing that a drug is effective is statistically significant. If the only two choices were "take the drug" and "don't take the drug", and we were truly indifferent between those two options, the issue of significance wouldn't even matter. We should just take the drug. 
The reason we care about whether the test is significant is because we aren't indifferent to the two choices (we have a bias towards the status quo of not taking the drug, making the drug would cost money, there are pro
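The Beta-Binomial version of this comparison is easy to run directly. Below is a minimal sketch with made-up open counts (45/200 morning vs. 60/200 evening are assumed numbers, not data from the thread), using uniform Beta(1, 1) priors and Monte Carlo sampling from the two posteriors:

```python
import random

# Hypothetical data: opens out of emails sent in each time-of-day bucket.
morning_opens, morning_sent = 45, 200
evening_opens, evening_sent = 60, 200

# Beta-Binomial model with uniform Beta(1, 1) priors on each open rate.
# P(evening rate > morning rate) is estimated by sampling both posteriors.
random.seed(0)
N = 100_000
wins = 0
for _ in range(N):
    p_m = random.betavariate(1 + morning_opens, 1 + morning_sent - morning_opens)
    p_e = random.betavariate(1 + evening_opens, 1 + evening_sent - evening_opens)
    if p_e > p_m:
        wins += 1

prob_evening_better = wins / N
print(f"P(evening > morning) ~ {prob_evening_better:.3f}")
```

With these particular numbers the posterior probability that evening is better comes out around 0.95; the point is that you get a probability to weigh against your priors and costs, rather than the binary verdict a significance test gives you.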

Does anyone know of a way to collaboratively manage a flashcard deck in Anki or Mnemosyne? Barring that, what are my options so far as making it so?

Even if only two people are working on the same deck, the network effects of sharing cards make the card-making process much cheaper. Each can edit the cards made by the other, they can divide the effort between the two of them, and they reap the benefit of insightful cards they might not have made themselves.

1ygert7yYou could use some sort of cloud service: for example, Dropbox. One of the main ideas behind of Dropbox was to have a way for multiple people to easily edit stuff collaboratively. It has a very easy user interface for such things (just keep the deck in a synced folder), and you can do it even without all the technical fiddling you'd need for git.
1Metus7yIf the deck format is some kind of text like XML you could look into using git for distribution and a simple text editor for editing.
0iconreforged7yExactly the right avenue. Unfortunately, Anki typically uses its own idiosyncratic data format that's not very handy for this kind of thing, but it's possible to export and import decks as text, as it turns out. The issue is that if you're months into studying a deck and you'd like to merge edits from other contributors, I'm not certain that you can simultaneously import the edits and keep all of your progress. Even so, the text-deck route has the most promise as far as I can tell.
0ChristianKl7yAnki itself stores its data in SQLite databases. I think there's a good chance that Anki itself will get better over time at collaborative deck editing. I think that's one of the reasons why Damien made integrating with the web interface one of the priorities in Anki 2.
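If the deck does round-trip through Anki's tab-separated plain-text export, a merge step can be scripted. The sketch below keys cards on their front field — an assumption; a real shared deck would want a stable note-ID column instead — and lets the edited file's versions win:

```python
def merge_decks(base_lines, edited_lines):
    """Merge two tab-separated card exports, keyed on the front field.

    Cards present only in `edited_lines` are added; cards whose front
    matches an existing card take the edited back. Keying on the question
    text is an assumption -- renamed fronts will show up as new cards.
    """
    merged = {}
    for line in base_lines + edited_lines:  # later lines win on conflict
        line = line.strip()
        if not line:
            continue
        front, _, back = line.partition("\t")
        merged[front] = back
    return [f"{front}\t{back}" for front, back in merged.items()]


base = ["What is 2+2?\t4", "Capital of France?\tParis"]
edits = ["Capital of France?\tParis, France", "Speed of light?\t~3e8 m/s"]
print(merge_decks(base, edits))
```

The merged list could then be re-imported into Anki; whether scheduling progress survives the re-import is exactly the open question above.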

I found this on Twitter, specifically related to applications for the blind (but the article is more general-purpose): Glasses to simulate polite eye contact

Having read only the article and the previously-mentioned tweet, and no comments and knowing nothing about what it actually looks like, I'm predicting that it falls into the uncanny valley, at best.

3beoShaffer7yI've seen them and believe the correct descriptor is "ridiculous".
0DanielLC7yI was thinking "creepy", but I guess it's that too.

What's the copyright/licensing status of HPMOR?

3ChristianKl7yGiven that it's fanfiction, copyright isn't straightforward. Harry Potter is sort of owned by J.K. Rowling. If you want to do something with HPMOR, send Eliezer an email to ask for permission and he will probably grant it to you.
0gwern7yGood question. I thought http://hpmor.com/info/ [http://hpmor.com/info/] would cover the licensing, but nope. Some googling doesn't turn up any explicit licensing either.
2polymathwannabe7yThat page makes it clear: "All fanfiction involves borrowing the original author’s characters, situations, and world. It is ridiculous to turn around and complain if your own ideas get borrowed in turn. Anyone is welcome to steal anything from any fanfiction I write."
4RichardKennaway7yI think that only speaks to writing fanfiction of Eliezer's fanfiction, not rights over the text itself. By default, the copyright is solely Eliezer's unless and until he says otherwise.
-1DanielLC7yHe only says you're allowed to steal it. Not to use it with permission. If you take it without permission, that's stealing, so you have permission, which means that you didn't steal it, etc.
1ygert7yNo, no, no: He didn't say that you don't have permission if you don't steal it, only that you do have permission if you do. What you said is true: If you take it without permission, that's stealing, so you have permission, which means that you didn't steal it. However, your argument falls apart at the next step, the one you dismissed with a simple "etc." The fact that you didn't steal it in no way invalidates your permission, as stealing => permission, not stealing <=> permission, and thus it is not necessarily the case that ~stealing => ~permission.
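ygert's point can be checked mechanically with a two-variable truth table (a toy sketch, with `steal` and `perm` standing in for the two propositions):

```python
from itertools import product

# "stealing implies permission" (steal -> perm) does not entail
# "not stealing implies no permission" (~steal -> ~perm).
# Enumerate all assignments where the first holds but the second fails.
counterexamples = [
    (steal, perm)
    for steal, perm in product([False, True], repeat=2)
    if ((not steal) or perm)        # steal -> perm holds
    and not (steal or (not perm))   # ~steal -> ~perm fails
]
print(counterexamples)  # the one counterexample: didn't steal, yet has permission
```

The single surviving row is exactly ygert's case: taking it without stealing, with permission intact.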
-2DanielLC7yThe exception proves the rule. [https://en.wikipedia.org/wiki/Exception_that_proves_the_rule] Since he gave permission to steal it, that implies that you don't have permission to take it in general.

I was wondering if there are any services out there that will tie charitable donations to my spending on a certain class of good, or with a certain credit card. E.g. Every time I buy a phone app or spend on in-app purchases, a matching amount of money goes to a particular charity.

2drethelin7yThere are a lot of credit cards that will give a fixed percentage of money to charity whenever you use them, but I don't think any will go up to the amounts I bet you want.

Hi, I've been intermittently lurking here since I started reading HPMOR. So now I joined and the first thing I wanted to bring up is this paper which I read about the possibility that we are living in a simulation. The abstract:

"This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation." ... (read more)

[This comment is no longer endorsed by its author]Reply
[-][anonymous]7y 0

I don't have enough karma to create my own post, so I'm cross posting this from a gist

Pascal's Wager and Pascal's Mugging as Fixed Points of the Anthropic Principle

Skepticism Meets Belief

Pascal's Wager and Pascal's Mugging are two thought experiments that explore what happens when rational skepticism meets belief. As skepticism and belief move towards each other, they approach a limit such that it's impossible to cross from one to the other without some outside help.

Pascal's Wager takes the point of view of a rational being attempting to make a decision ... (read more)

[This comment is no longer endorsed by its author]Reply
[-][anonymous]7y 0

I was brought up a pretty devout Catholic, but I stopped going to church and declared myself an atheist to my family before I got out of high school. I have always been pretty proud of myself for having the intelligence and courage to do this. But today I realized that I follow a thirty-something bearded Jewish guy who, along with a small group of disciples, has performed seemingly impossible deeds, preaches in parables, plans to rise from the dead and bring as many of us as he can with him, defeat evil, and create a paradise where we can all live happily ... (read more)

[This comment is no longer endorsed by its author]Reply

Gaia

[This comment is no longer endorsed by its author]Reply
5Kawoomba7yBit of a smorgasbord of a post (or a gish-gallop, if I'm not mincing words). Sorry to say, but much of your reasoning is opaque to me. Possibly because I misunderstand. Infinite priors? Anthropic reasoning applied to 'higher beings', because we empathize with such a higher being's cogito? You lost me there. I'd say that the possibility of a non-expected FOOM process would be a counterexample, but then again, I have no idea whether you'd qualify a superintelligence of the uFAI variety as a 'higher being'. Didn't see that coming. It may be that you've put a large amount of effort into coming to the conclusions you have, but you really need to put some amount of effort into bridging those inferential gaps.
-1Benvie7yGaia+VR
0ThisSpaceAvailable7yIf you're going to make up new meanings for words, you should at least organize the definitions to be consistent with dependencies: dependent definitions after words they are dependent on, and related definitions as close to each other as possible. In your list, there are numerous words that are defined in terms of words whose definitions appear afterwards. Among other problems, this allows for the possibility of circular definitions. Also, many of the definitions don't make sense. e.g. "An algorithm that guides reproduction over a population of networks toward a given criteria. This is measured as an error rate." Syntactically, "this" would refer to "criteria", which doesn't make sense. If it doesn't refer to criteria, then it's not clear what it does refer to.
0drethelin7yI think your post is a bit rambling and incoherent but I very much support your style of making long comments in the fashion of posts with BOLD section headings etc.

Evil Stupid Thing Alert!

"The Duty to Lie to Stupid Voters" - yes, really

I decided to post it here because it's just so incredibly stupid and naively evil, but also because it's using LW-ish language in a piece on how to - in essence - thoroughly corrupt the libertarian cause. Thought y'all would enjoy it.

Standard rejoinders. Furthermore: even if Brennan is ignorant of the classical liberal value of republicanism, why can't he use his own libertarian philosophy to unfuck himself? How is lying like this ethical under it? Why does he discuss the ben... (read more)

9Username7yI am down voting this because: a) I don't want to see people pushing politics on LW in any form. b) It is entirely nonobvious to me that this is either evil or stupid.
2Lumifer7yConsider two concepts: "credibility" and "multiple rounds". That's what makes it stupid. Consider another idea: "I don't care about multiple rounds because after a single win I can do enough". That's what makes it evil.
6mwengler7yWell I am apparently too stupid to understand why the quoted article is stupid or evil, not to mention incredibly stupid or naively evil. In any consequentialist theory, combined with some knowledge of how the actual world we live in functions, I don't see how you can escape the conclusion that a politician running for office has a right to lie to voters. An essential conclusion from observing reality is that politicians lie to voters. Upon examination, it is hard NOT to conclude that politicians who don't lie enough don't get elected. If we are consequentialist, then either 1) elected politicians do create consequences, and so a politician who will create good consequences had best lie "the right amount" to get elected, or 2) elected politicians do not create consequences, in which case it is consequentially neutral whether a politician lies, and therefore morally neutral. If you prefer a non-consequentialist or even anti-consequentialist moral system, then bully for you: within your system it is wrong for politicians to lie to voters, but that conclusion is inconsequential, except perhaps for a very small number of people, presumably the politician whose soul is saved or whose virtue is kept intact by his Pyrrhic act of telling the truth.
6Alejandro17yA lot of the superficial evilness and stupidity is softened by the follow-up post, where in reply to the objection that politicians uniformly following this principle would result in a much worse situation, he says: So maybe he just meant that in some situations the "objectively right" action is to lie to voters, without actually recommending that politicians go out and do it (just as most utilitarians would not recommend that people try to always act like strict naive utilitarians).
0Lumifer7yI'm confused. So would he recommend that the politicians do the "objectively wrong" thing? All of that looks a lot like incoherence, unwillingness to accept the implications of stated beliefs, and general handwaving. So the problem is that the politicians can't lie well enough?? X-D
4Alejandro17yNo, that's not what he means. Quoting from the post [http://bleedingheartlibertarians.com/2014/04/theyll-mess-it-up-not-an-objection-to-a-moral-theory/] (which I apologize for not linking to before): So, to recap. Brennan says "lying to voters is the right thing when good results from it". His critics say, very reasonably, that since politicians and humans in general are biased in their own favor in manifold ways, every politician would surely think that good would result from their lies, so if everyone followed his advice everyone would lie all the time, with disastrous consequences. Brennan replies that this doesn't mean that "lying is right when good results from it" is false; it just means that due to human fallibilities a better general outcome would be achieved if people didn't try to do the right thing in this situation but followed the simpler rule of never lying. My interpretation is that therefore in the post Multiheaded linked to Brennan was not, despite appearances, making a case that actually existing politicians should actually go ahead and lie, but rather making an ivory-tower philosophical point that sometimes them lying would be "the right thing to do" in the abstract sense.
1Lumifer7ySo, is there any insight here other than restating the standard consequentialist position that "doing X is right when it leads to good outcomes"? Especially given how Brennan backpedals into deontological ethics once we start talking about the real world?
2Viliam_Bur7yFor a wrong outcome B, you can usually imagine an even worse outcome C. In a situation with perfect information, it is better to choose a right outcome A instead of a wrong outcome B. But in a situation with imperfect information, choosing B may be preferable to having A with some small probability p, and C with probability 1-p. The lesson about the ethical injunctions seems to me that we should be aware that in some political contexts the value of p is extremely low, and yet, because of obvious evolutionary pressures, we have a bias to believe that p is actually very large. Therefore we should recognize such situations with a seemingly large p (because that's how it feels from inside), realize the bias, and apply a sufficiently strong correction, which usually means to stop.
1Viliam_Bur7yActually... yes. More precisely, I would expect politicians to be good at lying for the goal of getting more personal power, because that's what evolution has optimized humans for; and the politicians are here the experts among humans. But I expect all humans, including politicians, to fail at maximizing utility when it is defined otherwise.
6Lumifer7yConsequentialism has no problems with lying at all.
2Multiheaded7yMany internet libertarians aren't very consequentialist, though. And really, just the basic application of rule-utilitarianism would expose many, many problems with that post. But really, though: while the "Non-Aggression Principle" appears just laughably unworkable to me... given that many libertarians do subscribe to it, is lying to voters not an act of aggression?
-2Lumifer7yDepends on your point of view, of course, but I don't think the bleeding-heart libertarians (aka liberaltarians) are actually libertarians. In any case, it's likely that the guy didn't spend too much time thinking it through. But so what? You know the appropriate xkcd cartoon, I assume...
2ChristianKl7yGiven that the guy is a professional philosopher, I doubt ignorance is a good explanation. It's probably a case of someone wanting to be too contrarian for his own good. Or at least the good of his cause. Given that he wrote a book to argue that most people shouldn't vote, he might simply be trolling for academic controversy to get recognition and citations.