Previous open thread


If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

174 comments

Would anyone be interested in forming an R discussion/study/support group?

I have quite modest R skills, but I would like spectacular R skills that are the toast of the town and the envy of all who see them. I suspect I'm not the only person on LW with this desire, so I thought I'd sound out interest in a group to help mutually achieve this.

What I see such a group doing:

  • Sharing interesting or instructional datasets
  • Suggesting interesting projects
  • Showing off awesome stuff you've done
  • Sharing and discussing relevant media, resources and online content
  • General coordination and collaboration

Anyone interested?

I am an active R contributor on GitHub and Stack Overflow, and I would be willing to coordinate. Send me an email: rkrzyz at gmail

I'd be interested. More in awesome stuff than R itself. I'm currently at #22 out of 99 in a Kaggle contest and am doing it in R, but don't really know what I'm doing. I do find that participating there is not a bad way to practice.
Peter Wildeford:
Congrats! Which competition?
The Schizophrenia Classification Challenge. I haven't done anything difficult, which is the biggest surprise; when I read the description I doubted I'd even be able to produce anything useful.
Peter Wildeford:
I'd be interested.
Peter Wildeford:
You can see some of the R stuff I do here.
I would be interested
I'd be interested in interesting R projects.

I've been ignoring Open Threads throughout my time on LW, but I've found out recently that this was to my detriment. While there is much noise (i.e., stuff I personally don't care about), there are some genuinely interesting things here.

At the same time, I feel like Discussion for me has just died out and no longer has anything interesting, apart from the Open Threads.

The problem (for me) is that Discussion was very easy to follow, while Open Threads are very hard to follow.

Is there an easier way to follow Open Threads? And/or a way we could start moving some of the Open Thread stuff back to Discussion?

As MathiasZaman said, there is a big push right now to move OT content back into Discussion, and from the increased volume I'm seeing in Discussion lately, I think it's succeeding.
There has recently been a push towards having more discussion in Discussion as opposed to the Open Thread, so your problem might already be solved.
There is an RSS feed, I don't know if you consider that easier.
If you find something excellent in OT, you could mention it in a Good Stuff from OT post.
A Best of OT post, much like the Best of Rationality Quotes post, sounds like a wonderful idea.

I phrased it as Good Stuff from OT rather than Best of OT because good stuff (possibly as a more dignified phrase) is easier to identify than best.

Peter Wildeford:
Do these kinds of posts exist?
Not yet.
Unless I want to post something myself, I just read open threads when they are already closed. Very convenient.
I think Discussion would be more useful if the threads were sorted by something like reddit's 'hot' algorithm, rather than being displayed in order of posting, which causes good threads to be buried by duller new ones. Open threads are sort of like this, as they are essentially "top posts this week".
Peter Wildeford:
There's a sort by in the upper right corner above comments, but below the post.
Yes, but the default setting is what determines most people's behavior. Trivial inconveniences, etc.
The default is "Best," not "New." It appears to me to be dominated by old threads. "Hot" has probably been renamed "Leading" and could be made default, but I doubt you are actually seeing the default.

Are utilitarians theoretically obligated to prefer that Brazil win the world cup? Consider: of the 32 participating countries, only the USA has a larger population, but the central place of soccer in Brazilian culture, and their status as hosts mean that they have more at stake in this competition. So total utility would probably be maximized by a Brazil win.

These considerations would seem to make rooting for any other team immoral from a strict utilitarian perspective. This exposes some things I find problematic about utilitarianism. For example, I also have the intuition that it is okay for people to support their own team, even if that team's victory would make hundreds of millions of Brazilians unhappy. If you are a utilitarian player playing against Brazil, are you doing something morally wrong by trying to win? This seems absurd, but I can't see how to escape this conclusion.

These considerations would seem to make rooting for any other team immoral from a strict utilitarian perspective.

Only if rooting for a team makes it more likely for it to win. ;-)

I dislike football, and I realized a few days ago that a win for the US would bring me the most utility: the largest number of soccer fans will [probably] be upset this way, which should decrease the global popularity of the sport slightly. At any rate, if you are purely maximizing global utility, then the conclusion looks sound.
I don't really follow soccer. Why would a US win upset people?
A win for the US would probably increase the interest in soccer in the US, which I think would be a net loss by your standards.
I live in the UK, so I don't care much if soccer interest increases slightly in the States, since that's sort of like caring about poverty in Palo Alto, while living in a hut in rural Africa. At any rate, I am fairly sure that most countries which care about football will be very upset if the US of all countries wins it.
It'd also be a loss to the US, I think. I understand that the US team is, and has always been, not considered that great? If so, then a win by the US team implies that it's an extreme event for the US team; this will attract new fans due to the publicity and novelty of the US team winning; but then, because it was an extreme event, the US team will regress to its mean and consistently lose for the indefinite future, making all its old and new fans unhappy and more than wiping out any gains from having briefly made its old fans happy until the new fans finally attrit.
Sometimes people are loyal to teams that keep losing.
Yes, that's the problem.
I think I know how to escape this problem, partially. The same method works for courts of justice. It might be pragmatic to give sentences to people to make examples of them, so new criminals wouldn't commit those crimes. Exaggerated sentences. Every time a court dished out a verdict, it would double the numbers for jail time; surely there would be fewer new criminals if this pattern were maintained. I think it is the approach the US Government has tragically taken on whistleblowers too, by the way. Though this logic works in the single event, it's bad policy. What is lost on the general level, in the bigger picture, is fair trials. People lose their ability to believe in a fair court. They can't trust the law, they can't trust society. The football World Cup is similar. You're giving up the fun of football competitions on the large scale so that you preserve some fun on the smaller scale. It's a Pyrrhic victory. Do you think this reasoning is sufficient to deal with the dilemma?
So you are saying that the competition wouldn't be any fun if everyone believed that one particular team winning was the only acceptable outcome; it would defeat the purpose of the competition (fun) and devalue it to the point that there would no longer be any difference in utility anyway. That's basically the categorical imperative (if everyone broke their promises, there would be no such thing as promising, so the whole concept breaks down and the rule makes no sense). Is that what you are getting at? The problem is that not everyone does believe that Brazil should win. So I don't think we have a good solution for an individual utilitarian reasoner in a world in which most people do not think the same way.
Now that you mention it, I too think that it is an instance of the categorical imperative. However, I think the categorical imperative is an analytical tool primarily for individuals, for comparing policies based on different kinds of individual behavior. And yes, essentially I am saying what you wrote in your previous comment, but perhaps I'm concentrating on the qualities that could be seen as being of utility on the general level, like having honest tournaments and competitions. And I tried to link them to an example of something that would be of utility on the individual level, like fun. This works better with the courts of law, since an environment which has fair trials is easier to perceive as meaningful. However, in the hypothetical situation where somebody places no particular value on honest tournaments and fair competition, and has no particular moral issue with letting their team down and losing on purpose so that the greater good can happen and lots of people can be happy, then perhaps there is a harder dilemma remaining, in which case it really needs to be weighed what is more important. Morality isn't necessarily easy; sometimes decisions are difficult, which is not necessarily to say that your methods of processing the dilemmas are insufficient, though that can also be the case. People having differing views on matters is likely to produce situations where the ideal outcome is hard to find. If that's the case, perhaps it can also function as an additional reason to appreciate general goods like honest competitions.
Surely the greatest joy will come to a team that hasn't won for a long time.
Brazilians gain utility from fair victory, not a win at all costs. Not trying to win would increase the chance that Brazil wins at the cost of reducing the fairness of the victory. Of course, that doesn't apply to just rooting for the other team.

I'm trying to track down a fallacy or effect that was once explained to me and which I found plausible: The idea that whoever has the more complex and detailed mental model of a topic under question wins a discussion about a question - independent of the actual truth of the matter (and assuming no malicious intent).

The example cited, as I remember it, was about visual (microscope) inspection of blood samples for some boolean factor (present or not). Two persons got the same samples and were trained to recognize the factor; one was always told the truth and t…

Anecdotally, I have observed this. Whoever is more invested in an argument that's at a more-or-less casual level (say, on the internet) can muster more facts and win. Even when the evidence is radically one-sided, it doesn't matter if the person on the right side of the facts doesn't know the evidence well. Any one of us could lose an argument with people over whether the Earth is flat, if the difference in preparedness was great enough.
Perhaps something like the representativeness heuristic? While more details make something sound more believable, each detail is another thing that could be incorrect.
Looks to be a subtype of the general observation that whoever can establish her authority in an argument wins.
Yes. Call it authority or dominance or whatever. In all cases where significant loss can be avoided by backing down early, this again is exploitable by e.g. boasting, aggression, rhetoric, intimidation. The interesting sub-case here is that this can have side-effects even where it is not actively exploited but happens accidentally, as the net effect is that the team reaches a sub-optimal joint result. It's kind of a cognitive bias, more like over-confidence, resulting from a failure to communicate confidence levels.
I don't know -- in more general terms Alice spent more resources (time, effort) at analyzing the problem and so feels more qualified than Bob who spent less resources. In this particular artificial setup this leads to suboptimal results, but I suspect that in most real-life situations, Alice would have better opinions/solutions/forecasts than Bob and so should have an advantage in a disagreement.
So I find that there's one place this frequently comes up detrimentally in real life: The advocate of something invariably has spent more time studying it than the opponent. This creates a (to my mind) unhealthy bias in some situations in the advocate's favor.
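The representativeness point above (each detail is another thing that could be incorrect) is just the conjunction rule: P(A and B) ≤ P(A). A quick sketch, with invented per-detail probabilities:

```python
def story_probability(detail_probs):
    """Probability that every detail in a story is correct, assuming
    the details are independent (conjunction rule: each factor <= 1)."""
    p = 1.0
    for q in detail_probs:
        p *= q
    return p

# A vivid five-detail story, each detail 90% plausible on its own:
print(round(story_probability([0.9] * 5), 5))  # 0.59049
```

More detail makes a story feel more representative while making it strictly less probable, which is exactly the tension the comment points at.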

Oh my dear sweet God YES, Goodman and Tenenbaum wrote a book on probabilistic models of cognition. With a programming language and exercises for writing and running the models.

[squee intensifies]

Nice find! Thanks for this.

We often see people offering rewards for compelling arguments for changing their mind. Examples would be Sam Harris, for a counterargument to his book; Jonathon Moseley, for showing that separation of church and state is found in the First Amendment; and perhaps James Randi, for showing the existence of supernatural abilities, could be included. Of course, this sort of reward scheme creates a large incentive to not change your mind. Some of these are clearly publicity stunts, but if I sincerely wanted good evidence against my position, what would be the…

I guess you should give the same reward to the most convincing argument, regardless of whether it really convinced you or not. It motivates the other people to do their best, and does not influence you in making the decision. I don't like the idea of overcompensating for biases. I understand the reason behind it, but I am afraid that this approach creates its own specific problems. For example, how much should you overcompensate? I mean, if overcompensating is good, then the more you overcompensate, the more virtuous you are...

Does anyone have advice for effective learning in distracting/suboptimal environments? I know LW recommends textbooks and learning by accumulation instead of random walks, but I have at most 1-2 hours of uninterrupted time per day I can spend learning optimally vs. 8+ hours per day I could potentially use to learn sub-optimally (e.g. frequent distractions, sudden interruptions, hours between learning sessions) during downtime at work that is currently going to waste. Are there better formats than textbooks for these environments or would it be more effecti…

Make notes. Otherwise you risk spending a large part of the 1 hour repeating the stuff you learned during the 1 hour yesterday. If you have little time for learning, only learn one thing, not two in parallel, because that would make it even worse.
Anki works pretty well in short sessions, and distractions don't cause many problems, though you won't get 8 hours out of it.
But you still presumably need uninterrupted study time to write the cards?
Probably; depends how you make your cards (I often make a lot of notes in small notebooks, and then ankify the ankifiable bits, crossing them out as I go along; interruptions aren't that much of a problem there, but they are if you're taking notes while studying something a bit difficult).
Reading textbooks in little pieces might work anyway; it does for me. I can read textbooks nearly as well during a 20- or 30-minute train ride as I can in more stable situations. (Obviously I read less during a half-hour ride than in an hour spent reading at home, but my rate & retention seem similar in both situations.) There are textbooks I can't just read straight through on a bus, like maths & physics textbooks, but I can't just read straight through them anywhere else, either, because I need a pen & paper for the exercises. But a textbook I can read at home, like an oncology or sociology textbook, is usually a textbook I can read almost anywhere else. If you've tried this already and found it not to work, ignore this comment! But if you haven't tried it, it's worth a go.
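Anki's actual scheduler is based on the SM-2 algorithm; as a minimal sketch of the underlying spaced-repetition idea (not Anki's real scheduling), a Leitner-style box system looks like this, with invented intervals:

```python
# Leitner-style spaced repetition: a correct answer promotes a card to a
# box with a longer review interval; a miss demotes it back to the first
# box.  The intervals are invented for illustration.
INTERVALS_DAYS = [1, 2, 4, 8, 16]

def review(card, correct):
    """Update the card's box and return days until its next review."""
    if correct:
        card["box"] = min(card["box"] + 1, len(INTERVALS_DAYS) - 1)
    else:
        card["box"] = 0
    return INTERVALS_DAYS[card["box"]]

card = {"box": 0}
print(review(card, True))   # 2
print(review(card, True))   # 4
print(review(card, False))  # 1  (missed, back to daily review)
```

The point of the growing intervals is that each review lands just as the memory is about to fade, which is why short, interruptible sessions are enough.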

When I was in California I noticed that Benja Fallenstein seemed to have a much better thought out way of using TagTime than I did. I asked him for more details by email, and he gave me permission to share the below with all of you:

Most importantly, I make sure that all my tags fit on a single screen on my phone, and that most of my pings need to get only two of these tags: (a) a category, and (b) a Likert scale rating from 1 to 7. (1 = very bad; 2 = bad; 3 = neutral to bad; 4 = neutral; 5 = neutral to good; 6 = good; 7 = very good)

I have TagTime linked…
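The category-plus-Likert scheme above makes the pings easy to aggregate. A sketch with invented ping data standing in for a real TagTime log:

```python
from collections import defaultdict

# Invented pings in (category, likert_rating) form; a real analysis
# would parse these out of TagTime's log file.
pings = [
    ("work", 5), ("work", 3), ("leisure", 7),
    ("work", 6), ("leisure", 4),
]

totals = defaultdict(lambda: [0, 0])  # category -> [rating sum, ping count]
for category, rating in pings:
    totals[category][0] += rating
    totals[category][1] += 1

for category, (s, n) in sorted(totals.items()):
    print(f"{category}: mean rating {s / n:.2f} over {n} pings")
```

Since TagTime pings arrive at random times, the ping counts themselves are an unbiased estimate of time spent per category, and the mean rating estimates how that time felt.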

Regarding the LW meetup feedback results, I said:

I'll be writing up an analysis of results, but that takes time.

That was a month ago. Since then, I've spent about two pomodoros on it, and didn't get much actually written during those. I have three or four other things that I want to spend pomodoros on, and this has fallen by the wayside.

I want this analysis to get written, but there's no particular reason that it needs to be me who writes it. So if someone else would volunteer to write it, I'd be very grateful. The sanitized results are here. I'm not g…

This isn't particularly deep analysis, more just aggregation. Here's my take on the results, though, after some totally biased and ad-hoc tallying:

A total of seventy-five users responded. Convenience, scheduling conflicts, and other personal issues were by far the most common reasons not to attend, as a factor in almost half of the responses. Unfortunately there's not much we can do about this, except possibly giving more thought to location when scheduling, and that seems unlikely to happen given the issues I've seen with finding space and time. Two people felt uncomfortable with an otherwise convenient meetup's location, with a third having no personal complaints but describing complaints from others.

After that, a perception of the participants as too nerdy, weird, or socially awkward seems to be the most common complaint, with ten people citing one or more. A couple of these respondents attended no meetups and were presumably working from perceptions of the LW community at large, but most had. This seems to be a pivotal issue with our community's perception, but I'm not sure what to do about it. I imagine many feel it's a feature rather than a bug.

A lack of structure is a…

Thanks for this summary! This is a very important thing for the growth of the community.

I was thinking about whether being "too nerdy, weird, or socially awkward" is a bug or a feature, but it seems to me that we need to be more specific and look into details. Some things in our community are inherently weird (unusual in everyday discourse); debating artificial intelligence, for example. But some forms of social awkwardness (harassment, boredom, unproductive debates) can -- and should -- be fixed; I mean, not just for PR purposes, but because that also is a part of "becoming stronger". Let's see how far towards pleasant interaction we can go without sacrificing other values (such as honesty). I guess we can -- and should -- improve here a lot.

Maybe it's an issue of going meta at solving the wrong problem. If I want to have a group of people who talk about artificial intelligence, I must focus not only on the "artificial intelligence" part, but also on the "having a group of people" part. This is probably our blind spot, because the former feels like an academic subject, while the latter feels almost like an opposite to the academia (so w…

I rather suspect -- and this is me talking, not my interpretation of the survey data -- that this already concedes too much. I've talked to LWers who appeared to be hung up on honesty to the point of kneecapping themselves socially: not just preferring a more explicit interaction style, but outright refusing to deal with people who partake in perfectly normal social untruths. These sorts of extremes don't seem to be common, but insofar as they're a problem in some segments of the community, they're not going to be solved without at least a few concessions against existing values. Properly exploring this would probably take a top-level post, but I think I can summarize by saying I agree with ChrisHallquist here.
Thank you! Honestly, this is pretty much exactly what I was hoping for - if you were to post this to main, I would consider my duty fulfilled.
Thanks. It lacks polish right now, but I'll see if I can pretty it up a bit and post it to Main later.

Sam Harris recently responded to the winning essay of the "moral landscape challenge".

I thought it was a bit odd that the essay wasn't focused on the claimed definition of morality being vacuous. "Increasing the well-being of conscious creatures" is the sort of answer you get when you cheat at rationalist taboo. The problem has been moved into the word "well-being", not solved in any useful way. In practical terms it's equivalent to saying non-conscious things don't count and then stopping.

It's a bit hard to explain this to pe…

"Well-being" is a know-it-when-we-see-it sort of thing. Sure it's vague, but I don't begrudge its use. Let's break down the phrase you just objected to (I have not read SH's book, if that matters): "Increasing the well-being" roughly correlates with increasing utility, diminishing suffering, increasing freedom, increasing mindfulness, etc. Good things! And if defining it further gets into hairsplitting over competing utilitarianisms, then you might as well avoid that route. "Of all conscious creatures" - well, you obviously can't do anything immoral to a rock. Maybe you kick a rock and upset the nest of another creature, but you haven't hurt the rock. But you can do immoral things to conscious creatures, which can be argued to be a pretty broad category; certainly broader than just humans. So I think this is as concrete as many one-sentence summaries of morality.
Peter Wildeford:
But just how much value does "increase the well-being of conscious creatures" provide over just "do the …"?

Quantum Mechanics In Your Face, a video debate between MWI, Collapse, Bohmian and QBism proponents, for those interested in this murky issue.

Is there a formal cognitive bias along the lines of "reversed stupidity is not intelligence"?

Closely related is the "horns effect," named by analogy to the "halo effect."
Adam Zerner:
Sort of. From what I understand, the halo/horns effects are about how favorable/unfavorable characteristics of the person influence beliefs you have about the person. Like how smart or skilled they are. Not about whether claims unrelated to the person are true.

I have invented a wormhole with ends separated by ten seconds in time. Unfortunately the power requirements scale exponentially with size, so it's not practical for anything larger than photons, but it does mean I can send information back in time. How would you exploit this?

Pre-empt other people's jokes.


High frequency stock trading.

Peter Wildeford:
See Primer (2004 film).
What happens if multiple agents have this ability? Does the impact of future knowledge cancel out, or do we get some sort of weird hyper-fast feedback loops?

Can you chain these wormholes and send information 10 + 10 + 10 + ... seconds back in time?

Attempt Harry's trick to solve NP problems.

Have a program use its own output as input, effectively letting you run programs for infinite amounts of time, which, depending on how time travel is resolved, may or may not give you a halting oracle.

Also, you can now brute-force most of mathematics:

One way to do this is using first-order logic, which is expressive enough to state most problems. First-order logic is semi-decidable, which means that there are algorithms which will eventually return a proof for every provable statement. Since your computer will take at most ten seconds to do this, you will have a proof after ten seconds, or know that the statement was not provable if your computer remains silent.
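Assuming the ten-second wormhole from the top comment, the control flow of that decision procedure looks something like this. `prover` below is a toy stand-in (a complete first-order prover halts exactly on the provable statements), and without real time travel we can only wait the window out rather than learn the outcome in advance:

```python
import queue
import threading
import time

def prover(statement, out):
    # Toy stand-in for a complete first-order prover: it "finds a proof"
    # instantly for statements containing the word "true", and searches
    # forever otherwise.
    if "true" in statement:
        out.put("proof found")
    else:
        while True:
            time.sleep(1)  # a real prover would keep enumerating proofs

def decide(statement, window_seconds=1.0):
    """Semi-decision procedure plus a hard time window: a proof within
    the window means provable; silence means not provable."""
    out = queue.Queue()
    threading.Thread(target=prover, args=(statement, out), daemon=True).start()
    try:
        out.get(timeout=window_seconds)
        return True
    except queue.Empty:
        return False

print(decide("an obviously true statement"))  # True
print(decide("some other statement"))         # False
```

The wormhole's role is purely to replace "wait the full window" with "receive the answer before you start"; the semi-decision procedure itself is unchanged.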

To expand on this: Moravec's classic "Time Travel and Computing".
What practical benefits or effects on the world do I get out of my new infinite computing power and mathematical proofs? Presumably i can now decrypt all non-quantum encryption, and do various high cost simulations very fast.
It helps with simulation of quantum mechanics, but I don't think that it helps with most classical simulations. As Eugine mentions, there is a concrete way to use time travel to solve NP problems, those where you can recognize the answer if you have it. In fact, it is possible, under one formalization, to use it to solve a class of problems called PSPACE, which just means problems that you could solve with unlimited time but limited memory; the obvious guess when NoSuchPlace says "infinite time." But look up the method Eugine mentioned - it isn't obvious how to extend it. I don't know any applications of PSPACE problems, because they are impractical, but NP problems come up all the time and there is a big industry of solving examples on the edge of practicality. People often do this by converting them to SAT, the universal NP problem, and then applying "SAT solvers"; so googling something like "sat solver applications" gives various suggestions, such as microchip design. Of course, if you really could solve SAT problems, you'd use much larger examples that people don't even bother with today. And if you could really solve PSPACE problems, you'd try even more exotic things.
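SAT, the universal NP problem mentioned above, is easy to state in code. A brute-force solver, exponential in the number of variables, which is exactly the cost the hypothetical time loop would let you skip:

```python
from itertools import product

def brute_force_sat(n_vars, clauses):
    """clauses is a list of lists of DIMACS-style literals: k > 0 means
    variable k is true, k < 0 means variable |k| is false.  Returns a
    satisfying assignment as a dict, or None.  Tries all 2**n_vars
    assignments."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3):
print(brute_force_sat(3, [[1, 2], [-1, 2], [-2, 3]]))
```

Real SAT solvers replace the exhaustive loop with clever search and learning, but the answer-checking step, verifying an assignment against the clauses, is the cheap part in both cases, which is what makes the problem NP.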
It won't give you a halting oracle without an infinite computer. The best it can do is effectively give you 2^n computing time, where n is the number of bits in memory.
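The 2^n figure is the pigeonhole principle at work: a deterministic machine whose whole state fits in n bits has only 2^n distinct states, so running longer than that means some state has repeated and the machine loops forever. A sketch:

```python
def runs_forever(step, initial_state, n_bits):
    """Decide halting for a deterministic machine with n_bits of memory:
    if it hasn't halted within 2**n_bits steps, it must have revisited
    a state (pigeonhole) and will therefore loop forever."""
    state = initial_state
    for _ in range(2 ** n_bits):
        state = step(state)
        if state is None:  # convention: None means the machine halted
            return False
    return True

# A 4-bit counter that halts when it wraps around to zero:
def count_up(s):
    s = (s + 1) % 16
    return None if s == 0 else s

print(runs_forever(count_up, 1, 4))     # False: halts after 15 steps
print(runs_forever(lambda s: s, 1, 4))  # True: stuck at a fixed point
```

This is why the time loop gives "effectively 2^n computing time" rather than a true halting oracle: the bound only covers machines whose memory is finite and known.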
X-D Someone should tell the mathematicians they are all obsolete now.
Given that you obviously broke both General Relativity and Quantum Field Theory (see Hawking's Chronology Protection Conjecture) on a macroscopic scale, I recommend using an array of these as a source of free unlimited energy. Please disregard the small side effect of vacuum decay leading to a Universe-destruction bubble expanding at the speed of light.
Would you mind elaborating? The Wikipedia article on the CPC seems to indicate that our best approximations to quantum gravity basically throw up their hands, and I've never found Hawking's original CPC to be anything more than, well, conjecture.
General relativity without quantum stuff admits closed timelike curves, but does not allow exploiting them due to the uniqueness of the metric. Quantum field theory on a CTC background very likely diverges in the way Hawking described. Actual quantum gravity might offer some hope, but in the weak field limit it is likely to match existing models, so the wormhole in question is very unlikely to be in this regime.
Attempts to form self-perpetuating reactions have all spontaneously failed. Unsure why, as equipment appears unaltered; suspect some sort of anthropic force at work.
Set up a website where people can send messages to themselves in the past in multiples of ten seconds, for a cost. Program it to automatically increase the cost as you start running out of bandwidth. Let other people figure out what to do with it.

There are a few experiments that you should try to see what you could do. For example, it seems like a good idea to have it send a message about a car accident far enough back to prevent it. But if you get the message that your car will crash, you'd have to not drive and send the message to prevent a time paradox, which means that you might get the message even if the car didn't crash. You could experiment by using things like coin flips to determine car crashes. My guess is that if you send a very specific signal in the case of something bad, then you're very unlikely to get that signal unless it would happen. Otherwise, every signal would be self-consistent, and thus equally likely.
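The closing guess about self-consistent signals can be made concrete with rejection sampling: generate candidate histories and keep only those where the message received equals the message sent. A toy model; it assumes nature samples uniformly over consistent histories, which is itself just one guess about how the physics would resolve:

```python
import random

def consistent_histories(trials=100_000, seed=0):
    """Toy Novikov-style model: a coin flip would cause a crash, a warning
    is sent back iff the crash happens, and a received warning makes the
    driver stay home (no crash).  Keep only histories where the message
    received matches the message sent."""
    rng = random.Random(seed)
    kept = []
    for _ in range(trials):
        received = rng.random() < 0.5          # candidate incoming message
        crash = (rng.random() < 0.3) and not received
        sent = crash                           # protocol: warn iff crash
        if sent == received:                   # self-consistency check
            kept.append((received, crash))
    return kept

kept = consistent_histories()
print(len(kept), "consistent histories, crashes:", sum(c for _, c in kept))
# Under this protocol the only consistent histories are "no warning, no
# crash": the time loop suppresses the crash outright.
```

Whether real closed timelike curves would behave like uniform sampling over consistent histories is exactly the open question; the model only shows why a "warn iff crash" protocol can never deliver a warning.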

The NYT provides a nifty animated graphic visualizing the large sampling error associated with the monthly jobs report (second screen). What's nice about it is that you can watch the various scenarios' random draws and see how easily you fall into pattern-recognition mode, despite knowing it's a simulation.

Not being seduced by good-looking point estimates is one of those things you'd hope everyone would learn as part of basic statistical literacy, but seems to be very ra…
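The simulation behind that graphic is easy to reproduce: hold the true monthly gain fixed, add survey sampling noise, and watch apparent trends emerge. The numbers here are illustrative, not the official BLS figures:

```python
import random

def simulated_jobs_reports(true_monthly_gain=150_000, sampling_sd=55_000,
                           months=12, seed=0):
    """Monthly job-gain estimates: a constant true gain plus Gaussian
    survey noise (noise level invented for illustration)."""
    rng = random.Random(seed)
    return [round(rng.gauss(true_monthly_gain, sampling_sd))
            for _ in range(months)]

# Every underlying month is identical (+150k), yet the series below will
# show "slowdowns" and "accelerations" that are pure sampling noise.
print(simulated_jobs_reports())
```

Running this a few times with different seeds is a cheap way to calibrate your own pattern-recognition reflex before reading the next real report.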

Link: Steve Yegge on why people who spend literally all day every day in front of a keyboard need to learn to type (2008). Much like many others, but still true. Also includes answers to "I am a special snowflake" objections to people who could learn but just don't.

If you already know how to touch type, how do you know if you should train some more so you can type faster?
The author types at 120 wpm, so that's probably what he recommends. He doesn't say how much work it took him to get there from 60, but it was a conscious decision and a special effort. On a later occasion, he learned the numbers. Although he clearly advocates such speed, he doesn't really argue for it. What motivated him to speed up was text chat. What struck me as a good argument is that lots of people say that they are limited by what they have to say, not typing speed, but being able to type more gives the option of writing different things, such as more back-and-forth on message boards. If that's actually useful. But I didn't take his advice and still only type at 60wpm.
Typing on IRC in full sentences got me from 60wpm to 90wpm. On this test, I got 74wpm (0 errors) on my crappy laptop keyboard, 72wpm (0 errors) on my work PC Microsoft Natural just now.
I tried this test with the following results:

  • 115 WPM, 606 keystrokes, 573 correct, 33 mistakes
  • 119 WPM, 650 keystrokes, 593 correct, 57 mistakes
  • 102 WPM, 566 keystrokes, 511 correct, 55 mistakes

Lots of mistakes :D Could probably get somewhat better results if I attempted this all day. I never type as fast as in that test I just did; is it the same for you too? I feel there's no reason to do that, to get stressed out and concentrate on typing.
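For reference, these typing tests conventionally count five keystrokes per word, and the figures quoted above match correct keystrokes divided by five over the one-minute run:

```python
def wpm(correct_keystrokes, minutes=1.0):
    """Net words per minute under the standard five-keystrokes-per-word
    convention, counting only correctly typed keystrokes."""
    return correct_keystrokes / 5 / minutes

# The three one-minute runs quoted above:
for correct in (573, 593, 511):
    print(round(wpm(correct)))  # prints 115, then 119, then 102
```

So mistakes already cost you directly: the ~55 wrong keystrokes in the later runs are roughly an 11-wpm penalty each.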
I do if I've got some text in my head and I'm getting it down, i.e. writing. It's times like that I realise how much most keyboards I use suck, and wish I had a Model M here.
They do suck, but you can buy an actual buckling-spring keyboard here. Highly recommended.
Skimming the article, it seems like another anecdotal report arguing that people should learn to type better. Does it provide any systematic evidence for the claim? My Skeptics Stack Exchange question about the issue of programmer productivity due to typing has been open for a long time without anyone providing real evidence for the claim. As far as I understand, the big tech companies don't believe that it's predictive of programmer quality, to the extent that they don't test typing speed at interviews either.
Typing speed of course is not predictive of programming talent, that's a remarkably stupid idea. The right way to think about it is in terms of bottlenecks. When, say, you write code, what slows you down? If your fingers lag behind your mind, you should try to type faster. If they don't, you're good and should focus on improving something else.

I wrote a blog post on prediction markets, and specifically some of the problems with a popular conception of prediction markets that I've seen out in the wild. It may be of interest to people on LW, so I am including a link to it... here. (Robin Hanson already commented, and he didn't seem to hate it, so I feel pretty good about it already)

I don't know how to reply on your blog, so I'll write my thoughts on a couple of points here. The bet on the event itself is already the insurance. Or, viewed differently, the insurance itself is a bet in some kind of prediction market on the event. Which goes to show that it does not need two economic actors with opposite exposure to some event, as in your example of the baker and the farmer w.r.t. wheat prices; it is enough that two actors have different exposure to some event. Though of course, in the case of non-opposite exposure there will have to be some kind of premium for one of the sides, meaning that the market price cannot be interpreted simply as a probability of the event happening.

Edit: The more I think about it, the more I have to disagree with the basic point about liquidity. For a proper market there needs to be an unbiased participant and the person being affected by the event. And there always is that second person, or else the outcome of the event is without any meaningful economic consequence. Weather affects the growth of plants, the venues visited by customers, and traffic for the transport of goods. Sports outcomes affect a population's happiness and their choice of activities afterwards. Scientific results make specific products plausible or impossible. And so on. If I am affected by a specific outcome of an event in a positive way, I can reduce my exposure by betting on the opposite outcome and increase my exposure by betting on the exact outcome. If there is a person affected in the opposite way from me, we can take opposite sides of the bet. Otherwise I need someone willing merely to take on some more risk than they are already exposed to, though that person will want to be compensated for the additional risk in some way. I wonder, is this distortion measurable, so as to recover the "underlying" probabilities?
I think your point is correct that if there is only economic exposure on one side of a market, then it affects the interpretability of the market prices, as it then becomes an insurance market, which requires a premium for the other side of the trade. (With normal insurance, you pay the premium upfront and the insurance underwriter invests that money for earnings, so insurance prices are actually much closer to the actuarially correct price than one would naively expect.) Depending on the size of the market, though, the premium could be small.

I agree that a market COULD be formed without symmetric event risk; I just think it's unlikely that we will see one formed. The symmetry makes a market much more likely, and economics is first a social science, so proving that something is possible is far from proving that it will occur. A market has costs to operate, so it has to have a compelling reason to exist, and bringing together natural participants is one of those core reasons. Another factor that would make a public market more likely is a larger number of smaller participants (there were a large number of small farmers when the commodity markets were established, for example).

The probability of a public prediction market forming increases with: the number of participants exposed to an identical event, balanced natural exposure to the event from each side, and the accuracy of forecasts for the impact of that event on economic outcomes (if your price goes up, your profit goes up in an easily forecastable way). Most futures markets have all three of these, but since events that are hard to forecast (and thus need prediction markets) also have impacts that are hard to forecast, prediction markets so far seem to have a hard time scoring high in that third category. Most binary events that have economic impacts are also either broadly good or bad, which makes the second category difficult. Part of this may be lack of imagination on my part, of course, and as I said pr... (read more)
Define binary, because my imagination does not come up with many examples that are not more like linear events, e.g. the amount of rain as opposed to "it rains". Also, in almost all of the cases I come up with, there is a beneficiary. With rain, people stay indoors and use more electricity, consume more television, and hang out more on the internet. With a negative scientific result, a product becomes unviable, which secures the market position of another market participant. And so on. Keep in mind that for a prediction market to work in an unbiased manner, there only needs to be a beneficiary relative to someone who loses; that is, total wealth can decrease, but not uniformly. Lastly, the rise of the second and third world will ensure that there are more than enough market participants for most questions, or at least I think so; I have no model to predict this, just a gut feeling. Maybe we can construct some examples to get a feeling for what point we disagree on?
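The premium-distorts-the-probability point can be made concrete with a toy model. (The log-utility trader below is my own illustrative assumption, not anything from the post or this thread.) The marginal price a risk-averse trader will accept for a contract paying 1 if the event occurs is the probability reweighted by marginal utility in each outcome, so the quoted price equals the true probability only when the trader has no background exposure to the event:

```python
def marginal_price(p, wealth, exposure):
    """Marginal price a log-utility trader accepts for a contract paying 1
    if the event occurs. `exposure` is extra wealth gained if the event occurs.
    Price = p * u'(w_event) / (p * u'(w_event) + (1 - p) * u'(w_no_event)),
    with u'(w) = 1/w for log utility."""
    mu_event = 1.0 / (wealth + exposure)   # marginal utility if event happens
    mu_no_event = 1.0 / wealth             # marginal utility otherwise
    return p * mu_event / (p * mu_event + (1 - p) * mu_no_event)

print(marginal_price(0.5, 100, 0))    # 0.5 -- no exposure: price equals true probability
print(marginal_price(0.5, 100, 50))   # 0.4 -- already benefits from the event: demands a discount
```

So a trader who already profits from the event quotes a price below the true probability, and one hedging a loss quotes above it; the gap is exactly the distortion the comment asks about recovering.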

I also asked this in the previous Open Thread, but rather late, so I'm asking it again: I noticed that the dropbox link to the pdf of the 2012 Winter Solstice Ritual is dead and I was wondering if anyone had a mirror they'd be willing to share.

Here you go:
Excellent. Thanks.
That link is dead too. Here are two more links:
*
*
Better yet, here's the 2014 Powerpoint used at NYC:

Hard question:

How should people facing colonization act to avoid cultural and economic subjugation?

Let's give some hindsight benefit - suppose you were transported back to America circa 1800 as a respected chieftain, how could you act to minimize the horrible stuff that would happen to the Native Americans over the next 100 years? What's the best you could hope for given that you couldn't magically make the USA behave better?

How should people facing colonization act to avoid cultural and economic subjugation?

They ought to subjugate themselves, obviously!

Or, to be a little less flip; if you are facing such a fate, it is because your society is overwhelmingly weaker than its rivals. Yes, as Lumifer, below, suggests, the Native Americans needed weaponry, but it's hardly an accident that they lacked it - they weren't capable of manufacturing such things for themselves, or of producing anything of value to offer in exchange for the weaponry. As a result, they were forced to rely on the goodwill and charity of their neighbours, which is just as disastrous for nations as it is for individuals. Even if the USA had left the natives well alone, the Mexicans, or the French, or some other predatory nation would have wiped them out.

What the Native Americans needed to do was to reorganise their society, to give up their traditional way of life, to live in cities, to adopt the settlers' customs, laws, methods of production, and so on. See, for example, Japan 60 years later.

None of that will stop them dying like flies to smallpox. Oh and also, giving up traditional ways of life to live like the Americans didn't work out so well for some of the Cherokee. They played by all the rules, but as soon as prospectors found gold on their land they were pushed aside.
Strikes me that adopting Western customs and technology (such as the smallpox vaccine, Jenner, 1798) would have been exactly the right solution to that issue too. As for the Cherokee - I agree they tried. But they were still too weak to stand up for themselves. My suggestion is not "play by the white man's rules and hope he treats you nicely." It's "copy the white man's ways so you have the strength to resist him."
Huh, I didn't know the smallpox vaccine came about that early. Either way, there were still plenty of nasty diseases from the Old World that had (or still have) no vaccines, like cholera, typhus, typhoid, measles, malaria, influenza, leprosy and bubonic plague. Their cumulative effect sapped native societies of their vigor, and this would have persisted even if they adopted the kind of sanitation technologies that Euros brought. The reason it took Europeans until the 19th century to conquer the African interior was that disease was so difficult to overcome. Until quinine was developed, the half-life of a British garrison on the Gold Coast was less than 18 months. With this severe a disadvantage, I don't think there's anything the native Americans could have done, no matter how enlightened their chieftains.
I've heard it went better for the Cherokee than for other tribes, which is why the Cherokee are the ones most people have heard of.
The most successful tribe at adapting to the conditions of European settlement were the Comanches, who dominated a huge region of the west for about 100 years.
Yes - compared to other tribes they did the best. But it'd be pretty depressing to be a chieftain in 1800 knowing that that's the best you can do.
The most successful example of Native American resistance against colonizers were the Comanches, who did pretty much the opposite of this. Instead of settling down, they shifted from being semi-sedentary to highly mobile. They did not practice agriculture or even animal husbandry. They foraged and lived off of seized livestock. Adapting doesn't mean copying your enemy. When you copy from your enemies, best case scenario you become a match for them one-on-one. Realistically something is usually lost in translation when you copy, and it takes a long time to get up to speed. And in this case it was completely hopeless because Natives were much fewer in number and had various heritable vulnerabilities to disease and alcohol. In other words, when things are asymmetric, you use asymmetric warfare.
In what sense were the Comanche the most successful? Yes, they caused the most problems for the USA, but that is looking at the issue through the wrong end of the telescope. The mark of success is how your own nation flourishes. We are supposed to be looking at this from the Native American perspective. There are today more than twenty times as many Cherokee as Comanche. It's pretty clear which strategy was more effective. You're just wrong that when things are asymmetric you should necessarily use asymmetric warfare. It's equally true that you should trade, using Ricardian comparative advantage. It is just this adversarial, warfare-based frame that I am trying to challenge.
I deliberately gave the "if you were a chieftain" example because spontaneous reorganization is almost as difficult as making your enemies spontaneously nicer. Also there are examples from history of colonized people who suffered less than others.
And I deliberately gave the example of Japan. I don't know enough about Native Americans to say exactly how I'd go about the equivalent of a Meiji Restoration, but that's what I would attempt. I'd pass laws mandating compulsory Westernisation, forcibly settle the nomadic peoples, do my best to Christianise the country, and try and import as much technology and Western practices as I possibly could. And naturally I'd try and crush my rivals to make sure there was no alternative plan. I'd have tried to make Western contact as much of an opportunity as possible - Western imperialism was the best thing that ever happened to the country my family are from. Definitely so. The ones who suffered less are generally the ones who adapted. There is no alternate history where a nation of nomadic hunter-gatherers are wandering the Great Plains hunting buffalo in 2014. And frankly that would have been a pretty miserable outcome even from the Native Americans' perspective. Unfortunately, it's that rather romantic vision that inspires, rather than a more pragmatic one of a rich and populous Native American nation, but which is culturally not much different from its "American" neighbours.
The Japanese were much more similar to the Europeans than the Native Americans were. For starters, they had a government. Furthermore, they had developed some institutions that were similar to Western institutions, or at least more similar than anything else outside the West. First you'd need to create a bureaucracy capable of enforcing laws.
If you want to give an example of successful Westernization, Japan is a terrible example. In the 17th century, the Dutch broke the commercial monopoly the Portuguese had over Japan, and the infighting between Dutch and Portuguese bothered the Japanese so much that they closed off the country. Only the Dutch (who had the wisdom to never use missionaries) were allowed to keep trading, and only through one port in one island. Fast forward to Commodore Perry and his gunboat diplomacy. Panicked, the Japanese quickly copied the ways of the West, including the industrial revolution and the German education system, and by the next century they had become an imperialistic oppressor over much of East Asia. It took WW2 to put a stop to that. Then the Americans took charge of ruling the country until it didn't appear to be a threat anymore. During the 1980's it seemed Japan was headed for big things, but they didn't know what to do with that promise. Maybe they panicked again. Now Japan is a toothless beast, unsure of its future, economically uncertain (still the world's 3rd, but stagnant), and demographically doomed. I was tempted to give Siam as a successful example instead, if only because they managed to never be colonized, but right now they're such a political joke that my first impression on this matter stands: there's no way colonization can end well.

I am confused as to why your potted history indicates that Meiji Japan is a bad example of successful westernisation.

  • On first contact, Japan unwisely attempts to shut out the Westerners, and stagnates for centuries, leading to the humiliation of Bakumatsu. This could easily have ended in the destruction of the Japanese nation; not copying the West was a disaster.
  • Seeing the need to avoid that fate, the Japanese showed the flexibility and wisdom to reform their nation. They quickly copied the ways of the West, which was a roaring success for Japan; they not only avoided destruction, but managed to defeat Western powers (e.g. Russo-Japanese war). Yes, they became an "imperialistic oppressor" (your words) to their neighbours. So what? The question is how should a people facing colonization act, not how should their neighbours hope they act.
  • Despite the destruction of WW2, Japan quickly rebounded, becoming even more Western, and even more successful. Yes, things aren't perfect, no, they aren't doomed, they are one of the richest and most successful countries in the world. The Cree Nation would kill to have their problems.
The reason Siam was never colonized was that it served as a buffer state between British Burma and French Indo-China. This suggests another method to avoid colonization. Play rival would-be colonizers against each other.
America circa 1800 is a hard problem, even by the standards of cultures facing colonization. The colonial aspect usually gets emphasized when people talk about that part of history, and not without reason -- the US and the Spanish at the time did behave appallingly badly. But it wasn't the only issue that Native Americans then were facing, not by a long shot. Disease and its social fallout had mangled the American interior's existing social organization quite effectively before any of the people involved had met a European other than the occasional explorer or scout (see for example the Mississippian culture), and the asymmetrical spread of technology (especially the horse, which I think we can file under "technology" if you turn your head and squint) arrived to stir things up just about when the whole exotic disease thing started getting under control. If we took Europeans off the continent in 1800, those issues would probably have sorted themselves out after a few decades of confusion. But they're more than enough to make mounting any kind of concerted response much, much harder.
Think hard and seriously about which of the two is worse. Do you want to lose (most of) your culture, adopt the newcomers' way of doing things and have a chance of competing with them economically? Or do you want to keep your culture, but be completely outclassed economically, and live at the whims of a more numerous and powerful neighbour? Both ways include a risk of losing both anyway, but the first path looks the safest to me. A bit as an aside, I don't think distinctive cultural identities are something that's inherently valuable to preserve. Some cultures are backwards, dysfunctional or parasitic, and their loss is not worth mourning.
Meiji Japan which is a good example of adaptation-and-survival mentioned in this thread did NOT lose most of the traditional Japanese culture.
Pre-Meiji Japan was a large functioning literate sedentary agricultural civilization with a high average IQ. North American Indians were nearly all hunter-gatherers or pastoralists, did not have a tradition of literacy, had low population densities, and probably had a lower average IQ. The Japanese had a big head start.
Not exactly. There were plenty of hunter-gatherers, but both the American Southwest and the American East and Southeast hosted fairly well-developed sedentary agricultural civilizations until European contact. Both had been under climate stress at the time of contact with the Spanish, and the latter collapsed with the introduction of European diseases, but the descendants of both remained largely agricultural. Populations did crash pretty hard, though.
I am not an expert in the field, but a look at your Wiki links shows that both these civilizations basically collapsed before any significant contact with the Europeans for unrelated reasons.
The Southwest agricultural civilizations show a growth/decline cycle going back hundreds of years before contact; it's probably primarily climate-driven, although some features of the archaeological record suggest that warfare's been an issue too. European contact was just another decline, one that they managed to weather pretty well by Native American standards -- their successors are among the most intact native cultures. The Mississippian culture didn't show that cycle, but it nonetheless was in decline for unrelated reasons at the time of contact (with Spanish explorers); smallpox and other diseases seem to have been the last proverbial nail in its coffin. Note that at that time, European diseases were spreading without direct European involvement: the culture never had any interchange with Europeans aside from the odd explorer, but it didn't need to. By the time the US reached its former territory, it had thoroughly collapsed, such that some of its successor tribes didn't even know why the mounds it's now known for were built. The agricultural traditions associated with both did survive, which was my main point, although some Mississippian descendants seem to have contributed to Plains Indian culture later on. I wanted to say something about Eastern Woodland agriculture (as made famous by Squanto et al.) too, but it didn't fit well into my post and Wikipedia didn't have a good summary. In practical terms it would have been basically Mississippian.
Meiji Japan did lead to an authoritative, militaristic culture whose legacy includes WWII. But also, there's a large difference between being targeted for economic subjugation only (as Japan was) and being targeted for territorial control (as in, imperial subject moving onto your land en masse), as the native Americans, native Australians, and Maori were. Meiji Japan is overall a relative success story, but it depended on more favorable factors than just Meiji era policy.
We're talking about how to survive colonization, not how to build a society the values of which you approve of.
Part of the reason Japan wasn't targeted for territorial control is that it was clear to everyone that Japan would be able to resist.
Agreed, though they did change a lot of their culture, and many prominent elements today were totally absent pre-Meiji. I don't know how much of today's Japanese culture someone from early 19th century Japan would recognize... (I'd guess, less than a European or American equivalent, but more than a Chinese equivalent, but I don't know enough to be sure...).
Do you think this is an option that was meaningfully available to Native Americans in the early 19th century?
They definitely had the possibility to choose different strategies, some more or less like that, but the power imbalance was such that either way, the prospects were pretty bad.
Hard to say. Consider the Cherokee Indians, who made a quite valiant attempt to 'close the gap', settling and inventing a written script and everything, only for things to go pear-shaped. But on the other hand, the Cherokee still seem to be around as a bunch of coherent groups, which is more than a lot of Native Americans from that time period could say.
Many nations facing colonization did attempt to adapt and fight. These attempts often ended in bloody wars and subjugation. The empires had enormous technological and military head starts.
Safest? How is genocide from forced labor, forced displacement and lack of immunity to foreign diseases, plus the deliberate and/or negligent destruction of irreplaceable historical monuments and cultural artifacts, in any conceivable way the safest route? Your profile says you live in France, presumably born there. You seem to have little personal experience of the receiving side of colonization. Short version: it's not pretty. Sure, bring me all the antibiotics and ebooks you wish to donate, but if you want to extract the ore I'm sitting on, I'd feel much safer if you don't kill me, claim ownership, and build a city over my grave. Being outclassed economically is worth keeping my neck any day.
I think you've misinterpreted the choice Emile is describing. The choice isn't between being colonized or not, the choice is what to do while you haven't been colonized but there's an empire nearby who might (and let's face it probably will) decide to colonize you soon. His "first path" would be something like Meiji Japan, as others have discussed in this thread.
(Agreed). More broadly, my point is that treating culture as a Sacred Value That Must Not Be Compromised risks leading to suboptimal decisions, and that in a lot of cases, compromising culture is pretty okay (especially if you can reinvent it). Unlike being outgunned and outnumbered and sitting on a resource everybody wants, how one thinks about culture is something the hypothetical Chieftain can control.
You need either guns or money, preferably both, preferably as much as possible. I don't think a single chieftain could change much. But my best bet would be on trying to create a viable country -- probably on lands that the white colonists don't want. Likely outcome -- you die defending it.
Join a different culture. I've always admired the immigrants to the US who were emigrants from relatively oppressed countries. Some of them were pretty happy to join US culture, others thought they could preserve their former cultures while here, but their children drifted over anyway. Maybe it is environmental, or maybe I am missing some common gene, but I have never had any interest in preserving the culture of my forefathers in any sense in which it was not a winning culture.
That's not always possible, especially if your phenotype doesn't match. It also depends on how you perceive your identity and whether you can let go of old-culture values, including, for example, your religion.
It mostly depends on how the culture you want to join perceives identity; it's easier to become American than to become Jewish.
I disagree -- humans, in particular, adults, are not that malleable. Discarding your old identity is hard. Of course, some cultures are more accepting of newcomers (e.g. US) and some less (e.g. Japan). I think of "Jewish" as mostly ethnicity (if you prefer, a particular gene pool) and somewhat culture. In that sense you cannot "become" Jewish. You probably mean "convert to Judaism", though, and that's not that hard to do. Judaism does not proselytize for historical reasons, but if you want to convert you can do so.
That also applies to some extent to most European nationalities.
Yes, but "Jewish" is part of two different sets: one is "French, German, Italian, Jewish, ..." and the other one is "Christian, Moslem, Jewish, ..." and that gives rise to a lot of confusion.
Fer sure not always possible. But if it is possible, that would be my recommendation. And in the modern world, we have tremendous existence proofs of worldwide migration, with immigrants from Africa and Asia (among other places) visibly succeeding in many places in Europe, the Americas, other parts of Asia, and Oceania.
In a situation of chaotic change and lack of knowledge, flexibility and quick responses to new information are key. A group or organization (not just a pre-industrial tribe) lacks precisely that, and has formal and informal group cohesion mechanisms that impede adaptation. Therefore, I believe a good strategy would be to motivate and assist other Native Americans to leave their tribes, attempt to fend for themselves (or for their family, warrior band or other small group) in a thousand different ways, and teach others their own solution(s) if it happens to work.

(Repost, because it didn't get much love in the older thread)

Merely knowing about the confirmation bias helps to avoid it.

Or so I think. Ever since reading about the confirmation bias and taking some time to think of examples where I fell prey to it, I catch myself following up a thought of "this makes so much sense" or "this fits my experience so well" with a simple "confirmation bias" and thinking of alternative explanations or counterexamples. The benefit to myself is not yet obvious, and it is obvious I do not do this with perfect consistency. Another observation... (read more)

I think this is true of many (if not all) biases.

I have lots of questions for experts in various fields. Some of the questions are very detailed and based on extensive research, while other questions are more along the lines of "I could research this subject for a long time and eventually find the answer to my question buried in some obscure article, or I could just ask you." The problem is, how do I go about asking these questions in such a way that I'll actually get answers?

I could of course just send out questions to a bunch of experts and see if any respond. But as I said I have a lot of qu... (read more)

Put yourself in their shoes. Why should they spend their time on answering "a lot of questions" from a random person? You will need to provide them with reasons to talk to you. What can you offer that they value?
Excellent question. What sort of things do you think experts (professors, generally) might value that a less-expert person like myself might be able to offer? I have had the experience that when I actually do get to meet with and talk to experts one-on-one, then we usually do strike up a relationship of sorts, and they are then more than happy to help me in all sorts of ways. But the very same people were hard to get anything out of before we met in person.
Lots of things, of course: adoration, bacon, sexual favors, etc. etc. :-D In practice, I suspect that some attention, gratefulness, and a demonstration that you're not a clueless idiot with some agenda will go a long way towards making the expert willing to answer your questions. The last part is the problematic one in online communications -- by default you're just "another guy from the internet" and we all know what the average of that looks like. However something in this vein seems like not a bad start to me: "Dear Professor X, I read your papers/books Y and Z and was amazed how you figured out A, B, and C. However I have a question about D because while E it seems to me that F." Demonstrate cluefulness and use flattery :-) P.S. Another important issue is scope. Ask questions that can be concisely answered in a couple of paragraphs. Do not ask questions the answers to which are a graduate degree, a shelf of books, and a really tall stack of printed-out papers (e.g. "What should I eat for health and fitness?").
I get people emailing me math questions every once in awhile. I never answer them (I strongly prefer to answer math questions in a public forum like Quora or StackExchange), but some of them are at least tempting. I am actively turned off by any attempt on their part to use flattery, and those are never tempting. It always sounds fake to me. (Also, some of them call me a professor on accident and that's annoying too.)
Same here. Journals will call me professor on accident and it's also incredibly annoying.
Though in this case iarwain1's questions aren't "the type that [...] can get answers by posting on Quora or even specialty forums", at least in their own judgement. As another data point, I'm open to (mild, proportional) flattery but am also annoyed when people call me a professor by accident (it compels me to point out I'm not a professor, and demonstrates a lack of the cluefulness Lumifer refers to).
Offer them publicity. If you're a journalist in the middle of preparing a piece on their field of study (while carefully making sure to get the facts right, unlike many science reporters), some of them will jump at the chance of having their replies quoted.
For questions that don't merit an email, you might find places where people with the expertise you want gather online. Often, there will be a questions thread, and if there isn't you can start your own.

Interview with Peter Unger focusing on his new book criticizing much of philosophy. I haven't read the book yet, but from the interview it looks like it would be of interest to people here (although it might be too much confirmation bias to read something that preaches this much to the choir).

Prompted by this interview (which is interesting, but there should be a trigger warning for pompousness attached to Unger, even if I agree with most of his points about the virtues of the concrete), I looked up David Lewis. "Possible worlds" reminds me of the talk about Everett branches here. Neither is particularly useful.
Pretty sure Lewis' stuff generalizes the dominant paradigm for causal inference today. It's too bad all the people who know how to do philosophy are too busy posting on reddit subforums.
which subreddit? And there's nothing wrong with subreddits; some are quite high quality.

I want to test my ideas, mostly ideas for technology projects and/or startup ideas. Doing scientific research is best but can be quite costly and time-consuming, so I assume it would be optimal to filter the ideas first in order to select the best ones for testing. I already do things like looking for problems and unintended consequences, looking for relevant studies, and showing them to people hoping to find flaws, but I would bet that somebody has created an idea review process that can be applied for even better preliminary filtering. It would be ide... (read more)

So, I'm curious - how should we update on the probability of time travel, given this?

(Sorry about the paywall. The content, filtered through my undergraduate understanding of QM: essentially, researchers prepared a quantum simulator mathematically equivalent to a CTC and got reasonable results from it that matched up with David Deutsch's predictions.)

Essentially unchanged. Implementations of these things are usually done so that if quantum mechanics works, it will work.
But we don't know that quantum mechanics works on the edges, particularly the ones that give infinities or weird things like CTCs. Shouldn't the confirmation that it does, in fact, continue to work, increase the probability we assign to time travel being possible in general?
This is not the edges. The paper is about a simulation, not actual time travel. Simulated time travel, following typical rules, is just a regular ol' quantum system that you assign extra meaning to. It was provable that this would work - though still a technical challenge to make it happen.
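For intuition about what "just a regular ol' quantum system" means here: in Deutsch's model, the state ρ of the qubit on the closed timelike curve must equal what comes back out of the loop, ρ = Tr_CR[U(ρ_CR ⊗ ρ)U†], and a self-consistent fixed point can be found by plain iteration. The CNOT coupling and |+⟩ input below are my own toy choices for illustration, not the circuit from the paper:

```python
import numpy as np

# Deutsch's consistency condition for a qubit on a closed timelike curve (CTC):
# its density matrix rho must satisfy  rho = Tr_CR[ U (rho_CR x rho) U† ],
# where CR is the ordinary chronology-respecting qubit. Iterating the map
# converges to a fixed point.

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)   # control = CR qubit, target = CTC qubit

rho_cr = 0.5 * np.ones((2, 2), dtype=complex)    # CR qubit prepared in |+><+|

def ctc_map(rho):
    """Send the CTC qubit once around the loop and trace out the CR qubit."""
    joint = CNOT @ np.kron(rho_cr, rho) @ CNOT.conj().T
    # reshape to indices (a, b, a', b') and sum the diagonal over the CR qubit
    return np.einsum('abac->bc', joint.reshape(2, 2, 2, 2))

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # arbitrary initial guess, |0><0|
for _ in range(100):
    rho = ctc_map(rho)

residual = np.linalg.norm(rho - ctc_map(rho))    # ~0: rho is now self-consistent
```

For this particular toy coupling the iteration converges to the maximally mixed state I/2, which satisfies the consistency condition exactly; the point is that the whole thing is ordinary linear algebra on density matrices, which is why a lab simulation was guaranteed to "work" if quantum mechanics does.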
Except - and apologies if I'm wrong, since I don't completely understand the article - it seems like they first proved that this was equivalent (under current understanding of QM) to a system that includes a CTC. So it looks like it's proving "If quantum mechanics works, this describes a CTC."
They ignore relativity completely and simulate qubit interaction on a background spacetime with CTCs. Interesting, but has nothing to do with testing the limits of QM.
Ah, I see. Thanks.