How feasible is long-range forecasting?

I spent years trading in prediction markets so I can offer some perspective.

If you step back and think about it, the question 'How well can the long-term future be forecasted?' doesn't really have an answer, because it completely depends on the domain of the forecasts. Consider all facts about the universe. Some facts are very, very predictable: in 10 years, I predict the Sun will exist with 99.99%+ probability. Some facts are very, very unpredictable: in 10 years, I have no clue whether the coin you flip will come up heads or tails. As a result, you cannot really say the future is predictable or unpredictable; it depends on which aspect of the future you are predicting. And even if you say, ok sure it depends, but what's the average answer - even then, the only way to arrive at some unbiased global sense of whether the future is predictable is to come up with some way of enumerating and weighing all possible facts about the future universe... which is an impossible problem. So we're left with the unsatisfying truth that the future is neither predictable nor unpredictable - it depends on which features of the future you are considering.

So when you show the plot above, you have to realize it doesn't generalize very well to other domains. For example, if the questions were about certain things - e.g., will the sun exist in 10 years - it would look high and flat. If the questions were about fundamentally uncertain things - e.g., what will the coin flip be 10 years from now - it would look low and flat. The slope we observe in that plot is less a property of how well the future can be predicted and more a property of the limited set of questions that were asked. If the questions were about uncertain near-term geopolitical events, then that graph shows the rate that information came in to the market consensus. It doesn't really tell us about the bigger picture of predicting the future.
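The point about question selection can be made concrete with a toy sketch in Python. All numbers here are illustrative assumptions, not figures from any actual market: it scores an ideal forecaster (one who always forecasts the true probability) with the Brier score, and shows that the "average predictability" of a question pool is entirely a function of which questions are in it.

```python
# Toy illustration (all probabilities hypothetical): average "predictability"
# of a question pool depends entirely on the mix of questions, not on any
# deep property of the future.

def expected_brier(p_true):
    """Expected Brier score of an ideal forecaster who forecasts the true
    probability p of a binary event: E[(p - X)^2] = p * (1 - p).
    Lower is better; 0 means perfectly predictable."""
    return p_true * (1 - p_true)

sun_exists = expected_brier(0.9999)  # near-certain fact: score ~ 0.0001
coin_flip = expected_brier(0.5)      # irreducibly uncertain: score = 0.25

def pool_score(weight_certain):
    """Average score over a pool mixing sun-like and coin-like questions."""
    return weight_certain * sun_exists + (1 - weight_certain) * coin_flip

print(pool_score(1.0))  # all sun-like questions: future looks very predictable
print(pool_score(0.0))  # all coin-like questions: future looks unpredictable
print(pool_score(0.5))  # any intermediate answer, depending on the weights
```

Even a perfect forecaster scores anywhere from near 0 to 0.25 depending on how the pool is weighted, which is the sense in which "is the future predictable?" has no question-independent answer.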

Incidentally, this was my biggest gripe with Tetlock and Gardner's Superforecasting book. They spent a lot of time talking about how Superforecasters could predict the future, but almost no time talking about how the questions were selected, and how choosing different counterfactual sets of questions can yield totally different results (e.g., experts cannot predict the future vs. rando smart people can predict the future). I don't really fault them for this, because it's a slippery, thorny issue to discuss. I hope I have given you some flavor of it here.

What should rationalists think about the recent claims that air force pilots observed UFOs?

Rationalists should have mental models of the world that say if aliens/AI were out there, a few rare and poorly documented UFO encounters is not at all how we would find out. These stories are not worth the oxygen it takes to contemplate them.

In general, thinking more rationally can change confidence levels in only two directions: either toward more uncertainty or toward more certainty. Sometimes, rationalism says to open your mind, free yourself of prejudice, and overcome your bias. In these cases, you will be guided toward more uncertainty. Other times, rationalism says, c'mon, use your brain and think about the world in a way that's deeply self-consistent and don't fall for surface-level explanations. In these cases, you will be guided toward more certainty.

In my opinion, this is a case where rationalism should make us more certain, not less. Like, if there were aliens, is this really how we would find out? Obviously no.

Disincentives for participating on LW/AF

My hypothesis: They don't anticipate any benefit.

Personally, I prefer to chat with friends and high-status strangers over internet randos. And I prefer to chat in person, where I can control and anticipate the conversation, rather than asynchronously via text with a bunch of internet randos who can enter and exit the conversation whenever they feel like it.

For me, this is why I rarely post on LessWrong.

Seeding and cultivating a community of high-value conversations is difficult. I think the best way to attract high-quality contributors is to already have high-quality contributors (and perhaps to have mechanisms that disincentivize low-quality contributors). It's a bit of a bootstrapping problem. LessWrong is doing well, but no doubt it could do better.

That's my initial reaction, at least. Hope it doesn't offend or come off as too negative. Best wishes to you all.

If you've attended LW/SSC meetups, please take this survey!

Observation: I tried to take your survey, but discovered it's only for people who have attended meetups.

Recommendation: Edit your title to be 'If you've attended a LW/SSC meetup, please take the meetups survey!'

Anticipated result: This will save time for non-meetup people who click the survey, start to fill it out, and then realize it wasn't meant for them.

Thoughts on Ben Garfinkel's "How sure are we about this AI stuff?"

Re: your request for collaboration - I am skeptical of the ROI of research on AI X-risk, and I would be happy to help offer insight from that perspective, either as a source or as a giver of feedback. Feel free to email me at {last name}{first name}@gmail.com

I'm not an expert in AI, but I have a PhD in semiconductors (which gives me perspective on hardware) and currently work on machine learning at Netflix (which gives me perspective on software). I also was one of the winners of the SciCast prediction market a few years back, which is evidence that my judgment of near-term tech trends is decently calibrated.

What went wrong in this interaction?

I didn't perceive either of you as hostile.

I think you each used words differently.

For example, you interpret the post as saying, "metoo has never gone too far."

What the post actually said was, "I've heard people complain that it 'goes too far,' but in my experience the cases referred to that way tend to be cases where someone... didn't endure much in the way of additional consequences."

I read that sentence as much more limited in scope than your interpretation. (And because it says 'tend' and not 'never', supplying a couple of data points isn't enough information, by itself, to challenge the author's conclusion.)

In addition, you interpreted "metoo" as broadly meaning action against those accused of sexual misconduct.

However, the author interprets "metoo" more narrowly, as meaning action against those accused of sexual misconduct that would otherwise not have occurred in a counterfactual world without the #metoo movement that took off in 2017.

So in the end you didn't seem to disagree with the author's point, just their word usage.

I can see why the author wasn't eager to sustain the interaction with you. You used words differently and asked a bunch of questions that required the author to explain themselves. The author may have logically perceived the conversation as a cost, not a benefit.

This is my perception of your conversation. I hope it is helpful to you.

The housekeeper

If the housekeeper were to earn a wage of 3x rent, 15 other housemates would be required at those price points. That's a lot of cooking and cleaning.

No Really, Why Aren't Rationalists Winning?

What does winning look like?

I think I might be a winner. In the past five years: I have won thousands of dollars across multiple prediction market contests. I earned a prestigious degree (PhD Applied Physics from Stanford) and have held a couple of prestigious high-paying jobs (first as a management consultant at BCG, and now an algorithms data scientist at Netflix). I have a fulfilling social life with friends who make me happy. I give tens of thousands to charity. I enjoy posting to Facebook and surfing the internet. I have the means and motivation to keep learning about areas outside my expertise. I floss and exercise and generally am satisfied with my health.

I think I could be considered both a rationalist and a winner.

But I post rarely to LessWrong because my rational perception is that it takes effort but does not provide return. Generally I think my shortcomings are shortcomings of execution rather than irrationality, and those are the areas I aim to improve upon. My arena for self-improvement is my workplace and my life, not a website. As a result, stories like mine might be underrepresented in your sampling.

If rationalists were winning, how would we know? What would winning look like?

look at the water

I think this is why attending universities and otherwise surrounding yourself with smart people is crucial. Their game will elevate your game. I often find myself learning more after someone smart asks me questions about a topic I thought I already knew. And the more this happens, the more I am able to short-circuit the process and preemptively ask those questions of myself.

Do Animals Have Rights?

"Thus, if we had to give animals rights – this would result in us being their slaves."

If we give other citizens the right to not be murdered, does that make us their slaves? Obviously not.

If we give animals the right to not be murdered, does that make us their slaves? Again, obviously not.

I'm not sure how someone concludes that granting rights amounts to slavery. Obviously obligations fall along a spectrum of severity, but I don't think the entire spectrum is worth labeling "slavery."
