All of TedSanders's Comments + Replies

The Point of Trade

A spatial framing:

(1) All objects have positions in space
(2) The desire by people to consume and use objects is not uniform over space (cars are demanded in Los Angeles more than Antarctica)
(3) The productive capacity to create and improve objects is not uniform over space (it's easier to produce iron ore at an Australian mine, or a car at a Detroit factory)
(4) Efficiently satisfying the distribution of desires over space by the distribution of productive capacity over space necessarily involves linking separate points in space through transportation of g... (read more)

How feasible is long-range forecasting?

I spent years trading in prediction markets so I can offer some perspective.

If you step back and think about it, the question 'How well can the long-term future be forecasted?' doesn't really have an answer, because it depends entirely on the domain of the forecasts. Like, consider all facts about the universe. Some facts are very, very predictable. In 10 years, I predict the Sun will exist with 99.99%+ probability. Some facts are very, very unpredictable. In 10 years, I have no clue whether the coin you flip will come ... (read more)

3ozziegooen2yI've been thinking similar things about predictability recently. Different variables have different levels of predictability, it seems very clear. I'm also under the impression that the examples in the Superforecasting study were quite specific. It seems likely that problems similar to what they studied have essentially low predictability 5-10 years out (and that is interesting information!), but this has limited relevance to other possible interesting questions. While I agree with the specifics, I don't think that the answer to a question like, "What is the average predictability of all possible statements" would be all that interesting. We generally care about a very small subset of "all possible statements." It seems pretty reasonable to me that we could learn about the predictability of the kinds of things we're interested in. That said, I feel like we can get most of the benefits of this by just having calibrated forecasters try predicting all of these things, and seeing what their resolution numbers are. So I don't think we need to do a huge amount of work running tests for the sole purpose of better understanding long-term predictability. I left some longer comments in the EA Forum Post discussion.
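The "resolution numbers" mentioned above can be made concrete with a Brier score, a standard way to grade probabilistic forecasts once they resolve. A minimal sketch (the forecast data here is hypothetical):

```python
# Brier score: mean squared error between forecast probabilities (0-1)
# and resolved binary outcomes (0 or 1). Lower is better; always
# guessing 50% scores 0.25, so beating 0.25 means beating chance.
def brier_score(forecasts):
    """forecasts: list of (probability, outcome) pairs."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical long-range forecasts, resolved years later:
resolved = [(0.9, 1), (0.8, 1), (0.3, 0), (0.6, 0), (0.95, 1)]
print(brier_score(resolved))  # 0.1005, i.e. better than chance
```

Comparing such scores across question domains (health, careers, geopolitics) is one way to measure which kinds of things are actually predictable 5-10 years out.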

Thanks for this. The work I really want to see from more forecasting projects is an analysis of how much the things that typically impact people's lives can be predicted. Things like health, home-ownership, relationships, career, etc. Specifically, people's levels of cooperation/defection against their future selves seem really inconsistent. i.e. people work really hard for their future selves along certain dimensions and then defect along lots of others. This is mostly just mimetic momentum, but still. Even rigorous research figuring out exactly what actu... (read more)

I agree with most of what you're saying, but this part seems like giving up way too easily: "And even if you say, ok sure it depends, but like what's the average answer - even then, the only way to arrive at some unbiased global sense of whether the future is predictable is to come up with some way of enumerating and weighing all possible facts about the future universe... which is an impossible problem. So we're left with the unsatisfying truth that the future is neither predictable nor unpredictable - it depends on which features of the future you are

... (read more)
What should rationalists think about the recent claims that air force pilots observed UFOs?

Rationalists should have mental models of the world that say if aliens/AI were out there, a few rare and poorly documented UFO encounters is not at all how we would find out. These stories are not worth the oxygen it takes to contemplate them.

In general, thinking more rationally can change confidence levels in only two directions: either toward more uncertainty or toward more certainty. Sometimes, rationalism says to open your mind, free yourself of prejudice, and overcome your bias. In these cases, you will be guided toward more uncertainty. Other times, r... (read more)

Disincentives for participating on LW/AF

My hypothesis: They don't anticipate any benefit.

Personally, I prefer to chat with friends and high-status strangers over internet randos. And I prefer to chat in person, where I can control and anticipate the conversation, rather than asynchronously via text with a bunch of internet randos who can enter and exit the conversation whenever they feel like it.

For me, this is why I rarely post on LessWrong.

Seeding and cultivating a community of high value conversations is difficult. I think the best way to attract high quality contributors is to already h... (read more)

Online discussions are much more scaleable than in-person ones. And the stuff you write becomes part of a searchable archive.

I also feel that online discussions allow me to organize my thoughts better. And I think it can be easier to get to the bottom of a disagreement online, whereas in person it's easier for someone to just keep changing the subject and make themselves impossible to pin down, or something like that.

If you've attended LW/SSC meetups, please take this survey!

Observation: I tried to take your survey, but discovered it's only for people who have attended meetups.

Recommendation: Edit your title to be 'If you've attended a LW/SSC meetup, please take the meetups survey!'

Anticipated result: This will save time for non-meetup people who click the survey, start to fill it out, and then realize it wasn't meant for them.

3mingyuan3yWow great point, I'm silly! Changed, thanks :)
Thoughts on Ben Garfinkel's "How sure are we about this AI stuff?"

Re: your request for collaboration - I am skeptical of ROI of research on AI X-risk, and I would be happy to help offer insight on that perspective, either as a source or as a giver of feedback. Feel free to email me at {last name}{first name}

I'm not an expert in AI, but I have a PhD in semiconductors (which gives me perspective on hardware) and currently work on machine learning at Netflix (which gives me perspective on software). I also was one of the winners of the SciCast prediction market a few years back, which is evidence that my judgment of near-term tech trends is decently calibrated.

Although I'm not the one trying to run a project, there are a couple of credentials I'd look for when evaluating a serious critic. (I very much agree with the OP that "serious critics" are an important thing to have more of.)

Not meant to be a comment one way or another on whether you fit this, just that you didn't mention it yet:

  • Fluency in the arguments presented in Superintelligence (ideally, fluency in the broader spectrum of arguments relating to AI and X-Risk, but Superintelligence does a thorough enough job that it works ok
... (read more)
What went wrong in this interaction?

I didn't perceive either of you as hostile.

I think you each used words differently.

For example, you interpret the post as saying, "metoo has never gone too far."

What the post actually said was, "I've heard people complain that it 'goes too far,' but in my experience the cases referred to that way tend to be cases where someone... didn't endure much in the way of additional consequences."

I read that sentence as much more limited in scope than your interpretation. (And because it says 'tend' and not... (read more)

If the housekeeper were to earn a wage of 3x rent, 15 other housemates would be required at those price points. That's a lot of cooking and cleaning.

No Really, Why Aren't Rationalists Winning?

What does winning look like?

I think I might be a winner. In the past five years: I have won thousands of dollars across multiple prediction market contests. I earned a prestigious degree (PhD Applied Physics from Stanford) and have held a couple of prestigious high-paying jobs (first as a management consultant at BCG, and now an algorithms data scientist at Netflix). I have a fulfilling social life with friends who make me happy. I give tens of thousands to charity. I enjoy posting to Facebook and surfing the internet. I have the means and motivation to ke... (read more)

9Sailor Vulcan3yIn other words, people who win at offline life spend less time on the internet because they're devoting more time offline. And since rationalists are largely an online community rather than offline at least outside of the bay area, this results in rationalists dropping out of the conversation when they start winning. That's a surprisingly plausible alternative explanation. I'll have to think about this.
look at the water

I think this is why attending universities and otherwise surrounding yourself with smart people is crucial. Their game will elevate your game. I often find myself learning more after someone smart asks me questions about a topic I thought I already knew. And the more this happens, the more I am able to short-circuit the process and preemptively ask those questions of myself.

4romeostevensit3yInternalizing generators has been a super useful frame. It's somewhat surprising from the inside how often we fail, upon finding some useful query, to abstract out and see if it produces other useful stuff. In short, sure, we often fail to explore, but we also fail to exploit!
Do Animals Have Rights?

"Thus, if we had to give animals rights – this would result in us being their slaves."

If we give other citizens the right to not be murdered, does that make us their slaves? Obviously not.

If we give animals the right to not be murdered, does that make us their slaves? Again, obviously not.

I'm not sure how someone thinks that giving rights means slavery. Obviously obligations can fall into a spectrum of severity, but I don't think the entire spectrum is worth labeling "slavery."

This is excellent. Thank you for writing it!

Psychology Replication Quiz

Interesting. I was surprised at how predictable the studies were. It felt like results that aligned with my intuition were likely to be replicated, and results that didn't (e.g., priming affecting a pretty unrelated task) were unlikely to be replicated. Makes me wonder - what's the value of this science if a layperson like me can score 18/18 (with 3 I don't knows) by gut feel after reading only a paragraph or two? Hmm.

(Then again, I guess my attitude of finding predictable results low-value is what has incentivized so much bad science in the hunt for counterintuitive results with their higher rewards.)

Why focus on AI?

Elephant in the Brain convinced me that many things humans say are not to convey information or achieve conscious goals; rather, we say things to signal status and establish social positioning. Here are three hypotheses for why the community focuses on AI that have nothing to do with the probability or impact of AI:

  • Less knowledge about AGI. Because there is less knowledge about AGI than pandemics or climate change, it's easier to share opinions before feeling ignorant and withdrawing from conversations. This results in more conversations.
  • A disbelieving
... (read more)
4ChristianKl4yIf you think there's good information about bioengineered pandemics out there, what sources would you recommend? Multiple LW surveys considered those to be a more likely Xrisk, and if there were a good way to spend Xrisk EA dollars, I think the topic would likely get funding, but currently there don't seem to be good targets.
[Draft for commenting] Near-Term AI risks predictions

Generally yes, I think it's better when titles reveal the answer rather than the question alone. "Dangerous AI timing" sounds a bit awkward to my ear. Maybe a title like "Catastrophically dangerous AI is plausible before 2030" would work.

1avturchin4yYes, good point.
[Draft for commenting] Near-Term AI risks predictions

I think it's great that you and other people are investing time and thought into writing articles like these.

I also think it's great that you're soliciting early feedback to help improve the work.

I left some comments that I hope you find helpful.

1avturchin4yThanks for the comments, I will incorporate them today. I also have a question for you and other readers: maybe the article should have a catchier title? Something like: "Dangerous AI timing: after 2022 but before 2030"?
What useless things did you understand recently?

Is this actually true? Do you have a source? I have tried Googling for it.

My understanding is that the sky's blue color is caused by Rayleigh scattering. This scattering is stronger for shorter wavelengths. There's no broad peak in scattering associated with nitrogen absorption lines (which I imagine would be very narrowband, rather than broadband).

Wikipedia's article on Rayleigh scattering mentions oxygen twice but makes no reference to your theory.
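The wavelength dependence alone accounts for the sky's color: Rayleigh scattering intensity scales as 1/λ⁴, so blue light scatters several times more strongly than red. A quick sanity check (wavelengths are illustrative round numbers):

```python
# Rayleigh scattering intensity scales as 1/wavelength^4, so shorter
# (bluer) wavelengths scatter much more strongly than longer (redder) ones.
def rayleigh_ratio(short_nm, long_nm):
    """How many times more strongly short_nm scatters than long_nm."""
    return (long_nm / short_nm) ** 4

# Blue (~450 nm) vs red (~650 nm):
print(rayleigh_ratio(450, 650))  # ~4.35x more scattering for blue
```

No absorption-line mechanism is needed to get this result; the smooth λ⁻⁴ law does all the work.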

What useless things did you understand recently?

Wavelengths of visible light are around 500 nm. Even infrared is on the order of micrometers. I don't think the spikes that we're imagining are micrometers apart.

0Elo5yLooks like photon density is a thing. I was under the impression that I read somewhere that spikes can intercept (harsh desert) light and make it less harmful to the plant
0Elo5yNo, they can be micrometers wide and long enough to cause disruption of waves
Against responsibility

Thanks for the long and thoughtful post.

My main question: Who are these 'people' that you seem to be arguing against?

It sounds like you're seeing people who believe:

  • "You - you, personally - are responsible for everything that happens."

  • "No one is allowed their own private perspective - everyone must take the public, common perspective."

  • Other humans are not independent and therefore warring with them is better than trading with them ("If you don't treat them as independent... you will default to going to war against them... rat

... (read more)
2Benquo5yThanks for the clear criticism! I do plan to try to write more on exactly where I see people making this and related errors. It's helpful to know that's a point on which some readers don't already share my sense. I'm not saying that people explicitly state that you ought to be in a state of war against everyone else - I'm instead saying that it's implied by some other things people in EA often believe. For instance, the idea that it's good for GiveWell to recommend one set of charities to the public, but advise Good Ventures to fund a different set of charities, because the public isn't smart enough to go for the real best giving opportunities. Or that you should try to get people to give more by running a matching donations fundraiser. Or that you can and should estimate the value of an intervention by assuming it's equal to the cost. Or that it's good to exaggerate the effect of an intervention you like because then more people will give to the best charities. The thing all these have in common is that they ignore the opportunity cost of assuming control of other people's actions.
What are some science mistakes you made in college?

The best technique I use for "being careful" is to imagine the ways something could go wrong (e.g., my fingers slip and I drop something, I trip on my feet/cord/stairs, I get distracted for second, etc.). By imagining the specific ways something can go wrong, I feel much less likely to make a mistake.

5aarongertler8yIn the HUGR, I've included the advice "learn the sad stories of your lab as soon as possible" -- the most painful mistakes others, past and present, have made in the course of their work. Helpful as a specific "ways things can go wrong" list.
Meetup : Small Berkeley Meetup

What differentiates a small meetup from a regular meetup?

3AlexMennen10yHalf of the Berkeley meetups are designated as large meetups as a Schelling point for people who only want to go to half of them.