Year 2 Computer Science student

Sorry, I don't feel like completely understanding your POV is worth the time. But I did read your reply 2-3 times, in roughly the same order as your writing.

Yes, so if you observe no sabotage, then you do update about the existence of a fifth column that would have, with some probability, sabotaged (one of infinitely many possibilities). But you don't update about the existence of a fifth column that doesn't sabotage, or wouldn't have sabotaged YET, which are also infinitely many possibilities.

I'm not sure why infinity matters here; many things have infinite possibilities (like any continuous random variable), and you can still put a rough estimate on the probability distribution.

I guess it's a general limitation of Bayesian reasoning: you can't update beliefs held with probability 1, you can't update beliefs held with probability 0, and you can't update undefined beliefs.

I think this is similar to the infinite-regress argument of "where do priors come from?" But a Bayesian update usually produces a better estimate than your prior (and always a better one if you can do perfect updates, though that's impossible), and you can use many methods to guesstimate a prior distribution.
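A minimal sketch of the point about degenerate priors: under Bayes' theorem, a prior of exactly 0 or 1 never moves, no matter how strong the evidence. (This is just an illustration of the standard formula; the numbers are made up.)

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' theorem.

    prior          -- P(H), your prior belief in hypothesis H
    p_e_given_h    -- P(E|H), likelihood of the evidence if H is true
    p_e_given_not_h -- P(E|~H), likelihood of the evidence if H is false
    """
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

# A moderate prior moves toward the evidence:
print(bayes_update(0.5, 0.9, 0.1))  # 0.9

# Priors of exactly 1 or 0 are immune to any evidence:
print(bayes_update(1.0, 0.9, 0.1))  # 1.0
print(bayes_update(0.0, 0.9, 0.1))  # 0.0
```

Undefined beliefs are worse still: there is no `prior` to plug in at all, so the update isn't even computable.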

You have a pretty good model of what might cause the sun to rise tomorrow, but no idea, complete uncertainty (not certainty of 0, not certainty of 1, not 50/50 uncertainty, just completely undefined certainty), about what would make the sun NOT rise tomorrow, so you can't (rationally) reason about it in a Bayesian way. You can bet on it, but you can't rationally hold a belief about it.

Unknown unknowns are indeed a thing. You can't fully, rationally Bayesian-reason about them, but that doesn't mean you can't try. Eliezer didn't say you can become a perfect Bayesian reasoner either; he always said you can attempt to reason better and strive to approach Bayesian reasoning.

Relatedly, in-line private feedback. I saw a really good design here for alerting authors to typos.

To the four people who picked 37 and thought there was a 5% chance other people would also choose it, well played. 

Wow, that's really a replicable phenomenon.

Threads are pretty good; most help channels should probably be a forum (or 1 forum + 1 channel). Discord threads do have a significant drawback of lowering visibility by a lot, and people don't like to write things that nobody ever sees.


I didn't read either link, but you can write whatever you want on LessWrong! While most posts you see are very high quality, that's because there is a distinction between frontpage posts (promoted by mods) and personal blogposts (the default). See Site Guide: Personal Blogposts vs Frontpage Posts.

And yes some people do publish blogposts on LessWrong, jefftk being one that I follow.

FAQ: What can I post on LessWrong?

Posts on practically any topic are welcomed on LessWrong. I (and others on the team) feel it is important that members are able to “bring their entire selves” to LessWrong and are able to share all their thoughts, ideas, and experiences without fearing whether they are “on topic” for LessWrong. Rationality is not restricted to only specific domains of one’s life and neither should LessWrong be. [...]

I tend to think of "keep my identity small" as "keep my attachments to identity dimensions weak". 

Very much agree.


1. Duplicate this to the open thread to increase visibility.

2. I don't know your exact implementation for forming the ranked list, but I worry that if you (for example) simply sort from lowest to highest likelihood, it encourages people to submit only very low-probability predictions.
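One hedged sketch of a fix for that incentive (assuming nothing about the actual implementation): rank by a proper scoring rule such as the Brier score instead of raw likelihood. Under a proper scoring rule, reporting your true belief minimizes your expected penalty, so gaming with extreme low-probability reports doesn't pay.

```python
def brier(p, outcome):
    """Brier score: squared error between forecast p and binary outcome (0 or 1).
    Lower is better."""
    return (p - outcome) ** 2

def expected_brier(report, true_p):
    """Expected Brier score of reporting `report` when the event's
    true probability is `true_p`."""
    return true_p * brier(report, 1) + (1 - true_p) * brier(report, 0)

# If your honest belief is 0.7, reporting 0.7 beats gaming with an
# extreme report:
print(expected_brier(0.7, 0.7))   # 0.21
print(expected_brier(0.05, 0.7))  # 0.6325 (much worse in expectation)
```

This is only an illustration of the incentive structure, not a claim about how the list is actually scored.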

For possible solutions:

1. This is my problem and I should find a way to stop feeling ugh

2. Have some ways to easily read a summary of long comments (AI or author generated)

3. People should write shorter comments on average

I often have an ugh feeling towards reading long comments.

Posts are usually well written, but long comments are usually rambly, even the highest karma ones. It takes a lot of effort to read the comments on top of reading the post, and the payoff is often small.

But for multiple reasons, I still feel an obligation to read at least some comments, and ugh.

You'd need perhaps 100, maybe even 1,000 times more arguments to get a perfectly open-minded and Bayesian agent to start from the point where the other person started and end up agreeing with you.

Modelling humans as Bayesian agents seems wrong.

For humans, I think the problem usually isn't the number of arguments or the number of angles from which you attack the problem, but whether you have hit the few significant cruxes of that person. This is especially because humans are quite far from perfect Bayesians. For relatively small disagreements (i.e. not at the scale of convincing a Christian that God doesn't exist), people usually just have a few wrong assumptions or cached thoughts. If you can accurately hit those cruxes, you can convince them. It is very, very hard to know which arguments will hit those cruxes, though, which is why one viable strategy is to keep throwing arguments until one of them works.

(Also, unlike convincing Bayesian agents, where you can argue for W->X, X->Y, Y->Z in any order, with humans you sometimes need to argue things in the correct order.)
