LESSWRONG
JBlack
Comments
Visionary arrogance and a criticism of LessWrong voting
JBlack · 4d · 20

Okay, so no actual Bayesian calculation, just an intuition.

Your post made the claim that it was substantially likely that most downvotes had no corresponding comment. If we look at this as a set of probabilities over the readers R, it seems reasonable to model it in terms of P(F_r), P(D_r), P(U_r), P(C_r), and P(CFP_r) for each reader r, where C_r is the event of reader r providing a comment at all (whether or not it is a CFP comment).

Your expectation required  P(D_r) > P(U_r), since you expected the post to be overall downvoted. This condition also implies that at some point VN holds. You also expected P(F_r | D_r) to be low, say < 0.3. If P(F_r | D_r) were higher, then you could not reasonably expect to see multiple downvotes with no corresponding explanatory comment.

Now let us examine P(CFP_r | C_r). Looking over the site, almost all comments are in some way responsive to the thing they are commenting on, and all but a tiny minority are >= 30 characters. So P(CFP_r | C_r) > 0.8 is likely in the background and not just under condition F_r. Also, looking at other posts, the number of comments seems to be on average about half the number of votes, so P(C_r) ~= (1/2) (P(D_r) + P(U_r)).

Having made a specific request (that you did not expect to be followed), did you expect to see fewer comments as a fraction of votes overall, compared with other posts? You didn't appear to think so, or it should have shown in your reasoning above. Likewise for P(CFP_r | C_r, R).

The condition VN is roughly the case D-U=2 (in this case I think it was exactly D=2, U=0), so your expectation E[C | VN, R] should have been around 1 to 3, and E[CFP | VN, R] about 1 to 2. You should also have expected E[F | VN, R] < 0.6.

So it seems to me to be quite a mistake to conclude P(F_r | CFP_r, R, VN) > 0.75.
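The arithmetic behind these expectations can be sketched numerically. Every number below is an assumed illustration chosen to match the ranges stated above (none of them is measured site data, and `n_readers` is hypothetical):

```python
# Toy sketch of the expectation argument. All probabilities below are
# assumed illustrative values, not measured site statistics.
p_d = 0.10             # P(D_r): chance a given reader downvotes
p_u = 0.05             # P(U_r): chance a given reader upvotes (p_d > p_u)
p_f_given_d = 0.25     # P(F_r | D_r): downvoter follows the request (low)
p_cfp_given_c = 0.80   # P(CFP_r | C_r): most comments qualify regardless
p_c = 0.5 * (p_d + p_u)  # comments run at about half the vote count

n_readers = 20         # assumed readership producing D - U ~= 2

e_comments = n_readers * p_c                # expected comments
e_cfp = e_comments * p_cfp_given_c          # expected CFP comments
e_f = n_readers * p_d * p_f_given_d         # expected request-followers

print(f"E[C]   ~ {e_comments:.2f}")   # ~1.5, within the 1 to 3 range
print(f"E[CFP] ~ {e_cfp:.2f}")        # ~1.2, within the 1 to 2 range
print(f"E[F]   ~ {e_f:.2f}")          # ~0.5, below 0.6
# Crude bound: even if every follower leaves a CFP comment,
# P(F | CFP) is at most about e_f / e_cfp, well under 0.75.
print(f"P(F | CFP) <~ {e_f / e_cfp:.2f}")
```

Under any similar choice of numbers, the ratio of expected followers to expected CFP comments stays well short of 0.75.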

That's even without considering the nature of the comment itself, which made no criticism of your post at all, and appeared more intended to be informative (linking it to a previous discussion of the matter) than anything else.

Visionary arrogance and a criticism of LessWrong voting
JBlack · 4d · 10

I note that in your leading argument, you do not claim that you did do a Bayesian evaluation, nor even that someone could have done one and received a result that aligns with your conclusion. Just that it's possible that someone could be presented with a probability to evaluate (which could have any result). That seems like a bad-faith evasion to me.

Did you actually evaluate the probability?

I have downvoted your comment, but will revert that if you did actually do a Bayesian evaluation, present it in a follow-up comment, and it is correct and in line with your statement that "It is reasonable that I would conclude that". That is, after all, the norm you are establishing in this comment chain. A norm that, if it is not obvious already, I think is both harmful to constructive discussion and wasteful of people's time.

After that, then we can move on to the rest of the discussion with the results in mind.

Visionary arrogance and a criticism of LessWrong voting
JBlack · 5d · 50

One failure of logic is that you explicitly stated in your post that you already expected people to not follow this principle:

"I’m anticipating this post to be a straight shot to meta-irony: I have confidently made a non-normative claim, so expect a couple of negative post votes, absent of material feedback."

You cannot then claim that using it was reasonable.

Furthermore: regardless of whether the comment was in fact a response to your request in line with your requested guidelines, your first action in this discussion was to publicly punish the one person who you believed to be following the guidelines you requested. You clearly have not examined the incentives you are creating here.

Edit: For the meta-meta-irony, I will also state that I downvoted and disagree-voted your comment replying to thenoviceoof for these reasons.

The networkist approach
JBlack · 10d · 20

One huge difference between the forest example and the society example is that we know forest ecosystems evolved over millions of years to be at least somewhat robust to perturbations. Likewise for individual plants. If you can arrange for their external conditions to at least roughly approximate the conditions they evolved in, they'll probably be fine.

We have no such guarantee for anything whatsoever in human society. We have no precedents that can tell us whether "the network" is even capable of correcting any problems to maintain any decent quality of life without intensive management. History gives some mild hope for optimism, but also a great number of examples of catastrophic failure. Similar concerns apply to AI neural networks, and essentially everything else that humanity has been involved with in the past tens of thousands of years.

All Exponentials are Eventually S-Curves
JBlack · 14d · 23

Some curves that start out exponential are actually unimodal in the long run. Some are oscillatory, some are chaotic, and some have undefined behaviour because the thing being plotted ceases to make sense.

Why do you think all of them are eventually S-curves in particular?

CstineSublime's Shortform
JBlack · 15d · 20

There are ancient texts on this matter, such as

https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line

Short answer: once you know that you are listening to someone who wrote the bottom line first, then anything they wrote (or said) above the bottom line is worthless toward determining whether the bottom line is true.

It is still possible that they present information that is of some use in other respects, but only to the extent that it is not correlated with the truth or falsity of the bottom line.

Now, in some world it may be that if the bottom line were false, then fewer people would argue for it and such arguments would be less likely to appear on daytime television. That does not appear to be the world we live in.

The Cats are On To Something
JBlack · 15d · 110

"But, I do think that we should prepare to have initiating the catpocalypse as a contingency."

I prefer the term "cataclysm". Though perhaps tiling the lightcone with some fraction of cats should be called a "catastrophe" given both the textual similarity and its intended meaning being related to some form of "cat-astrophysics".

Help me understand: how do multiverse acausal trades work?
JBlack · 15d · 4-3

My post is almost entirely about the enormous hidden assumptions in the word "finding" within your description "finding suitable trading partner in multiverse". The search space isn't just so large that you need galaxies full of computronium, because that's not even remotely close to enough. It's almost certainly not even within an order of magnitude of the number of orders of magnitude that it takes. It's not enough to just find one, because you need to average expected value over all of them to get any value at all.

The expected gains from every such trade are correspondingly small, even if you find some.

Avi Parrack's Shortform
JBlack · 16d · 20

Just as a minor note (to other readers, mostly): decoherence doesn't really have a "number of branches" in any physically real sense. It is an artifact of the person doing the modelling choosing to approximate the system that way. You do address this further down, though. On the whole, great post.

Help me understand: how do multiverse acausal trades work?
JBlack · 16d · 3-4

Acausal trades almost certainly don't work.

There are more possible agents than atom-femtoseconds in the universe (to put it mildly), so if you devote even one femtosecond of one atom to modelling the desires of any given acausal agent then you are massively over-representing that agent.

The best that is possible is some sort of averaged distribution, and even then it's only worth modelling agents capable of conducting acausal trade with you - but not you in particular. Just you in the sense of an enormously broad reference class in which you might be placed by agents like them.

Given even an extremely weak form of orthogonality thesis, the net contribution of your entire reference class will be as close to zero as makes no difference - not even enough to affect one atom (or some other insignificantly small equivalent in other physics). If orthogonality doesn't hold even slightly, then you already know that your desires are reflected in the other reference classes and acausal trade is irrelevant.

So the only case that is left is one in which you know that orthogonality almost completely fails, and there are only (say) 10^1 to 10^30 or so reasonably plausible sets of preferences for sufficiently intelligent agents instead of the more intuitively expected 10^10000000000000000000 or more. This is an extraordinarily specific set of circumstances! Then you need that ridiculously specific set to include a reasonably broad but not too broad set of preferences for acausal trade in particular, along with an almost certain expectation that they actually exist in any meaningful sense that matters for your own preferences and likewise that they consider your preference class to meaningfully exist for theirs.

Then, to the extent that you believe that all of these hold and that all of the agents that you consider to meaningfully exist outside your causal influence also hold these beliefs, you can start to consider what you would like to expect in their universes more than anything you could have done with those resources in your own. The answer will almost certainly be "nothing".
