14y

Me too, I love the idea.

That is exactly the problem I am trying to address. On the one hand, I can't figure out how to estimate the likelihood of such a situation. On the other hand, it's quite evident that some people would fake a picture like the one mentioned above, since many people find it of some importance. I just can't figure out how to evaluate the likelihood of one versus the other. When should I be confused?

34y

I don't really know. The likelihood of 'generating an amusing coincidence you
can post on social media' is clearly quite high: your 1/160,000 examines only
one kind of amusement, and so is obviously just an extremely loose lower
bound. The more kinds of coincidences you enumerate, the bigger the total
likelihood becomes, especially considering that people may be motivated to
manufacture stories. There are countless examples (here's a fun recent one on
confabulating stories for spurious candidate-gene hits). The process is so
heterogeneous and differs so much by area (be much more skeptical of hate-crime
reports than of rolled nat 20s) that I don't think there's really any general
approach other than to define a reference class, collect a sample, fact-check,
and see how many turn out to be genuine... A lot of SSC posts go into the
trouble we have with things like this, such as the 'lizardman constant' or rape
accusation statistics.
Personally, considering how many rounds there are in any D&D game, how often one
does a check, how many games are being run at any given moment, and how many
people you know within 1 or 2 hops on social media, a lower bound of 1/160,000
for a neutral event is already more than frequent enough for me to not be all
that skeptical; as Littlewood notes of his own examples, many involving
gambling, on a national basis such things happen frequently.
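To make the Littlewood-style reasoning concrete, here's a minimal back-of-envelope sketch in Python. The per-session check count and the national session volume are made-up illustrative assumptions, not data; only the shape of the arithmetic matters:

```python
# Littlewood-style back-of-envelope arithmetic. The volume figures
# below are illustrative assumptions, not measurements.
p_event = 1 / 160_000        # assumed lower-bound probability per check
checks_per_session = 50      # assumed dice checks in one D&D session
sessions_per_week = 100_000  # assumed sessions played nationally per week

trials_per_week = checks_per_session * sessions_per_week
expected_per_week = p_event * trials_per_week  # expected coincidences/week
print(f"expected occurrences per week: {expected_per_week:.2f}")
```

Even under these modest assumed numbers, a "one in 160,000" event is expected dozens of times a week nationally, which is exactly Littlewood's point.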

But how can I apply this sort of logic to the problems I've described above? It still seems to me like, in theory, I need to sum over all of the probabilities in some set A that contains all these improbable events, but I just don't understand how to even properly define A: its boundaries seem fuzzy, and various things "kinda fit" or "don't quite fit, but maybe?" instead of being plainly true or false.

My main point, I think, is that this is a more general problem. Some configurations of observations can seem extremely unlikely, yet the sum over all of these configurations might be fairly probable. Like if an airplane has an engine failure above your home town and is about to crash onto it. The probability of it crashing right near your house is small (if you live in a big town), but it has to crash near **someone's** house. And the person who had the airplane crash right on his front lawn would go and say "What do you know, what are the odds?" So...
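The airplane example can be made explicit with a toy calculation (the number of houses is an arbitrary assumption): the per-house probability is astonishingly small, but summed over every house in town it is a certainty.

```python
# Toy version of the airplane example. The number of houses is an
# arbitrary assumption; only the shape of the argument matters.
n_houses = 50_000

p_your_house = 1 / n_houses                 # tiny for any one resident
p_someones_house = n_houses * p_your_house  # but certain for the town

print(f"P(your house)      = {p_your_house:.6f}")
print(f"P(someone's house) = {p_someones_house:.1f}")
```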

24y

Right. For example, suppose you have a biased coin that comes up Heads 80% of
the time, and you flip it 100 times. The single most likely sequence of flips is
"all Heads." (Consider that you should bet heads on any particular flip.) But it
would be incredibly shocking to actually observe 100 Headses in a row
(probability 0.8¹⁰⁰ ≈ 2.037 · 10⁻¹⁰). Other sequences have less probability per
individual sequence, but there are vastly more of them: there's only one way to
get "all Heads", but there are 100 possible ways to get "99 Headses and 1 Tails"
(the Tails could be the 1st flip, or the 2nd, or ...), 4,950 ways to get "98
Headses and 2 Tailses", and so on. It turns out that you're almost certain to
observe a sequence with about 20 Tailses—you can think of this as being where
the "number of ways this reference class of outcomes could be realized" factor
balances out the "improbability of an individual outcome" factor. For more of
the theory here, see Chapter 4 of Information Theory, Inference, and Learning
Algorithms.
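The balance described above can be tabulated directly in Python (this is just the binomial distribution the paragraph describes, nothing specific to the book):

```python
from math import comb

p = 0.8   # probability of Heads on each flip
n = 100   # number of flips

# The single most likely sequence: all Heads.
p_all_heads = p ** n  # ≈ 2.04e-10, as above

# Probability of exactly k Tails, summed over all C(n, k)
# sequences with that composition.
def p_k_tails(k):
    return comb(n, k) * (1 - p) ** k * p ** (n - k)

# Almost all of the probability mass sits near the expected 20 Tails.
p_10_to_30_tails = sum(p_k_tails(k) for k in range(10, 31))

print(f"P(all Heads)   = {p_all_heads:.3e}")
print(f"P(10-30 Tails) = {p_10_to_30_tails:.3f}")
```

Running this shows the "all Heads" sequence is vanishingly rare while the range around 20 Tails captures nearly all the probability, even though each individual sequence in that range is less likely than "all Heads."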

5y15

I think it's worth dividing blackmail into two distinct types:

1. Blackmailing on information that is harmful to society.

2. Blackmailing on information that is not harmful to society, but which the victim feels private about.

Your arguments hold up fairly well for the first type. For example, if someone is stealing money from the cash register where he works on a weekly basis, then we would not want such behavior to persist. But for the latter type, for example, if someone is secretly a homosexual and is afraid of what his family would say or do if they kne...

5y4

Sounds really cool, too bad the 'more details document' is all in Russian. I suppose it's not like I would go to Russia just for an RPG, but it sounds like fun and I would love hearing more details about it.

I think the title is a little bit misleading, and perhaps he didn't put enough emphasis on this, but it seems he isn't claiming correct models are generally bad, just that there are also possible downsides to holding correct models, and it's probably a good idea to be aware of these flaws when applying such models to reality.

Also, it seems to me that he is defining a 'correct model' as a model in which the reasoning is sound and which can be used for some applications, but which does not necessarily fully describe every aspect of the problem.

What word do you mean? Friendly AI? It's a term (I'm hardly an expert, but I guess wikipedia should be okay for that https://en.wikipedia.org/wiki/Friendly*artificial* intelligence )

36y

I think they're referring to the fact that they wouldn't expect a Friendly AI to
deconstruct them.
Also, for some reason, the link is wonky - likely because LessWrong 2.0 parses
text contained in _ as italics. Here's the fixed link:
https://en.wikipedia.org/wiki/Friendly_artificial_intelligence

Finally the true face of the LessWrong admins is revealed. I would do the sensible thing and go back to using myspace.