I'm not talking about flipping back and forth between true and false, but between two explanations. You can have a multimodal probability distribution where two distant modes are about equally probable, and when you update, sometimes one is larger and sometimes the other. Of course one doesn't need to choose a point estimate (the maximum a posteriori); ideally the distribution should be believed in its entirety. But just as you can't see the rabbit-duck as simultaneously 50% rabbit and 50% duck, one sometimes switches between different explanations, similarly to an MCMC sampling procedure hopping between modes.
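To make the analogy concrete, here is a minimal sketch (my own toy illustration, nothing from the discussion above) of a random-walk Metropolis sampler on a bimodal target: the chain lingers in one mode for long stretches and occasionally jumps to the other, much like switching between two competing explanations.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Log-density (up to a constant) of an equal mixture of two
    # distant Gaussian modes: 0.5*N(-4, 1) + 0.5*N(+4, 1).
    return np.logaddexp(-0.5 * (x + 4) ** 2, -0.5 * (x - 4) ** 2)

x = 0.0
samples = []
for _ in range(50_000):
    # Wide proposal steps so that mode-hops actually happen.
    proposal = x + rng.normal(scale=2.5)
    if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
        x = proposal
    samples.append(x)

samples = np.asarray(samples)
frac_left = np.mean(samples < 0)  # time spent near the "rabbit" mode
print(f"fraction of samples in left mode: {frac_left:.2f}")
```

With both modes equally probable, the long-run fractions are about even, but at any given moment the chain is committed to one mode, which is the point of the rabbit-duck comparison.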
I don't want to argue this too much because it's largely a preference of style and culture. I think the discussions are very repetitive and it's an illusion that there is much to be learned by spending so much time thinking meta.
Anyway, I evaporate from the site for now.
I don't really understand what you mean about math academia. Those references would be appreciated.
Those are indeed impressive things you did. I agree very much with your post from 2010. But the fact that many people have this initial impression shows that something is wrong. What makes it look like a "twilight zone"? Why don't I feel the same symptoms on, for example, Scott Alexander's Slate Star Codex blog?
Another thing I can pinpoint is that I don't want to identify as a "rationalist"; I don't want to be any -ist. It seems like a tactic to make people identify with a group and swallow "the whole package". (I don't think people should identify as atheists either.)
I prefer public discussions. First, I'm a computer science student who took courses in machine learning and AI and wrote theses in these areas (nothing exceptional), and I enjoy books like Thinking, Fast and Slow and The Black Swan, and authors like Pinker, Dawkins, Dennett, Ramachandran, etc. So the topics discussed here are interesting to me too. But the atmosphere seems quite closed and inward-turning.
I feel similarities to reddit's Red Pill community. Previously "ignorant" people feel the community has opened a new world to them, they lived in darkness before, but now they found the "Way" ("Bayescraft") and all this stuff is becoming an identity for them.
Sorry if it's offensive, but I feel as if many people had no success in the "real world" matters and invented a fiction where they are the heroes by having joined some great organization much higher above the general public, who are just irrational automata still living in the dark.
I dislike the heavy use of insider terminology, which makes communication with "outsiders" about these ideas quite hard: you get used to referring to these things by the in-group terms, so you become kind of isolated from your real-life friends as you feel "they won't understand, they'd have to read so much". In fact, many of the concepts are not all that new and could be phrased in a way that the "uninitiated" can also understand.
There are too many cross-references in posts, and they keep you busy with the site longer than necessary. It seems that people try to prove they know a concept by using the jargon and linking to it. Instead, I'd prefer authors who actively try to minimize the need for links and jargon.
I also find the posts quite redundant. They seem to be reiterations of the same patterns in very long prose, with people's stories intertwined with the ideas, instead of striving for clarity and conciseness. Much of it feels a lot like self-help for people with derailed lives who try to engineer their life (back) to success. I may be wrong, but I get a depressed vibe from reading the site too long. It may also be because there is no lighthearted humor or in-jokes or "fun" or self-irony at all. Maybe the members are just like that in general (perhaps due to mental differences, like being on the autism spectrum; I'm not a psychiatrist).
I can see that people here are really smart and the comments are often very reasonable. And it makes me wonder why they'd hold a single person such as Yudkowsky in such high esteem compared to established book authors, academics, or industry people in these areas. I know there has been much discussion about cultishness, and I think it goes a lot deeper than surface issues. LessWrong seems quite isolated and distrustful towards the mainstream. Many people seem to have read this stuff first from Yudkowsky, who often does not reference earlier works that state basically the same things, so people get the impression that all or most of the ideas in "The Sequences" come from him. I was quite disappointed several times when I found the same ideas in mainstream books. The Sequences also often depict the whole outside world as dumber than it is (straw-man tactics, etc.).
Another thing is that discussion is often too meta (or meta-meta). There is discussion of Bayes' theorem and mathematical principles, but no actual detailed, worked-out material. Very little actual programming, for example. I'd expect people to create GitHub projects and IPython notebooks to show examples of what they are talking about. Much of the meta-meta discussion is very opinion-based because there is no immediate feedback about whether someone is wrong or right; it's hard to test such hypotheses. For example, in this post I would have expected an example dataset and a demonstration of how PCA can uncover something surprising. Otherwise it's just floating out there, although it matches nicely with the pattern that "some math concept gave me insight that refined my rationality". I'm not sure; maybe these "rationality improvements" are sometimes illusions.
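For instance, the kind of worked example I have in mind could be as small as this (a toy sketch on synthetic data, just to illustrate the format): ten noisy measurements that secretly come from two groups, where projecting onto the first principal component exposes the hidden grouping even though no single raw variable shows it cleanly.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hidden two-group structure: each sample belongs to group 0 or 1,
# and group 1 is shifted along one random direction in 10-D space.
group = rng.integers(0, 2, size=400)
direction = rng.normal(size=10)
direction /= np.linalg.norm(direction)
X = rng.normal(size=(400, 10)) + 3.0 * np.outer(group, direction)

# PCA via SVD of the centered data; scores on the first component.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
projection = Xc @ Vt[0]

# Thresholding the first PC recovers the hidden grouping almost perfectly.
split = (projection > 0).astype(int)
agreement = max(np.mean(split == group), np.mean(split != group))
print(f"agreement with hidden grouping: {agreement:.2f}")
```

Something this short already gives immediate feedback: either the first component separates the groups or it doesn't, and you can argue about why.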
I also don't get why the rationality stuff is intermixed with friendly AI and cryonics and transhumanism. I just don't see why these belong that much together. I find them too speculative and detached from the "real world" to be the central ideas. I realize they are important, but their prevalence could also be explained as "escapism" and it promotes the discussion of untestable meta things that I mentioned above, never having to face reality. There is much talk about what evidence is but not much talk that actually presents evidence.
I've needed to develop a sort of immunity against topics like acausal trade: I can't fully specify how they are wrong, but they feel wrong, they are hard to translate into practical, testable statements, and they just mess with my head in the wrong way.
And of course there is also that secrecy around and hiding of "certain things".
That's it. This place may just not be for me, which is fine. People can have their communities in the way they want. You just asked for elaboration.
PCA doesn't tell you much about causality, though. It just gives you a "natural" coordinate system in which the variables are not linearly correlated.
What do you mean by getting surprised by PCA? Say you have some data; you compute the principal components (eigenvectors of the covariance matrix) and the corresponding eigenvalues. Were you surprised that a few principal components were enough to explain a large percentage of the variance of the data? Or were you surprised by what those vectors were?
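For concreteness, the computation I mean is just this (a minimal numpy sketch on made-up data with one dominant latent direction):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: one latent factor drives three observed variables, plus noise.
latent = rng.normal(size=(500, 1))
X = latent @ np.array([[2.0, 1.0, -1.5]]) + 0.1 * rng.normal(size=(500, 3))

Xc = X - X.mean(axis=0)                    # center the data
cov = np.cov(Xc, rowvar=False)             # 3x3 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)     # eigh returns ascending order
explained = eigvals[::-1] / eigvals.sum()  # variance ratios, descending

# Here the first component explains almost all of the variance,
# because the data really is (noisily) one-dimensional.
print(explained)
```

The "surprise", if any, would then be in the printed ratios or in the eigenvectors themselves, which was my question.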
I think this is not really PCA or even dimensionality reduction specific. It's simply the idea of latent variables. You could gain the same intuition from studying probabilistic graphical models, for example generative models.
You asked about the emotional stuff, so here is my perspective. I have extremely weird feelings about this whole forum that may affect my writing style. My perception is constantly popping back and forth between different views, like in the rabbit-duck gestalt image. On one hand I often see interesting and very good arguments, but on the other hand I see tons of red flags popping up. I feel that I need to maintain extreme mental effort to stay "sane" here. Maybe I should refrain from commenting. It's a pity, because I'm generally very interested in the topics discussed here, but the tone and the underlying ideology are pushing me away. On the other hand, I feel an urge to check out the posts despite this effect. I'm not sure what aspect of certain forums has this psychological effect on my thinking, but I've felt it on various reddit communities as well.
Qualitative day-to-day dimensionality reduction sounds like woo to me. Not a bit more convincing than quantum woo (Deepak Chopra et al.). Whatever you're doing, it's surely not like doing SVD on a data matrix or eigen-decomposition on the covariance matrix of your observations.
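For reference, the two computations I named are equivalent on centered data; a quick numpy sketch shows the correspondence:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random correlated data: 200 samples, 4 variables.
X = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 4))
Xc = X - X.mean(axis=0)
n = Xc.shape[0]

# Route 1: SVD of the centered data matrix.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Route 2: eigen-decomposition of the covariance matrix.
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
eigvals = eigvals[::-1]  # eigh is ascending; flip to match SVD order

# Squared singular values over (n - 1) are exactly the eigenvalues,
# and the rows of Vt match the eigenvectors up to sign.
print(np.allclose(s**2 / (n - 1), eigvals))  # True
```

That precise, checkable operation is what I mean by PCA, which is why "qualitative day-to-day dimensionality reduction" sounds like a loose metaphor at best.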
Of course, you can often identify motivations behind people's actions; a lot of psychology is basically trying to uncover these motivations. An intentional interpretation and a theory of mind are examples of dimensionality reduction in some sense: instead of explaining behavior by reasoning about receptors and neurons, you imagine a conscious agent with beliefs, desires, and intentions. You could also link it to data compression (dimensionality reduction is a sort of lossy data compression). But I wouldn't say I'm using advanced data compression algorithms when playing with my dog. It just sounds pretentious and shows a desperate need to signal smartness.
So, what is the evidence that you are consciously doing something similar to PCA in social life? Do you write down variables and numbers? If not, how should I imagine qualitative dimensionality reduction? How is it different from somebody just forming an opinion intuitively and then justifying it afterwards?
"impression that more advanced statistics is technical elaboration that doesn't offer major additional insights"
Why did you have this impression?
Sorry for the off-topic comment, but I see this a lot on LessWrong (as a casual reader). People seem to focus on textual, deep-sounding, wow-inducing expositions, but often dislike the technicalities: getting their hands dirty with actually understanding calculations, equations, formulas, details of algorithms, etc. (things that don't tickle those wow-receptors we all have). It's as if these were merely minor additions to the really important big-picture view. As I see it, this movement seems to be trying to build up a new backbone of knowledge from scratch, but in doing so it repeats the mistakes of past philosophers: for example, going for the "deep", outlook-transforming texts that often give a delusional feeling of "oh, now I understand the whole world". It's easy to have wow-moments without actually having understood something new.
So yes, PCA is useful, and most statistics, math, and computer science is useful for understanding stuff. But then you swing to the other extreme and say "ideas from advanced statistics are essential for reasoning about the world, even on a day-to-day level". How exactly are you planning to use PCA day-to-day? I think you may mean that you want to use some "insight" you gained from it, but I'm not sure what that would be. It seems to be a cartoonish distortion that makes it fit into an ideology.
Anyway, mainstream machine learning is very useful. And it's usually far too intricate and complicated to yield a deep everyday insight. I think the sooner you lose the need for everything to resonate deeply or to have a concise, insightful summary, the better.
It can still be evidence-based, just on a larger budget. I mean, you can get higher-quality examinations, like MRI and CT scans, even when public insurance couldn't afford them. Just because they wouldn't be done by default, and are only done for your money, doesn't mean they're not evidence-based. Evidence-based medicine doesn't say that a given person needs or doesn't need a given treatment or examination; it gives a risk/benefit/cost analysis. The final decision also depends on the budget.