
Causal Chain

Comments

X explains Z% of the variance in Y
Causal Chain · 2mo · 50

The relevant intuition I use comes from the [law of total variance](https://en.m.wikipedia.org/wiki/Law_of_total_variance) (or variance decomposition formula):

Var(Y) = E[Var(Y|X)] + Var(E[Y|X])

An interpretation: if you sample Y through a process of gaining partial information step by step, the variances of the steps add up to the variance of sampling Y directly.

The first two terms are V_{tot}(Y) and E[Var_{rem}(Y|X)] respectively, while the last term is the "explained" variance.

To give an intuition for Var(E[Y|X]): 

  • If X gives me some information about Y, then my new mean for Y should change depending on X. If X gives little information, then it should only wiggle my mean estimate of Y a little (low variance), but a very explanatory X will move my mean estimate of Y a lot (high variance).
  • If X gave no information, then E[Y|X] should have no variance (it's always equal to the mean E[Y]).
  • If X completely explains Y, then E[Y|X] can equal any value in the domain of Y: every y has a corresponding x which, if sampled, means that P(Y=y|X=x) = 1. Indeed, E[Y|X] will have exactly the same distribution as Y, and so it will carry the full variance of Y.
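
A quick sanity check (my own sketch, not part of the original comment): simulate a toy model Y = X + noise and compare the two sides of the decomposition.

```python
# Minimal simulation sketch of Var(Y) = E[Var(Y|X)] + Var(E[Y|X]),
# using an assumed toy model Y = X + noise (my choice, not from the post).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

x = rng.normal(0.0, 1.0, n)       # X ~ N(0, 1)
y = x + rng.normal(0.0, 0.5, n)   # Y | X=x ~ N(x, 0.25)

# Here E[Y|X] = X, so Var(E[Y|X]) = Var(X) = 1 (the "explained" part),
# and Var(Y|X) = 0.25 for every x, so E[Var(Y|X)] = 0.25 (the "remaining" part).
print(np.var(y))                  # total variance, ~1.25
print(np.var(x) + 0.25)           # explained + remaining, also ~1.25
```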
Reply
Saving the world sucks
Causal Chain · 2y · 10

I interpret upvotes/downvotes as:

  1. Do I want other people to read this post?
  2. Do I want to encourage the author and others to write more posts like this?

And I favour this post for both of those reasons.

I agree that this post doesn't make a philosophical argument for its position, but I don't require that of every post. I value it as an observation of how the EA movement has affected this particular person, and as criticism.

A couple of strongly Anti-EA friends of mine became so due to a similar moral burnout, so it's particularly apparent to me how little emphasis is put on mental health.

Reply
Conflicts between emotional schemas often involve internal coercion
Causal Chain · 2y · 10

This dynamic reminds me of arguments-as-soldiers from The Scout Mindset. If people are used to wielding arguments as soldiers on themselves, then it seems relatively easy to extend those patterns to reasoning with others.

Testing this hypothesis seems tricky. One avenue is the prediction "people with more internal conflicts are more predisposed to the soldier mindset". I can see a couple of in-model ways for this to be untrue, though.

Reply
New User's Guide to LessWrong
Causal Chain · 2y · 30

Some typos:

> rationality lessons we've accumulated and made part of our to our thinking

Seems like some duplicated words here.

> weird idea like AIs being power and dangerous in the nearish future.

Perhaps: "weird ideas like AIs being powerful and dangerous"

Reply
Simulacrum 3 As Stag-Hunt Strategy
Causal Chain · 3y · 50

This seems like a reasonable mechanism, but I thought we already had one: belief-in-belief makes it easier to lie without being caught.

Reply
Do meta-memes and meta-antimemes exist? e.g. 'The map is not the territory' is also a map
Causal Chain · 3y · 30

The phrase "the map is not the territory" is not just a possibly conceivable map, it's part of my map.

Thinking in terms of programming, it's vaguely like I have a class instance s where one of the elements p is a pointer to the instance itself. So I can write *(s.p) == s. Or go further and write *(*(s.p).p) == s.

I can go as far as I want with only the tools offered to me by my current map.
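
Here's a minimal Python sketch of that self-reference (the class name Map and the attribute p are mine, purely illustrative):

```python
# Hypothetical illustration: a structure holding a reference to itself,
# mirroring *(s.p) == s from the comment above.
class Map:
    def __init__(self):
        self.p = None  # placeholder; set below to point back at this instance

s = Map()
s.p = s  # the map contains (a reference to) the map

assert s.p is s        # *(s.p) == s
assert s.p.p is s      # *(*(s.p).p) == s
assert s.p.p.p.p is s  # and so on, as deep as I care to go
```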

Reply
Writing this post as rationality case study
Causal Chain · 3y · 20

My immediate mental response was that I value this post, but it doesn't fit with the mood of LessWrong, which is kind of sad because this seems practical. But this is heavily biased by how upvotes are divvied out, since I typically read highly-upvoted posts.

> It seems less likely to maximize my happiness or my contribution to society, but it doesn't make me not want it

I thought this was clear to me, but then I thought some more and I no longer think it's straightforward. It pattern-matched against:

  • high value vs low probability
  • personalities are inbuilt biases in human strategy

But deductions from them seem of dubious use.

I agree that it's a good idea to give things a try to collect data before making longer-term plans. Since you're explicitly exploring rather than exploiting, I suggest trying low-effort wacky ideas in many different directions (e.g. not on LessWrong).

Reply
What Are You Tracking In Your Head?
Causal Chain · 3y · 3 · -1

This reminds me of dual N-back training. Under this frame, dual N-back would improve your ability to track extra things. It's still unclear to me whether training it actually improves mental skills in other domains.

Reply
What Are You Tracking In Your Head?
Causal Chain · 3y · 20

The improvement to my intuitive predictive ability is definitely a factor in why I find it comforting. I don't know what fraction of it is aesthetics; I'd say a poorly calibrated 30%. Like maybe it reminds me of games where I could easily calculate the answer, so my brain assumes I am in that situation as long as I don't test that belief.

I'm definitely only comparing the sizes of changes to the same stat. My intuition also assumes diminishing returns for everything except defense, which has accelerating returns, and knowing the size of each step helps inform this.

Reply