Natália Mendonça

Sometimes I wish I had a halting oracle.


Comments

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

This misses the fact that people’s ability to negatively influence others might vary very widely, so that it is silly to worry about, say, 99.99% of people strongly negatively influencing you, but reasonable to worry about the other 0.01%. If Michael is one of those 0.01%, then Scott’s worldview is not inconsistent.

Petrov Day 2021: Mutually Assured Destruction?

What is the purpose of showing the red button to those without launch codes?

Rob B's Shortform Feed

(Brian Tomasik's view superficially sounds a lot like what Ben Weinstein-Raun is criticizing in his second paragraph, so I thought I'd add here the comment I wrote in response to Ben's post:

> Panhousism isn't exactly wrong, but it's not actually very enlightening. It doesn't explain how the houseyness of a tree is increased when you rearrange the tree to be a log cabin. In fact it might naively want to deny that the total houseyness is increased.

I really don’t see how that is what panhousism would say, at least what I have in mind when I think of panhousism (which is analogous to what I have in mind when I think of (type-A materialist[1]) panpsychism). If all that panhousism means is that (1) “house” is a cluster in thingspace and (2) nothing is infinitely far away from the centroid of the “house” cluster, then it seems very obvious to me that the distance of a tree from the “house” centroid decreases if you rearrange the tree into a log cabin. As an example, focus on the “suitability to protect humans from rain” dimension in thingspace. It’s very clear to me that turning a tree into a log cabin moves it closer to the “house” cluster in that dimension. And the same principle applies to all other dimensions. So I don’t see your point here.
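To make the geometric claim concrete, here is a toy sketch; the two dimensions and all of the numbers are made up purely for illustration, not taken from Ben's post or from Tomasik's essay:

```python
import numpy as np

# Toy "thingspace" with two illustrative dimensions (values are made up):
#   [suitability to protect humans from rain, degree of enclosed walls-and-roof structure]
house_centroid = np.array([0.9, 0.9])  # rough center of the "house" cluster
tree           = np.array([0.2, 0.1])
log_cabin      = np.array([0.8, 0.8])

def distance_to_house_cluster(x):
    """Euclidean distance from a point in thingspace to the 'house' centroid."""
    return np.linalg.norm(x - house_centroid)

print(distance_to_house_cluster(tree))       # larger: a tree is far from the "house" cluster
print(distance_to_house_cluster(log_cabin))  # smaller: rearranging the tree reduced the distance
```

If rearranging the tree moves every coordinate toward the corresponding coordinate of the centroid, the overall distance can only shrink, which is all the "total houseyness increased" claim needs.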

I'm not sure if I should quote Ben's reply to me, since his post is not public, but he pretty much said that his original post was not addressing type-A physicalist panpsychism, although he finds this view unhelpful for other reasons.

)

Rob B's Shortform Feed

> I think panpsychism is outrageously false, and profoundly misguided as an approach to the hard problem.

What do you think of Brian Tomasik's flavor of panpsychism, which he says is compatible with (and, indeed, follows from) type-A materialism? As he puts it,

> It's unsurprising that a type-A physicalist should attribute nonzero consciousness to all systems. After all, "consciousness" is a concept -- a "cluster in thingspace" -- and all points in thingspace are less than infinitely far away from the centroid of the "consciousness" cluster. By a similar argument, we might say that any system displays nonzero similarity to any concept (except maybe for strictly partitioned concepts that map onto the universe's fundamental ontology, like the difference between matter vs. antimatter). Panpsychism on consciousness is just one particular example of that principle.

How much do variations in diet quality determine individual productivity?

Thank you so much! I’m looking forward to the preprint. If you don’t mind me asking, was your sample fully vegetarian?

How much do variations in diet quality determine individual productivity?

This is pretty interesting; I’ll look into it. Thank you.

How much do variations in diet quality determine individual productivity?

Those studies could provide evidence in favor of his thesis, though, which is why I’m looking for them.

How much do variations in diet quality determine individual productivity?

I’m looking for answers less like “this thing made me feel better/worse” and more like “these RCTs with a reasonable methodology showed on average a long-term X-point IQ increase/Y-point HAM-D reduction in the intervention groups, and these analogous animal studies found a similar effect,” in which X and Y are numbers generally agreed to be “very large” in each context.

This also seems to be the kind of question that variance component analyses would help elucidate.
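For concreteness, here is a toy sketch of the kind of variance-component estimate I have in mind; the generative model, the effect size, and all of the numbers are simulated and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_days = 1000, 30

# Purely illustrative model: each person's daily productivity is a stable
# diet-quality component, plus a stable non-diet component, plus daily noise.
diet_quality = rng.normal(0.0, 1.0, n_people)            # stable per-person diet quality
other_traits = rng.normal(0.0, 2.0, n_people)            # everything else that is stable
daily_noise  = rng.normal(0.0, 1.5, (n_people, n_days))  # day-to-day fluctuation

diet_effect = 0.5 * diet_quality                          # assumed effect size (made up)
productivity = (diet_effect + other_traits)[:, None] + daily_noise

# Variance-component-style question: what share of the variance in people's
# average productivity is attributable to the diet component?
person_means = productivity.mean(axis=1)
share = np.var(diet_effect) / np.var(person_means)
print(f"Share of between-person variance from diet (toy model): {share:.2f}")
```

In real data the diet component would have to be estimated from measured diet quality rather than read off directly, but the target quantity is the same ratio.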

I do take a creatine supplement, despite not expecting it to help cognition/mood/productivity that much.

Anti-Aging: State of the Art

> [F]ew members of [LessWrong] seem to be aware of the current state of the anti-aging field, and how close we are to developing effective anti-aging therapies. As a result, there is a much greater (and in my opinion, irrational) overemphasis on the Plan B of cryonics for life extension, rather than Plan A of solving aging. Both are important, but the latter is under-emphasised despite being a potentially more feasible strategy for life extension given the potentially high probability that cryonics will not work.

I think there is a good reason for there being more focus on cryonics than on solving aging on LessWrong. Cryonics is a service anyone with the means can purchase right now, whereas there is barely anything anyone can do to slow their aging (modulo getting young blood transfusions and perhaps taking a few drugs, neither of which works that well).

If you are a billionaire, or very knowledgeable about biology, you might be able to contribute somewhat to anti-aging research — but only a very small fraction of the population is either of those things, whereas pretty much anyone that can get life insurance in the US can get cryopreserved.

What would be a good name for the view that the value of our decisions is primarily determined by how they affect causally-disconnected regions of the multiverse?

Things outside of your future light cone (that is, things you cannot physically affect) can “subjunctively depend” on your decisions. If beings outside of your future light cone simulate your decision-making process (and base their own decisions on yours), you can affect things that happen there. It can be helpful to take into account those effects when you’re determining your decision-making process, and to act as if you were all of your copies at once.
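A toy way to see the "subjunctive dependence" point; the scenario below is my own illustration of the idea, not something taken from the FDT post:

```python
# Two causally disconnected agents instantiate the *same* decision procedure,
# so editing that procedure changes what happens in both places at once,
# even though no causal signal ever passes between them.

def my_decision_procedure(observation: str) -> str:
    """The one algorithm that both I and a faraway simulation of me run."""
    return "cooperate" if observation == "agent running my algorithm detected" else "defect"

# Outcome in my region of the universe:
outcome_here = my_decision_procedure("agent running my algorithm detected")

# Outcome outside my future light cone, where some being simulates the same
# procedure and acts on its output:
outcome_there = my_decision_procedure("agent running my algorithm detected")

# The two outcomes agree by logical necessity, not by causation; that is the
# sense in which the far-away outcome "subjunctively depends" on my decision.
assert outcome_here == outcome_there
print(outcome_here, outcome_there)
```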

Those were some of my takeaways from reading about functional decision theory (described in the post I linked above) and updateless decision theory.
