All of Natália Mendonça's Comments + Replies

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

This misses the fact that people's ability to negatively influence others might vary very widely, making it silly to worry about, say, 99.99% of people strongly negatively influencing you, but reasonable to worry about the other 0.01%. If Michael is one of those 0.01%, then Scott's worldview is not inconsistent.

9 · TekhneMakre · 3mo: If it's reasonable to worry about the .01%, it's reasonable to ask how the ability varies. There's some reason, some mechanism. This is worth discussing even if it's hard to give more than partial, metaphorical hypotheses. And if there are these .01% of very strong influencers, that is still an exception to strong liberal individualism.
4 · jessicata · 3mo: That would still admit some people at Leverage having significant mental influence, especially if they got into weird mental tech that almost no one gets into. A lot of the weirdness is downstream of them encountering "body workers" who are extremely good at e.g. causing mental effects by touching people's back a little; these people could easily be extremal, and Leverage people learned from them.

I've had sessions with some post-Leverage people where it seemed like really weird mental effects were happening in some implicit channel (like, I feel a thing poking at the left side of my consciousness and the person says, "oh, I just did an implicit channel thing, maybe you felt that"). I've never experienced effects like that (without drugs, and not obviously on drugs either, though the comparison is harder) with others, including with Michael, Anna, or normal therapists. This could be "placebo" in a way that makes it ultimately not that important, but still, if we're admitting that 0.01% of people have these mental effects then it seems somewhat likely that this includes some Leverage people.

Also, if the 0.01% is disproportionately influential (which, duh), then getting more detailed models than "charisma" is still quite important.
Petrov Day 2021: Mutually Assured Destruction?

What is the purpose of showing the red button to those without launch codes?

It means that you don't need to have the other person's login credentials to launch the nukes (I don't want to encourage password theft, and also think that the case of someone sharing just their codes is more interesting than someone sharing full access to their account). It also creates common knowledge of what is happening on the site, in a pretty clear and obvious way.

Rob B's Shortform Feed

(Brian Tomasik's view superficially sounds a lot like what Ben Weinstein-Raun is criticizing in his second paragraph, so I thought I'd add here the comment I wrote in response to Ben's post:

> Panhousism isn't exactly wrong, but it's not actually very enlightening. It doesn't explain how the houseyness of a tree is increased when you rearrange the tree to be a log cabin. In fact it might naively want to deny that the total houseyness is increased.

I really don’t see how that is what panhousism would say, at least what I have in mind when I think of panhou

... (read more)
7 · Brian_Tomasik · 5mo: Thanks for sharing. :) Yeah, it seems like most people have in mind type-F monism when they refer to panpsychism, since that's the kind of panpsychism that's growing in popularity in philosophy in recent years. I agree with Rob's reasons (https://reducing-suffering.org/not-type-f-monist/#Doesnt_explain_our_belief_in_consciousness) for rejecting that view.
Rob B's Shortform Feed

> I think panpsychism is outrageously false, and profoundly misguided as an approach to the hard problem.

What do you think of Brian Tomasik's flavor of panpsychism, which he says is compatible with (and, indeed, follows from) type-A materialism? As he puts it,

> It's unsurprising that a type-A physicalist should attribute nonzero consciousness to all systems. After all, "consciousness" is a concept -- a "cluster in thingspace" -- and all points in thingspace are less than infinitely far away from the centroid of the "consciousness" cluster. By a similar argumen

... (read more)
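A minimal numeric sketch (my own illustration, not from Tomasik's essay) of the "cluster in thingspace" point quoted above: if concept membership is graded by distance from a cluster centroid, every point gets a nonzero, if astronomically small, degree of membership, because no point is infinitely far from the centroid. The feature vectors and the exp(-distance) falloff here are made-up assumptions purely for illustration.

```python
# Toy "thingspace" illustration (hypothetical features and numbers): graded concept
# membership falls off with distance from the cluster centroid but never reaches zero.
import math

# Made-up feature vector for a paradigm conscious system.
consciousness_centroid = [10.0, 10.0, 10.0]

def membership(point: list[float]) -> float:
    """Graded membership in the 'consciousness' cluster, assuming an exp(-distance) falloff."""
    distance = math.dist(point, consciousness_centroid)
    return math.exp(-distance)

print(membership([9.9, 10.0, 10.0]))  # something brain-like: ~0.9, near the centroid
print(membership([0.0, 0.0, 0.0]))    # something electron-like: ~3e-8, vastly smaller but nonzero
```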
4 · Rob Bensinger · 5mo: I haven't read Brian Tomasik's thoughts on this, so let me know if you think I'm misunderstanding him / should read more.

The hard problem of consciousness at least gives us a prima facie reason to consider panpsychism. (Though I think this ultimately falls apart when we consider 'we couldn't know about the hard problem of consciousness if non-interactionist panpsychism were true; and interactionist panpsychism would mean new, detectable physics'.) If we deny the hard problem, then I don't see any reason to give panpsychism any consideration in the first place.

We could distinguish two panpsychist views here: 'trivial' (doesn't have any practical implications, just amounts to defining 'consciousness' so broadly as to include anything and everything); and 'nontrivial' (has practical implications, or at least the potential for such; e.g., perhaps the revelation that panpsychism is true should cause us to treat electrons as moral patients, with their own rights and/or their own welfare).

But I see no reason whatsoever to think that electrons are moral patients, or that electrons have any other nontrivial mental property. The mere fact that we don't fully understand how human brains work is not a reason to ask whether there's some new undiscovered feature of particles ~10^31 times smaller than a human brain that explains the comically larger macro-process -- any more than limitations in our understanding of stomachs would be a reason to ask whether individual electrons have some hidden digestive properties.
4 · Natália Mendonça · 5mo: (Brian Tomasik's view superficially sounds a lot like what Ben Weinstein-Raun is criticizing in his second paragraph, so I thought I'd add here the comment I wrote in response to Ben's post: I'm not sure if I should quote Ben's reply to me, since his post is not public, but he pretty much said that his original post was not addressing type-A physicalist panpsychism, although he finds this view unuseful for other reasons.)
How much do variations in diet quality determine individual productivity?

Thanks a lot! I’m looking forward to the preprint. If you don’t mind me asking, was your sample fully vegetarian?

2 · JanBrauner · 5mo: We had two groups, one vegetarian/vegan, and one omnivore.
How much do variations in diet quality determine individual productivity?

This is pretty interesting; I’ll look into it. Thank you.

How much do variations in diet quality determine individual productivity?

Those studies could provide evidence in favor of his thesis, though, which is why I’m looking for them.

How much do variations in diet quality determine individual productivity?

I’m looking for answers less like “this thing made me feel better/worse” and more like “these RCTs with a reasonable methodology showed on average a long-term X-point IQ increase/Y-point HAM-D reduction in the intervention groups, and these analogous animal studies found a similar effect,” in which X and Y are numbers generally agreed to be “very large” in each context.

This also seems to be the kind of question that variance component analyses would help elucidate.

I do take a creatine supplement, despite expecting it not to help cognition/mood/productivity that much.

3 · ChristianKl · 6mo: Those studies could not falsify Jim Babcock's thesis, given that he doesn't assume that the same nutritional intervention has the same effect on different people.
Anti-Aging: State of the Art

> [F]ew members of [LessWrong] seem to be aware of the current state of the anti-aging field, and how close we are to developing effective anti-aging therapies. As a result, there is a much greater (and in my opinion, irrational) overemphasis on the Plan B of cryonics for life extension, rather than Plan A of solving aging. Both are important, but the latter is under-emphasised despite being a potentially more feasible strategy for life extension given the potentially high probability that cryonics will not work.

I think there is a good reason for there be... (read more)

I agree that the LessWrong community can have a positive impact on the cryonics field by signing up for cryonics and directing more capital into this extremely underfunded field. Cryonics is especially relevant for people older than 40 today, who are much less likely to make it to longevity escape velocity.

However, I disagree that (1) there is barely anything people can do now to slow their aging and (2) there is barely anything that the average person can do to support the research and development of anti-aging therapies. I plan to write a separate post cov... (read more)

What would be a good name for the view that the value of our decisions is primarily determined by how they affect causally-disconnected regions of the multiverse?

Things outside of your future light cone (that is, things you cannot physically affect) can “subjunctively depend” on your decisions. If beings outside of your future light cone simulate your decision-making process (and base their own decisions on yours), you can affect things that happen there. It can be helpful to take into account those effects when you’re determining your decision-making process, and to act as if you were all of your copies at once.

Those were some of my takeaways from reading about functional decision theory (described in the post I l... (read more)
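A minimal toy sketch (my own, in Python, with hypothetical names; not from the linked post) of the structure described above: a "predictor" that never receives any signal from the agent runs a copy of the agent's decision function, so the distant outcome covaries with the agent's algorithm even though there is no physical channel between them.

```python
# Toy illustration of subjunctive dependence (hypothetical setup): the predictor
# simulates the agent's decision procedure instead of communicating with the agent.

def agent_decision() -> str:
    """The agent's decision procedure -- its "source code"."""
    return "one-box"

def far_away_predictor(simulated_decision_fn) -> str:
    """A causally disconnected process that simulates the agent and acts on the result."""
    predicted = simulated_decision_fn()  # simulation, not communication
    return "reward" if predicted == "one-box" else "no reward"

# Changing agent_decision changes the predictor's output, with no causal contact:
# this is the sense in which acting "as if you were all of your copies" can matter.
print(agent_decision())                    # -> "one-box"
print(far_away_predictor(agent_decision))  # -> "reward"
```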

1 · Slider · 1y: A far off decision maker can't have direct evidence of your existence, as then you would be the cause of their evidence. A far off observer can see a process that it can predict will result in you, and things that it does may be co-causes, together with you, of the future between you. I still think that the verb "affect" is wrong here. Say there is a pregnant mother and her friend leaves for another country and lives there in isolation for 18 years but, knowing there is likely to be a person, sends a birthday gift with a card referring to "happy 18th birthday". Nothing that you do in your childhood or adulthood can affect what the information on the card reads if the far off country is sufficiently isolated. The event of you opening the box will be a product of both how you lived your childhood and what the sender chose to put in the box. Even if the gift sender would want to reward better persons with better gifts, the choice needs to be based on what kind of baby you were and not what kind of adult you are. And, maybe crucially, adult you will have a past that is not the past of baby you. The gift giver has no hope of taking a stance towards this data.
What would be a good name for the view that the value of our decisions is primarily determined by how they affect causally-disconnected regions of the multiverse?

> I still don't understand what you mean by "causally-disconnected" here. In physics, it's anything in your future light cone (under some mild technical assumptions).

I think you mean to say “causally-connected,” not “causally-disconnected”?

I’m referring to regions outside of our future light cone.

> A causally disconnected part would be caring now about something already beyond the cosmological horizon

Yes, that is what I’m referring to.

2 · shminux · 1y: Ah, okay. I don't see any reason to be concerned about something that we have no effect on. Will try to explain below.

Regarding "subjunctive dependency" from the post linked in your other reply: I agree with a version of "They are questions about what type of source code you should be running", formulated as "what type of an algorithm results in max EV, as evaluated by the same algorithm?" This removes the contentious "should" part, which implies that you have the option of running some other algorithm (you don't; you are your own algorithm).

The definition of "subjunctive dependency" in the post is something like "the predictor runs a simplified model of your actual algorithm that outputs the same result as your source code would, with high fidelity," and therefore the predictor's decisions "depend" on your algorithm, i.e. you can be modeled as affecting the predictor's actions "retroactively". Note that you, an algorithm, have no control over what that algorithm is; you just are it, even if your algorithm comes equipped with routines that "think" about themselves.

If you also postulate that the predictor is an algorithm as well, then the question of decision theory in the presence of predictors becomes something like "what type of agent algorithm results in max EV when immersed in a given predictor algorithm?" In that approach the subjunctive dependency is not a very useful abstraction, since the predictor algorithm is assumed to be fixed. In which case there is no reason to consider causally disconnected parts of the agent's universe. Clearly your model is different from the above, since you seriously think about untestables and unaffectables.
2 · gbear605 · 1y: From my understanding of the definition of causality, any action made in this moment cannot affect anywhere that is causally-disconnected from where and when we are. After all, if it could, then that region definitionally wouldn't be causally disconnected from us. Are you referring to multiple future regions that are causally connected to the Earth at the current moment but are causally disconnected from each other?
What would be a good name for the view that the value of our decisions is primarily determined by how they affect causally-disconnected regions of the multiverse?

Thanks for your comment :) The definition of causality I meant to use in the question is physical causality, which doesn’t refer to things like affecting what happens in causally-disconnected regions of the multiverse that simulate your decision-making process. I’m going to edit the question to make that clearer.

Engaging Seriously with Short Timelines

Thanks for pointing this out; you’re right that your net worth wouldn’t necessarily be correlated with world GDP in many plausible scenarios of how takeoff could happen. I suppose the viability of things like taxation and redistribution of wealth by governments, as well as trade involving humans, during and after a takeoff could be the main determinants of whether the correlation between the two would be as strong as it is today or closer to zero. I wonder what I should expect the correlation to be.

ETA: After all, governments don’t redistribute human wealth to either horses or chimpanzees, and humans don’t engage in trade with them.

Engaging Seriously with Short Timelines
> if things get crazy you want your capital to grow rapidly.

Why (if by "crazy" you mean "world output increasing rapidly")? Isn't investing to try to have much more money in case world output is very high somewhat like buying insurance to pay the cost of a taxi to the lottery office in case you win the lottery? Your net worth is positively correlated with world GDP, so worlds in which world GDP is higher are worlds in which you have more money, and thus worlds in which money has a lower marginal utility to you. People do tend to valu... (read more)

In an automation scenario, is your net worth correlated with world GDP? (Was the net worth of horses correlated with world GDP growth during the Industrial Revolution? Or chimpanzees during all human history? In an em scenario, who do the gains flow to - is it humans who own no capital and who earn income through labor?)
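A toy numeric illustration (mine, not from the comment; the wealth figures and log utility are assumptions) of the marginal-utility point above: if your wealth is correlated with world GDP, the extra dollars you win by betting on "crazy" growth arrive precisely in the worlds where each dollar is worth the least to you.

```python
# Toy illustration: with diminishing (here, logarithmic) marginal utility, an extra
# dollar is worth much less in the worlds where you are already rich -- which, if your
# net worth tracks world GDP, are exactly the "crazy growth" worlds.
import math

def marginal_utility(wealth: float, extra: float = 1.0) -> float:
    """Utility gained from one extra unit of wealth, assuming log utility."""
    return math.log(wealth + extra) - math.log(wealth)

normal_world_wealth = 100_000    # hypothetical net worth in a business-as-usual world
boom_world_wealth = 10_000_000   # hypothetical net worth if GDP (and your wealth) explodes

print(marginal_utility(normal_world_wealth))  # ~1e-05 utils per extra dollar
print(marginal_utility(boom_world_wealth))    # ~1e-07 utils per extra dollar: ~100x less
```

Whether this consideration dominates depends on the correlation actually holding, which is exactly what the paragraph above questions.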

Competition: Amplify Rohin’s Prediction on AGI researchers & Safety Concerns

The third question is

> Does X agree that there is at least one concern such that we have not yet solved it and we should not build superintelligent AGI until we do solve it?

Note the word “superintelligent.” This question would not resolve as “never” if the consensus specified in the question is reached after AGI is built (but before superintelligent AGI is built). Rohin Shah notes something similar in his comment:

> even if we build human-level reasoning before a majority is reached, the question could still resolve positively after that, since human-leve

... (read more)
1 · Gurkenglas · 1y: It was meant as a submission, except that I couldn't be bothered to actually implement my distribution on that website :) - even/especially after superintelligent AI, researchers might come to the conclusion that we weren't prepared and *shouldn't* build another - regardless of whether the existing sovereign would allow it.
Six economics misconceptions of mine which I've resolved over the last few years

Agreed — I feel like it makes more sense to be proud of changing your mind when that entails acquiring a model that makes better predictions and has complexity similar to or lower than that of the model you used to have, rather than merely making your model more complex.

What information on (or relevant to) modal immortality do you recommend?

I recommend Forever and Again: Necessary Conditions for “Quantum Immortality” and its Practical Implications by Alexey Turchin. I don’t endorse everything in there (especially not the use of “Time” on the x-axis of Figure 3, the assumption that there is such a thing as a “correct” theory of personal identity, and the claim that there is a risk of “losing something important about one’s own existence” when using a teletransporter), but it is one of the articles most relevant to modal imm... (read more)

Project Proposal: Gears of Aging

I don’t see how that contradicts his claim. Having the data required to figure out X is really not the same as knowing X.

Thanks for pointing this out! I fixed it.

5 · noggin-scratcher · 2y: Ironically I read this after the correction and still thought it scanned a little oddly; as if it were suggesting that the world naturally leads us to believe that things ought to fall at different rates specifically or only while in a vacuum. For my contribution to the bikeshedding, maybe "ought to fall at different rates, even while in a vacuum".
> We don't know that all possible worlds are actual. This could be the only one.

Indeed. This entire post assumes all possible worlds are actual and reasons from there; I didn't mean to argue for their existence.

> How were you first informed of the existence of numbers, colors, space, time, or people? It wasn't by non-contradiction.

Correct. But we are quite bad at actually reasoning from the law of non-contradiction; we often tend to act as if we believed contradictory things (as is shown by how frequently we make math errors). I conjecture tha... (read more)

Pecking Order and Flight Leadership

I think saying that people hate prophets is like saying that people hate ads. They hate the bad ones, because those are the ones they consciously notice, whereas the best ads/prophets probably exert their influence without people even thinking of associating them with those categories.

Besides, if "low rank in the pecking order but high decision-making power" applies to people who exert substantial influence with their ideas but don't have a correspondingly impressive amount of wealth or shiny credentials, it's not difficult to think of examples who are very far from hatable.

I agree. I used the modifier “sufficiently” in order to avoid making claims about where a hard line between complex goals and simple goals would lie. Should have made that clearer.

Thank you for the correction. Thinking about it, I think that is true even of humans, in a certain sense. I would guess that the ability to hold several goal-nodes in one’s mind would scale with g and/or working memory capacity. Someone who is very smart and has tolerance for ambiguity would be able to aim for a very complex goal while simultaneously maintaining great performance in the day-to-day mundane tasks they need to accomplish, even when those tasks seem to bear no resemblance to the original goal at all.

It seems to be a skill that requires “buckets” http

... (read more)
Dependability

I noticed recently that the tradeoff you have to make to be more dependable in that way is to be less open. Less open to new projects, new information, new people. You have to be less malleable, and more definite. It is largely about being able to knowingly cut the majority of the world from your attention, to ignore what isn't important. I don't think that's a bad thing -- there's much more joy in being focused and determined than in shifting your attention and commitment around. But it is something that comes more naturally to people ... (read more)

2 · romeostevensit · 3y: Sounds like choosing to write the bottom line first in a bounded way, and accepting that tradeoff for certain gains.
Retrospective on a quantitative productivity logging attempt

Who are you, and how is it that we don't know each other yet?

3 · femtogrammar · 3y: Hmm, I looked you up on Facebook and apparently you sent me a friend request god-knows-when (which I presumably ignored because I didn't know you), which I have just accepted.
Life can be better than you think

I’m fairly certain that the vast majority of the time, negative emotions are ego-dystonic.

They’re not something actively sought out from a desire for meaning; they’re something essentially inflicted upon the sufferer by parts of their mind that they can’t control.

I think acceptance of negative emotion is often driven by being in that position of helplessness, and by a desire to maintain a good self-image and avoid entering the negativity loop, rather than by having control over whether it happens or not and seeking it out because it brings meaning.

3 · Matt Goldenberg · 3y: Working with modalities like Coherence therapy, internal double crux, and Internal Family Systems, I've developed almost the reverse hypothesis. In most cases, it seems like "negative" patterns and emotions come from a subconscious plan: "If I experience these negative emotions in these ways, I can (eventually) get my needs met." Coherence therapy calls this the "pro-symptom position". Examples of pro-symptom positions might include:

"If I stop being miserable, then I won't be able to relate to my mother, and she won't love me. So I choose to continue to be miserable."

"If I go on without Dad and decide to get somewhere on my own, then I'm responsible for my own life. That feels really scary, so I'm holding back."

"If I stop feeling anxiety around other people, then I might not be as careful about what I say, and then they might hurt me. So I choose to continue to feel anxiety."
Life can be better than you think

Yeah, it would be interesting to investigate how that would work. I think the insights would serve to set a lower bound on mood, much as religion does for many people.

Life can be better than you think

I get bodily fatigue when I don't take it for over five days; I haven't ventured farther than that. No particular reason to.

3 · shminux · 3y: I asked because odds are that your insights only work for a brain with a certain neurochemistry. I have seen this in those with bipolar. Many have all these amazing insights when (hypo)manic, but none of them have any effect when depressed.