Natália Mendonça

Sometimes I wish I had a halting oracle.


Engaging Seriously with Short Timelines

Thanks for pointing this out; you’re right that your net worth wouldn’t necessarily be correlated with world GDP in many plausible takeoff scenarios. I suppose the main determinants of whether the correlation stays as strong as it is today or falls to near zero would be the viability, during and after a takeoff, of things like government taxation and redistribution of wealth, and of trade involving humans. I wonder what I should expect the correlation to be.

ETA: After all, governments don’t redistribute human wealth to either horses or chimpanzees, and humans don’t engage in trade with them.

Engaging Seriously with Short Timelines
if things get crazy you want your capital to grow rapidly.

Why (if by "crazy" you mean "world output increasing rapidly")? Isn't investing so as to have much more money in case world output is very high somewhat like buying insurance to cover the taxi fare to the lottery office in case you win the lottery? Your net worth is positively correlated with world GDP, so worlds in which world GDP is higher are worlds in which you have more money, and thus worlds in which money has lower marginal utility to you. People do tend to value being richer than others in addition to merely being rich, but perhaps not enough to generate the numbers you need to make those investments obviously the best choice.

(h/t to Avraham Eisenberg for this point)
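The diminishing-marginal-utility point above can be made concrete with a small sketch. The numbers here are purely illustrative assumptions (log utility, hypothetical net worths), not anything from the original comment:

```python
import math

def marginal_utility(wealth):
    """Utility gained from one extra dollar, under log utility (an assumption)."""
    return math.log(wealth + 1) - math.log(wealth)

# Hypothetical net worths: one for an ordinary world, one for a world
# where world GDP boomed and your correlated net worth rose with it.
normal_world = 100_000
boom_world = 10_000_000

mu_normal = marginal_utility(normal_world)
mu_boom = marginal_utility(boom_world)

# Under log utility the marginal dollar is worth roughly 100x less in the
# boom world -- the sense in which betting on the boom resembles insuring
# the taxi fare to the lottery office.
print(mu_normal / mu_boom)
```

Under these assumptions, the extra dollars you positioned yourself to have arrive precisely in the worlds where each dollar matters least to you.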

Competition: Amplify Rohin’s Prediction on AGI researchers & Safety Concerns

The third question is

Does X agree that there is at least one concern such that we have not yet solved it and we should not build superintelligent AGI until we do solve it?

Note the word “superintelligent.” This question would not resolve as “never” if the consensus specified in the question is reached after AGI is built (but before superintelligent AGI is built). Rohin Shah notes something similar in his comment:

even if we build human-level reasoning before a majority is reached, the question could still resolve positively after that, since human-level reasoning != AI researchers are out of a job

Unrelatedly, you should probably label your comment “aside.”

Six economics misconceptions of mine which I've resolved over the last few years

Agreed — I feel like it makes more sense to be proud of changing your mind when doing so means adopting a model of similar or lower complexity than your old one that makes better predictions, rather than merely making your model more complex.

What information on (or relevant to) modal immortality do you recommend?

I recommend Forever and Again: Necessary Conditions for “Quantum Immortality” and its Practical Implications by Alexey Turchin. I don’t endorse everything in it (especially not the use of “Time” on the x-axis of Figure 3, the assumption that there is such a thing as a “correct” theory of personal identity, or the claim that there is a risk of “losing something important about one’s own existence” when using a teletransporter), but it is one of the articles most relevant to modal immortality that I’ve found in the bibliography of Digital Immortality: Theory and Protocol for Indirect Mind Uploading, which you mentioned in your answer (and, for that matter, one of the articles most relevant to modal immortality that I’ve found anywhere), and it is fairly thorough.

Project Proposal: Gears of Aging

I don’t see how that contradicts his claim. Having the data required to figure out X is really not the same as knowing X.

What truths are worth seeking?

Thanks for pointing this out! I fixed it.

What truths are worth seeking?
We don't know that all possible worlds are actual. This could be the only one.

Indeed. This entire post assumes all possible worlds are actual and reasons from there; I didn't mean to argue for their existence.

How were you first informed of the existence of numbers, colors, space, time, or people? It wasn't by non-contradiction.

Correct. But we are quite bad at actually reasoning from the law of non-contradiction; we often act as if we believed contradictory things (as shown by how frequently we make math errors). I conjecture that this is why we need observation to figure things out (assuming all possible worlds exist), although I am not completely sure.

Pecking Order and Flight Leadership

I think saying that people hate prophets is like saying that people hate ads: they hate the bad ones, because those are the ones they consciously notice, whereas the best ads/prophets probably exert their influence without people even thinking to associate them with those categories.

Besides, if "low rank in the pecking order but high decision-making power" applies to people who exert substantial influence with their ideas but don't have a correspondingly impressive amount of wealth or shiny credentials, it's not difficult to think of examples who are very far from hatable.
