From the SIA viewpoint the anthropic update process is essentially just a prior and an update. You start with a prior on each hypothesis (possible universe) and then update by weighting each by how many observers in your epistemic situation each universe has.
This perspective sees the equalization of “anthropic probability mass” between possible universes prior to apportionment as an unnecessary distortion of the process: after all, “why would you give a hypothesis an artificial boost in likelihood just because it posits fewer observers than other hypothese...
On the question of how to modify your prior over possible universe+index combinations based on observer counts, the way I like to think of the SSA vs SIA methods is this: with SSA you first apportion probability mass to each possible universe and then divide each universe’s share among the possible observers within it, while with SIA you apportion directly among possible observers, irrespective of which possible universes they are in.
The numbers come out the same as considering it in the way you write in the post, but this way feels more intuitive to me (as a natural way of doing things, rather than “and then we add an arbitrary weighting to make the numbers come out right”) and maybe to others.
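To make the two apportionment orders concrete, here is a minimal sketch with made-up numbers; the two universes, the 50/50 prior, and the observer counts are hypothetical, and I assume every observer is in my epistemic situation:

```python
priors = {"A": 0.5, "B": 0.5}      # prior over possible universes
observers = {"A": 1, "B": 10}      # observers in my epistemic situation

# SSA: apportion mass to universes first, then split each universe's
# share among its observers. Summed back up per universe, the
# posterior over universes just equals the prior here.
ssa_universe = dict(priors)
ssa_per_observer = {u: priors[u] / observers[u] for u in priors}

# SIA: apportion directly among observers, so each universe ends up
# with mass proportional to prior * observer count, renormalized.
weights = {u: priors[u] * observers[u] for u in priors}
total = sum(weights.values())
sia_universe = {u: w / total for u, w in weights.items()}

print(ssa_universe)      # {'A': 0.5, 'B': 0.5}
print(ssa_per_observer)  # {'A': 0.5, 'B': 0.05}
print(sia_universe)      # {'A': ~0.091, 'B': ~0.909}
```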
If you’re adding the salt after you turn on the burner then it doesn’t actually add to the heating+cooking time.
To steelman the anti-sex-for-rent case, consider that after the tenant has entered into that arrangement, they could feel pressure to keep having sex with the landlord (even if they would prefer not to, and would not at that later point choose to enter the contract) due to the switching cost of moving to a new home. (Though this also applies to monetary rent, the potential for threatening the boundaries of consent is generally seen as more harmful than threatening the boundaries of one’s budget.)
This could also be used as a point of levera...
In terms of the similarity between telling the truth and lying, think about how much of a change you would have to make to the mindset of a person at each level to get them to level 1 (truth).
Level 2: they’re already thinking about world models; you just need to get them to cooperate with you in seeking the truth rather than trying to manipulate you.
Level 3: you need to give them the idea of words as having some sort of correspondence with the actual world, rather than just as floating tribal signifiers. After doing that, you still have to make sure that they are f...
Re: “best vs better”: claiming that something is the best can be a weaker claim than claiming that it is better than something else. Specifically, if two things are of equal quality (and not surpassed) then both are the best, but neither is better than the other.
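One way to formalize this (notation mine): write $x \succ y$ for “x is better than y”, and define $\mathrm{best}(x) \iff \neg\exists y\,(y \succ x)$. If $x$ and $y$ are of equal quality and nothing surpasses either, then $\mathrm{best}(x)$ and $\mathrm{best}(y)$ both hold, while neither $x \succ y$ nor $y \succ x$ does.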
Apocryphally, I’ve heard that certain types of goods are regarded by regulatory agencies as being of uniform quality, such that there’s not considered to be an objective basis for claiming that your brand is better than another. However, you can freely claim that yours is the best, as there is similarly no objective basis on which to prove that your product is inferior to another (as would be needed to show that it is not the best).
One other mechanism that would lead to the persistence of e.g. antibiotic resistance would be when the mutation that confers the resistance is not costly (e.g. a mutation which changes the shape of a protein targeted by an antibiotic to a different shape that, while equally functional, is not disrupted by the antibiotic). Note that I don’t actually know whether this mechanism is common in practice.
Thanks for writing this nice article. Also thanks for the “Qualia the Purple” recommendation. I’ve read it now and it really is great.
In the spirit of paying it forward, I can recommend https://imagakblog.wordpress.com/2018/07/18/suspended-in-dreams-on-the-mitakihara-loopline-a-nietzschean-reading-of-madoka-magica-rebellion-story/ as a nice analysis of themes in PMMM.
It seems like this might be double-counting uncertainty? Normal EV-type decision calculations already (should, at least) account for uncertainty about how our actions affect the future.
Adding explicit time-discounting seems like it would over-adjust in that regard, with the extra adjustment (time) just being an imperfect proxy for the first (uncertainty), when we only really care about the uncertainty to begin with.
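A toy sketch of the over-adjustment (all numbers hypothetical): if uncertainty about long-run effects is already modeled as a decaying probability that the action still matters, a separate time-discount factor multiplies in the same decay a second time.

```python
# Toy model, hypothetical numbers: an outcome worth v (arbitrary units)
# at time t, with p_effect(t) = credence that our action still affects
# the outcome after t years.

def p_effect(t: int, decay: float = 0.95) -> float:
    """Credence that the action still matters after t years."""
    return decay ** t

v, t = 100.0, 20

ev = p_effect(t) * v            # uncertainty already priced in here

time_discount = 0.97 ** t       # an additional explicit discount rate
ev_double = ev * time_discount  # the same decay, counted a second time

print(round(ev, 2), round(ev_double, 2))  # ~35.85 vs ~19.5
```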
Indeed humans are significantly non-aligned. In order for an ASI to be non-catastrophic, it would likely have to be substantially more aligned than humans are. This is probably less-than-impossible due to the fact that the AI can be built from the get-go to be aligned, rather than being a bunch of barely-coherent odds and ends thrown together by natural selection.
Of course, reaching that level of alignedness remains a very hard task, hence the whole AI alignment problem.
I had another thing planned for this week, but it turned out I’d already written a version of it back in 2010.
What is the post that this is referring to, and what prompted thinking of those particular ideas now?
I see it in a similar light to “would you rather have more or fewer cells in your body?”. If you made me choose I probably would rather have more, but only insofar as having fewer might be associated with certain bad things (e.g. losing a limb).
Correspondingly, I don’t care intrinsically about e.g. how much algae exists except insofar as that amount being too high or low might cause problems in things I actually care about (such as human lives).
Seeing the relative lack of pickup in terms of upvotes, I just want to thank you for putting this together. I’ve only read a couple of Dath Ilan posts, and this provided a nice coverage of the AI-in-Dath-Ilan concepts, many of the specifics of which I had not read previously.
My understanding of it is that there is conflict between different “types” of the mixed population based on e.g. skin lightness and which particular blend of ethnic groups makes up a person’s ancestry.
EDIT: my knowledge on this topic mostly concerns Mexico, but should still generally apply to Brazil.
That PDF seems to be part of a spoken presentation (it’s rather abbreviated for a standalone document). Does such a presentation exist? If so, I was not successful in finding it, and would appreciate it if you could point it out.
I similarly offer myself as an author, in either the dungeon master or player role. I could possibly get involved in the management or technical side of things, but would likely not be effective in heading a project (for similar reasons to Brangus), and do not have practical experience in machine learning.
I am best reached through direct message or comment reply here on LessWrong, and can provide other contact information if someone wants to work with me.
The main post on how much evidence different tests give is this one: https://www.lesswrong.com/posts/cEohkb9mqbc3JwSLW/how-much-should-you-update-on-a-covid-test-result
Also related is part of this post from Zvi (specifically the section starting “Michael Mena”): https://www.lesswrong.com/posts/CoZitvxi2ru9ehypC/covid-9-9-passing-the-peak
Combining the information from the two, it seems like insofar as you care about infectivity rather than whether the person still has dead viral RNA in their body, the actual amount of evidence from rapid antigen tests wil...
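For concreteness, the updates those posts describe are multiplicative in odds form: posterior odds = prior odds × Bayes factor. A minimal sketch; the prior and Bayes factor below are placeholders, not numbers from the linked posts:

```python
def update(prior_prob: float, bayes_factor: float) -> float:
    """Posterior probability from a prior probability and a Bayes factor
    (likelihood ratio of the test result under infected vs. not)."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

# Placeholder numbers: 5% prior, a negative rapid antigen test taken
# as 1:4 evidence against current infectivity.
print(update(0.05, 1 / 4))  # ~0.013
```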
This is a good piece of writing. It reminds me of another piece of fiction (somewhat happier in tone) which I cannot find again. The plot involves a woman trying to rescue her boyfriend from a nemesis in a similar AI-managed world. I think it involves her jumping out of a plane, and landing in the garden of someone who eschews AI-protection for his garden, rendering it vulnerable to destruction without his consent. Does anyone recall the name/location of this story?
The part about hiring proofreading brought a question to mind: where does the operating budget for the LessWrong website come from, both for stuff like that and for standard server costs?
If you also consider the indirect deaths due to the collapse of civilization, I would say that 95% lies within the realm of reason. You don’t need anywhere close to 95% of the population to be fully affected by the scissor to bring about 95% destruction.
Sorry if I was ambiguous in my remark. The comparison that I’m musing about is between “fierce” vs “not fierce” nerds, with no particular consideration of those who are not nerds in the first place.
It’s interesting to read posts like this and “Fierce Nerds” while myself being much less ambitious/fierce/driven than the objects of said essays. I wonder what other psychological traits are associated with the difference between those who are more vs less ambitious/fierce/driven, other things being equal.
Nice poem! It’s cool to see philosophical and mathematical concepts expressed through elegant language, though it is somewhat less common, due to the divergence of interests and skills.
I’d say a lot of domains have reasonably-aligned incentives a lot of the time, but that’s a boring non-answer. For a specific example, there’s the classic case of how whenever I go to the grocery store, I’m presented with a panoply of cheap, good quality foodstuffs available for me to purchase. The incentives along the chain from production -> store -> me are reasonably well-aligned.
Thanks for the summary. A minor copyediting note: the sentence «They begin as the caracter becomes uncontent with their situation, and» cuts off part way.
Copyediting note: it appears that the parenthetical statement <(Note: agent here just means “being”, not> got cut off.
You mention the EA investing group. Where is that? A cursory search didn’t seem to bring anything up. Also, more generally speaking, what would be your top few recommendations of places to keep up with the latest rationalist investment advice?
On this note, I would definitely be willing to pay a premium to be part of a fund run by a rationalist who’s more intimately involved with the crypto and prediction markets than I am, and who would thereby be able to get significantly more edge than I currently can.
It would definitely be neat to read a history of that sort. Having myself not read many of the books that Eliezer references as forerunners, that area of history is one that I at least would like to learn more about.
Yes, I’d just say that there’s a lot resting on that “up to a point”. Lots of goods, cars included, fairly rapidly saturate in the benefit that they bring, and hence in how much of them gets consumed. At least in the US, we’re at the point where there are almost as many cars as people, and there’s fairly little use for more than one car per person. This puts a pretty hard upper limit on how much increased car production quality/efficiency will show up (and to a lesser extent, has shown up) in material use.
My informal perception is that in the “developed world...
As you briefly mentioned, the focus on input measures (like quantity of materials consumed) can be different from the progress we’re really looking for. In making a progress dashboard, I’d be pretty wary of including such measures in roughly the same way I’d be wary of judging how good a university is by how many employees/student it has — at best the measure is correlated with good things, but even then it’s a cost being paid to get those things, not a benefit in its own right.
Similarly, much of the gain of technology is in making better use of resources,...
A fun interactive demonstration of special relativity. It’s good for getting an intuitive sense for some of the “weird” things that happen in relativistic conditions.
In a world where the fixed costs of creating a being with 0 utility are 0 (very unlike our world), and the marginal costs of utility are increasing (like our world), the best population state would be an ~infinite number of people each with a positive infinitesimal amount of utility relative to nonexistence.
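A sketch of the optimization under those assumptions (the symbols $B$, $n$, $u$, and $c$ are mine, for illustration): with resource budget $B$, population size $n$, per-person utility $u$, and a strictly convex cost function $c(u)$ with $c(0) = 0$, the problem is

$$\max_{n,\,u}\ n u \quad \text{subject to} \quad n\,c(u) = B,$$

so total utility equals $B \cdot u / c(u)$. Strict convexity with $c(0) = 0$ makes $u / c(u)$ largest in the limit $u \to 0^{+}$, so the optimum pushes toward $n \to \infty$ with each person’s utility infinitesimal.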
However, the characteristics of personhood and existence would need to be so drastically different for the zero-cost-to-create assumption to be true (or even close to true; even virtual minds take up storage space) that I don’t really think the conclusion in that particular case teaches us anything much meaningful about universes like our own.
At least to me, intuition is clearly in favor of creating said new people, as long as the positive utility (relative to the zero point of nonexistence) of their lives is greater than the loss in utility to those who already existed.
I do not view this as problematic from a consequentialist perspective, as I see that outcome as a better one than the prior state of fewer, somewhat happier people.
Just to be clear, due to the substantial (somewhat fixed) costs of creating and maintaining a person, the equilibrium point of ambivalence between creating or not cre...
One other essay on roughly this topic is https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/, which sorts these considerations into three levels: axiology (what world-states are good), morality (what actions are good), and law (what behavior to enforce).
A few other reasons that I’ve heard for opposition to later high school start times: 1) due to the limited number of buses, starting high school later would require the lower schools to start earlier, and parents don’t want their elementary schoolers out before sunrise, and 2) after-school activities like sports would be disrupted, both in an absolute sense (they already sometimes run pretty close to sunset) and in a relative sense (a school that moved to a later schedule would either not be able to do sports games with other schools, or would have to have the at...
Thank you for making this sequence. I’ve been cryocrastinating for a while, in part due to the complexity of the forms and insurance, and I hope that this sequence will give me the confidence to move forward.
I think the point being made in the post is that there’s a ground truth of the matter as to what constitutes Art-Following Discourse.
To move into a different frame which I feel may capture the distinction more clearly, the True Laws of Discourse are not socially constructed, but our norms (though they attempt to approximate the True Laws) are definitely socially constructed.