# All of antanaclasis's Comments + Replies

I think the point being made in the post is that there’s a ground-truth-of-the-matter as to what comprises Art-Following Discourse.

To move into a different frame which I feel may capture the distinction more clearly, the True Laws of Discourse are not socially constructed, but our norms (though they attempt to approximate the True Laws) are definitely socially constructed.

From the SIA viewpoint the anthropic update process is essentially just a prior and an update. You start with a prior on each hypothesis (possible universe) and then update by weighting each by how many observers in your epistemic situation each universe has.

This perspective sees the equalization of “anthropic probability mass” between possible universes prior to apportionment as an unnecessary distortion of the process: after all, “why would you give a hypothesis an artificial boost in likelihood just because it posits fewer observers than other hypothese...

On the question of how to modify your prior over possible universe+index combinations based on observer counts, the way that I like to think of the SSA vs SIA methods is that with SSA you are first apportioning probability mass to each possible universe, then dividing that up among possible observers within each universe, while with SIA you are directly apportioning among possible observers, irrespective of which possible universes they are in.

The numbers come out the same as considering it in the way you write in the post, but this way feels more intuitive to me (as a natural way of doing things, rather than “and then we add an arbitrary weighing to make the numbers come out right”) and maybe to others.
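The two apportionment orders can be sketched numerically. This is a toy example of my own (not from the post): two hypothetical universes, A with one observer in your epistemic situation and B with two, each with prior 1/2.

```python
# Toy comparison of SSA-style vs SIA-style apportionment.
priors = {"A": 0.5, "B": 0.5}     # prior on each possible universe
observers = {"A": 1, "B": 2}      # observers in your epistemic situation

# SSA: apportion mass to universes first, then divide among observers within each.
ssa = {u: priors[u] / observers[u] for u in priors}  # weight per observer
# SIA: apportion directly among possible observers, irrespective of universe.
sia = {u: priors[u] for u in priors}  # each observer carries its universe's prior

def universe_posterior(per_observer_weight):
    """Re-aggregate per-observer weights into a posterior over universes."""
    total = sum(per_observer_weight[u] * observers[u] for u in priors)
    return {u: per_observer_weight[u] * observers[u] / total for u in priors}

print(universe_posterior(ssa))  # SSA leaves the universe priors untouched (1/2, 1/2)
print(universe_posterior(sia))  # SIA boosts the universe with more observers (1/3, 2/3)
```

The SIA column reproduces "prior weighted by observer count, then normalize," which is why the numbers come out the same either way you set up the computation.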

1TobyC1mo
That's a nice way of looking at it. It's still not very clear to me why the SIA approach of apportioning among possible observers is something you should want to do. But it definitely feels useful to know that that's one way of interpreting what SIA is saying.

If you’re adding the salt after you turn on the burner then it doesn’t actually add to the heating+cooking time.

To steelman the anti-sex-for-rent case, it could be considered that after the tenant has entered into that arrangement, the tenant could feel pressure to keep having sex with the landlord (even if they would prefer not to and would not at that later point choose to enter the contract) due to the transfer cost of moving to a new home. (Though this also applies to monetary rent, the potential for threatening the boundaries of consent is generally seen as more harmful than threatening the boundaries of one’s budget)

This could also be used as a point of levera...

2Dumbledore's Army2mo
Thanks for the comment. I think tenants are still better off with a legal contract than not. Analogously, a money-paying tenant with a legal contract has some protections against a landlord raising rents, and gets a notice period and the option to refuse and go elsewhere; a money-paying tenant who pays cash in hand to an illegal landlord probably has less leverage to negotiate. (Although there will be exceptions.) Likewise, a sex-paying tenant is better off with a legal contract. I realise that the law won’t protect everyone and that some people will have bad outcomes no matter what - I deliberately picked this example to make people think about uncomfortable trade offs - but I still think the general approach of trying to give people more choice rather than less is preferable.

In terms of similarity between telling the truth and lying, think about how much of a change you would have to make to the mindset of a person at each level to get them to level 1 (truth)

Level 2: they’re already thinking about world models; you just need to get them to cooperate with you in seeking the truth rather than trying to manipulate you.

Level 3: you need to give them the idea of words as having some sort of correspondence with the actual world, rather than just as floating tribal signifiers. After doing that, you still have to make sure that they are f...

Ah I see. Thanks for explaining.

Re: “best vs better”: claiming that something is the best can be a weaker claim than claiming that it is better than something else. Specifically, if two things are of equal quality (and not surpassed) then both are the best, but neither is better than the other.

Apocryphally, I’ve heard that certain types of goods are regarded by regulatory agencies as being of uniform quality, such that there’s not considered to be an objective basis for claiming that your brand is better than another. However, you can freely claim that yours is the best, as there is similarly no objective basis on which to prove that your product is inferior to another (as would be needed to show that it is not the best).

One other mechanism that would lead to the persistence of e.g. antibiotic resistance would be when the mutation that confers the resistance is not costly (e.g. a mutation which changes the shape of a protein targeted by an antibiotic to a different shape that, while equally functional, is not disrupted by the antibiotic). Note that I don’t actually know whether this mechanism is common in practice.

Thanks for writing this nice article. Also thanks for the “Qualia the Purple” recommendation. I’ve read it now and it really is great.

In the spirit of paying it forward, I can recommend https://imagakblog.wordpress.com/2018/07/18/suspended-in-dreams-on-the-mitakihara-loopline-a-nietzschean-reading-of-madoka-magica-rebellion-story/ as a nice analysis of themes in PMMM.

It seems like this might be double-counting uncertainty? Normal EV-type decision calculations already (should, at least) account for uncertainty about how our actions affect the future.

Adding explicit time-discounting seems like it would over-adjust in that regard, with the extra adjustment (time) just being an imperfect proxy for the first (uncertainty), when we only really care about the uncertainty to begin with.

Indeed humans are significantly non-aligned. In order for an ASI to be non-catastrophic, it would likely have to be substantially more aligned than humans are. This is probably less-than-impossible due to the fact that the AI can be built from the get-go to be aligned, rather than being a bunch of barely-coherent odds and ends thrown together by natural selection.

Of course, reaching that level of alignedness remains a very hard task, hence the whole AI alignment problem.

I had another thing planned for this week, but turned out I’d already written a version of it back in 2010

What is the post that this is referring to, and what prompted thinking of those particular ideas now?

I see it in a similar light to “would you rather have more or fewer cells in your body?”. If you made me choose I probably would rather have more, but only insofar as having fewer might be associated with certain bad things (e.g. losing a limb).

Correspondingly, I don’t care intrinsically about e.g. how much algae exists except insofar as that amount being too high or low might cause problems in things I actually care about (such as human lives).

Seeing the relative lack of pickup in terms of upvotes, I just want to thank you for putting this together. I’ve only read a couple of Dath Ilan posts, and this provided a nice coverage of the AI-in-Dath-Ilan concepts, many of the specifics of which I had not read previously.

My understanding of it is that there is conflict between different “types” of the mixed population based on e.g. skin lightness and which particular blend of ethnic groups makes up a person’s ancestry.

EDIT: my knowledge on this topic mostly concerns Mexico, but should still generally apply to Brazil.

That PDF seems like it is part of a spoken presentation (it’s rather abbreviated for a standalone thing). Does such a presentation exist? If so, I was not successful in finding it, and would appreciate it if you could point it out.

I similarly offer myself as an author, in either the dungeon master or player role. I could possibly get involved in the management or technical side of things, but would likely not be effective in heading a project (for similar reasons to Brangus), and do not have practical experience in machine learning.

I am best reached through direct message or comment reply here on Lesswrong, and can provide other contact information if someone wants to work with me.

The main post on what amounts of evidence different tests give is this one: https://www.lesswrong.com/posts/cEohkb9mqbc3JwSLW/how-much-should-you-update-on-a-covid-test-result

Also related is part of this post from Zvi (specifically the section starting “Michael Mena”): https://www.lesswrong.com/posts/CoZitvxi2ru9ehypC/covid-9-9-passing-the-peak

Combining the information from the two, it seems like insofar as you care about infectivity rather than the person having dead virus RNA still in their body, the actual amount of evidence from rapid antigen tests wil...

2Yoav Ravid2y

This is a good piece of writing. It reminds me of another piece of fiction (somewhat happier in tone) which I cannot find again. The plot involves a woman trying to rescue her boyfriend from a nemesis in a similar AI-managed world. I think it involves her jumping out of a plane, and landing in the garden of someone who eschews AI-protection for his garden, rendering it vulnerable to destruction without his consent. Does anyone recall the name/location of this story?

5Markvy2y
https://www.lesswrong.com/posts/sMsvcdxbK2Xqx8EHr/just-another-day-in-utopia

Copyediting: “Miriam removed off her cornea too” should probably not have the “off”.

2lsusr2y
Fixed. Thanks.

The part about hiring proofreading brought a question to mind: where does the operating budget for the lesswrong website come from, both for stuff like that and standard server costs?

9Ruby2y
Our most recent round of funding was from OpenPhilanthropy and the Survival & Flourishing Fund [https://survivalandflourishing.fund/sff-2021-h1-recommendations].

Do you have any recommendations of such stories?

3Dagon2y
Watchmen was pretty good on this front.  Worm (https://parahumans.wordpress.com/) is LONG, but great.

If you also consider the indirect deaths due to the collapse of civilization, I would say that 95% lies within the realm of reason. You don’t need anywhere close to 95% of the population to be fully affected by the scissor to bring about 95% destruction.

Sorry if I was ambiguous in my remark. The comparison that I’m musing about is between “fierce” vs “not fierce” nerds, with no particular consideration of those who are not nerds in the first place.

It’s interesting to read posts like this and “Fierce Nerds” while myself being much less ambitious/fierce/driven than the objects of said essays. I wonder what other psychological traits are associated with the difference between those who are more vs less ambitious/fierce/driven, other things being equal.

6lsusr2y
Anxiety. Lack of slack. Natural amphetamines. [https://www.lesswrong.com/posts/p3BCWtA2WMwzLmLd4/life-at-three-tails-of-the-bell-curve#Natural_Amphetamines] If the natural amphetamines correlation is true then that gets us a whole basket of correlations including low appetite, skipping meals, high energy, high NEAT (non-exercise activity thermogenesis) and difficulty sleeping.
3Pattern2y
Correlation is, arguably, at odds with other things being equal.

Nice poem! It’s cool to see philosophical and mathematical concepts expressed through elegant language, though it is somewhat less common, due to the divergence of interests and skills.

I’d say a lot of domains have reasonably-aligned incentives a lot of the time, but that’s a boring non-answer. For a specific example, there’s the classic case of how whenever I go to the grocery store, I’m presented with a panoply of cheap, good quality foodstuffs available for me to purchase. The incentives along the chain from production -> store -> me are reasonably well-aligned.

2JohnGreer2y
Yes, I agree that a grocery store is a great example. I suppose I'm looking for examples where people recognized a problem, changed the incentives, and then it fixed/improved things.

J&J (1 shot): mild tiredness the next day, no other symptoms.

Thanks for the summary. A minor copyediting note: the sentence «They begin as the caracter becomes uncontent with their situation, and» cuts off part way.

1Jsevillamol2y
Thank you! Now fixed :)
I interpreted it to mean that uncontent characters change things in the world (i.e. that character stories are likely to lead to event stories).

Is there anywhere that there are transcripts available for these conversations?

I hear you, antanaclasis. I. Hear. You.

1spencerg2y
Unfortunately, we don't have transcripts for these! Sorry about that. I recommend listening at 1.5x-2.5x speed.

Copyediting note: it appears that the parenthetical statement <(Note: agent here just means “being”, not> got cut off.

3ozziegooen2y
Fixed, thanks!

I think it is? That was kind of the implication that I read into it at least.

You mention the EA investing group. Where is that? A cursory search didn’t seem to bring anything up. Also, more generally speaking, what would be your top few recommendations of places to keep up with the latest rationalist investment advice?

2ryan_b2y
I have no idea if this is the answer, but there's a cluster of investing discussion on the EA side around mission hedging [https://forum.effectivealtruism.org/posts/iZp7TtZdFyW8eT5dA/a-generalized-strategy-of-mission-hedging-investing-in-evil]. That may be relevant.
3sapphire2y
There are several groups now because people wanted to keep some topics separated. Sadly I don't think any of them are open to the public.

On this note, I would definitely be willing to pay a premium to be part of a fund run by a rationalist who’s more intimately involved with the crypto and prediction markets than I am, and would thereby be able to get significantly more edge than I currently can.

5sharpobject2y
I should have this ready within 2 months.

It would definitely be neat to read a history of that sort. Having myself not read many of the books that Eliezer references as forerunners, that area of history is one that I at least would like to learn more about.

1Eric Raymond2y
I actually have not seen such a bibliography, though I could infer a lot from his language choices in essays like Twelve Virtues.  Can you share a pointer to his list of forerunners?   I don't expect there is much on it that will surprise me, but I would very much like to read it nevertheless.

Yes, I’d just say that there’s a lot resting on that “up to a point”. Lots of goods, cars included, fairly rapidly saturate in the benefit that they bring, and hence in how much of them gets consumed. At least in the US, we’re at the point where there are almost as many cars as people, and there’s fairly little use to more than one car per person. This puts a pretty hard upper limit on how much increased car production quality/efficiency will show up (and to a lesser extent, has shown up) in material use.

My informal perception is that in the “developed world...

2jasoncrawford2y
Maybe, but cars aren't the only things we make out of metal, either. I deliberately made the categories as broad as possible while staying objective. Still, I agree there's an issue here.

As you briefly mentioned, the focus on input measures (like quantity of materials consumed) can be different from the progress we’re really looking for. In making a progress dashboard, I’d be pretty wary of including such measures in roughly the same way I’d be wary of judging how good a university is by how many employees/student it has — at best the measure is correlated with good things, but even then it’s a cost being paid to get those things, not a benefit in its own right.

Similarly, much of the gain of technology is in making better use of resources,...

3jasoncrawford2y
But cars getting better and cheaper shows up as more total cars getting sold, which does show up as increased material usage, at least up to a point.

Velocity Raptor

A fun interactive demonstration of special relativity. It’s good for getting an intuitive sense for some of the “weird” things that happen in relativistic conditions.

In a world where the fixed costs of creating a being with 0 utility are 0 (very unlike our world), and the marginal costs of utility are increasing (like our world), the best population state would be an ~infinite number of people each with a positive infinitesimal amount of utility relative to nonexistence.

However, the characteristics of personhood and existence would need to be so drastically different in order for the 0 cost to create assumption to be true (or even close to true, even virtual minds take up storage space) that I don’t really think that the conclusion in that particular case teaches us anything much meaningful about universes like our own.
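A minimal numeric sketch of the argument above, under assumptions of my own choosing (not from the post): zero fixed cost per person, a convex per-person cost of utility c(u) = u², and a fixed resource budget split evenly.

```python
import math

def total_utility(n, resources=1.0):
    """Total utility when `resources` is split evenly among n people.

    Assumes zero fixed cost to create a person and a convex cost of
    utility c(u) = u**2, so a resource share s buys utility sqrt(s).
    """
    share = resources / n
    return n * math.sqrt(share)

# Total utility grows without bound as the population grows,
# even though each person's utility shrinks toward zero:
for n in (1, 100, 10**6):
    print(n, total_utility(n))  # roughly 1, 10, 1000
```

Under this cost assumption, n · sqrt(R/n) = sqrt(nR), which grows like sqrt(n), so the optimum pushes toward ever more people with ever smaller shares — the ~infinite-population, infinitesimal-utility limit described above.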

At least to me, intuition is clearly in favor of creating said new people, as long as the positive utility (relative to the zero point of nonexistence) of their lives is greater than the loss in utility to those who already existed.

I do not view this as problematic from a consequentialist perspective, as I see that outcome as a better one than the prior state of fewer, somewhat happier people.

Just to be clear, due to the substantial (somewhat fixed) costs of creating and maintaining a person, the equilibrium point of ambivalence between creating or not cre...

1Oskar Mathiasen2y
Does your intuition still hold in the [Least Convenient Possible World](https://www.lesswrong.com/posts/neQ7eXuaXpiYw7SBy/the-least-convenient-possible-world) where the cost of creating new beings is 0?

One other essay on roughly this topic is https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/, sorting these considerations into three levels, axiology (what world-states are good), morality (what actions are good), and law (what behavior to enforce).

Another few reasons that I've heard for what's opposing later high school start times are 1) due to limited numbers of buses, doing high school later would require the lower schools to be earlier, and parents don't want their elementary schoolers out before sunrise, and 2) after-school activities like sports would be disrupted, both in an absolute sense (they already sometimes run pretty close to sunset) and a relative sense (a school that moved to a later schedule would either not be able to do sports games with other schools, or would have to have the at...

Thank you for making this sequence. I’ve been cryocrastinating for a while, in part due to the complexity of the forms and insurance, and I hope that this sequence will give me the confidence to move forward.

2mingyuan2y
I hope so too! Feel free to DM me if you ever feel stuck or confused!

1. The FDA (and to a lesser extent regulatory agencies generally) being extremely over-reluctant to approve things, because of the misaligned incentive that heavily punishes approving something that ends up being bad, but doesn't generally punish failing to approve something that would have been good. For the greater public good, individuals within the organization would have to take on substantially more personal risk, with little to no corresponding personal gain.
2. Much lumber and other treated wood is treated with formaldehyde, a carcinogen, which then vapor
...
3hamnox2y
A broad range of examples, lots of variety! That's perfect for gesturing at the overarching idea, lest we blow up conversations focusing on specifics in a narrow set.
3. pretty sure the biggest blocker to changing school times is parents' work schedules. If the schedules diverge too much parents would have a harder time providing transportation to school and mandating adult supervision at all times.
6. it's not entirely about relative armament. there is base benefit to being able to get rid of neighbors you don't like or who are sitting on top of resources you want, independent of whether they pose any military threat to you.