All of Ansel's Comments + Replies

Answer by Ansel · Nov 29, 2023

I think the answers to 1 and 2 are as reasonably close to 0 as calculated probabilities can be. That may be independent of the question of how reasonable it is to step into the teleporters, however.

It looks like confused thinking to me when people associate their own conscious existence with the clone that comes out of the teleporter. Sure, you could believe that your consciousness gets teleported along with the information needed to construct a copy of your body, but that is just an assumption that isn't needed as part of the explanation of the physical... (read more)

Thanks for the response, especially including specific examples.

My motivation for asking these questions is to anticipate what will be obvious and of greatest humanitarian concern in hindsight, say in a year.

Here is a scenario that I think is moderately probable, and that I'm worried about:

Part 1, most certain: Israeli airstrikes continue, unclear if they're still using their knocking system much. Due in part to deliberate Hamas mixing of combatants and non-combatants, numbers of civilian casualties rise over time.

Part 2, less certain: Israel cont... (read more)

2 · Yovel Rom · 4mo
Part 1: I agree; it seems they aren't using the Roof Knock protocol for now, and that will be the main source of civilian casualties. It's a tragedy, but not actually a problem under the law of war (see Jay Donde's post).

Part 2: I generally agree. I don't think an actual food shortage will be a problem (5%); electricity might (10-33%, very uncertain), but I don't think it will cause many casualties by itself. We live in a warm country, and hospitals (and Hamas operatives) have emergency reserves.

Part 3: I agree, and I think it depends a lot on Egyptian refugee policy. An additional possibility is a second front in Lebanon, which adds orders of magnitude more missiles, which are also stronger and more accurate. Israeli civilian casualties would quickly rise, not to mention the possibility of Hezbollah trying tactics similar to those Hamas tried last Saturday (though that would probably be less effective, since Israel is on high alert). Of course, such a scenario would also deeply impact Lebanon and its citizens.

As for your later point, I think Israel is trying to topple Hamas's regime, one way or another. The region around Gaza is populated by 70,000 people, who will not stay there if there is a risk of attacks like the last one. I am not sure whether it will be done by completely occupying the strip, a siege, or something completely different, but I don't think we'll return to the status quo unless Israel tries and fails to do that.

My utmost sympathy goes out to the civilians (and soldiers for that matter) who have been harmed in such a horrible way. The conduct of Hamas is unspeakable.

My guess is that you most likely do not expect the currently unfolding Israeli response to result in a massive humanitarian tragedy (please correct me if that's wrong). Do you have any specific response to those who have concerns in this vein?

Specifically, the likely results of denying food supplies and electricity to Gaza seem disastrous for the civilians therein. Water disruption is also dangerous, t... (read more)

4 · Yovel Rom · 4mo
I don't expect it, since historically Israel has let humanitarian supplies in during wars. I also sincerely hope it will continue to do that during this war. IIRC (can't find sources right now), common practice has been to stop electricity for some time, then, when international pressure increases, supply it intermittently.

Yeah, I don't understand the water situation myself. I hear Hamas complaining about electricity, and not about water, so it seems to be fine for now.

About casualties: that's extremely difficult to answer under the best of conditions; it depends on the way Israel will do it, and requires actual expertise and classified information which I lack. I will try to give you the way I think about this question.

The most similar war would be Operation Protective Edge, in 2014. Gazan forces were approximated at 25,000 people. According to Israel [1], in that war 2,125 Gazans were killed, of which 36% were civilians, 44% combatants, and 20% uncategorized males aged 16–50 (probably some were militants, some civilians, and some opportunistic attackers who did not formally belong to any organization). Israel lost 67 soldiers. However, the cities themselves were not invaded, and Hamas's vast bunker system was not directly confronted. Most Palestinian casualties were from air strikes.

The most similar purely urban battle I can think of is the battle of Jenin (AFAIK the Israeli version of events is the correct one). It pitted 1,000 Israeli soldiers against 300 Palestinians in a city of 40,000 people. It ended with 23 Israeli soldiers killed, ~25-40 Palestinian militants killed, and 15-25 civilians killed. Direct multiplication would suggest 2,300 Israeli dead, 2,500-4,000 militants, and 1,500-2,500 civilians killed. In practice it would be more, since Gaza is better fortified and Hamas would not have anywhere to flee, unlike in Jenin.

An additional possibility is the usage of siege tactics. It seems siege directed against civilians is prohibited, but it's possi

I disagree with this. The fact that the active mechanism of any functional weight loss strategy is having lower caloric intake than expenditure is obviously a critical aspect of dieting that makes sense to talk about, so I disagree with calling it a red herring.

Calorie counting doesn't work well for everyone as a weight loss strategy, but it does work for some people. Obviously a strategy that works well when adhered to, and which some people can successfully adhere to, is worth talking about. Also obviously, people who have trouble implementing it themselves should try other strategies. Find the strategy that works for you, and combine it with a form of exercise that you enjoy.

The parent post amusingly equated "accurately communicating your epistemic status", which is the value I selected in the poll, with eating babies. So I adopted that euphemism (dysphemism?) in my tongue-in-cheek response.

Also, this:

I modestly propose that eating babies is more likely to have good outcomes, including with regard to the likelihood of apocalypse, compared to the literal stated goal of avoiding the apocalypse.

1 · Stephen Bennett · 5mo
This seems like a fairly hot take on a throwaway tangent in the parent post, so I'm very confused why you posted it. My current top contender is that it was a joke I didn't get, but I'm very low confidence in that.

In my opinion, the risk analysis here is fundamentally flawed. Here's my take on the two main SETI scenarios proposed in the OP:

Automatic disclosure SETI - all potential messages are disclosed to the public pre-analysis. This is dangerous if it is possible to send EDMs (Extremely Dangerous Messages - world-exploding/world-hacking), and plausible to expect that they would be sent.

Committee vetting SETI - all potential messages are reviewed by a committee of experts, who have the option of unilaterally concealing information they deem to be dangerous.

The argument ... (read more)

Strongly upvoted, I think that the point about emotionally charged memeplexes distorting your view of the world is very valuable.

That does clarify where you're coming from. I made my comment because it seems to me that it would be a shame for people to fall into one of the more obvious attractors for reasoning within EA about the SBF situation. 
E.g., an attractor labelled something like "SBF's actions were not part of EA because EA doesn't do those Bad Things".

Which is basically on the greatest hits list for how (not necessarily centrally unified) groups of humans have defended themselves from losing cohesion over the actions of a subset anytime in recorded history. Some portio... (read more)

From what I've heard, SBF was controlling, and fucked over his initial (EA) investors as best he could without sabotaging his company, and fucked over parts of the Alameda founding team that wouldn't submit to him. This isn't very "EA" by the usual lights.


It's not immediately clear to me that this isn't a No True Scotsman fallacy.

2 · James Payor · 1y
You may draw what conclusions you like! It's not my intention to defend EA here. Here's an attempt to clarify my outlook, though my words might not succeed: to the extent EA builds up idealized molds to shove people into to extract value from them, this is fucked up. To the extent that EA then pretends people like Sam or others in power fit the same mold, this is extra fucked up. Both of these things look to me to be rampant in EA. I don't like it.

I'd be interested in someone with legal expertise weighing in on whether the farm example is in violation of child labor laws. There are special regulations and exemptions for farms, especially ones run by a parent or a person standing in for the parent, but a nine-year-old driving that tractor seems very likely to be illegal to me. I broadly agree with all the stuff about letting children roam, and it comports well with my own experience, but tractors in particular can be very dangerous, and 9 seems very young to be doing genuinely independent ag work like this. I would be interested in other people's thoughts.

Even when the US considered banning kids from doing dangerous work on farms, they were still planning to exempt children working on their own family's farm:

It seems like you might be reading into the post what you want to see, to some extent. (After reading what I wrote, it looked like I was trying to be saucy by paralleling your first sentence; just want to be clear that to me this is a non-valenced discussion.) The OP returns to referring to K-type and T-type individual people after discussing their formal framework. That's what makes me think that classifying people into the binary categories is meant to be the main takeaway.

I'm not going to pretend to be more knowledgeable than I am about this kind of framework, ... (read more)

2 · Cleo Nardo · 1y
Thanks for the comments. I've made two edits. You're correct that this is a spectrum rather than a strict binary; I should've clarified this. But I think it's quite common to describe spectra by their extrema, for example:
* Conflict theorists vs Mistake theorists
* Convex vs Concave dispositions
* Bullet-biters vs Bullet-swallowers
Fair. I'm sorry.

I'm not persuaded at all by the attempt to classify people into the two types. See: in your table of examples, you specify that you tried to include views you endorse in both columns. However, if you were effectively classified by your own system, your views should fit mainly or completely in one column, no? 

The binary individual classification aspect of this doesn't even seem to be consistent in your own mind, since you later talk about it as a spectrum. 

Maybe you meant it as a spectrum the whole time but that seems antithetical to putting people into two well defined camps.

Setting those objections aside for a moment, there is an amusing meta level of observing which type would produce this framework.

It seems you didn't read the argument to the end. They motivate the distinction only to move on to formalizing the notion and putting it in a shared framework that explains what is traded off and how to find the optimal mix for given error and inference rates.
5 · Gordon Seidoh Worley · 1y
Similarly, there's an amusing meta level observation of which type would object.

One would expect a Prime Minister to be Prime over Ministers. I don't see the need to rename everything Ministry of This or That, so Prime Minister doesn't really seem appropriate.

Would you be willing to summarize the point you're making at the object level? Is it something like "the Soviets had to make the Molotov-Ribbentrop pact, and that doesn't say anything meaningful about their cultural approach to the interaction of world religions"? I don't want to put words in your mouth; I just want to understand the "extremely low-epistemics" bit.

It's something I'm not really comfortable talking about with anonymous people on the internet. I'm really sorry for the inefficiency, but I've done as much as I can to share as much as I can.

It seems like you've retreated fully from your bailey: 

"at the risk of being the Captain Obvious, I must remind the readers that mountain climbing is stupid"

to your motte: 

"There is no greatness in being the 5001th man who climbed Everest"

I suspect most people responding take greater issue with the former position, so maybe if you still stand by it you could defend that one. 
To me, it seems like the standard of "if it increases your chances of dying, it's a stupid recreational activity" is one that is unlikely to be applied evenly by just ab... (read more)

Conceptually I like the framing of "playing to your outs" taken from card games. In a nutshell, you look for your victory conditions and backchain your strategy from there, accepting any necessary but improbable actions that keep you on the path to possible success. This is exactly what you describe, I think, so the transposition works and might appeal intuitively to those familiar with card games. Personally, I think avoiding the "miracle" label has a significant amount of upside.

Not every occupation is the same, but nations occupied by military force are often denied the ability to run their own affairs with regard to legal proceedings, defence, etc. In particular not being allowed to have final authority over legal matters on their own soil seems to historically be a great sticking point: see the Austro-Hungarian demands of Serbia leading to WW1. 

This is one of the key domains which defines the authority of a sovereign nation, whereas it doesn't seem that uncommon in history for there to be foreign military assets in a natio... (read more)

I think it's useful to point out that training muscles for strength/size results in a well documented phenomenon called supercompensation. However, training for other qualities like speed doesn't really work the same way. There's lots of irrational training done because people make an inferential leap from the supercompensation they see in strength training and apply it to cases which intuitively seem like they might be analogues (e.g., weighted sprints don't make you faster).

I think counterexamples are relevant because sometimes intuition points out real ... (read more)

You imply that you understand it's a metaphor, but your other sentences seem to insist on taking the word "wrestling" literally, as referring to the sport. The sentence in bold

"This was no passive measure to confirm a hypothesis, but a wrestling with nature to make her reveal her secrets."

makes it pretty clear, I think. Do you simply not like the metaphor?

"Wrestling with nature" does not work for me as a metaphor. To me it doesn't evoke any imagery that would make sense. Maybe it's a problem of native vs. foreign speaker?
Yeah, I meant the type of metaphor that Ansel points at, where it's the manipulation of systems that matters rather than just passive observation. The wrestling is mostly poetic license and a nice picture, not to be taken too literally.

I suspect that massive destabilization following the precipitous fall of most of the great powers (NATO + Russia at the least) would result in war on every continent (sans Antarctica). If Asian countries don't get nuked in this scenario like you suppose, I think it's quite plausible general war in Asia would follow shortly as the surviving greatest powers jockey for dominance. If we posit the complete collapse of U.S. power projection in the Pacific, surely China is best positioned to fill the void, and I don't think it's clear where they'd draw the new lines.

In practice, leading thinkers in EA seem to interpret AGI as a special class of existential threat (i.e., something that could effectively ‘cancel’ the future)


This doesn't seem right to me. "Can effectively 'cancel' the future" seems like a pretty good approximation of the definition of an existential threat. My understanding is that A.I. risk is treated differently because of a cultural commonality among said leading thinkers such that A.I. risk is considered a more likely and imminent threat than other X-risks, along with a less widespread (I think) subset of concerns that A.I. can also involve S-risks that other threats have no analogue to.

5 · Cameron Berg · 2y
I agree with this. By 'special class,' I didn't mean that AI safety has some sort of privileged position as an existential risk (though this may also happen to be true)—I only meant that it is unique. I think I will edit the post to use the word "particular" instead of "special" to make this come across more clearly.

These are just one native speaker's impressions, so take them with a grain of salt.

Your first two examples, to me, scan as being about abstract concepts; respectively: the emotion/quality of curiosity and the property of being in context.

This Quora result suggests that it's a quality of "definiteness" that determines when articles get dropped (maybe as a second-language learner you already have this as explicit knowledge, but find it difficult to intuit).

In those examples, the meaning doesn't rely on pointing at two specific "curiosity" and "context" o... (read more)

It's possible you're in Ease Hell. It has been a while since I got into the weeds with my settings, but as I recall there are pretty good reasons to change the default ease settings and reset the ease on old cards. I'm also in the camp of only using the "Again" and "Good" buttons, since the other ones affect ease, IIRC. Anyway, you've been at it longer than I have, but maybe the ease hell thing is new info for you or other Anki users.
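For readers who haven't run into the mechanic: each Anki card carries an "ease" percentage that scales its next interval, and some answer buttons permanently lower it toward a 130% floor, where intervals grow slowly and the card comes back constantly. A toy sketch of just the ease bookkeeping (simplified: real Anki also has hard-interval factors, interval modifiers, fuzz, and lapse settings):

```python
# Toy model of Anki's "ease" mechanic (simplified; not Anki's exact algorithm).
# Button effects on ease, in percentage points, per Anki's defaults:
EASE_DELTA = {"again": -20, "hard": -15, "good": 0, "easy": +15}
EASE_FLOOR = 130  # percent; Anki never drops a card's ease below this

def next_ease(ease, answer):
    """Return the card's ease after one review, clamped to the floor."""
    return max(EASE_FLOOR, ease + EASE_DELTA[answer])

ease = 250  # Anki's default starting ease, in percent
for _ in range(8):
    ease = next_ease(ease, "hard")
print(ease)  # 250 - 8*15 = 130: pinned at the floor, i.e. "ease hell"
```

Note that "good" never raises ease back up, which is why a card that has lapsed a few times stays stuck at the floor unless you manually reset its ease.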

I wish the cuteness made a difference. Interesting reading though, thanks.

Is that link safe to click for someone with Arachnophobia?

no pictures
Yes. Photos are a lot of work to include, and anyway, jumping spiders are famously cute (as far as spiders go).

I appreciate the clarification. At first #1 seemed dissonant to me (and #2 and #3, following from it), given the trope of highly inbred European nobility, but on further reflection that might be mostly a special case due to dispensations. I hadn't thought of worldwide consanguinity/marriage norms as a potential X factor for civilizational development, but it's an interesting angle.

Yeah, the European nobility couldn't follow the stringent outbreeding constraints (and naturally could pay for exemptions) because the dating pool was too small, but the attempts to do so still intermingled the European bloodlines. It's historically weird/unusual too, if you consider that the more common alternative would be intra-clan marriage within national/cultural borders. In most other times/places, cultures/nations/tribes engaged in total warfare and then destroyed/enslaved/conquered each other, vs. constrained warfare combined with intermarriage alliance mingling. Marriage between people who spoke completely different languages was common for the European nobility, vs. uncommon throughout most of history. But the Europeans were semi-unified under a shared Roman Catholic cultural heritage.

Just to clarify, with this sentence: 

Christianity was also unusual in other potentially key dimensions - it dramatically promoted outbreeding (by outlawing inbreeding far beyond the typical), which plausibly permanently altered the european trajectory.

are you proposing that Christian Europe was historically successful in significant part due to inbreeding less than non-Christian-European civilizations? Is there somewhere I can read more about that thesis? I'm not familiar with it.

To clarify:

1. Fairly high confidence: The RCC/Christendom was unique in that it banned cousin marriage between 4 and 7 degrees of consanguinity (depending on the time period), and had the record-keeping infrastructure to implement such a ban.
2. Also reasonably confident that (1) had long-term genetic/cultural consequences after a millennium. E.g.: medieval European societies were more outbred than most Middle Eastern societies (where 1st-degree cousin marriage was/is the norm).
3. Less confident that those changes gave a significant edge, but it seems plausible.[1][2][3][4]

----------------------------------------

1. ( ↩︎
2. Schulz, Jonathan F., et al. "The Church, intensive kinship, and global psychological variation." Science 366.6466 (2019). ↩︎
3. Schulz, Jonathan F. The Churches' bans on consanguineous marriages, kin-networks and democracy. No. 2016-16. CeDEx Discussion Paper Series, 2016. ↩︎
4. Akbari, Mahsa, Duman Bahrami-Rad, and Erik O. Kimbrough. "Kinship, fractionalization and corruption." Journal of Economic Behavior & Organization 166 (2019): 493-528. ↩︎

Without even getting into whether your specific reward heuristic is misaligned, it seems to me that you've just shifted the problem slightly out of the focus of your description of the system, by specifying that all of the work will be done by subsystems that you're simply assuming will be safe.


"paperclip quality control" has just as much potential for misalignment in the limit as does paperclip maximization, depending on what kind of agent you use to accomplish it. So, even if we grant the assumption that your heuristic is aligned, we are merely left with the task of designing a bunch of aligned agents to do subtasks.

1 · Gerald Monroe · 2y
Paperclip quality control is an agent that was trained on simulated sensor inputs (camera images and whatever else) of variations of paperclips. Paperclips that are not within a narrow range of dimensions and other measurements for correctness are rejected. It doesn't have any learning ability. It is literally an overgrown digital filter that takes in some dimensions of an input image and outputs true or false to accept or reject (and probably another vector specifying the checks that failed). We can describe every subagent for everything the factory needs as such limited, narrow-domain machines that alignment issues are not possible (especially as most will have no memory and all will have learning disabled).
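The stateless accept/reject filter described here amounts to a pure threshold check: fixed tolerances in, a boolean plus a list of failed checks out, no memory and no learning. A minimal sketch (the measurement names and tolerance values are made up for illustration):

```python
# Toy version of the narrow-domain QC "agent": a fixed threshold filter.
# Hypothetical measurement names and tolerances, in millimetres.
TOLERANCES = {
    "length":     (31.0, 33.0),
    "width":      (9.0, 10.0),
    "wire_gauge": (0.95, 1.05),
}

def inspect(measurements):
    """Return (accept, failed_checks). Stateless: same input, same output."""
    failed = [name for name, (lo, hi) in TOLERANCES.items()
              if not (lo <= measurements.get(name, float("nan")) <= hi)]
    return (len(failed) == 0, failed)

ok, failed = inspect({"length": 32.1, "width": 9.4, "wire_gauge": 1.2})
print(ok, failed)  # False ['wire_gauge']
```

A missing measurement fails its check (NaN compares false against any bound), which matches the conservative reject-by-default behavior you'd want from such a filter.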