Answer by Ansel · Nov 29, 2023

I think the answers to 1 and 2 are as close to 0 as calculated probabilities can reasonably be. That may be independent of the question of how reasonable it is to step into the teleporters, however.

It looks like confused thinking to me when people associate their own conscious existence with the clone that comes out of the teleporter. Sure, you could believe that your consciousness gets teleported along with the information needed to construct a copy of your body, but that is just an assumption that isn't needed as part of the explanation of the physical process. From that assumption also stem the problems of explaining which copy your consciousness would "prefer", if there are multiple copies, or whether consciousness would somehow be split or combined.

What is troubling about the teleporter problem is the questions it raises about the actual makeup of the phenomenon we identify as our own consciousness. It seems to me that our conception of a persistent personal consciousness may well be fully illusory, in which case the original questions are all ambiguous about the referent of "you". On this conception, an instance of "you" may have qualia, but those qualia are not connected to any genuinely persistent identity.

If the idea that we have a genuinely persistent conscious experience is an illusion, then the question of whether we should use the teleporter, cloning variant or otherwise, is mostly about how comfortable we are with it and how desirable the practical outcome of using it is likely to be. If you bite that bullet, using the teleporter should have no more impact than going under anesthesia, or even a deep dreamless sleep. On the illusion model, selfishness about experience is simply a mark of being unable to accept the falseness of your own persistent identity, in which case you will likely decline to use the teleporter for that reason.

For the record, I feel very uncomfortable with the idea of using the teleporter. Currently, the idea "feels" like suicide. But I don't know that there's any rational basis for that.

Ansel · 6mo

Thanks for the response, especially including specific examples.

My motivation for asking these questions is to anticipate what will, in hindsight (say, in a year), be obvious and of the greatest humanitarian concern.

Here is a scenario that I think is moderately probable, and that I'm worried about:

Part 1, most certain: Israeli airstrikes continue; it's unclear whether they're still making much use of their "roof knocking" warning system. Due in part to deliberate mixing of combatants and non-combatants by Hamas, civilian casualties rise over time.

Part 2, less certain: Israel continues to withhold or significantly restrict electricity and/or food/medical supplies. Civilian casualties rise over time.

Part 3, less certain: Israel proceeds with a ground invasion of Gaza. Goals could be restricted to killing known members of Hamas, destroying Hamas materiel, and rescuing hostages, or they could be expanded to some kind of occupation or even resettlement objectives.

With parts 2 and 3, the possibilities for non-combatant casualties seem largely open-ended. The results (if these things happen) will depend not just on Israel's conduct, but also on the reactions of Hamas and the general Palestinian population.

I think that those who are able to consider the situation dispassionately, both inside and outside of Israel, should be clear that a maximally aggressive Israeli response would be tragic and catastrophic. The question, therefore, is how much restraint can be shown, and to a lesser extent, whether the response can do any good. As a backdrop to all this, it is as yet uncertain whether there could be more attacks against Israel in the near term, among other considerations.

I understand that you might not have much to say about all this since it's largely speculation, just thought I'd throw in my thoughts about the situation.

Ansel · 6mo

My utmost sympathy goes out to the civilians (and soldiers for that matter) who have been harmed in such a horrible way. The conduct of Hamas is unspeakable.

My guess is that you most likely do not expect the currently unfolding Israeli response to result in a massive humanitarian tragedy (please correct me if that's wrong). Do you have any specific response to those who have concerns in this vein?

Specifically, the likely results of denying food supplies and electricity to Gaza seem disastrous for the civilians therein. Water disruption is also dangerous, though I read that water is being trucked in.

Also, Israel seems to be gearing up for a very large-scale operation in Gaza, with potentially tens of thousands of soldiers involved. What is your expectation of casualties: combatants on both sides, and non-combatants on the Palestinian side?

Ansel · 6mo

I disagree with this. The active mechanism of any functional weight-loss strategy is keeping caloric intake below expenditure; that is a critical aspect of dieting and obviously makes sense to talk about, so I disagree with calling it a red herring.

Calorie counting doesn't work well for everyone as a weight loss strategy, but it does work for some people. Obviously a strategy that works well when adhered to, and which some people can successfully adhere to, is worth talking about. Also obviously, people who have trouble with implementing it themselves should try other strategies. Find the strategy that works for you, and combine with a form of exercise that you enjoy.
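The energy-balance arithmetic behind calorie counting can be sketched as follows. This is a rough illustration only: the 7700 kcal-per-kg figure is a common rule of thumb for body fat, not a precise constant, and real-world results vary with metabolism and adherence; all the numbers here are illustrative assumptions, not claims from the comment above.

```python
# Rough sketch of the energy-balance arithmetic behind calorie counting.
# KCAL_PER_KG_FAT is an approximate rule of thumb, not a precise constant.

KCAL_PER_KG_FAT = 7700  # approximate energy content of 1 kg of body fat

def weekly_weight_change_kg(daily_intake_kcal: float,
                            daily_expenditure_kcal: float) -> float:
    """Estimate weekly weight change from a constant daily calorie balance."""
    daily_balance = daily_intake_kcal - daily_expenditure_kcal
    return 7 * daily_balance / KCAL_PER_KG_FAT

# Example: eating 2000 kcal/day while expending 2500 kcal/day
# gives a 500 kcal/day deficit, roughly -0.45 kg per week.
print(round(weekly_weight_change_kg(2000, 2500), 2))
```

The point of the sketch is just that the mechanism is simple arithmetic; the hard part of any strategy, as noted above, is adherence.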

Ansel · 7mo

The parent post amusingly equated "accurately communicating your epistemic status", which is the value I selected in the poll, with eating babies. So I adopted that euphemism (dysphemism?) in my tongue-in-cheek response.

Also, this: https://en.wikipedia.org/wiki/A_Modest_Proposal

Ansel · 7mo

I modestly propose that eating babies is more likely to have good outcomes, including with regard to the likelihood of apocalypse, compared to the literal stated goal of avoiding the apocalypse.

Ansel · 1y

In my opinion, the risk analysis here is fundamentally flawed. Here's my take on the two main SETI scenarios proposed in the OP:

Automatic-disclosure SETI: all potential messages are disclosed to the public before analysis. This is dangerous if it is possible to send EDM (Extremely Dangerous Messages, e.g. world-exploding or world-hacking payloads), and plausible that they would be sent.

Committee-vetting SETI: all potential messages are reviewed by a committee of experts, who have the option of unilaterally concealing information they deem to be dangerous.

The argument in the OP hinges on portraying the first scenario as risky, with the second scenario motivated by avoiding that risk. But the risk to be avoided there is entirely theoretical: there is no concrete basis for EDM (though if smart people think there can or should be a concrete basis for them, I'd love to see it fleshed out).

But the second scenario has a much more plausible risk! Conditioned on both scenarios eventually receiving alien messages, the second scenario could still be dangerous even if EDM aren't real. By handling alien messages with unilateral secrecy, you're creating a situation where normal human incentives for wealth, personal aggrandizement, or even altruistic principles could lead a small, insular group to try to seize power using alien technology. The main assumption needed for this risk to be a factor is that aliens sending us messages could have significantly superior technology. That seems more plausible than the existence of EDM, which is essentially the same claim in a far stronger form.

Some people might even see the ability to seize power with alien tech as a feature. But I think this is an underdiscussed and essential aspect of the analysis of public-disclosure SETI vs. secret-committee SETI. To my mind, it dominates the risk of EDM unless and until there's a basis for claiming that EDM are real.

Ansel · 1y

Strongly upvoted, I think that the point about emotionally charged memeplexes distorting your view of the world is very valuable.

Ansel · 1y

That does clarify where you're coming from. I made my comment because it would be a shame for people to fall into one of the more obvious attractors for reasoning within EA about the SBF situation: e.g., an attractor labelled something like "SBF's actions were not part of EA because EA doesn't do those Bad Things".

That is basically on the greatest-hits list of how groups of humans (not necessarily centrally unified ones) have defended themselves from losing cohesion over the actions of a subset, throughout recorded history. Some portion of the reasoning about SBF in the past week looks motivated in service of the above.

The following isn't really pointed at you, just my thoughts on the situation.

I think that there's nearly unavoidable tension in trying to float arguments, from within EA, that deal with the optics of SBF's connection to EA. That is explicitly happening in this thread. Standards of epistemic honesty are in conflict with the group's need to hold together. While the truth of the matter is and may remain uncertain, if SBF's fraud was motivated wholly or in part by EA principles, that connection should be taken seriously.


My personal opinion is that, the more I think about it, the more obvious it seems that several cultural features of LW-adjacent EA are really ideal for generating extremist behavior. People are forming consensus thought groups around moral calculations that explicitly marginalize the value of all living people, to say nothing of the extreme side of negative consequentialism. This all happens in an overall environment of iconoclasm and of disregarding established norms in favor of taking new ideas to their logical conclusion.
 
These are being held in an equilibrium by stabilizing norms. At the risk of stating the obvious, insofar as the group in question is a group at all, it is heterogeneous; the cultural features I'm talking about are also some of the unique positive values of EA. But these memes have sharp edges.

Ansel · 1y

From what I've heard, SBF was controlling, and fucked over his initial (EA) investors as best he could without sabotaging his company, and fucked over parts of the Alameda founding team that wouldn't submit to him. This isn't very "EA" by the usual lights.


It's not immediately clear to me that this isn't a No True Scotsman fallacy.
