
thakil's Comments

Are we failing the ideological Turing test in the case of ISIS? (a crazy ideas thread)

I'm a little confused by your first point. I guess you're pointing out a grammar/spelling error, but the only one I can see is that you've used "a" instead of "an" (evil starts with a vowel), so no, I don't understand that point.

Your second point is correct; I meant to mention that as a cost: by appearing more moderate I cost myself support. I've rather hand-waved the idea that I can just convince everyone to fight for me in the first place, which is obviously a difficult problem! That said, I think you could be a little less obviously evil initially and still attract people to your fundamentalist regime.

Are we failing the ideological Turing test in the case of ISIS? (a crazy ideas thread)

"More useful questions would be: given their supreme goal (to establish a new Caliphate which will spread Islam by the sword to the whole world), what should they do to accomplish that? And how should we (by which I mean, everyone who wants Islamic universalism to fail) act to prevent them?"

I think this is an interesting question. If you want to create a new Islamic state you could do worse than seizing on the chaos caused by a civil war in Syria and a weak state in Iraq. You will be opposed by

1)Local interests, i.e. the governments of Iraq and Syria

2)The allies of those local interests: in Syria's case, Iran and Russia; in Iraq's case, the US and Britain.

I think 2 is quite interesting, because how much other nations intervene will depend in part on how much their populations care. I would argue that the attacks on Russia and France were a strategic mistake, because in both cases they encouraged those nations to be more active in their assault on ISIS.

Arguably the best way to discourage international interests from getting involved is to increase local costs: make sure that any attacks on you will kill civilians, and try to appear as legitimate and as boring as possible.

Essentially, if I wanted to run an evil fundamentalist oppressive state I would look as cuddly as possible at first. In fact, I would probably pretend to be on the side of the less religiously motivated rebels, so I could get guns and arms. Then, when Assad is toppled, I would make sure that any oil I have is available. My model here would be to look as much like Saudi Arabia as possible, which can do horrifying things to its own citizens provided it remains a key strategic ally in the region. Realpolitik will triumph over morality provided you can keep Western eyes off you.

The goal, always, would be to be as non-threatening as possible, to squeeze as many arms as you can out of Western allies (and Russian allies too, if you can work it, but if you topple Assad you probably can't), which puts you in a position to expand your interests. Then you need to provoke other nations into invading you, so you can plausibly claim to be the wronged party in any conflict where the US feels obliged to pick sides.

The Number Choosing Game: Against the existence of perfect theoretical rationality

"However if the utility is dished out after the number has been spesified then an idler and a ongoer have exactly the same amount of utility and ought to be as optimal. 0 is not a optimum of this game so an agent that results in 0 utility is not an optimiser. If you take an agent that is an optimiser in other context then it ofcourse might not be an optimiser for this game."

The problem with this logic is the assumption that there is a "result" of 0. While it's certainly true that an "idler" will obtain an actual value at some point, so we can assess how they have done, there will never be a point in time at which we can assess the ongoer. If we change the criteria and say that we are going to assess at a particular point in time, then the ongoer can simply stop then and obtain the highest possible utility. But time never ends, and we never mark the ongoer's homework, so to say he has a utility of 0 at the end is nonsense, because there is, by definition, no end to this scenario.

Essentially, if you include infinity in a maximisation scenario, expect odd results.

The Number Choosing Game: Against the existence of perfect theoretical rationality

Indeed. And that's what happens when you give a maximiser perverse incentives and infinity in which to gain them.

This scenario corresponds precisely to pseudocode of the kind

# A maximiser with unbounded incentives: each iteration promises strictly
# more utility than the last, so the loop never stops.
newval = 1
oldval = 0
while newval > oldval:
    oldval = newval
    newval = newval + 1

Which never terminates. This is only irrational if you want to terminate (which you usually do), but again, the claim that the maximiser never obtains value doesn't matter because you are essentially placing an outside judgment on the system.

Basically, what I believe you (and the OP) are doing is looking at two agents in the numberverse.

Agent one stops at time 100 and gains X utility.

Agent two continues forever and never gains any utility.

Clearly, you think, agent one has "won". But how? Agent two has never failed. The numberverse is eternal, so there is no point at which you can say it has "lost" to agent one. If the numberverse had a non-zero probability of collapsing at any point in time then agent two's strategy would instead be more complex (and possibly uncomputable if we distribute over infinity), but as we are told that agents one and two exist in a changeless universe and their only goal is to obtain the most utility, we can't judge either to have won. In fact agent two's strategy only prevents it from losing; it can't win.

That is, if we imagine the numberverse full of agents, any agent which chooses to stop will lose in a contest of utility, because the remaining agents can always choose to stop and obtain their far greater utility. So the rational thing to do in this contest is to never stop.
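To make that concrete, here is a minimal toy sketch (my own, not from the original post), assuming an agent's utility is simply the number it has reached when it stops:

# Toy numberverse: utility is the number an agent has reached at the
# moment it stops speaking.
def utility_if_stopped_at(t):
    return t

# Whatever finite stopping time you pick, stopping one step later is
# strictly better, so no stopping rule is ever optimal.
for t in (1, 100, 10**6):
    assert utility_if_stopped_at(t + 1) > utility_if_stopped_at(t)

The catch, as above, is that the only strategy that never loses this comparison is the one that never collects anything.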

Sure, that's a pretty bleak outlook, but as I say, if you make a situation artificial enough you get artificial outcomes.

The Number Choosing Game: Against the existence of perfect theoretical rationality

But time doesn't end. The criteria of assessment are

1)I only care about getting the highest number possible

2)I am utterly indifferent to how long this takes me

3)The only way to generate this value is by speaking this number (or, at the very least, any other methods I might have used instead are compensated explicitly once I finish speaking).

If your argument is that Bob, who stopped at Graham's number, is more rational than Jim, who is still speaking, then you've changed the terms. If my goal is to beat Bob, then I just need to stop at Graham's number plus one.

At any given time, t, I have no reason to stop, because I can expect to earn more by continuing. The only reason this looks irrational is we are imagining things which the scenario rules out: time costs or infinite time coming to an end.

The argument "but then you never get any utility" is true, but that doesn't matter, because I last forever. There is no end of time in this scenario.

If your argument is that, in a universe with infinite time, infinite life and a magic incentive button, all anyone will do is press that button forever, then you are correct, but I don't think you're saying much.

The Number Choosing Game: Against the existence of perfect theoretical rationality

Then the "rational" thing is to never stop speaking. It's true that by never stopping speaking I'll never gain utility but by stopping speaking early I miss out on future utility.

The behaviour of speaking forever seems irrational, but you have deliberately crafted a scenario where my only goal is to get the highest possible utility, and the only way to do that is to just keep speaking. If you suggest that someone who got some utility after 1 million years is "more rational" than someone still speaking at 1 billion years then you are adding a value judgment not apparent in the original scenario.

The Number Choosing Game: Against the existence of perfect theoretical rationality

But apparently you are not losing utility over time? And holding utility over time isn't of value to me, otherwise my failure to terminate early would be costing me the utility I didn't take at that point in time. If there's a lever compensating for that loss of utility then I'm actually gaining the utility I'm turning down anyway!

Basically the only reason to stop at time t1 would be that you will regret not having had the utility available at t1 until t2, when you decide to stop.

We really need a "cryonics sales pitch" article.

A fairly small amount. Again, risk aversion says to me that a 1 in 1000 chance isn't worth much if I can only make that bet once.

We really need a "cryonics sales pitch" article.

Less than 1%. I haven't thought hard about these numbers, but I would say 1 has a probability of around 50-60%, 2 about 10% (as 2 allows for societal collapse, not just company collapse), 3 about 10% (being quite generous there) and 4 about 40%, which gives us 0.6 * 0.1 * 0.1 * 0.4 = 0.0024. If I'm more generous to 3, bumping it up to 80%, I get 0.0192. I don't think I could be more generous to 2 though. These numbers are snatched from the air without deep thought, but I don't think they're wildly bad or anything.
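For what it's worth, here is that arithmetic spelled out as a minimal Python sketch (the step probabilities are just the rough guesses above):

# Rough guesses for the four steps: frozen successfully, kept frozen long
# enough, revival turns out to be possible, and someone chooses to revive me.
steps = [0.6, 0.1, 0.1, 0.4]

p_success = 1.0
for p in steps:
    p_success *= p
print(p_success)  # ~0.0024

# Being more generous to step 3 (80% instead of 10%):
print(0.6 * 0.1 * 0.8 * 0.4)  # ~0.0192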

We really need a "cryonics sales pitch" article.

My argument against cryonics:

The probability of being successfully frozen and then being revived later on is dependent on the following

1)Being successfully frozen upon death (loved ones could interfere, lawyers could interfere, the manner of my death could interfere)

2)The company storing me keeps me in the same (or close to it) condition for however long it takes for revivification technologies to be discovered

3)The revivification technologies are capable of being discovered

4)There is a will to revivify me

These all combine to make the probability of success quite low.

The value of success is obviously high, but it's difficult to assess how high: just because they can revivify me doesn't mean my life will then end up being endless (at the very least, violent death might still be possible in the future).

This is weighted by the costs. These are

1)The obvious financial ones

2)The social ones. I actually probably value this higher than 1: explaining my decision to my loved ones, having to endure mockery and possibly quite strong reactions.

The final point here is about risk aversion. While one could probably set up the utility calculation above to come out positive, I'm not sure a utility calculation is the correct way to decide whether to take such a risk. If the probability of a one-shot event is low enough, the expected value isn't a very useful indicator of my actual returns: even if a lottery has a positive expected gain, it still might not be worth playing if the odds are very much against me making any money from it!
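A small illustration of that last point (my own sketch with made-up numbers): a bet can have a positive expected value and still leave a one-shot player almost certain to walk away with nothing.

# Hypothetical lottery: stake 1 unit for a 1-in-1000 chance of winning 1500.
p_win = 1 / 1000
payout = 1500
stake = 1

expected_value = p_win * payout - stake   # positive EV (+0.5 per ticket)
p_nothing = 1 - p_win                     # 0.999 chance of losing the single play

print(expected_value)
print(p_nothing)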

So how would you convince me?

1)Drop the costs, both social and financial. The former is obviously done by making cryonics more mainstream, the latter... well by making cryonics more mainstream, probably

2)Convince me that the probability of all 4 components is higher than I think it is. If the conjoined probability started hitting >5% then I might start thinking about it seriously.
