Emiya's Comments

Sacred Cash

It has less benign forms. Governments and other bandits look for wealth and take it. Sometimes those bandits are your friends, family and neighbors. A little giving back is a good thing, but in many cultures demands for help and redistribution rapidly approach 100% – life is tough, and your fellow tribe members, or at least family members, are endless pits of need, so any wealth that can be given away must be hidden if you want to remain in good standing. Savings, security and investment in anything but status are all but impossible. There is no hope for prosperity.

I'm not sure how literally I should interpret this part. Governments and systems seem to be trending toward taxing poverty more than they tax wealth: past a certain level of wealth you definitely pay less per dollar earned than someone who's poor, even considering official taxes alone.

Poor people do seem to be forced to dissipate any extra wealth they accumulate through social obligations, and for slack and status purchases the claim definitely seems to hold. I'm just puzzled by the government part.

The Intelligent Social Web

Characters often want change as part of their role. And just as importantly, their role often requires that they can't achieve that change. The tension between craving and deprivation gives birth to the character's dramatic raison d'être. The "wife" can't be as clingy and anxious if the "husband" opens up, so "she" enacts behavior that "she" knows will make "him" close down. "She" can't really choose to change this because "her" thwarted desire for change is part of "her" role.

I'm conflicted about drawing this kind of conclusion from people's behaviour; it opens a door that lets you interpret anything any way you like.

A simpler explanation is that if a "wife" knows how to interact with the "husband" in a way that causes him to open up and talk about what's happening, then the conflict gets resolved and you aren't observing a clingy and anxious "wife" anymore.

It's actually hard to communicate openness while you are feeling anxious and clingy, so you'd see a lot of people acting in ways that "discharge" their anxiety rather than fix their problem. You don't need to go as far as postulating that they are acting like this "on purpose".

Even if the "wife" is clearly showing a stereotypical script, it might just be that "she" has no utter clue of what else could be done about "her" situation. "She" could be just assuming that it's the correct way to face the problem, nag the "husband" until it finally works. Yeah, "she" would likely feel nervous and lost if considering the option of going off script and trying something else, and would avoid doing that because of that. But people have been using "punishments" in contexts where they have no hopes to work for countless millennia now, and there's no reason to assume everyone just secretly wants the target to persist in unwanted behaviour so they can punish him some more.  

There are other circumstances where simpler explanations are harder to draw, and then you can start to wonder whether there is this kind of "purpose" in someone's actions. Self-sabotage is definitely a real thing, sometimes. But I think you'd be safer going with the simplest explanation first, because "secret reasons" can be used to explain everything in psychology.

 

Aside from this, the post was really good and insightful. It got me thinking about what roles I'm being pushed into and which ones I'm pushing my friends toward.

I often see people I know make assumptions about me being the rational one of the group, such as assuming I'd commit the stereotypical mistakes of someone who follows Hollywood rationality... which I always found weird as hell, because 1) in other contexts it's basically a meme that I'm really genre-savvy (for example, I DM games for the group, and people have a habit of worrying about at least the first four or five levels of subversions and recursions in my twists and plots), so I thought they should realise I'd have seen the obvious clichéd mistake coming, and 2) I never showed any hint of such behaviours and regularly do the opposite. But I guess it makes more sense now.

My role, according to them, is to be incredibly devious and intelligent, do the non-supervillain equivalent of having the hero fall into my devious four-levels-of-deception trap, and then screw up something obvious, such as leaving him unattended to free himself, or falling to my own hubris, or insert clichéd genius mistake x, so that the "balance" between intelligence and heart is reaffirmed.

The Meaning That Immortality Gives to Life

Given what I’ve actually seen of people’s psychology, if you want anything done about global warming (like building 1000 nuclear power plants and moving on to real problems), then, yes, you should urge people to sign up for Alcor.

 

I realise this is a 13-year-old post, but please don't dismiss global-scale problems with the first idea that comes to mind, without doing serious research first. Your opinion is (to say the least) really respected on this site, and lots of people would assume you were right about it.

 

By IPCC data from 2014, electricity and heat production accounts for a mere 35% of global emissions (in total, considering all associated costs). Even if we convinced everyone to switch to electric cars and transportation AND to electric heating, which would not be trivial at all, we'd have curbed emissions by a total of 55%.

https://www.ipcc.ch/site/assets/uploads/2018/02/SYR_AR5_FINAL_full.pdf (page 102)

Also by IPCC data, a nuclear phase-out would add about 7% to the cost of stopping climate change, while each year wasted between 2014 and 2030 by delaying action increases the cost by roughly 3%. Of course, that is due to the low prevalence of nuclear power as an energy source, but it still goes to show that nuclear energy is far from being the keystone here. (Same link as above, page 41.)
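As a rough sanity check on those figures (the sector shares below are my own approximations eyeballed from AR5, and the compounding rule is an assumption; neither is a quote from the report):

```python
# Back-of-envelope check of the emissions arithmetic above.
# Sector shares are rough approximations of IPCC AR5 (2014) figures.
electricity_and_heat = 0.35  # share of global emissions, incl. associated costs
transport = 0.14             # approximate share
building_heating = 0.06      # approximate share electrification could reach

addressable = electricity_and_heat + transport + building_heating
print(f"Addressable by clean electricity: ~{addressable:.0%}")  # ~55%

# Cost of delay, if each wasted year adds roughly 3% and the effect compounds:
for years in (1, 5, 10):
    print(f"{years} year(s) of delay -> ~{1.03 ** years - 1:.0%} extra cost")
```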

If you could persuade everyone to build 1,000 nuclear plants and switch to electric cars and electric heating, then you'd also be able to solve the problem in a dozen other ways.

 

I agree with everything else in the post, and that there are worse problems than climate change (though my guess is that it would still increase existential risk by at least 5% if botched, mostly because it would raise the likelihood of someone botching AGI).

Ingredients of Timeless Decision Theory

Can anyone suggest good background reading material for understanding the technical language and background knowledge of this post and, more generally, of decision theory?

Nobody Is Perfect, Everything Is Commensurable

I'm puzzled that a really effective piece of activism, a post that manages to get me to commit to giving 10% of my income to charity, says that activism and spreading the cause isn't an effective way to get things done.

 

I also think an hour of protesting can buy a cause a lot more political shift than the participant's average hourly pay could. Millions of protesters seem to shift the political landscape a lot more than tens of millions of dollars spent on lobbying and ads.
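A quick back-of-envelope version of that comparison (every number here is a made-up placeholder, just to show the shape of the estimate):

```python
# Toy comparison: value of protest time vs. the same time spent earning to give.
# All figures are illustrative assumptions, not data.
protesters = 1_000_000
hours_each = 4
avg_hourly_pay = 20  # USD

wage_value_of_time = protesters * hours_each * avg_hourly_pay
print(f"Wage value of the protest time: ${wage_value_of_time:,}")  # $80,000,000

# The claim is that a million-person protest shifts politics more than a
# lobbying/ad budget of "tens of millions of dollars", i.e. more than the
# wage value of the hours the protesters spent.
```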

The Parable Of The Talents

I shouldn’t pretend I’m worried about this for the sake of the poor. I’m worried for me.

At this point I should just ask in a poll whether there's a level of intelligence at which you eventually stop worrying about whether you could ever catch up to the level above your own.

Maybe if you were literally the highest-IQ person in the entire world you would feel good about yourself, but any system where only one person in the world is allowed to feel good about themselves at a time is a bad system.

Well, that's fricking encouraging.

Meditations On Moloch

This was amazingly good.

On a side note:

But things that work from a god’s-eye view don’t work from within the system. No individual scientist has an incentive to unilaterally switch to the new statistical technique for her own research, since it would make her research less likely to produce earth-shattering results and since it would just confuse all the other scientists. They just have an incentive to want everybody else to do it, at which point they would follow along. And no individual journal has an incentive to unilaterally switch to early registration and publishing negative results, since it would just mean their results are less interesting than that other journal who only publishes ground-breaking discoveries. From within the system, everyone is following their own incentives and will continue to do so.

You can, as an individual scientist, start praising and giving status to any other scientist who follows stricter guidelines than the average, and comment negatively on any scientist using guidelines laxer than both the average and your own. Eventually really lax scientists stop having an edge, slightly stricter scientists gain one, and the standards in the field move up.

It doesn't require simultaneous coordination and it's a rule of thumb any scientist can adopt without harming their own fitness too much.
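Here's a toy simulation of that ratchet (the update rule and all parameters are my illustrative assumptions, not anything from the post):

```python
import random

def simulate(rounds=10, n=100, edge=0.05, speed=0.3, seed=0):
    """Toy model: status goes to scientists slightly stricter than average,
    so each round everyone drifts partway toward 'just above average'."""
    random.seed(seed)
    rigor = [random.random() for _ in range(n)]  # each scientist's rigor, 0..1
    for t in range(rounds):
        avg = sum(rigor) / n
        target = min(1.0, avg + edge)  # where the praise currently points
        # Nobody jumps to perfect standards unilaterally; they just close
        # part of the gap to the praised level, yet the average ratchets up.
        rigor = [r + speed * max(0.0, target - r) for r in rigor]
        print(f"round {t + 1}: average rigor = {sum(rigor) / n:.2f}")

simulate()
```

The average climbs every round even though no individual ever moves far from the pack, which is the point: the rule needs no simultaneous coordination.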

Killing the ants

This was pretty interesting, and pretty different from the kind of content you usually find on LessWrong.

I often see arguments against "spontaneous inconvenient moral behaviour", such as worrying about whether to kill the ants infesting your house or whether to stop eating meat, which advocate replacing these behaviours with more effective planned ones. But I don't really think the first kind of behaviour usually prevents the second.

Suggesting that someone currently at home should stop thinking about how to humanely get rid of ants, work an extra hour instead, and donate the overtime money to an ant charity isn't a feasible model, since most people don't have a job where they can take an hour of spare time whenever they want and convert it into extra money. You are converting "fun time" into "care for the ants time".

Thinking about how you can produce charity or moral value more effectively is certainly a good idea: 15 minutes of your time can easily improve the charity you output over the next years by ten times or more, without any real drawback. But the kind of "moral rigor" invoked to contest a behaviour one doesn't want to adopt is usually the level of rigor that would require someone to drop his career, start working on friendly AI full time, and donate to friendly AI research every material possession he doesn't think is needed to keep his productivity high.

You'll need a Schelling point about morality if you don't want to donate everything of value to friendly AI research (if you want to, I certainly won't try to stop you). At some point you have to go "screw it, I'll do this less effective thing instead because I want to", and this Schelling point will likely include a lot of behaviours that are spontaneous things you care about but are also ineffective.

 

Also, the way some critiques try to evaluate non-human lives doesn't really make sense. I agree with a "humans > complex animals > simple animals" logic, but there should be some kind of quantitative relation between the wellbeing of the groups. You can argue that you would save a human over any number of cows, and I guess that can sort of make sense, but there should still be some amount of human pleasure you'd be willing to give up to prevent some amount of animal suffering, or you might as well give up on quantitative morality altogether.

If someone is suggesting a 1:1000 exchange of human pleasure to animal suffering, you can't refuse it by arguing that you'd refuse a 10:10 exchange.
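To make that concrete (the exchange rate of 1000 and all the amounts below are placeholder assumptions):

```python
import math

def accept_trade(human_pleasure_lost, animal_suffering_averted, rate):
    """Accept iff the suffering averted outweighs the rate-weighted pleasure lost."""
    return animal_suffering_averted > rate * human_pleasure_lost

RATE = 1000  # "humans matter 1000x more" is still a quantitative position
print(accept_trade(1, 2_000, RATE))      # True: a big enough win for the animals
print(accept_trade(1, 500, RATE))        # False: not worth it at this rate
print(accept_trade(1, 10**9, math.inf))  # False: lexical priority refuses every trade
```

Refusing the 1:1000 offer because you'd refuse 10:10 is rejecting the rate without ever naming one; any position short of "rate = infinity" has to accept some trades.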

Short, Extreme, Forgotten Torture vs Death

I'd inquire about the subjective vs objective duration of that millisecond. If there aren't any bad surprises there, I'd pick torture before my mind can try to guess how badly it will hurt.

In torture vs dust specks I chose dust specks, provided the specks weren't allowed to cause ripple effects and were guaranteed to be spread at only one dust speck per human. Here there is a similar consideration: the pain is spread over a time interval so small that it will be basically inconsequential (the guarantee that I won't suffer lasting consequences matters here; otherwise I'd fully expect such pain to fry my brain and possibly melt it out of my eyes or something).

I'm basically choosing to screw over my future self of that millisecond to protect all my other future selves.

Both decisions should work fine as long as I'm not approached by a large number of Pascal's muggers; if that risks becoming a trend, I should review my decision theory.

For another human... I'd choose torture for the same reasons. If he chose torture I wouldn't override it; I'd have emotional qualms about overriding his "death" decision, but I likely would.

 

The math of the pain vs the pleasure of being alive would likely say my decisions are wrong, but I think the math stops helping in these limit cases; picking death strikes me as two-boxing against Omega (though I think the math there shows one-boxing is right if you manage to take the backward causal link into account). You'll be pretty glad you chose torture exactly one millisecond afterward and for the rest of your life, and so will the stranger (unless he was suicidal, but it doesn't seem I'm allowed to know that before picking).

Guided By The Beauty Of Our Weapons

I think the only... slight divergence of the situation from reality is that the bad guys figured out most of this stuff already (though I doubt they did so explicitly).

There has been a lot of talk about how "the political divide has grown harsher than ever" as if this kind of shift just happened because of random cosmic variations.

What actually happened is that, invariably across different countries, the local "bad guy" wannabe grabs the loudest mic he can get and starts saying something absolutely hateful over and over, doing everything he can to poison the well and stop people from talking with each other, getting the two parties to yell insults at each other instead.

Pretty sure Democrats didn't just go "hey, you know that Trump guy? For no real reason, I really hate him and his supporters way more than I hated Romney and his supporters, even though I don't perceive his communications as having taken a harsh shift away from democracy and basic human decency. Let's abandon debate and go tell them what ignorant dumb faces they have".

It's a scarily effective trap, and a strong argument in favour of the tactic the post suggests.

 

And yes, I know it's not a helpful argument to make if you want to propose "look, maybe we'd better just agree to sit down and talk politics civilly" to a "bad guy supporter", but I think it would be great if the discussion somehow also included an agreement on how wonderful it would be if we could all shun the next politician who tries to poison the well, no matter which party he's from.
