bgaesop

Comments

Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk

>In real life, I'd say: "Ok guys, let's sit in this room, everyone turn off their recording devices, and let's talk, with the agreement that what happens in this room stays in this room."

The one time I did this with rationalists, the person (Adam Widmer) who organized the event and explicitly set forth the rule you just described went on to remember what people had said and bring it up publicly later, in order to shame them into changing their behavior to fit his (if you'll excuse me speaking ill of the dead) spoiled-little-rich-boy desires.

So my advice, based on my experience, and which would have made my life noticeably better had someone told me it earlier, is: DON'T do this, and if anyone suggests doing this, stop trusting them and run away.

Which is not to say that you are untrustworthy and trying to manipulate people into revealing sensitive information so you can use it to manipulate them; in order for me to confidently reach that conclusion, you'd have to actually attempt to organize such an event, not just casually suggest one on the internet.

bgaesop's Shortform

The in-person community seems much less skeptical of these things than the online community. Which isn't to say there are no skeptics, but it's kind of distressing to see how little skepticism there is, especially among the higher-status members, about outright silly claims and models. At last year's CFAR reunion, for instance, there was a talk uncritically presenting chakras as a real thing, and when someone in the audience proposed doing an experiment to test whether they are real or just a placebo effect, the presenter said (paraphrasing) "Hmm, no, let's not do that. It makes me uncomfortable. I can't tell why, but I don't want to do it, so let's not," and then they didn't.

This is extremely concerning to me, and I think it should be to everyone else who cares about the epistemological standards of this community.

bgaesop's Shortform

https://slatestarcodex.com/2019/10/21/the-pnse-paper/

So, shouldn't all the rats who've been so into meditation etc. for the past decade or so be kinda panicking at the apparent fact that enlightenment is just Dunning-Krugering yourself into not being able to notice your own incompetence?

Moral Weight Doesn't Accrue Linearly

My position is "chickens have non-zero moral value, and moral value is not linearly additive." That is, any additional chicken suffering is bad, any additional chicken having a pleasant life is good, and yet the total moral value of all chickens, as the number of chickens approaches infinity, converges to something like 1/3 of a human.
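One toy way to make that concrete (illustrative numbers and functional form of my own choosing, not anything the comment commits to): let the marginal value of each additional chicken decay geometrically, so every chicken counts for something positive but the total stays bounded.

$$V(N) \;=\; \sum_{n=1}^{N} a\,r^{\,n-1} \;=\; a\,\frac{1-r^{N}}{1-r} \;\longrightarrow\; \frac{a}{1-r} \quad \text{as } N \to \infty.$$

With, say, $a = 1/30$ of a human and $r = 0.9$, the total converges to $1/3$ of a human, while each individual chicken still has strictly positive marginal value.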

Moral Weight Doesn't Accrue Linearly

For anyone who does think that both 1) chickens have non-zero moral value, and 2) moral value is linearly additive: are you willing to bite the bullet that there exists a number of chickens such that it would be better to cause that many chickens to continue to exist at the expense of wiping out all other sentient life forever? This seems so obviously false, and so obviously the first thing to think of when considering 1 and 2, that I am confused that there are folks who accept both.
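The arithmetic behind that bullet, sketched with placeholder symbols of my own: if each chicken contributes a fixed positive value and values add linearly, then any finite value assigned to everything else is eventually exceeded.

$$N \cdot \varepsilon \;>\; V_{\text{rest}} \quad \text{whenever} \quad N \;>\; \frac{V_{\text{rest}}}{\varepsilon}, \qquad \varepsilon > 0,\; V_{\text{rest}} < \infty,$$

so for a large enough flock, premises 1 and 2 together say the chickens outweigh all other sentient life combined.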

Moral Weight Doesn't Accrue Linearly

Replace "you" with "the hypothetical you who is attempting to convince hypothetical me they exist", then.

Moral Weight Doesn't Accrue Linearly

>What is the mugging here?

I'm not sure what the other-galaxy-elephants mugging is, but my anti-Pascal's-mugging defenses are set to defend me against muggings I do not entirely understand. In real life, I think that the mugging is "and therefore it is immoral of you to eat chickens."

>Why are they "my elephants"?

You're the one who made them up and/or is claiming they exist.

Moral Weight Doesn't Accrue Linearly

>When people consider it worse for a species to go from 1000 to 0 members, I think it's mostly due to aesthetic value (people value the existence of a species, independent of the individuals), and because of option value.

Yes, these are among the reasons why moral value is not linearly additive. I agree.

>People would probably also find it tragic for plants to go extinct (and do find languages going extinct tragic), despite these having no neurons at all.

Indeed, things other than neurons have value.

>I personally reject this for animals, though, for the same reasons that I reject it for humans.

Really? You consider a plague that kills 100,000 humans to be equally bad in a world with a population of 100,000 as in a world with a population of 7,000,000,000?

Moral Weight Doesn't Accrue Linearly

My reply to all of those is "I do not believe you. This sounds like an attempt at something akin to Pascal's Mugging. I do not take your imaginary elephants into consideration for the same reason I do not apply moral weight to large numbers of fictional elephants in a novel."

2013 Less Wrong Census/Survey

Several of these questions are poorly phrased. For instance, the supernatural and god questions, as phrased, imply that the probability assigned to god should be no greater than the probability of anything supernatural existing. However, I think (and would like to be able to express) that there is a vanishingly small (~0%) chance of ghosts or wizards existing, but a small (~1%) chance of there being some sort of intelligent being which created the universe, for instance via the simulation hypothesis, which I would consider a subset of the god hypothesis.
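To spell out the coherence point (my gloss, not part of the survey or the original comment): if every god counts as supernatural, the god event is a subset of the supernatural event and can be no more probable; the estimates above only fit together because a simulation-style creator is counted as a god but not as supernatural.

$$G \subseteq S \;\Rightarrow\; P(G) \le P(S); \qquad \text{whereas here } P(S) \approx 0,\; P(G) \approx 1\%, \text{ so } G \not\subseteq S.$$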
