All of RowanE's Comments + Replies

There's a vast difference between being "almost god-like" and being God, and as long as you don't equate the two then there's no contradiction.

This vast difference is only philosophical; there is no practical difference: both (if they exist) are able to perform miracles, impose rules, and promise paradise, immortality, or hell after death. The only difference is their relation to the ontology of the Universe: the real God has existed forever, while the simulation creator evolved from dead matter. But this difference doesn't create any observables.

I don't think I've ever seen the paradox of tolerance used that way. Even in the original formulation from Popper, it's specifically an argument for restricting the principle of tolerance, based on the consequences of society being too tolerant.

The problem with the paradox of tolerance, (as I've seen it used) is people use it as an argument to justify putting limits on the principle which are in fact arbitrary and unjustified; they just say "we can't tolerate the intolerant" as a cached excuse for doing violence to politi... (read more)

You've set up a dichotomy between limited (e.g., reciprocal) tolerance and absolute tolerance by presuming that the limitations would be arbitrary and unjustified, but the limitations are justified by self-preservation (in case of Popper, tempered by preferring rational discourse if possible), so what you've said is an illustration of the fallacious argument this question is about.

Downvoting is not an argument, because downvoting is a judgement that an idea is not worthy of "intellectually addressing" (on this forum). That's not the same as failing to address an idea.

I have taken the survey.

That's the reason she liked those things in the past, but "achieving her goals" is now redundant, and she should have known years in advance that it would be, so it's clear that she's grown so attached to self-improvement that she sees it as an end in itself. Why else would anyone ever, upon deciding to look inside themselves instead of at expected utility, replace thoughts of paragliding on Jupiter with thoughts of piano lessons?

Hedonism isn't bad, orgasmium is bad because it reduces the complexity of fun to maximising a single number.

I don't want to be upgr... (read more)

By believing it's important enough that when you come up with a system of values, you label it a terminal one. You might find that you come up with those just by analysing the values you already have and identifying some as terminal goals, but "She had long been a believer in self-perfection and self-improvement" sounds like something one decides to care about.

Serves her right for making self-improvement a foremost terminal value even when she knew it would be rendered irrelevant; meanwhile, the loop I'm stuck in is the first six hours spent in my catgirl volcano lair.

Self-improvement wasn't her terminal value; it was only derived from her utilitarianism. She liked to improve herself and see new vistas because it allowed her to be more efficient in carrying out her goals. I could have had her spend some time exploring her hedonistic side before looking at what she was becoming (orgasmium) and not liking it from her previous perspective. But the ASI decided that this would scar her mentally and that presenting the two jumps as dreams was the best way to get her out of the situation (or I didn't want to have to try to write highly optimised bliss, one of the two).
Is it possible to make something a terminal value? If so, how?

Seems heavy on sneering at people worried about AI, light on rational argument. It's almost like a rationalwiki article.

I'll add a datapoint to that and say an anonymous site like that would tempt me enough to actively go and troll, even though I'm not usually inclined towards trolling.

Although I picture it getting so immediately overwhelmed by trolls that the fun would disappear; "pissing in an ocean of piss" as 4chan calls it.

What is the value of this link supposed to be?

So, uh, are people honestly reporting that they got a "rationalist" result from this, or are they just thinking "well, I'm a rationalist, so..."?

The test labeled me as a "rationalist".

"Oh, that's nice."

They wouldn't exactly be accepting the belief as equally valid; religious people already accept that people of other religions have a different faith than they do. And on at least some level they usually have to disagree with "other religions are just as valid as my own" to even call themselves believers of a particular religion. But it gets you to the point of agreeing to disagree.

Since my comment was vague enough to be misunderstood, I'll try to clarify what I thought the first time.

The dialogue reads as a comedy skit where the joke is "theists r dum". The atheist states beliefs that are a parody of certain attitudes of religious believers, and then the theist goes along with an obvious setup they should see coming a mile away. It doesn't seem any more plausible than the classic "rabbit season, duck season" exchange in Looney Tunes, so it's not valuable.

I think an overall decrease in activity on Less Wrong is to blame - "the death of Less Wrong" has been proclaimed for a while now. In which case, decreasing the frequency of the quotes thread seems like it would add to the downward spiral if it did anything at all.

Don't feel I have the attention span (and/or spoons) right now to actually look through the draft, but I note that you mis-spelled "embarrass" while talking about whether you'd embarrassed yourself, which I thought was kinda funny.

Um, not intending to mock, just coincidental placing of a typo I'm sure

Believer: "I say it's duck season, and I say fire!"

Yeah, I don't see any real intellectual value to this.

Putting aside the piece itself, I'm curious...what do you think a believer would say about faith if an atheist claimed to not believe in God because of faith?

The usual rule is to identify as an "aspiring rationalist"; identifying rationality as what you are can lead to believing you're less prone to bias than you really are, while identifying it as what you aspire to reminds you to maintain constant vigilance.

That is mostly true; you've discovered the fallacy of most humans: in most cases they identify not for rationality's sake but out of their own comfort. Because they are not honest, they wear the facade of Rationality to rationalize their behavior even though they are not rational at all, nor do they care. Stupidity can be categorized as writing useless posts on Facebook (Yudkowsky), not being vegan, smoking, etc. Connecting with the Way emotionally will allow you to scrutinize and redevelop your belief system. It's an observation our species has made but not reviewed.

I think I can conceive of things that are logically inconsistent. I might just be ignoring the details that make them inconsistent when I do, but other cases where I conceive of a concept without keeping every detail in mind at once don't seem like examples of inconceivability.

Wouldn't the ability to have a false positive for a paradox itself be a sign that people can conceive of things that are paradoxical?

I like "effective egoism" enough already, the alternatives I've seen suggested sound dumb and this one sounds snappy. It might not be perfect for communicating exactly the right message of what the idea is about, but you can do that by explaining, and having a cool name can only be achieved within the name itself.

I don't quite see the connection between the title and first sentence and the rest of the post you have there; logically inconsistent is not the same as inconceivable.

Logically inconsistent implies inconceivable, right? Falsely attributing a paradox is a way of issuing a false positive for inconceivability. The rest of the post is an example of falsely attributing a paradox and concluding inconceivability, when someone already has the concept.

I accept that meat is more environmentally damaging per calorie (or similar such measures), and with the scale of the meat and dairy industry I'd accept saying it has a huge effect on the environment, but there are several steps between that and "if humanity doesn't go vegan soon, we will probably go extinct".

It's not actually an article, rather a structured debate formatted after a wiki, so that particular problem is kind of inherent.

I didn't click-through and there might be more context than this, but "chances only increase by 2 to 5 percent" is ambiguous between "percent (as an absolute probability)" and "percent (of the chance it was before)". I'm not sure if it qualifies as an "irrationality quote", it's just unclear and could be confusing, but /u/PhilGoetz's version is a step up.

(I'd maybe not use "odds ratio multiplier", because we're not just concerned about clarity, but clarity to people who might be statistically illiterate)

The way the problem reads to me, choosing dust specks means I live in a universe where 3^^^3 of me exist, and choosing torture means 1 of me exist. I prefer that more of myself exist than not, so I should choose specks in this case.

In a choice between "torture for everyone in the universe" and "specks for everyone in the universe", the negative utility of the former obviously outweighs that of the latter, so I should choose specks.

I don't see any incongruity or reason to question my beliefs? I suppose it's meant to be implied that it's ... (read more)

It sounds like you expect it to be obvious, but nothing springs to mind. Perhaps you should actually describe the insane reasoning or conclusion that you believe follows from the premise.

We could have random number generators that choose the geometry an agent in our simulation finds itself in every time it steps into a new room. We could make the agent believe that when you put two things together and group them, you get three things. We could add random bits to an agent's memory. There is no limit to how perverted a view of the world a simulated agent could have.

I unironically love how highly upvoted this post is - it's just so much my tribe, bonobo rationalist tumblr notwithstanding.

Guy who doesn't know much about startups here - "launched the first version" and "want [it] to become" sound indicative of something more like the "outline of a novel" stage - can you elaborate on how big of an accomplishment it was to get it off the ground in the first place?

In startups it's called an "MVP" - minimum viable product, the simplest version you can show users to get some feedback and see if it works. It's the first step to building a startup.

To me it's a pretty huge accomplishment, and I'm really proud of myself =) Most of the work went not into coding the website, but into figuring out what it is. I needed a thing that would be valuable, and that I would be excited to work on for the following few years. A competent programmer could probably create something like that in a week, but because I'm just learning web development (along with writing, producing videos, and other stuff) it took me longer. At the moment it's the best thing I've created, so I'm really happy about it.

Also, it's actually the 3rd iteration of my startup idea (the first was a platform for publishing fiction [], the 2nd a platform for publishing webcomics []).

I'll come in to say yes, I agree these problems are confusing, although my ethics are weird and I'm only kind of a consequentialist.

(I identify as amoral, in practice what it means is I act like an egoist but give consequentialist answers to ethical questions)

What I've noticed is that this has caused me to slide towards prioritizing issues that affect me personally (meaning that I care somewhat more about climate change and less about animal rights than I have previously done).

She fangirls over the remake? I've never heard the remake described as anything other than some variant of "lifeless", especially from fans of classic Sailor Moon.

EDIT: Forgot it was the positivity thread for a second, let me have another go at that: So I guess maybe I should have another go at the remake! I actually really like being convinced to like a show I was previously "meh" about. Some shows it's more fun to get a hateboner/kismesis thing going for, but Sailor Moon Crystal isn't one of them.

My fiancé likes it; supposedly it's been very true to the original manga. It might just be better than the original dub.

The problem is that ethics can work with other axioms. Someone might be a deontologist, and define ethics around bad actions e.g. "murder is bad", not because the suffering of the victim and their bereaved loved ones is bad but because murder is bad. Such a set of axioms results in a different ethical system than one rooted in consequentialist axioms such as "suffering is bad", but by what measure can you say that the one system is better than the other? The difference is hardly the same as between attempting rationality with empiricism vs without.

There is a difference, I'll be posting it Friday. I've got an exam tomorrow and it still needs some finishing touches. This project got a bit out of hand, the complete train of thought is about 4 pages long to explain properly, so a post is more appropriate than a comment. I'd like to hear your opinion on it, if you are willing :)

Well, I don't think "a bit of a middle-ground" justifies taking a stance calling full-on moral relativism "immoral, pointless & counterproductive".

"Suffering is bad" seems a lot easier to agree on as a premise than it actually is - taken by itself, just about anyone will agree, but taken as a premise for a system it implies a harm-minimising consequentialist ethical framework, which is a minority view.

And it's simple enough to consistently be pro-life but also support the death penalty: if one believes a fetus at whatever ... (read more)

But can't the same be said for rationality and science? As Descartes showed a "demon" could continuously trick us with a fake reality, or we could be in the matrix for all we know. For rationality to work we have to assume that empiricism holds true. Why couldn't the same be true for ethics? I think that if science can have its empiricism axiom, ethics can have its suffering axiom.

You could probably have just covered Ubuntu with "I'm not talking about the OS, I'm talking about a philosophy/ideology used by Mugabe".

Although as for moral relativism... a bad idea by whose standard? By what logic? If it's irrational nonsense to be a moral relativist, do you have a rational argument for moral realism?

Ah yes, the illusion of transparency. I should have seen it coming that the OS would be first on people's minds. Stupid.

My position on moral realism/relativism is a bit middle ground between the two. There is no law of the universe that says we all should be "good", or even what this "good" is supposed to be. But I believe that does not mean we can't think rationally about it. We can show that some moral systems are at least inconsistent with respect to their stated goals. And on top of that, if we assume for the sake of argument that we can get everyone to believe "suffering is bad", we can rule out a few more. For example, the pro-life lobby in the US is vehemently against abortion, yet thinks that the death penalty is a good thing. If life were in fact sacrosanct, would it not be logical to stop killing people? (This would also extend to cryonics, but since most of the pro-life lobby is Christian, most adherents believe they are going to heaven and won't actually die, so that doesn't necessarily make it inconsistent.) Such a philosophy could be made more rational by making its beliefs consistent with its goal. To say that it would be better or more moral to do so would require people to at least agree that suffering is bad, although I think most people would agree on that one.

I deleted the post by now. This entire ordeal was very bad for my karma. Which, come to think of it, is a strange term. Why not call it "thumbs up" or something? Such a reference to a non-scientific metaphysical idea seems a bit inconsistent with the rest of the content of the site.

I have taken the survey.

I see 20-30 (didn't count) comments in the thread so far; probably people are too lazy to upvote every single one, rather than vetting who they upvote here, I think.

Downvoted for the kind of attitude actually described in Politics Is The Mind-Killer; that the NRxs have historically tended to be the worst offenders is irrelevant.

I didn't downvote because it was already at minus one, but it seemed to apply mainly to government policies rather than private donations and be missing the point because of it, and "miss the point so as to bring up politics in your response" is not good.

The statements being believed in don't have to be on continuums (continua?) for belief in them to be represented as probabilities on a continuum; "I am X% certain that Y is always true".


If they know that few names from my era, they probably know similarly little about each one. I play "Albert Einstein", but it's obvious to any popsicles from the same era that I'm actually Rick Sanchez. This develops into an in-joke where basically every "Albert Einstein" is really playing Rick Sanchez. We ruin everything with drunken debauchery, then ???, profit, take over the degenerate binge-drinking wasteland society becomes.

If you think this has non-negligible negativity*probability, you've got the conjunction fallacy up the wazoo. Although what it actually reads as is finding a LessWrong framing and context to post the kind of furry hate you'd see in any other web forum, not very constructive.

So I'll respond at the same level of discourse to the scenario: "Bitch, I watched Monster Musume. My anaconda don't want none unless she's part anaconda. Your furfags are tame. Didn't you at least bring back any pegasisters? IWTCIRD!"

Now, not so much being inclined towards tho... (read more)

"So, specifically my generation, not my parents' or Queen Victoria's or... yours? That's a bold strategy, let's see if it pays off."

Maybe I have to spend a thousand years entertaining myself by making up total bullshit about my culture to troll the scientists, but eventually some group with completely different political beliefs will takeover, and maybe I'll share the same fate as the zookeepers but I'll damn sure be beaming the smuggest shiteating I-told-you-so grin at the zookeeper while the 41st-century neonazis hang us both in their day of th... (read more)

Wouldn't that be subjectively equivalent on the cryo-patient's end to "cryonics doesn't work, you just stay dead"?

In this case whether it "works" is a matter of where to draw the line. The Zombie Preacher of Somerset that Good_Burning_Plastic linked had an animated body, with competent mental faculties and some psychological continuity with the former person. If he could become a zombie from a blow to the head, it seems quite plausible (for the purpose of writing fiction, not for making predictions) that the same could happen with cryonics.

You think I have friends and/or loved ones who are going into cryonics? Hahahaha!

Would seem to imply memories don't make up who you are - I mean, what I'm inclined to read into it is "there are souls and they got moved around", but it could be anything - in which case, if there's a way to cause myself amnesia (and with this level tech why wouldn't there be?) I should just wipe out my memories and find out who I am. Ideally it'll also be possible to save the memories in backups somehow, or I'll have "external memory" like diaries and such, in case I start regretting the decision.

That scenario still sounds awesome, as long as I'm comparing it to "no cryonics" instead of "best-case cryonics scenarios". I get to be dropped into a completely unfamiliar world with just my mind, a small sum of money, and a young healthy body? Sounds like a fun challenge, I mean I died once what have I got to lose?

Well, my current self and associated memories/opinions is fine with the second part, this is basically just a Buddhist hell where afterwards I get reincarnated into the post-singularity future.

ETA: also highly unlikely, since it happening to me is conditional on the scenario happening to anyone.

Yes - I mean existential crisis in the sense of dread and terror from letting my mind dwell on my eventual death, convincing myself I'm immortal is a decisive solution to that insofar as I can actually convince myself. I don't mind existence being meaningless, it is that either way, I care much more about whether it ends.

So you're not worried that it might be unending but very uncomfortable?

I consciously will myself to believe in big world immortality, as a response to existential crises, although I don't seem to have actual reasons not to believe such besides intuitions about consciousness/the self that I've seen debated enough to distrust.

So did I understand correctly, believing in big world immortality doesn't cause you an existential crisis, but not believing in it does?

Storing data that might be used to reconstruct someone in the future isn't really objectionable, but that seems separate from actually using that data to create the resurrection. And it probably works out fine in the utilitarian calculus unless you count the sunk cost vs creating a "better" new person or a utility monster, but bringing someone back to life just because they didn't mention that they didn't want it, or you thought the reason they gave for not wanting it was irrational, sounds really skeevy. We have rules about consent for interacting with other people's bodies, I think that includes implanting their consciousness in new bodies.

Most people haven't had the chance to rationally evaluate whether they would want to be resurrected, especially by means of DI. Of course we could model their answer, but we would have to create their model first, which creates circular logic in this case. Many religious people will probably prefer not to be resurrected by DI, because they bet that they can get a better type of immortality in the other world. Also, a person whose body is cryopreserved would rationally prefer to be resurrected based on that body, not via DI. There is a large open field of ethical and legal questions here. What if a child died, and his father wants his DI (+DNA) immortality and his mother doesn't?

I believe the accepted plural of "waifu" is "waifus".

I know at least in our specific community, that we'd rather be resurrected than not, and especially in a techno-utopian future, almost goes without saying, but it still worries me that you don't seem to mention consent. At least the top paragraph suggests a third party collecting information about someone else so that they can be resurrected after their death, and even if we skip over the more normal issues with doing that, resurrecting someone without their permission seems like a violation.

In the mix with the problems you've listed under 1. is whether t... (read more)

Basically we do it all the time when we communicate with a person and create a model of them in our head. I think that it is moral to return to life everybody, except those who explicitly and rationally were against it. We don't know how to solve the identity problem now, but maybe we will do some kind of practical research and find a solution in the future. Or maybe AI will help us. Until then I suggest a conservative approach to identity: try to preserve as much as possible, and accept copy creation only if the alternative is death. Maybe we could build a mechanism of identity transfer which is independent of information: if identity has any substance, like a soul or causal links, we could build machines that find it and preserve it.

There are also two types of immortality: immortality-for-me, that is, immortality from the point of view of the observer, which is the most interesting; and immortality-for-others, that is, immortality for your friends. Big world immortality from the link may work, but only for immortality-for-me, not for my friends who may want to see me alive in 20 years here on Earth. Also, big world immortality helps cryonics and DI, because resurrected DI and cryo clients will dominate the big-world resurrection landscape, and some of these resurrections will be exact copies of the originals. So big world immortality helps to fill the gap of what is lost during cryo.