Shamash

Speaking of Stag Hunts

While my account is old enough that I'm not technically a "New User", I comment very infrequently, and I've never made a forum-level post.

I would rate my own rationality skills and knowledge as slightly above those of the average person but below those of the average active LessWrong member. While I am aware that I possess many habits and biases that reduce the quality of my written content, I have the sincere goal of becoming a better rationalist.

There are times when I am unsure whether an argument or claim that seems incorrect is actually flawed or whether my own reasoning is. In such cases, it seems intuitive to write a critical comment that explicitly states what I perceive to be faulty about the claim or argument and what thought processes led me to that perception. If these criticisms are valid, the discussion of the subject is improved and those who read the comment benefit. If they are not valid, I may be corrected by a response that points out where my reasoning went wrong, helping me avoid such errors in the future.

Amateur rationalists like me are probably going to make mistakes when criticizing other people's written content, even when we strive to follow community guidelines. My concern with your suggestions is that these changes may discourage users like me from creating the flawed posts and comments that help us grow as rationalists.

When I brought up Atlantis, I was thinking of a version populated by humans, like in the Disney film. I now realize that I should have made this clear, because there are a lot of depictions of Atlantis in fiction and many of them are not inhabited by humans. To resolve this issue, I'll use Shangri-La as an example of an ostensibly hidden group of humans with advanced technology instead. 

To further establish distinct terms, let Known Humans be the category of humanity (Homo sapiens) that publicly exists and is known to us. Let Unknown Humans be the category of humanity (Homo sapiens) which exists in secret cities and/or civilizations. Let Unknown Terrestrials be non-human lifeforms which originated on Earth and are capable of creating advanced technology. Let Extraterrestrials be lifeforms which did not originate on Earth. Let Superhumans be humans from space, another dimension, or the future.

The arguments you bring up concerning the Fermi paradox don't seem to answer the question "Why jump to extraterrestrial life?" They simply say, "This is how aliens could potentially exist in close proximity without our knowledge." Let me attempt to demonstrate the issue with an analogy.

Imagine a cookie has been stolen from the cookie jar. Mother and Father are trying to figure out who took the cookie. 

Mother: "It seems most probable that one of the children did it. They have taken cookies from the cookie jar before."
Father: "Ah, but we should consider the possibility that a raven did it."
Mother: "Why would we think that there's a non-negligible chance that a raven took the cookie?"
Father: "Studies show that Ravens are capable of rudimentary tool use. It could have pried off the lid by using another object as a lever."

Nothing about these UFOs specifically indicates that they are extraterrestrial. The fact that extraterrestrial life might exist and might have the technology necessary to secretly observe us is not enough evidence to assign any significant probability to them as an explanation for the UFOs, especially when we know for near-certain that Known Humans have flying machines with similar abilities.

Let's say we ignore mundane explanations like meteorological phenomena, secret military tech developed by known governments, and weather balloons. Even in that case, why jump to extraterrestrial life?

Consider, say, the possibility that these UFOs are from the hyper-advanced hidden underwater civilization of Atlantis. Sure, this is outlandish. But I'd argue that it's at least as likely as an extraterrestrial origin. We know that humans exist, we know that Atlantis would be within flying distance, and there are reasonable explanations for why Atlantis might want to secretly surveil us. If this version of Atlantis existed, sure, we would expect to see other pieces of evidence for it, but maybe Atlantis is hiding from us.

Consider the contrivances required to explain why a group of extraterrestrials would be discovered through UFO sightings. They'd be competent enough to travel through space, likely using faster-than-light travel, and they'd clearly not want to be discovered, because otherwise they'd respond to our signalling attempts. And yet they'd be incompetent enough to blow their cover on multiple occasions, while somehow never blowing it in a way that actually distinguishes them as extraterrestrial.

If not for popular culture, do you really think you'd jump to an extraterrestrial explanation? All other flying machines that we know of have been made by humans. There is insufficient evidence to suppose that these flying machines, if that is what they are, were not as well.
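To make the structure of this argument explicit, here is a minimal Bayesian sketch in Python. Every number in it is made up purely for illustration; the point is structural, not quantitative.

```python
# Hypothetical hypotheses and numbers, chosen only to illustrate the
# shape of the argument, not to estimate anything.
priors = {
    "known human craft": 0.980,   # we are certain humans build flying machines
    "hidden human group": 0.019,  # e.g. a secret Shangri-La-style civilization
    "extraterrestrials": 0.001,   # no independent evidence they are here
}

# How strongly each hypothesis predicts the actual observations: blurry,
# ambiguous sightings that never display anything distinctively alien.
# Such evidence fits all three hypotheses about equally well.
likelihoods = {
    "known human craft": 0.5,
    "hidden human group": 0.5,
    "extraterrestrials": 0.5,
}

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}
print(posteriors)
# Because the evidence fits every hypothesis equally well, the posterior
# simply returns the priors: the sightings alone do nothing to promote
# "extraterrestrials" above its low prior probability.
```

The sightings only become evidence for aliens if aliens predict them better than humans do, and nothing about the observations suggests that they do.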

Is there work looking at the implications of many worlds QM for existential risk?

Could you elaborate on what exactly you mean by many worlds QM? From what I understand, this idea is only relevant in the context of observing the state of quantum particles. Unless we start making macro-level decisions about how to act through Schrödinger's Cat scenarios, isn't many worlds QM irrelevant?

We need a standard set of community advice for how to financially prepare for AGI

Is AGI even something that should be invested in on the free market? The nature of most financial investments is that individuals expect a return on their investment. I may be wrong, but I can't really envision a friendly AGI being created with the purpose of creating financial value for its investors. Sure, if friendly AGI is created, the investors will almost certainly benefit regardless, because the world will become a better place, but that could only be considered an investment in a rather loose sense. Investing in AGI won't provide any significant returns until AGI is created, and at that point it is likely that stock ownership will not matter.

Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems

I'm a gay cis male, so I thought that the author and/or other members of this forum might find my perspective on the topic interesting. 

The confusion between finding someone sexually attractive and wishing you had their body is common enough in the online gay community to have earned its own nickname: jealusty. It seems that this is essentially the gay analogue of autogynephilia. As I read the blog post, I briefly wondered whether fantasies of a better body could somehow contribute to homosexuality, but that doesn't really fit the pattern you present. After all, your attraction to women was a constant.

As for your masturbatory fantasies, the gay analogue would probably be growth or transformation fantasies, which are proportionally about as popular online. When I think about it from that point of view, it doesn't seem all that strange to desire a body that you would find sexually attractive. Personally, one of the primary reasons I haven't yet sought out any sexual experiences (I'm 21) is that I feel the participation of my current body, which I do not find sexually attractive, would decrease my enjoyment of the activity to the point of uselessness. It makes sense that the inverse, the prospect of having sex where you're sexually attracted to everyone involved, would be alluring.

Anyway, everyone, let me know if you have any questions or feedback about what I've said. 

Teaching to Compromise

It seems to me that compromise isn't actually what you're talking about here. An individual can hold strongly black-and-white, extreme positions on an issue and still be good at making compromises. When a rational agent agrees to compromise, that just implies the agent sees compromise as the path most likely to achieve its goals.

For example, let's say that Adam slightly values apples (U = 1 per apple) and strongly values bananas (U = 2 per banana), while Stacy slightly values bananas (U = 1 per banana) and strongly values apples (U = 2 per apple). Assume these are their only values, and that they know each other's values. If Adam and Stacy each have five apples and five bananas, a dialogue between them might look like this:

Adam: Stacy, give me your apples and bananas. (This is Adam's ideal outcome. If Stacy agrees, his total utility will rise from 15 to 30.)

Stacy: No, I will not. (If the conversation ends here, both Adam and Stacy leave without a change in net value.)

Adam: I know that you like apples. I will give you five apples if you give me five bananas. (This is the compromise. Adam will not gain as much utility as he would from an absolute victory, but trading five apples (−5) for five bananas (+10) still leaves him with a net increase of 5.)

Stacy: I accept this deal. (Stacy could haggle, but I don't want to overcomplicate this. She likewise gets a net increase of 5 from the trade.)

In this example, Adam's values are still simple and polarized; he never considers "Stacy having apples" to have any value whatsoever. Adam may absolutely loathe giving up his apples, but not as much as he benefits from getting those sweet, sweet bananas. If Adam had taken a stubborn position and refused to compromise (assuming Stacy is equally stubborn), then he would not have gained any utility at all, making stubbornness the irrational choice. None of this depends on how nuanced his views on bananas and apples are.
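To make the arithmetic above explicit, here is a minimal sketch in Python (the goods, weights, and outcomes are just the ones from my example):

```python
# Per-unit utility weights for each agent, as defined in the example above.
ADAM = {"apple": 1, "banana": 2}
STACY = {"apple": 2, "banana": 1}

def utility(weights, holdings):
    """Total utility of a bundle of goods under the given per-unit weights."""
    return sum(weights[good] * count for good, count in holdings.items())

# Both agents start with five apples and five bananas (utility 15 each).
start = {"apple": 5, "banana": 5}

# Each outcome maps to (Adam's holdings, Stacy's holdings) afterwards.
outcomes = {
    "no deal": (start, start),
    "Adam takes all": ({"apple": 10, "banana": 10}, {"apple": 0, "banana": 0}),
    "trade": ({"apple": 0, "banana": 10}, {"apple": 10, "banana": 0}),
}

base_adam, base_stacy = utility(ADAM, start), utility(STACY, start)
for name, (adam_holds, stacy_holds) in outcomes.items():
    print(f"{name}: Adam {utility(ADAM, adam_holds) - base_adam:+d}, "
          f"Stacy {utility(STACY, stacy_holds) - base_stacy:+d}")
# no deal: Adam +0, Stacy +0
# Adam takes all: Adam +15, Stacy -15
# trade: Adam +5, Stacy +5
```

The compromise is not Adam's ideal outcome, but it strictly beats the stubborn "no deal" outcome for both agents, which is the whole point.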

It's important to try to view situations from many points of view, yes, and understanding the values of your opponent can be very useful for negotiation. But once you have, after careful consideration, decided what your own values are, it is rational to seek to fulfill them as much as possible. The optimal route is often compromise, and for that reason I agree that people should be taught how to negotiate for mutual benefit, but I think that being open to compromise is a wholly separate issue from how much conviction or passion one has for their own values and goals. 

When you already know the answer - Using your Inner Simulator

This seems like it could be a useful methodology to adopt, though I'm not sure it would be helpful for everyone. In particular, for people who are prone to negative rumination or self-blame, the answer to these kinds of questions will often be highly warped or irrational, reinforcing the negative thought patterns. Such a person could also come up with a way they could improve their life, fail to implement it, and then feel guilty when their reality fails to measure up to their imagined future. 

On the other hand, I'm no psychotherapist, so it may be just the opposite. Maybe asking oneself these questions could help people break out of negative thought patterns by forcing certain conditions? I'd appreciate other people's take on this subject.

Curing Sleep: My Experiences Doing Cowboy Science

I'm not sure it's actually useful, but I feel I should introduce myself as an individual with Type 1 Narcolepsy. I might dispute the claim that depression and obesity are "symptoms" of narcolepsy (understanding, of course, that this was not the focus of your post), because I think it would be more accurate to call them comorbid conditions.

The use of the term "symptom" is not necessarily incorrect (it could be justified by some definitions), but the word tends to refer to sensations subjectively experienced by an individual. For example, if you get the flu, your symptoms may include a headache, chills, and a runny nose. By contrast, you're rather unlikely to tell your doctor that you are experiencing the symptom of obesity; you'd say you're experiencing weight gain. Comorbid conditions, on the other hand, are conditions (with symptoms of their own) that often occur alongside the primary condition. "Comorbid" is the term I find most often in the scientific literature about narcolepsy and other disorders and conditions.

Why am I writing an entire comment about this semantic dispute? Well, firstly, given the goals of this website, correcting an error (no matter how small) seems unlikely to have an unwanted result. Secondly, I think that the way we talk about an illness, especially a chronic illness, can significantly affect the mindsets of people who have that illness. The message "narcolepsy can cause obesity" seems less encouraging to an obese narcoleptic than "narcolepsy increases the chance of becoming obese". That might just be me, though, so it's inconclusive.

I hope this comment hasn't been too pointless to read. What do you think about the proposed change? Do you think that there's a difference between calling something a symptom and calling it a comorbid condition? Oh, and if anyone wants to know anything about my experiences with type 1 narcolepsy, ask away.

Against butterfly effect

The point is that in this scenario, the tornado does not occur unless the butterfly flaps its wings. That does not necessarily apply to "everything"; it applies only to the other things which must exist for the tornado to occur.

Probability is an abstraction in a deterministic universe (and, as I said above, the butterfly effect doesn't apply to a nondeterministic universe). The perfectly accurate deterministic simulator doesn't use probability, because in a deterministic universe there is only one possible outcome for a given set of initial conditions. The simulation essentially demonstrates: "there is a set of initial conditions such that when butterfly flap = 0 there is no Texas tornado, but when butterfly flap = 1 and no other initial conditions are changed, there is a Texas tornado."
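A toy illustration of that claim (not from the original post; the logistic map is just a standard stand-in for a chaotic deterministic system):

```python
# The logistic map with r = 4 is fully deterministic and chaotic: an
# arbitrarily small change in the initial condition (the "butterfly flap")
# eventually produces a completely different outcome under the same rule.

def simulate(x0, steps=60, r=4.0):
    """Deterministically iterate the logistic map from initial condition x0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

baseline = simulate(0.123456789)           # butterfly flap = 0
perturbed = simulate(0.123456789 + 1e-12)  # butterfly flap = 1
print(baseline, perturbed)  # the two trajectories have fully decorrelated
```

No probability appears anywhere: rerunning either initial condition gives exactly the same result, which is the sense in which the simulator demonstrates the dependence rather than estimating it.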
