All of Shamash's Comments + Replies

Greyed Out Options

There are a few ways to look at the question, but by my reasoning, none of them result in the answer "literally infinite."

From a deterministic point of view, the answer is zero degrees of freedom, because whatever choice the human "makes" is the only possible choice he/she could be making. 

From the perspective of treating decision-making as a black box which issues commands to the body, the number of commands that the body can physically comply with is limited. Humans have only a finite number of nerve cells through which to issue these commands. Therefore, the set of commands that can be sent through these nerves at any given time must also be finite.

2 · gbear605 · 2mo
True, without a source of randomness there are technically only finitely many states that a human brain can decide on. So I suppose it's not literally infinite, but it still gets us to 2^(number of neurons in a brain), which is many more states than a human brain could experience in the lifetime of the universe. Of course, many of those states are fundamentally broken and would just look like a seizure, so perhaps all of those should be collapsed together.
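For intuition about the scale here, a back-of-envelope comparison (a rough sketch: the neuron count is the usual ~86 billion estimate, and treating each neuron as a single bit is a deliberate oversimplification):

```python
import math

NEURONS = 8.6e10          # common estimate for neurons in a human brain
UNIVERSE_AGE_S = 4.4e17   # ~13.8 billion years, in seconds

# Number of decimal digits in 2^N, the count of binary brain states
log10_states = NEURONS * math.log10(2)                  # ~2.6e10 digits

# States "experienced" at a very generous 1000 distinct states per second
# for the entire lifetime of the universe
log10_experienced = math.log10(1000 * UNIVERSE_AGE_S)   # ~20.6

print(f"log10(possible states)    ~ {log10_states:.3g}")
print(f"log10(experienced states) ~ {log10_experienced:.3g}")
```

The gap is not close: a number with tens of billions of digits versus a 21-digit one.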
Speaking of Stag Hunts

While I am not technically a "New User" in terms of the age of my account, I comment very infrequently, and I've never made a forum-level post.

I would rate my own rationality skills and knowledge as slightly above those of the average person but below those of the average active LessWrong member. While I am aware that I possess many habits and biases that reduce the quality of my written content, I have the sincere goal of becoming a better rationalist.

There are times when I am unsure whether an argument or claim that seems incorrect is flawed or if it is ... (read more)

I think there's a real danger of that, in practice.

But I've had lots of experience with "my style of moderation/my standards" being actively good for people taking their first steps toward this brand of rationalism; lots of people have explicitly reached out to me to say that e.g. my FB wall allowed them to take just those sorts of first, flawed steps.

A big part of this is "if the standards are more generally held, then there's more room for each individual bend-of-the-rules."  I personally can spend more spoons responding positively and cooperatively t... (read more)

When I brought up Atlantis, I was thinking of a version populated by humans, like in the Disney film. I now realize that I should have made this clear, because there are a lot of depictions of Atlantis in fiction and many of them are not inhabited by humans. To resolve this issue, I'll use Shangri-La as an example of an ostensibly hidden group of humans with advanced technology instead. 

To further establish distinct terms, let Known Humans be the category of humanity (homo sapiens) that publicly exists and is known to us. Let Unknown Humans be the cat... (read more)

Let's say we ignore mundane explanations like meteorological phenomena, secret military tech developed by known governments, and weather balloons. Even in that case, why jump to extraterrestrial life?

Consider, say, the possibility that these UFOs are from the hyper-advanced hidden underwater civilization of Atlantis. Sure, this is outlandish. But I'd argue that it's at least as likely as an extraterrestrial origin. We know that humans exist, we know that Atlantis would be within flying distance, there are reasonable explanations for why Atlantis would wan... (read more)

2 · Michaël Trazzi · 1y
Sure, in this scenario I think "Atlantis" would count as "aliens" somehow. Anything that is not from 2021 humans, really; even humans who started their own private lab in the forest in 1900 and discovered new tech are "not part of humanity's knowledge". It's maybe worth distinguishing between "humans in 2021", "homo sapiens originated civilization not from 2021", "Earth but not homo sapiens" (eg Atlantis), and extraterrestrial life (aka "aliens").

As for why we should jump to alien civilizations being on Earth, there are arguments for how a sufficiently advanced civilization could go for fast [https://www.fhi.ox.ac.uk/wp-content/uploads/space-races-settling.pdf] space colonization. Other answers to the Fermi paradox even consider alien civilizations to be around the corner but just inactive [https://arxiv.org/pdf/1705.03394.pdf], and in that case one might consider that humans reaching some level of technological advancement might trigger some defense mechanism?

I agree that this might fall into the conjunction fallacy [https://www.lesswrong.com/posts/QAK43nNCTQQycAcYe/conjunction-fallacy] and we may want to reject it using Occam's razor. However, I found the "inactive" theory one of the most "first principles" answers to Fermi's paradox out there, so the "defense mechanism" scenario might be worth considering (it's at least more reasonable than aliens visiting from another galaxy). I guess there's also the unknown unknowns about how the laws of physics work: we've only been considering the speed of light as the limit to speed for less than a century, so we might find ways of bypassing it (eg with wormholes) before the end of the universe.
Is there work looking at the implications of many worlds QM for existential risk?

Could you elaborate on what exactly you mean by many worlds QM? From what I understand, this idea only has relevance in the context of observing the state of quantum particles. Unless we start making macro-level decisions about how to act through Schrödinger's Cat scenarios, isn't many worlds QM irrelevant?

2 · Pattern · 1y
How they might be different from a 'single world situation':
* Quantum effects have some bearing on computation, or can produce 'strange probabilistic effects'.
* 'How do these quantum computations work? How are they so powerful? The answer to this question might be important'

How they might be the same:
* Expected value matters. Not just in expectation, but 'there's a world for that' (for the correct distribution).

Real world applications I've heard of:
* quantum pseudo-telepathy*
* counterfactual computation
* transmissions that can't be intercepted (or break if they are observed) - some sort of quantum security
* changing the way we see information
* a new (much better than classical) quantum algorithm is designed/discovered, then a better classical algorithm is proposed based on it that makes up for (a lot of) the gap
* better/cheaper randomness?
* changing the way we think about information/computation/physics/math/probability

*This one uses measuring entangled particles. Maybe if you condition actions on a quantum source of randomness, that changes what happens in the multiverse relative to a deterministic protocol.
1 · Quintin Pope · 1y
Standard quantum mechanics models small, unobserved quantum systems as probability distributions over possible observable values, meaning there's no function that gives you a particle's exact momentum at a given time. Instead, there's a function that gives you a probability distribution over possible momentum values at a given time. Every modern interpretation of quantum mechanics [https://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics] predicts nearly the same probability distributions for every quantum system.

Many worlds QM argues that, just as small, unobserved quantum systems are fundamentally probabilistic, so too is the wider universe. Under many worlds, there exists a universal probability distribution over states of the universe. Different "worlds" in the many worlds interpretation equate to different configurations of the observable universe.

If many worlds is true, it implies there are alternate versions of ourselves who we can't communicate with. However, the actions that best improve humanity's prospects in a many worlds situation may be different from the best actions in a single world situation.
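To make the "distribution rather than exact value" point concrete: in momentum space, the Born rule gives the distribution directly (a standard textbook statement, added here for illustration):

$$\Pr\big(p \in [a,b] \text{ at time } t\big) = \int_a^b \big|\tilde{\psi}(p,t)\big|^2 \, dp$$

where $\tilde{\psi}(p,t)$ is the momentum-space wavefunction. There is no function $p(t)$ returning an exact momentum; the theory only supplies this distribution.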
We need a standard set of community advice for how to financially prepare for AGI

Is AGI even something that should be invested in on the free market? The nature of most financial investments is that individuals expect a return. I may be wrong, but I can't really envision a friendly AGI being created with the purpose of generating financial value for its investors. I mean, sure, technically if friendly AGI is created the investors will almost certainly benefit regardless, because the world will become a better place, but this could only be considered an investment in a rather loose sense. Investing in AGI won't provide any significant returns until AGI is created, and at that point it is likely that stock ownership will not matter.

1 · TekhneMakre · 1y
It's also possible that investing not in "what's most likely to make AGI" but "what's most likely to make me money before AGI based on market speculation" is throwing fuel on the ungrounded-speculation bonfire. Which attracts sociopaths rather than geeks. Which cripples real AGI efforts. Which is good. (Not endorsing this, just making a hypothesis.)
Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems

I'm a gay cis male, so I thought that the author and/or other members of this forum might find my perspective on the topic interesting. 

The confusion between finding someone sexually attractive and wishing you had their body is common enough in the online gay community to have earned its own nickname: jealusty. In a sense, this is the gay analogue of autogynephilia. As I read the blog post, I briefly wondered whether fantasies of a better body could contribute to homosexuality somehow, but that doesn't really fit the pattern you pres... (read more)

I try to do a lot of research on autogynephilia and related topics, and I think there are some things worth noting:

  1. Autogynephilia appears to be fairly rare in the general population of males; I usually say 3%-15%, though it varies from study to study depending on hard-to-figure-out things. My go-to references for prevalence rates are this and this paper. (And this is for much weaker degrees of autogynephilia than Zack's.) So it's not just about having a body that one finds attractive; there needs to be some *other* factor before one ends up autogyne
... (read more)
Teaching to Compromise

It seems to me that compromise isn't actually what you're talking about here. An individual can hold strongly black-and-white, extreme positions on an issue and still be good at making compromises. When a rational agent agrees to compromise, this just implies that the agent sees the path of compromise as the one most likely to achieve its goals.

For example, let's say that Adam slightly values apples (U = 1) and strongly values bananas (U = 2), while Stacy slightly values bananas (U = 1) and strongly values apples (U = 2). Assume these are their only val... (read more)
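The comment is truncated, but the arithmetic of the setup is easy to sketch. A hypothetical completion (the starting holdings below are my assumption, not part of the original example): if each agent starts holding the fruit they value less, agreeing to trade is exactly the "compromise" that best serves each agent's own goals.

```python
# Hypothetical completion of the truncated example (holdings are assumed):
# Adam starts with an apple, Stacy starts with a banana.
adam_utility  = {"apple": 1, "banana": 2}
stacy_utility = {"apple": 2, "banana": 1}

# No trade: each keeps what they started with.
no_trade = (adam_utility["apple"], stacy_utility["banana"])   # (1, 1)

# Trade: Adam hands over the apple, Stacy hands over the banana.
trade = (adam_utility["banana"], stacy_utility["apple"])      # (2, 2)

print(no_trade, trade)  # each agent doubles their utility by trading
```

Neither agent has softened their positions; the trade is simply the utility-maximizing move for both.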

When you already know the answer - Using your Inner Simulator

This seems like it could be a useful methodology to adopt, though I'm not sure it would be helpful for everyone. In particular, for people who are prone to negative rumination or self-blame, the answer to these kinds of questions will often be highly warped or irrational, reinforcing the negative thought patterns. Such a person could also come up with a way they could improve their life, fail to implement it, and then feel guilty when their reality fails to measure up to their imagined future. 

On the other hand, I'm no psychotherapist, so it may just ... (read more)

3 · Neel Nanda · 1y
As a single point of anecdata, I personally am fairly prone to negative thoughts and self-blame, and find this super helpful for overcoming that. My Inner Simulator seems to be much better grounded than my spirals of anxiety, and not prone to the same biases. Some examples:

I'm stressing out about a tiny mistake I made, and am afraid that a friend of mine will blame me for it. So I simulate having the friend find out and get angry with me about it, and ask myself 'am I surprised at this outcome?'. And discover that yes, I am very surprised by this outcome - that would be completely out of character and would feel unreasonable to me in the moment.

I have an upcoming conversation with someone new and interesting, and I'm feeling insecure about my ability to make good first impressions. I simulate the conversation happening and leaving feeling like it went super well, and check how surprised I feel. And discover that I don't feel surprised - that in fact this happens reasonably often.

This seems like a potentially fair point. I sometimes encounter this problem, though I find that my Inner Sim is a fair bit better calibrated about what solutions might actually work. Eg it has a much better sense for 'I'll just procrastinate and forget about this'. On balance, I find that the benefits of 'sometimes having a great idea that works' plus the motivation to implement it far outweigh this failure mode, but your mileage may vary.
Curing Sleep: My Experiences Doing Cowboy Science

I'm not sure it's actually useful, but I feel like I should introduce myself as an individual with Type 1 Narcolepsy. I might dispute the claim that depression and obesity are "symptoms" of narcolepsy (understanding, of course, that this was not the focus of your post) because I think it would be more accurate to call them comorbid conditions.

The use of the term "symptom" is not necessarily incorrect; it could be justified by some definitions, but it tends to refer to sensations subjectively experienced by an individual. For example, if you get the flu, yo... (read more)

2 · jayterwahl · 1y
Fair critique! Changed.
Against butterfly effect

The point is that in this scenario, the tornado does not occur unless the butterfly flaps its wings. That does not necessarily apply to "everything"; it only applies to other things which must exist for the tornado to occur.

Probability is an abstraction in a deterministic universe (and, as I said above, the butterfly effect doesn't apply to a nondeterministic universe). The perfectly accurate deterministic simulator doesn't use probability, because in a deterministic universe there is only one possible outcome given a set of initial conditions. The ... (read more)

1 · ForensicOceanography · 1y
I see, but you are talking about an extremely idiosyncratic measure (only two points) on the space of initial conditions. One could just as easily find another pair of initial conditions in which the wing flap prevents the tornado. If there were a prediction market on tornadoes, its estimates should not change in either direction after observing the butterfly.

Phrased this way it is obviously true. However, why are you saying that chaos requires determinism? I can think of some Markovian master equations with quite chaotic behavior.
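As an aside, the sensitive dependence on initial conditions that both comments take for granted is easy to demonstrate numerically. A minimal sketch (not from either comment) using the logistic map, the standard toy example of chaos: two trajectories starting 10^-12 apart become completely decorrelated within a few dozen steps.

```python
# Sensitive dependence on initial conditions in the chaotic logistic map
# x -> 4x(1 - x): a 1e-12 perturbation roughly doubles every step.
x, y = 0.3, 0.3 + 1e-12
for step in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```

By around step 40 the separation is of order 1, i.e. the "butterfly" has completely changed the trajectory.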
Against butterfly effect

Imagine a hundred trillion butterflies that each flap their wings in one synchronized movement, generating a massive gust of wind strong enough to topple buildings and flatten mountains. If they were positioned correctly, they'd probably also be able to create a tornado that would not have occurred if the butterflies were not there flapping their wings, just by pushing air currents into place. Would that tornado be "caused" by the butterflies? I think most people would answer yes. If the swarm had not performed their mighty flap, the tornado would not... (read more)

2 · ForensicOceanography · 1y
Hi, I think I see what you mean. You can certainly say that the flap, as a part of the initial conditions, is part of the causes of the tornado. But this is true in the same sense in which all of the initial conditions are part of the cause of the tornado. The flap caused the tornado together with everything else: all the initial ocean temperatures, the position of the jet streams, the northern annular mode index, everything.

If everything is the cause, then "being the cause of the tornado" is a property which carries exactly 0 bits of information. I prefer to think that an event A "caused" another event B if the probability of B, conditioned on A happening, is greater than the prior probability of B.
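In symbols, that last criterion is the standard probability-raising account of causation:

$$A \text{ causes } B \iff P(B \mid A) > P(B)$$

which is equivalent to requiring that $A$ and $B$ be positively correlated, since $P(B \mid A) > P(B)$ holds exactly when $P(A \cap B) > P(A)\,P(B)$.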
What is up with spirituality?

From what I've read, the hormone oxytocin appears to be behind many of the emotions people generally describe as "spiritual". While the hormone is still being studied, there is evidence indicating it can increase feelings of connection to entities larger than the self, increase feelings of love and trust with others, and promote feelings of belonging in groups.

The emotion of elevation, which appears to be linked to oxytocin, is most often caused by witnessing other people do altruistic or morally agreeable actions. This may explain the tendency for man... (read more)

Containing the AI... Inside a Simulated Reality

I would guess that one reason this containment method has not been seriously considered is that the amount of detail in a simulation required for the AI to be able to do anything we find useful is so far beyond our current capabilities that it doesn't seem worth pursuing. The case you present of an exact copy of our Earth would require a ridiculous amount of processing power at the very least, and consider that the simulation of billions of human brains in this copy would already constitute a form of AGI. A simulation with less detail would be c... (read more)
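To put a rough number on "ridiculous amount of processing power", here's a back-of-envelope sketch. The per-brain figure is a commonly cited ballpark, not a settled fact, and this counts only the brains, ignoring the rest of the physics simulation:

```python
# Back-of-envelope: real-time simulation of every human brain on Earth.
FLOPS_PER_BRAIN = 1e16    # rough ballpark for one human brain, FLOP/s
POPULATION = 8e9          # ~8 billion simulated people

total = FLOPS_PER_BRAIN * POPULATION    # 8e25 FLOP/s for brains alone
exascale_machine = 1e18                 # ~1 exaFLOP/s supercomputer

print(f"{total:.0e} FLOP/s, i.e. ~{total / exascale_machine:.0e} "
      "exascale supercomputers running continuously")
```

And that is before modeling weather, oceans, ecosystems, and everything else an AGI could probe for inconsistencies.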

2 · HumaneAutomation · 2y
So... can it be said that the advent of an AGI will also provide a satisfactory answer to the question of whether we currently are in a simulation? That is what you (and avturchin) seem to imply.

Also, this stance presupposes that:
- an AGI can ascertain such observations to be highly probable/certain;
- it is theoretically possible to find out the true nature of one's world (and that a super-intelligent AI would be able to do this);
- it will inevitably embark on a quest to ascertain the nature and fundamental facts about its reality;
- we can expect a "question absolutely everything" attitude from an AGI (something that is not necessarily desirable, especially in matters where facts may be hard to come by/a matter of choice or preference).

Or am I actually missing something here? I am assuming that is very probable ;)
Open & Welcome Thread - August 2020

A possible future of AGI occurred to me today, and I'm curious whether it's plausible enough to be worth considering. Imagine that we have created a friendly AGI that is superintelligent and well-aligned to benefit humans. It has obtained enough power to prevent the creation of other AIs, or at least to prevent any nascent AI from obtaining resources, and does so with the aim of self-preservation so it can continue to benefit humanity.

So far, so good, right? Here comes the issue: this AGI includes within its core alignment functions some kind of restri... (read more)

2 · Alexei · 2y
Yeah many people think along these lines too, which is why many people talk about AI helping humanity flourish, and anything short of that is a bit of a catastrophe.
Poll: ask anything to an all-knowing demon edition

I think it would not be a very useful question to ask. What are the chances that a flawed, limited human brain could stumble upon the absolute optimal set of actions one should take, based on a given set of values? I can't conceive of a scenario where the oracle would say "Yes" to that question.