An aspiring rationalist who has been involved in the Columbus Rationality community since January 2016.


Six-Door Cars

I doubt the lack of 6-door cars has much to do with aesthetics. Doors and their tight seals are among the more complex and expensive parts of a car body, and doors also pose crash-safety challenges: each one is a large opening that weakens the body's structural integrity in an accident. I suspect the real reason there are so few 6-door cars is manufacturing cost. The extra doors raise the price, and most purchasers don't value the added convenience enough to pay it, so any company producing such a car would face a market small enough that it might not be worth the manufacturer's while.

Covid 2/25: Holding Pattern

Recently many sources have reported a "CA variant" with many of the same properties as the English and South African strains. I haven't personally investigated, but it might be something to look into, especially given the number of rationalists in California.

How can I protect my bank account from large, surprise withdrawals?

As others have already answered better than I could: first, avoid becoming obligated for such large, unexpected charges in the first place. The customer in the example may have canceled their credit card, but they are still legally obligated to pay that money.

To answer the actual question of how to put limits in place: you can use privacy.com. It lets you create new credit card numbers that bill to your bank account but carry limits on both total and monthly charges. You can also close any number at any time without impact on your personal finances. It is meant for safety and privacy in online shopping: you set up a card for each service. For example, create a card that you auto-bill the electric bill to, and set a limit that no more than, say, $200 can be charged to it each month. Any transaction that would push it over that limit is declined, even automatic payments you have scheduled.
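The decline behavior described above can be sketched roughly as follows. This is a simplified mental model, not privacy.com's actual API; the class and field names are my own invention:

```python
from dataclasses import dataclass

@dataclass
class VirtualCard:
    """Toy model of a merchant-locked card number with a monthly cap."""
    monthly_limit: float
    spent_this_month: float = 0.0
    closed: bool = False

    def authorize(self, amount: float) -> bool:
        # Decline anything on a closed card, or anything that would push
        # the month's running total over the cap -- including auto-pays.
        if self.closed or self.spent_this_month + amount > self.monthly_limit:
            return False
        self.spent_this_month += amount
        return True

electric = VirtualCard(monthly_limit=200.0)
print(electric.authorize(150.0))  # True: within the $200 cap
print(electric.authorize(75.0))   # False: 150 + 75 would exceed the cap
```

The key design point is that the cap is enforced per card number, so a runaway subscription on one card can't touch the rest of your account.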

Covid 2/11: As Expected

I'd be interested in seeing a write-up on whether people who've had COVID need to be vaccinated. I have a friend who was sick with COVID symptoms for 3 weeks and tested positive for SARS-CoV-2 shortly after the onset of symptoms. He is now being told by medical professionals that he needs to be vaccinated just the same as everyone else. I tried to look up the data on this. Sources like the CDC, Cleveland Clinic, and Mayo Clinic all state that people need to be vaccinated even if they have had COVID.

However, their messaging seems contradictory. There are many appeals to "we don't know." The reasoning doesn't appear to be any more complex than "vaccine good" and "immunity from infection 'not known.'" There is no discussion of distinctions I would expect, like the difference between testing positive with symptoms, having had symptoms but never testing, or testing positive while never having symptoms. While I can imagine reasons why immunity induced by the vaccine and by infection would differ, my prior is that most of the effects are going to be the same. There is repeated reference to not knowing how long immunity developed from infection lasts, but by definition, we have had less time to see how long immunity from the vaccine lasts, so our evidence about the vaccine should be weaker. I could say a lot more, but I'll leave it at that.

To avoid any confusion: my actual model is that if you've had COVID-19, the vaccine would act as a booster. So I'd say people who've had it should get vaccinated eventually but should be among the lowest priority. That should be modulated by the probability that you actually had COVID and the fact that asymptomatic COVID may be less likely to produce immunity. On the other hand, having had asymptomatic COVID is probably evidence that you will be asymptomatic if you get it again. That is not the message being given to the public.

History of the Public Suffix List

It's unfortunate that we have this mess. But couldn't this have been avoided by defaulting to minimal access? Per Mozilla (https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies), if a cookie's domain isn't set, it defaults to the domain of the site excluding subdomains. If instead, this defaulted to the full domain, wouldn't that resolve the issue? The harm isn't in allowing people to create cookies that span sites, but in doing so accidentally, correct? The only concern is then tracking cookies. For this, a list of TLDs which it would be invalid to specify as the domain would cover most cases. Situations like github.io are rare enough that there could simply be some additional DNS property they set which makes it invalid to have a cookie at that domain level.
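Concretely, the difference between the default and an explicit Domain attribute looks like this (illustrative headers for a hypothetical app.example.com, following RFC 6265 semantics):

```
# Set by app.example.com with no Domain attribute (the current default):
Set-Cookie: session=abc123
#   -> host-only: sent back to app.example.com and nowhere else

# Set by app.example.com with an explicit Domain:
Set-Cookie: session=abc123; Domain=example.com
#   -> sent to example.com and all of its subdomains
```

The Public Suffix List exists to stop the second form from being abused with something like Domain=com or Domain=github.io, which is the scenario the default-to-minimal-access argument above is addressing.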

Similarly, the secure and http-only properties ought to default to true.

Grokking illusionism

Even after reading your post, I don't think I'm any closer to comprehending the illusionist view of reality. One of my good and most respected friends is an illusionist. I'd really like to understand his model of consciousness.

Illusionists often seem to me to be arguing against strawmen (notwithstanding the fact that some philosophers actually do argue for such "strawman" positions). Dennett's argument against "mental paint" seems to be an example of this. Of course, I don't think there is something in my mental space with the property of redness. Of course "according to the story your brain is telling, there is a stripe with a certain type of property." I accept that the most likely explanation is that everything about consciousness is the result of computational processes (in the broadest sense that the brain is some kind of neural net doing computation, not in the sense that it is anything actually like the Von Neumann architecture computer that I am using to write this comment). For me, that in no way removes the hard problem of consciousness; it only sharpens it.

Let me attempt to explain why I am unable to understand what the strong illusionist position is even saying. Right now, I'm looking at the blue sky outside my window. As I fix my eyes on a specific point in the sky and focus my attention on the color, I have an experience of "blueness." The sky itself doesn't have the property of phenomenological blueness. It has properties that cause certain wavelengths of light to scatter and other wavelengths to pass through. Certain wavelengths of light are reaching my eyes. That causes receptors in my eyes to activate, which in turn causes a cascade of neurons to fire across my brain. My brain is doing computation which I have no mental access to and computing that I am currently seeing blue. There is nothing in my brain that has the property of "blue." The closest thing is something analogous to how a certain pattern of bits in a computer has the "property" of being ASCII for "A." Yet I experience that computation as the qualia of "blueness."

How can that be? How can any computation of any kind create, or lead to, qualia of any kind? You can say that it is just a story my brain is telling me that "I am seeing blue." I must not understand what is being claimed, because I agree with it, and yet it doesn't remove the problem at all. Why does that story have any phenomenology to it?

I can make no sense of the claim that it is an illusion. If the claim is just that there is nothing involved but computation, I agree. But the claim seems to be that there are no qualia, that there is no phenomenology; that my belief in them is like an optical illusion or misremembering something. I may be very confused about all the processes that lead to my experiencing the blue qualia. I may be mistaken about the content and nature of my phenomenological world. None of that in any way removes the fact that I have qualia.
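The ASCII analogy is quite literal: a byte's "being an A" is nothing over and above how a reader interprets the bit pattern. A tiny illustration in Python:

```python
bits = 0b01000001        # one fixed pattern of 8 bits
print(bits)              # 65 -- the pattern read as an integer
print(chr(bits))         # A  -- the same pattern read as an ASCII character
print(bin(bits))         # 0b1000001 -- the same pattern read as raw bits
# Nothing in the byte itself "is" an 'A'; the property lives entirely
# in the interpretation imposed on it.
```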

Let me try to sharpen my point by comparing it to other mental computation. I just recalled my mother's name. I have no mental access to the computation that "looks up" my mother's name. Instead, I go from seemingly not having ready access to the name to having it. There are no qualia associated with this. If I "say the name in my head," I can produce an "echo" of the qualia, but I don't have to do this. I can simply know what her name is and know that I know it. That seems consistent with the model of me as a computation: if I were a computation and retrieved some fact from memory, I wouldn't have direct access to the process by which it was retrieved, but I would suddenly have the information in "cache." Why isn't all thought and experience like that? I can imagine an existence where I knew I was currently receiving input from my eyes, which were looking at the sky and perceiving a shade we call blue, without there being any qualia.

For me, the hard problem of consciousness is exactly the question, "How can a physical/computational process give rise to qualia or even the 'illusion' of qualia?" If you tell me that life is not a vital force but is instead very complex tiny machines which you cannot yet explain to me, I can accept that because, upon close examination, those are not different kinds of things. They are both material objects obeying physical laws. When we say qualia are instead complex computations that you cannot yet explain to me, I can't quite accept that because even on close examination, computation and qualia seem to be fundamentally different kinds of things and there seems to be an uncrossable chasm between them.

I sometimes worry that there are genuine differences in people's phenomenological experiences which are causing us to be unable to comprehend what others are talking about. Similar to how it was discovered that certain people don't actually have inner monologues or how some people think in words while others think only in pictures.

To first order, moral realism and moral anti-realism are the same thing

I can parse your comment a couple of different ways, so I will discuss multiple interpretations but forgive me if I've misunderstood.

If we are talking about 3^^^3 dust specks experienced by that many different people, then it doesn't change my intuition. My early exposure to the question included such unimaginably large numbers of people. I recognize scope insensitivity may be playing a role here, but I think there is more to it.

If we are talking about myself or some other individual experiencing 3^^^3 dust specks (or 3^^^3 people each experiencing 3^^^3 dust specks), then my intuition considers that a different situation. A single individual experiencing that many dust specks seems to amount to torture. Indeed, it may be worse than 50 years of regular torture because it may consume many more years to experience them all. I don't think of that as "moral learning" because it doesn't alter my position on the former case.

If I have to try to explain what is going on here in a systematic framework, I'd say the following. Splitting up harm among multiple people can be better than applying it all to one person: one person stubbing a toe on two different occasions is marginally worse than two people each stubbing one toe. Harms/moral offenses may separate into distinct classes such that no amount of a lower class can rise to match a higher class: there may be no number of rodent murders that is morally worse than a single human murder. Duration of harm can outweigh intensity: imagine mild electric shocks that are painful but cause no injury, and where receiving one followed by another doesn't make the second any more physically painful. A few slightly more intense shocks over a short time may be better than many more mild shocks over a long time. This last consideration comes in when weighing 50 years of torture against 3^^^3 dust specks experienced by one person, though there the evaluation is much harder to make.
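The "classes of harm" idea above amounts to a lexicographic ordering: compare harms by class first, and only break ties by quantity, so no finite amount of a lower class ever reaches a higher one. A toy sketch (the class ranks and tuple encoding are my own illustration, not a claim about any standard framework):

```python
# Higher rank = graver class of harm.
MURDER, TORTURE_YEAR, DUST_SPECK = 2, 1, 0

def total_harm(events):
    """Sum amounts within each class; return a tuple from gravest class
    down, so Python's lexicographic tuple comparison gives the ordering."""
    totals = {}
    for rank, amount in events:
        totals[rank] = totals.get(rank, 0) + amount
    return tuple(totals.get(rank, 0) for rank in (MURDER, TORTURE_YEAR, DUST_SPECK))

one_murder = total_harm([(MURDER, 1)])
many_specks = total_harm([(DUST_SPECK, 10**9)])
print(one_murder > many_specks)  # True: no speck count reaches murder's class
```

Within a single class the comparison stays additive, which is why this scheme keeps the "splitting harm is better" intuition while blocking the specks-outweigh-torture conclusion across classes.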

Those explanations feel a little like confabulations and rationalizations. However, they don't seem to be any more so than a total utilitarianism or average utilitarianism explanation for some moral intuitions. They do, however, give some intuition why a simple utilitarian approach may not be the "obviously correct" moral framework.

If I failed to address the "aggregation argument," please clarify what you are referring to.

To first order, moral realism and moral anti-realism are the same thing

At least as applied to most people, I agree with your claim that "in practice, and to a short-term, first-order approximation, moral realists and moral anti-realists seem very similar." As a moral anti-realist myself, a likely explanation for this seems to be that they are engaging in the kind of moral reasoning that evolution wired into them. Both the realist and anti-realist are then offering post hoc explanations for their behavior.

With any broad claims about humans like this, there are bound to be exceptions. Thus all the qualifications you put into your statement. I think I am one of those exceptions among the moral anti-realist. Though, I don't believe it in any way invalidates your "Argument A." If you're interested in hearing about a different kind of moral anti-realist, read on.

I'm known in my friend circle for advocating that rationalists should completely eschew the use of moral language (except as necessary to communicate with, or manipulate, people who do use it). I often find it difficult to discuss morality with both moral realists and anti-realists, and I don't often find that I "can continue to have conversations and debates that are not immediately pointless." I often see people who claim to be moral anti-realists engaging in behavior and argument that seem antithetical to an anti-realist position: for example, exhibiting intense moral outrage and thinking it justified or proper (especially when they will never express that outrage to the offender, only to disinterested third parties). If someone engages in a behavior you would prefer they not, the question is how you can modify their behavior. It doesn't make sense to get angry when others act on wants that differ from yours, and likewise it doesn't make sense to get mad at others for not behaving according to your moral intuitions (except possibly in their presence, as a strategy for changing their behavior).

To a great extent, I have embraced the fact that my moral intuitions are an irrational set of preferences that don't have to be, and never will be, made consistent. Why should I expect my moral intuitions to be any more consistent than my preferences in food or in whom I find physically attractive? I won't claim I never engage in "moral learning," but it is significantly reduced and more often takes the form of learning I had mistaken beliefs about the world than of changing moral categories. When debating the torture vs. dust specks problem with friends, I came to the following answer: I prefer dust specks. Why? Because my moral intuitions are fundamentally irrational, but I predict I would be happier with the dust-specks outcome. I fully recognize that this is inconsistent with my other intuition that harms are somehow additive, and with the clear math that any unbounded, strictly increasing function for combining the harm from dust specks admits a number of people receiving dust specks in their eyes whose total harm tallies to significantly more than the torture. (Though there are other functions for calculating total utility that can lead to the dust-specks answer.)
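For concreteness, the additive version of that math, with symbols of my own choosing (ε for one speck's disutility, T for the torture's):

```latex
% \varepsilon: disutility of one dust speck; T: disutility of 50 years of torture.
% Additive aggregation over N speck recipients:
H(N) = N\varepsilon \quad\Rightarrow\quad H(N) > T \iff N > T/\varepsilon.
% Since 3\uparrow\uparrow\uparrow 3 \gg T/\varepsilon for any fixed \varepsilon > 0,
% additivity forces the torture answer. Only a bounded aggregator,
% \sup_N f(N) \le B < T, escapes this conclusion.
```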

Military AI as a Convergent Goal of Self-Improving AI

I'm not going to sign up with some random site. If you are the author, please post a copy that doesn't require a signup.
