moridinamael's Comments

LessWrong Coronavirus Agenda

Scaling up testing seems to be critical. With easy, fast, and ubiquitous testing, huge numbers of individuals could be tested as a matter of routine, and infected people could begin self-isolating before showing symptoms. With a sufficiently aggressive testing policy, true "containment" could potentially be achieved without resorting to complete economic lockdown, which has its own devastating long-term consequences.

Cheap, fast, free testing, possibly with an incentive to get tested regularly even if you don't feel sick, could move us beyond flattening the curve and into actual containment.

Even a test with relatively poor accuracy helps, in terms of flattening the curve, provided it is widely distributed.
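To make that concrete with a toy calculation (all of the numbers below are made up purely to illustrate the direction of the effect, not to describe any real test or disease):

```python
# Toy illustration with hypothetical numbers: even an imperfect test,
# if used widely, can cut effective transmission substantially.

R0 = 2.5                 # assumed basic reproduction number
coverage = 0.7           # fraction of the population tested regularly (assumption)
sensitivity = 0.8        # chance the test catches a true infection (assumption)
isolation_effect = 0.9   # fraction of onward transmission prevented once a
                         # detected case self-isolates (assumption)

# Fraction of transmission removed by test-and-isolate:
removed = coverage * sensitivity * isolation_effect

R_eff = R0 * (1 - removed)
print(f"Effective R with imperfect but widespread testing: {R_eff:.2f}")
# With these made-up numbers, R_eff comes out around 1.24 -- not containment
# by itself, but a large reduction relative to R0 = 2.5, and every additional
# measure stacks on top of it.
```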

So I might phrase this as a set of questions:

  • Should I get tested, if testing is available?
  • How do we best institute wide-scale testing?
  • How do we most quickly enact wide-scale testing?
How Do You Convince Your Parents To Prep? To Quarantine?

As my brother pointed out to me, arguments are not won in real time. Give them information in packets and calmly deal with objections as they come up, then disengage and let them process.

What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world?

Perhaps there’s some obvious way in which I’m misunderstanding, but if 10% of people contract the virus over a shortish time frame, then won’t essentially everyone contract it eventually? Why would it reach 10% penetration and then stop? Isn’t this like asking what happens if 10% of people contract influenza? Maybe in a given year your odds of getting the flu are X%, but your odds of getting it at least once in 10 years are roughly 10*X% (at least while X is small). Am I missing something that implies the virus will be corralled and gotten under control at a certain point?
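Spelling out the arithmetic I'm gesturing at (illustrative numbers only, and assuming each year's risk is roughly independent):

```python
# If the chance of catching the virus in any one year is p, and years are
# roughly independent, the chance of catching it at least once in n years is
#     1 - (1 - p)**n
# which is approximately n*p when p is small.

p = 0.10   # hypothetical 10% chance per year
n = 10     # years

exact = 1 - (1 - p) ** n
approx = n * p

print(f"At least one infection in {n} years: exact {exact:.2%}, "
      f"linear approximation {approx:.0%}")
# exact is about 65%, while the linear approximation says 100% -- so the
# "roughly 10*X%" shorthand overstates things for X this large, but the
# qualitative point stands: modest annual risk compounds toward near-certainty.
```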

The two-layer model of human values, and problems with synthesizing preferences

Fantastic post; I'm still processing.

One bite-sized thought that occurs to me is that maybe this coupling of the Player and the Character is one of the many things accomplished by dreaming. The mind-system confabulates bizarre and complex scenarios, drawn in some sense from the distribution of possible but not highly probable sensory experiences. The Player provides an emotional reaction to these scenarios - you're naked in school, you feel horrifying levels of embarrassment in the dream, and the Character learns to avoid situations like this one without ever having to directly experience it.

I think that dreaming does this sort of thing in a general way, by simulating scenarios and using those simulations to propagate learning through the hierarchy, but in particular it would seem that viewing the mind in terms of Player/Character gives you a unique closed-loop situation that really bootstraps the ability of the Character to intuitively understand the Player's wishes.

How Doomed are Large Organizations?

I would love to see an answer to or discussion of this question. The OP's premise that large companies would be better off split into much, much smaller companies is a bold and shocking claim. If conglomeration and the growth of large firms were a purely Molochian, net-negative proposition, then the world would look different than it does.

Reality-Revealing and Reality-Masking Puzzles

I'm reminded of the post Purchase Fuzzies and Utilons Separately.

The actual human motivation and decision system operates by something like "expected valence" where "valence" is determined by some complex and largely unconscious calculation. When you start asking questions about "meaning" it's very easy to decouple your felt motivations (actually experienced and internally meaningful System-1-valid expected valence) from what you think your motivations ought to be (something like "utility maximization", where "utility" is an abstracted, logical, System-2-valid rationalization). This is almost guaranteed to make you miserable, unless you're lucky enough that your System-1 valence calculation happens to match your System-2 logical deduction of the correct utilitarian course.

Possible courses of action include:

1. Brute-forcing it: just doing what System-2 calculates is correct. This will involve a lot of suffering, since your System-1 will be screaming bloody murder the whole time, and I think most people will simply fail to achieve this. They will break.

2. Retraining your System-1 to find different things intrinsically meaningful. This can also be painful because System-1 generally doesn't enjoy being trained. Doing it slowly, and leveraging your social sphere to help warp reality for you, can help.

3. Giving up, basically. Determining that you'd rather just do things that don't make you miserable, even if you're being a bad utilitarian. This will cause ongoing low-level dissonance as you're aware that System-2 has evaluated your actions as being suboptimal or even evil, but at least you can get out of bed in the morning and hold down a job.

There are probably other options. I think I basically tried option 1, collapsed into option 3, and then eventually found my people and stabilized into the slow glide of option 2.

The fact that utilitarianism is not only impossible for humans to execute, but can actually cause great internal suffering merely to know about, is probably not talked about enough.

ialdabaoth is banned

For the record, I view the fact that I commented in the first place, and that I now feel compelled to defend my comment, as being Exhibit A of the thing that I'm whining about. We chimps feel compelled to get in on the action when the fabric of the tribe is threatened. Making the banning of a badguy the subject of a discussion rather than being an act of unremarked moderator fiat basically sucks everybody nearby into a vortex of social wagon-circling, signaling, and reading a bunch of links to figure out which chimps are on the good guy team and which chimps are on the bad guy team. It's a significant cognitive burden to impose on people, a bit like an @everyone in a Discord channel, in that it draws attention and energy in vastly disproportionate scope relative to the value it provides.

If we were talking about something socio-emotionally neutral like changing the color scheme or something, cool, great, ask the community. I have no opinion on the color scheme, and I'm allowed to have no opinion on the color scheme. But if you ask me what my opinion is on Prominent Community Abuser, I can't beg off. That's not an allowed social move. Better not to ask, or if you're going to ask, be aware of what you're asking.

Sure, you can pull the "but we're supposed to be Rationalists(tm)" card, as you do in your last paragraph, but the Rationalist community has pretty consistently failed to show any evidence of actually being superior, or even very good, at negotiating social blow-ups.

ialdabaoth is banned

I wasn’t really intending to criticize the status quo. Social consensus has its place. I’m not sure moderation decisions like this one require social consensus.

ialdabaoth is banned

If you're looking for feedback ...

On one level I appreciate this post as it provides delicious juicy social drama that my monkey brain craves and enjoys on a base, voyeuristic level. (I recognize this as being a moderately disgusting admission, considering the specific subject matter; but I'm also pretty confident that most people feel the same, deep down.) I do think there is some value in understanding the thought processes behind community moderation, but that value is mixed.

On another level, I would rather not know about this. I am fine with Less Wrong being moderated by a shadowy cabal. If the shadowy cabal starts making terrible moderation decisions, for example banning everyone who is insufficiently ideologically pure, or just going crazy in some general way, it's not like there's anything I can do about it anyway. The good/sane/reasonable moderator subjects their decisions to scrutiny, and thus stands to be perpetually criticized. The bad/evil moderator does whatever they want, doesn't even try to open up a dialogue, and usually gets away with it.

Fundamentally you stand to gain little and lose much by making posts like this, and now I've spent my morning indulging myself reading up on drama that has not improved my life in any way.

Mental Mountains

Maybe, but I don't think that we developed our tendency to lock in emotional beliefs as a kind of self-protective adaptation. I think that all animals with brains lock in emotional learning by default because brains lock in practically all learning by default. The weird and new thing humans do is to also learn concepts that are complex, provisional, dynamic and fast-changing. But this new capability is built on the old hardware that was intended to make sure we stayed away from scary animals.

Most things we encounter are not as ambiguous, complex and resistant to empirical falsification as the examples in the Epistemic Learned Helplessness essay. The areas where both right and wrong positions have convincing arguments usually involve distant, abstract things.
