
Reading AI alignment posts on here has made me realize how a lot of these ideas can potentially also apply to societal structures. Our social institutions are kind of like an AI system that uses humans for its computing units. Unfortunately, our institutions are not that “friendly”. In fact, badly aligned institutions are probably a major cause of the lack of progress in the developing world. Has there been much thought/discussion on these topics? Is there potential for adapting AI safety research to social mechanism design?

It's usually thought about the other way, i.e. we already are trying and failing to solve the human alignment problem (using social structures to get humans to do things in accord with particular values), so solutions to AI alignment must be of a class that cannot be or has not been attempted with humans. Examples can be drawn from business attempts to organize workers around a mission/objective/goal, state attempts to control people, and religious attempts to align behavior with religious teachings.

But I don't see much serious technical research on societal alignment at all. (Most political science is just high-status people saying charismatic opinions, nothing technical.) That cultural evolution has somewhat failed at this endeavor (to be fair, it still mostly works) does not mean we should conclude the project is doomed.

I'm thinking "project [/product] announcement". I encourage you to add a tag you think works; if anyone comes up with a better name, we can always change it later.

When the Coronavirus started in the winter, I spent quite a bit of time reading related info, mostly from the LW diaspora but some from elsewhere. After some time passed, I noticed that this seemed to be low-value and procrastinatory, so I have not read much more about the topic. Recently, my sibling (who has had contact with a person known to be infected with Coronavirus) has been showing some cold symptoms, and I am wondering if there are any summary posts of the practical stuff?

I took a look at the practical advice thread, but I had already read the top posts before, and the thread is also quite old and scattered. Recent Coronavirus posts seem to be only from Zvi, who focuses more on epidemiological statistics than on practical advice, as far as I have read. Hasn't there been new actionable info in recent months?

Your sibling probably should get tested, but I can't tell you where and how; that is local knowledge. Tests are simple, and results will be known in a few days.

If you live in the same place, ventilate your rooms a lot. The higher the concentration of virus in the air, the greater the risk of infection and the worse the outcomes. (Often the first infected person in the household has the best outcome, because everyone else starts with a higher initial virus load.)

Otherwise, the usual advice: limit your contact with people, wear a face mask, don't touch eyes/nose/mouth, wash your hands, perhaps disinfect the stuff you buy with bleach or 70% alcohol, etc.

Another Vitamin D Covid study:

https://academic.oup.com/jcem/advance-article/doi/10.1210/clinem/dgaa733/5934827

These caterpillars go dormant when frozen in the Arctic, and come alive again.

Also happens with at least one species of frog. Antifreeze compounds in the body lower the freezing point of water.

Just read Bostrom’s Pascal’s Mugging; can’t the problem be solved as follows?

I have a probability estimate E0 in my head for the mugger giving me X (X being a lot of) utility if I give them my money. E0 is not a number, as my brain does not seem to work with our traditional floating-point numbers. What data structure actually represents E0 is not clear to me, but I can say E0 is a feeling of “empirically next to impossible, game-theoretically inadvisable to act on it being true”. Now, what’s the probability of my getting X utility tomorrow without giving the mugger my money? Let’s call that E1. E1 is “empirically next to impossible.” So giving my money to the mugger does NOT increase my expected utility gain at all! In fact, it decreases it, as I process E0 as a lower probability than E1 (because E0 is game-theoretically negative while E1 is neutral).

Now, you might say this is not solving the problem but bypassing it. I don’t feel this is true. Anyone who has studied numerical computation knows that errors are important and we can never have precise numbers.

The problem is solved if the limit of p(x)·x as x approaches infinity, where x is the utility the mugger offers and p(x) is the probability you assign to receiving it, is 0. (This is the case if p(x) ≤ x^(-2). If that's an upper bound, however loose, then the problem is solved.)
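A quick numerical sketch of that condition, using the illustrative prior p(x) = x^(-2) (my choice of a prior that meets the bound; the comment only asserts the bound itself): the expected payoff p(x)·x then equals 1/x and shrinks as the offered utility grows, so ever-larger offers cannot dominate the decision.

```python
# Sketch: if the probability assigned to a mugger's offer of x utility
# decays at least as fast as x^(-2), the expected payoff p(x) * x
# vanishes as the offered utility grows.

def p(x):
    """Illustrative prior satisfying the bound p(x) <= x^(-2)."""
    return x ** -2

for x in [10, 10**3, 10**6, 10**9]:
    expected = p(x) * x  # equals 1/x for this particular prior
    print(f"offer = {x:>12,d}   expected payoff = {expected:.3e}")
```

With this prior the mugger's expected payoff is monotonically decreasing in the size of the promised utility, which is exactly the "bigger claims get proportionally less credence" intuition formalized.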

Are there any good introductory textbooks on decision theory? I searched some months ago, but only found a nontechnical philosophical book ...

Do you know of a free solution (not necessarily Free Software, though that’s preferred) I can use to turn speech into text? (Also no cloud solutions; I do not have the credit card and Western phone number they require.)
