Comments

Twitter has announced a new policy of deleting accounts that have had no activity for a few years. I used the Wayback Machine to archive Grognor's primary Twitter account here. Hal Finney's wife is keeping his account alive.
I do not know who else may have died, or been cryo-suspended, over the years of LW, nor how long the window of action is for preserving their accounts.

Or A*, which is a much more computationally efficient and deterministic way to minimize the distance to finish the maze, if you have an appropriate heuristic. I don't have an argument for it, but I feel like finding a good heuristic and leveraging it probably works very well as a generalizable strategy. 
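
For concreteness, here's a minimal A* sketch on a toy grid maze with a Manhattan-distance heuristic (the maze and code are just my illustration; with an admissible heuristic like this, A* returns a shortest path):

```python
# Minimal A* on a 4-connected grid maze, using the Manhattan distance to the
# goal as an admissible heuristic.
import heapq

def astar(grid, start, goal):
    """grid: list of strings, '#' = wall. Returns shortest path length or None."""
    def h(pos):  # Manhattan distance to the goal
        return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

    frontier = [(h(start), 0, start)]   # entries are (f = g + h, g, position)
    best_g = {start: 0}
    while frontier:
        f, g, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return g
        if g > best_g.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] != '#':
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

maze = ["....#",
        ".##.#",
        ".#...",
        ".#.#.",
        "...#."]
print(astar(maze, (0, 0), (4, 4)))  # -> 8
```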

Iran is an agent, with a constrained supply of critical resources like nuclear engineers, centrifuges, etc.

AI development is a robust, agent-agnostic process that has an unlimited number of researchers working in adjacent areas who could easily cross-train to fill a deficit, an unlimited number of labs which would hire researchers from DeepMind and OpenAI if they closed, and an unlimited number of GPUs to apply to the problem. 

Efforts at getting the second-tier AI labs to take safety more seriously, in order to give the top tier more slack, will probably push back AI timelines a little? But most of the activities that my brain labels "nonviolent resistance" are the type that will be counterproductive unless there's already a large social movement behind them.

For personal communications, meta-conversations seem fine.

If you're setting up an organization, though, you should consider adopting some existing, time-tested system for maintaining secrets. For example, you could classify secrets into categories: those which would cause exceptionally grave harm to the values of the secret's originator (call this category, say, "TS"); those which would cause serious harm ("S"); and those which would cause some noticeable harm ("C"). Set down appropriate rules for the handling of each type of secret--for example, you might not even write down the TS ones unless you had a very secure safe to store them in, or discuss them verbally outside of protected meeting rooms; and you might not do anything with the S secrets on an internet-connected computer. Anything above C might require a written chain of custody, with people taking responsibility for both the creation and destruction of any recorded form of the information.

You would then have to watch for mutual information in your communications, and see that no combination of the information that you publicly released could cause a large update toward one of the secrets you were keeping.  You'd also want to think of some general steps to take after an unplanned disclosure of each type of secret.
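
As a toy illustration of that mutual-information check (the data and labels here are invented, and in practice estimating the joint distribution is the hard part):

```python
# Toy check of how many bits a public release leaks about a kept secret,
# estimated from (released statement, secret) samples. Purely illustrative.
from collections import Counter
from math import log2

def mutual_information(pairs):
    """pairs: list of (released, secret) samples. Returns I(R; S) in bits."""
    n = len(pairs)
    joint = Counter(pairs)
    p_r = Counter(r for r, _ in pairs)
    p_s = Counter(s for _, s in pairs)
    return sum((c / n) * log2((c / n) / ((p_r[r] / n) * (p_s[s] / n)))
               for (r, s), c in joint.items())

# A release that correlates with the secret leaks information: I(R; S) > 0.
samples = [("travel freeze", "merger"), ("travel freeze", "merger"),
           ("travel freeze", "no merger"), ("no news", "no merger"),
           ("no news", "no merger"), ("no news", "merger")]
print(f"{mutual_information(samples):.3f} bits leaked")
```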

It may not sound like the most efficient way to do things, but there's some pretty high Chesterton's Fences around this kind of policy.

The answer I came up with, before reading, is that the proper maxent distribution obviously isn't uniform over every Planck interval from here until protons decay; it's also obviously not a Gaussian with a midpoint halfway to when protons decay. But the next obvious answer is a truncated normal distribution. And that is not a thought conducive to sleeping well.
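
For reference, the textbook maximum-entropy result this seems to be circling, assuming the constraints in play are moment constraints on a bounded interval:

```latex
% Maximum entropy on [0, T] (say, from now until the protons decay),
% subject to the first K moment constraints, gives an exponential-family
% density truncated to the interval:
%   no constraints          -> uniform on [0, T]
%   fixed mean              -> truncated exponential
%   fixed mean and variance -> truncated normal
p^*(x) \;\propto\; \exp\!\Big(-\textstyle\sum_{k=1}^{K} \lambda_k x^k\Big),
\qquad 0 \le x \le T .
```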

I've used Eliezer's prayer to good effect, but it's a bit short. And I have considered The Sons of Martha, but it's a bit long.

Has anyone, in their rationalist readings, found something that would work as a Thanksgiving invocation of a just-right length?

Robin Hanson said, with Eliezer eventually concurring, that "bets like this will just recover interest rates, which give the exchange rate between resources on one date and resources on another date."

E.g., it's not impossible to bet money on the end of the world, but it's impossible to do it in a way substantially different from taking a loan.
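
As an illustrative bit of arithmetic (rate and amounts invented): at a 3%/year market interest rate, "you pay me $100 today, and I pay you $134 in ten years if the world still exists" has the same cash flows as a ten-year loan, since

```latex
\$100 \times (1.03)^{10} \approx \$134 .
```

The doom-believer expects never to repay, and the skeptic, who assigns doom negligible probability, is indifferent between this and lending at the market rate; so the terms of any such bet just track prevailing interest rates.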

I built a thing.

UVC lamps inactivate viruses in the air, but they also harm skin, eyes, and DNA. So I made a short duct out of cardboard, with a 60W UVC corn bulb in a recessed compartment, and put a fan in it. 

I plan to run it whenever someone other than my wife and me visits my house. 

https://imgur.com/a/QrtAaUz

Note that Mortal Engines--that steampunk movie with the mobile, carnivorous cities--was released halfway between the original publication of this essay and today.

Given the difficulties people have mentioned with moving high-density housing between and through cities, maybe we need small cities on SMTs?

These were some great questions. I doubt a few of the answers, however. For example:

My estimate of how far off LEV is with 50% probability started out at 25 years 15 or so years ago, and is now 17 years, so let’s use round numbers and say 20 years. Those estimates have always been explicitly "post-money", though - in other words, when I say the money would make 10 years of difference, I mean that without the money, it would be 30 years. I think $1B is enough to remove that factor of 2-3 that you mentioned in the previous question, i.e. to take it down to around 1, because it would add a digit to our budget for 20 years. That factor is already coming down, and I expect that it will continue to do so as further progress is made at the bench, which is why I average the benefit out to a factor of 1.5 (i.e. 30/20).

Aubrey de Grey admits to drinking four pints of beer a day, and I believe his total ethanol consumption is much higher (via evidence which is strong to me, but not to you). He's 57, and looks older than many in their 70s. The evidence may be ambiguous on the longevity effects of <2 drinks per day, but it's quite clear on 4 or over.

This doesn't seem like the behavior of someone who truly believes, in the sense of constraining his expected experiences, that his remaining expected lifespan is almost exactly the time to LEV. I don't know what the real timeline to LEV is, but Dr. de Grey acts like he believes it's well over 30 years.
