Jeff Rose

Young kids catching COVID: how much to worry?

Interesting. We are in somewhat the same boat: fully vaccinated adults with a two-year-old. I think where we come out is as follows.

(1) The short-term risks of COVID to kids are clearly lower than for adults. The long-term risks are presently unknown.

(2) It is highly likely (>90%) that we will be able to vaccinate young children by next year, so any risk-reducing measures we take will be temporary. (Also, see (5).)

(3) The risks from outdoor activities and from vaccinated people are very low. Therefore, we are fine with outdoor activities, masked or not, and with socializing with fully vaccinated people.

(4) There are limited gains from indoor activities with unvaccinated people, so we will not bring our daughter indoors with unmasked unvaccinated people or unnecessarily indoors with people whose vaccine status is unknown.

(5) COVID prevalence here is dropping, whether because of increased vaccination or otherwise. If those rates stay down due to increased vaccination, we can relax these restrictions. (A rough back-of-envelope sketch of how we compare these risks follows below.)
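To make points (3) and (4) concrete, here is a minimal microCOVID-style back-of-envelope sketch. Every number in it (the prevalence, the per-hour transmission rate, the outdoor and vaccination multipliers) is a hypothetical placeholder rather than a measured value; the point is only the structure of the comparison.

```python
# Hypothetical back-of-envelope risk comparison (microCOVID-style).
# Every number below is an illustrative placeholder, not a measurement.

PREVALENCE = 0.002          # assumed fraction of contacts currently infectious
BASE_TRANSMISSION = 0.06    # assumed per-hour transmission chance indoors, unmasked

# Assumed risk multipliers for different conditions.
OUTDOOR = 0.05              # being outdoors cuts risk sharply
VACCINATED_CONTACT = 0.1    # the other party is fully vaccinated

def infection_risk(hours, *multipliers):
    """Chance our child is infected during one activity."""
    risk_per_hour = PREVALENCE * BASE_TRANSMISSION
    for m in multipliers:
        risk_per_hour *= m
    return 1 - (1 - risk_per_hour) ** hours

print(f"2h outdoors, unmasked:             {infection_risk(2, OUTDOOR):.5%}")
print(f"2h indoors w/ vaccinated adults:   {infection_risk(2, VACCINATED_CONTACT):.5%}")
print(f"2h indoors w/ unvaccinated adults: {infection_risk(2):.5%}")
```

Under these placeholder numbers, the indoor-unvaccinated scenario is one to two orders of magnitude riskier than the alternatives, which is the shape of the tradeoff behind points (3) and (4).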

Why did it take so long to do the Fermi calculation right?

The more interesting question is where else do we see something similar occurring?

For example, historically, income in retirement was usually discussed in terms of expected value. More recently, we've begun to see discussions of retirement that focus on the probability of running out of money. Are there other areas where people focus on expected outcomes as opposed to the probability of X occurring? (A toy simulation of the distinction is sketched below.)
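To illustrate the distinction, here is a minimal Monte Carlo sketch. The parameters (mean return, volatility, withdrawal rate, horizon) are hypothetical, chosen only to show that two portfolios can look similar by expected value while differing sharply in the probability of ruin.

```python
import random

def simulate(mean_return, volatility, trials=10_000, years=30,
             start=1_000_000, withdrawal=50_000):
    """Monte Carlo retirement draw-down: returns
    (expected ending wealth, probability of running out of money)."""
    total, ruins = 0.0, 0
    for _ in range(trials):
        wealth = start
        for _ in range(years):
            wealth = wealth * (1 + random.gauss(mean_return, volatility)) - withdrawal
            if wealth <= 0:
                wealth = 0
                ruins += 1
                break
        total += wealth
    return total / trials, ruins / trials

# Two hypothetical portfolios: same mean return, different volatility.
for vol in (0.05, 0.20):
    ev, p_ruin = simulate(mean_return=0.05, volatility=vol)
    print(f"volatility {vol:.0%}: expected ending wealth ${ev:,.0f}, P(ruin) {p_ruin:.1%}")
```

The expected-value column alone hides exactly the information (the chance of going broke) that the newer framing makes central.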

Security Mindset and the Logistic Success Curve

The bigger problem here is that, as noted in the post, (0) it is always faster to do things in a less secure manner. If you further assume:

(1) multiple competitors trying to build AI (and if this is not your assumption, I would like to hear a basis for it);

(2) at least some competitors who believe that the first AI created will be in a position of unassailable dominance (this appears to be the belief of at least some people, including but not necessarily limited to those who believe a hard takeoff is likely);

(3) some overlap between the groups described in (1) and (2) (again, if you don't think this will be the case, I would like to hear a basis for it); and

(4) varying levels of concern about the potential damage caused by an unfriendly AI (even if you believe that the average and minimum levels of concern will rise as we get closer to developing AI, variance is likely),

then the first AI to be produced is likely to be highly insecure (i.e., to have non-robust friendliness). (A toy race model capturing this logic is sketched below.)
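As a purely illustrative exercise, here is a toy race model built from premises (0)-(4). All parameters (the number of labs, how much security slows development, the distribution of concern across labs) are hypothetical; the sketch only shows why, under those premises, the winner of a speed race tends to be a low-security project.

```python
import random

def race(n_labs=10, trials=10_000):
    """Toy AI race under premises (0)-(4): investing in security
    slows a lab down, and the first lab to finish wins."""
    insecure_wins = 0
    for _ in range(trials):
        best_time, winner_security = float("inf"), None
        for _ in range(n_labs):
            # (4) each lab's level of concern (hence security effort) varies
            security = random.random()            # 0 = reckless, 1 = maximally careful
            # (0) security costs time: more careful -> slower, plus project noise
            finish_time = 1 + 2 * security + random.expovariate(2)
            if finish_time < best_time:
                best_time, winner_security = finish_time, security
        if winner_security < 0.5:                 # call the winner "insecure"
            insecure_wins += 1
    return insecure_wins / trials

print(f"P(first AI comes from a low-security lab): {race():.1%}")
```

Selecting on speed selects against security, so the more labs in the race, the more lopsided this probability becomes.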

Inadequacy and Modesty

"If you want to outperform—if you want to do anything not usually done—then you’ll need to conceptually divide our civilization into areas of lower and greater competency. "

The idea quoted above seems wrong in practice. You don't need to conceptually divide our civilization into areas of competency; you need to see what is actually being done in the area in which you want to outperform: in particular, (i) whether your proposed activity or solution has already been tried or assessed; and (ii) the degree to which existing evidence says it will or won't work.

Also, if civilizational competence is intended to cover something beyond an efficient market, it would make sense to use a different example.