sure, this is always a consideration. i'd even claim that the "wait... what about the negative side effects?" question is a potential expected-value spoiler for pretty much all longtermist interventions (since they often aim at effects multiple causal steps down the road), and as such it isn't really specific to software.
great idea! since my metamed days i’ve been wishing there were a prediction market for personal medical outcomes — it feels like manifold’s mechanism might be a good fit for this (eg, at the extreme end, consider the “will this be my last market if i undertake surgery X at Y?” question). should you decide to develop this aspect at some point, i’d be very interested in supporting/subsidising it.
actually, the premise of david brin’s novel existence is a close match to moravec’s paragraph (not a coincidence, i bet, given that david hung around similar circles).
confirmed. as far as i can tell (i’ve talked to him for about 2h in total) yi really seems to care, and i’m really impressed by his ability to influence such official documents.
indeed, i even gave a talk almost a decade ago about the evolution:humans :: humans:AGI symmetry (see below)!
what confuses me, though, is that the "is a general reasoner" and "can support cultural evolution" properties seem to have emerged pretty much simultaneously in humans -- a coincidence that requires its own explanation (or dissolution). furthermore, eliezer seems to think the former property is much more important / discontinuity-causing than the latter. and, indeed, the outsized progress made by individual human reasoners (scientists/inventors/etc.) seems to evidence such a view.
amazing post! scaling up the community of independent alignment researchers sounds like one of the most robust ways to convert money into relevant insights.
indeed they are now. retrocausality in action? :)
well, i've always considered human life extension less important than "civilisation's life extension" (ie, xrisk reduction). still, they're both very important causes, and i'm happy to support both, especially given that they don't compete much for talent. as for LRI specifically, i believe they simply haven't applied to the more recent SFF grant rounds.