Comments

Jaan Tallinn's 2021 Philanthropy Overview

sure, this is always a consideration. i'd even claim that the "wait... what about the negative side effects?" question is a potential expected-value spoiler for pretty much all longtermist interventions (because they often aim for effects that are multiple causal steps down the road), and as such not really specific to software.

Create a prediction market in two minutes on Manifold Markets

great idea! since my metamed days i’ve been wishing there was a prediction market for personal medical outcomes — it feels like manifold’s mechanism might be a good fit for this (eg, at the extreme end, consider the “will this be my last market if i undergo surgery X at Y?” question). should you decide to develop such an aspect at some point, i’d be very interested in supporting/subsidising it.

Biology-Inspired AGI Timelines: The Trick That Never Works

actually, the premise of david brin’s novel “existence” is a close match to moravec’s paragraph (not a coincidence, i bet, given that david hung around similar circles).

[Linkpost] Chinese government's guidelines on AI

confirmed. as far as i can tell (i’ve talked to him for about 2h in total), yi really seems to care, and i’m really impressed by his ability to influence such official documents.

Soares, Tallinn, and Yudkowsky discuss AGI cognition

indeed, i even gave a talk almost a decade ago about the evolution:humans :: humans:AGI symmetry (see below)!

what confuses me though is that the "is a general reasoner" and "can support cultural evolution" properties seemed to emerge pretty much simultaneously in humans -- a coincidence that requires its own explanation (or dissolution). furthermore, eliezer seems to think that the former property is much more important / discontinuity-causing than the latter. and, indeed, the outsized progress made by individual human reasoners (scientists/inventors/etc.) seems to support such a view.

How To Get Into Independent Research On Alignment/Agency

amazing post! scaling up the community of independent alignment researchers sounds like one of the most robust ways to convert money into relevant insights.

Can you control the past?

indeed they are now. retrocausality in action? :)

Jaan Tallinn's 2020 Philanthropy Overview

well, i've always considered human life extension less important than "civilisation's life extension" (ie, xrisk reduction). still, they're both very important causes, and i'm happy to support both, especially given that they don't compete much for talent. as for the LRI specifically, i believe they simply haven't applied to more recent SFF grant rounds.