davidad has a 10-min talk out on a proposal about which he says: “the first time I’ve seen a concrete plan that might work to get human uploads before 2040, maybe even faster, given unlimited funding”.
I think the talk is a good watch, but the dialogue below is pretty readable even if you haven't seen it. I'm also putting some summary notes from the talk in the Appendix of this dialogue.
I think of the promise of the talk as follows. It might seem that to make the future go well, we have to either make general AI progress slower, or make alignment progress differentially faster. However, uploading seems to offer a third way: instead of making alignment researchers more productive, we "simply" run them faster. This seems...
It seems to me that the real issue is rationally weighing reference classes when using multiple models. I want to assign them weights so that they form a good ensemble to build my forecasting distribution from, and these weights should ideally reflect my prior on each model being relevant and good, its complexity, and perhaps whether its biases are countered by other reference classes. In the computationally best of all possible worlds I go down the branching rabbit hole and also make probabilistic estimates of the weights. I could also wing it.
The problem is t...
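The weighting idea above can be sketched concretely. A minimal example (every distribution and weight here is a made-up assumption, just to show the mechanics of a weighted reference-class ensemble):

```python
import numpy as np

# Toy sketch: each reference class is a forecast distribution, each gets
# a prior weight reflecting judged relevance and quality, and the ensemble
# forecast is the weighted mixture of their samples. All numbers invented.

rng = np.random.default_rng(0)

model_samples = [
    rng.normal(10.0, 2.0, 5000),      # reference class A
    rng.normal(14.0, 4.0, 5000),      # reference class B
    rng.lognormal(2.5, 0.5, 5000),    # reference class C
]

weights = np.array([0.5, 0.3, 0.2])   # prior weights, summing to 1
n = 10_000

# Resample each model in proportion to its weight to form the mixture.
ensemble = np.concatenate([
    rng.choice(s, size=int(round(w * n)))
    for s, w in zip(model_samples, weights)
])

print(np.percentile(ensemble, [5, 50, 95]))  # the forecast interval
```

The interesting (and hard) part is of course where the weights come from; the mixture itself is mechanical.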
I have been baking for a long time, but it took a surprisingly long while to get to this practical "not a ritual" stage. My problem was that I approached it as an academic subject: an expert tells you what you need to know when you ask, and then you try it. But the people around me knew how to bake in a practical, non-theoretical sense. So while my mother would immediately tell me how to fix a too-runny batter, or stress the importance of working a pie dough quickly, she could not explain why that worked in terms I could understand. Much frustratio...
Awesome find! I really like the paper.
I had been looking at Fisher information myself over the weekend, noting that it might be a way of estimating uncertainty in the estimation via the Cramér-Rao bound (though I quickly found that the algebra got the better of me; it *might* be analytically solvable, but it is messy work).
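When the algebra gets messy, one can estimate the Fisher information numerically instead. A sketch using a simple exponential model as a stand-in (an assumed toy example, not the actual estimator in question):

```python
import numpy as np

# Toy sketch: estimate the Fisher information as the variance of the
# score (the derivative of the per-observation log-likelihood), then the
# Cramér-Rao bound gives var(theta_hat) >= 1 / (n * I(theta)).
# The exponential model here is an illustrative assumption.

rng = np.random.default_rng(1)
theta = 2.0                       # true rate of the exponential toy model
n_sims, eps = 200_000, 1e-5

x = rng.exponential(1 / theta, n_sims)

def loglik(th, x):
    # per-observation exponential log-likelihood: log(th) - th * x
    return np.log(th) - th * x

# Score via central differences; I(theta) is its variance at the truth.
score = (loglik(theta + eps, x) - loglik(theta - eps, x)) / (2 * eps)
I_hat = score.var()

print(I_hat)   # analytic value for the exponential is 1/theta^2 = 0.25
```

This sidesteps the symbolic work entirely, at the cost of Monte Carlo error in the bound.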
I tried doing a PCA of the judgments, to see if there was any pattern in how the predictions were judged. However, the variance explained by the principal components did not decline quickly. The first component explains just 14% of the variance, the next ones 11%, 9%, 8%... It is not as if there is some dominant low-dimensional or clustering explanation for the pattern of good and bad predictions.
No clear patterns when I plotted the predictions in PCA-space: https://www.dropbox.com/s/1jvhzcn6ngsw67a/kurzweilpredict2019.png?dl=0 (In this plot colour denotes mean a...
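For illustration, the explained-variance computation looks like this (the matrix here is random noise standing in for the actual judgment data):

```python
import numpy as np

# Sketch: PCA via the covariance eigenvalues, then the explained-variance
# ratio per component. A slowly declining spectrum like the one described
# above is what "no dominant low-dimensional structure" looks like.
# The data below is random, not the real judgments.

rng = np.random.default_rng(2)
X = rng.integers(1, 6, size=(100, 30)).astype(float)  # judges x predictions

Xc = X - X.mean(axis=0)                  # centre each column
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals = np.linalg.eigvalsh(cov)[::-1]  # eigenvalues, descending
explained = eigvals / eigvals.sum()

print(explained[:4])  # for pure noise the ratios are nearly flat
```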
Another nice example of how this is a known result but not presented in the academic literature:
https://constancecrozier.com/2020/04/16/forecasting-s-curves-is-hard/
The fundamental problem is not even distinguishing exponential from logistic: even if you *know* it is logistic, the parameters you typically care about (inflection point location and asymptote) are badly behaved until after the inflection point. As pointed out in the related Twitter thread, you gain little information about the latter two in the early phase and only information about the ...
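The confounding is easy to demonstrate. A sketch (an assumed toy example, not taken from the linked post): generate logistic data truncated well before the inflection point and check that wildly different asymptotes fit it about equally well.

```python
import numpy as np

# Toy demonstration: in the early phase y ~ K * exp(-r*t0) * exp(r*t),
# so any asymptote K can be compensated by shifting the inflection time
# t0 by ln(K / K_true) / r with almost no change in fit quality.
# All parameter values here are invented for illustration.

rng = np.random.default_rng(3)
K_true, r, t0_true = 100.0, 0.5, 20.0

def logistic(t, K, t0):
    return K / (1 + np.exp(-r * (t - t0)))

t = np.arange(0, 12)             # data ends 8+ time units before t0 = 20
y = logistic(t, K_true, t0_true) + rng.normal(0, 0.05, t.size)

for K in (50.0, 100.0, 200.0, 400.0):
    t0 = t0_true + np.log(K / K_true) / r     # compensating inflection time
    sse = np.sum((y - logistic(t, K, t0)) ** 2)
    print(f"K={K:6.0f}  compensating t0={t0:5.2f}  SSE={sse:.4f}")
```

The sums of squared errors come out almost identical across an 8x range of asymptotes, which is exactly the ill-conditioning described above.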
I think the argument can be reformulated like this: space has very large absolute amounts of some resources - matter, energy, distance (distance is a kind of resource useful for isolation/safety). The average density of these resources is very low (solar in space is within an order of magnitude of solar on Earth) and for matter it is often low-grade (Earth's geophysics has created convenient ores). Hence matter and energy collection will only be profitable if (1) access gets cheap, (2) one can use automated collection with a very low marginal cost - p...
Overall, typographic innovations, like all typography, work best the less they stand out while still doing their job. At least in somewhat academic text with references and notation, subscripting appears to blend right in. I suspect the strength of the proposal is that one can flexibly apply it depending on readers and tone: sometimes it makes sense to say "I~2020~ thought", sometimes "I thought in 2020".
I am seriously planning to use it for inflation adjustment in my book, and may (publisher and test-readers willing) apply it more broadly in the text.
Looking back at our paper, I think the weakest points are (1) we handwave the accelerator a bit too much (I now think laser launching is the way to go), and (2) we also handwave the retro-rockets (it is hard to scale down nuclear rockets; I think a detachable laser retro-rocket is better now). I am less concerned about planetary disassembly and building destination infrastructure: this is standard extrapolation of automation, robotics and APM.
However, our paper mostly deals with sending a civilization's seeds everywhere; it does not deal with near ter...
I have not seen any papers about it, but did look around a bit while writing the paper.
However, a colleague and I analysed laser acceleration and it looks even better, especially since one can use non-rigid lens systems to enable longer boost phases. We developed the idea a fair bit but have not written it up yet.
I would suspect laser is the way to go.
Another domain may be aviation. In the US, from the Wright brothers' first flight in 1903 to the Air Commerce Act of 1926 took 23 years.
Wikipedia: "In the early years of the 20th century aviation in America was not regulated. There were frequent accidents, during the pre-war exhibition era (1910–16) and especially during the barnstorming decade of the 1920s. Many aviation leaders of the time believed that federal regulation was necessary to give the public confidence in the safety of air transportation. Opponents of this view included those who distrusted governme...
S. Jay Olson's work on expanding civilizations is very relevant here, e.g. https://arxiv.org/abs/1608.07522 and https://arxiv.org/abs/1512.01521. That work suggests that even non-hidden civilizations will be fairly close to their light front.
Now, the METI application: if this scenario is true, then sending messages so that the expanding civilization notices us might be risky if they can quieten down and silently englobe or surprise us. (Surprise is likely more effective than englobement, since spamming the sky with quiet relativistic probes is hard to stop)...
It would be neat to actually make an implementation of this to show sceptics. It seems to be within the reach of an MSc project or so. The hard part is representing 2-5.
I think you will find this discussed in the Hanson-Yudkowsky foom debate. Robin thinks that distributed networks of intelligence (also known as economies) are indeed a more likely outcome than a single node bootstrapping itself to extreme intelligence. He has some evidence from the study of firms, which are a real-world example of how economies of scale can produce chunky but networked smart entities. As a bonus, they tend to benefit from playing somewhat nicely with the other entities.
The problem is that while this is a nice argument, would we want to bet...
I remember that we were joking at the NYC Singularity Summit workshop a few years back that maybe we should provide AI researchers with heroin and philosophers to slow them down.
As far as I have noticed, there are few if any voices in the academic/nearby AI safety community that promote slowing AI research as the best (or even a good) option. People talking about relinquishment or slowing seem to be far outside the main discourse, typically people who have only a passing acquaintance with the topic or a broader technology scepticism.
The best antidote is to start thinking about the details of how one would actually go about it: that generally shows why differential development is sensible.
I recently gave a talk at an academic science fiction conference about whether sf is useful for thinking about the ethics of cognitive enhancement. I think some of the conclusions are applicable to point 9 too:
(1) Bioethics can work in a "prophetic" and a "regulatory" mode. The first is big picture, proactive and open-ended, dealing with the overall aims we ought to have, possibilities, and values. It is open for speculation. The regulatory mode is about ethical governance of current or near-term practices. Ethicists formulate guidelin...
Well, 70 years of a 1/37 annual risk still has about a 15% chance of showing zero wars, since (1 - 1/37)^70 ≈ 0.15. Could happen. (Since we are talking about smaller wars rather than WWIII, anthropics does not distort the probabilities measurably.)
One could buy a Pinker improvement scenario and yet be concerned about a heavy tail due to nuclear or biological warfare of existential importance. The median cases might decline and the rate of events go down, yet the tail gets nastier.
This is incidentally another way of explaining the effect. Consider the standard diagram of a joint probability density and how it relates to correlation. Now take a bite out of the upper right corner, removing the large-X, large-Y events: unless the joint density started out in a really strange shape, this will tend to make the correlation negative.
It is pretty cute. I did a few Matlab runs with power-law distributed hazards, and the effect holds up well: http://aleph.se/andart2/uncategorized/anthropic-negatives/
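The effect is easy to reproduce. A minimal sketch (using independent Gaussians rather than the power-law hazards of the Matlab runs, to make the point starkly):

```python
import numpy as np

# Toy reconstruction of the anthropic-shadow effect: sample independent
# X and Y, remove the upper-right corner where X + Y is large (the
# "observer-killing" events), and check the correlation of the survivors.
# Independence before truncation makes the induced correlation stark.

rng = np.random.default_rng(4)
x = rng.normal(size=100_000)
y = rng.normal(size=100_000)

survivors = x + y < 2.0          # bite out the joint upper-right corner
r = np.corrcoef(x[survivors], y[survivors])[0, 1]
print(r)                         # noticeably negative despite independence
```

Conditioning on survival of the sum is enough; no correlation in the underlying process is needed.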
We do not assume mirrors. As you say, there are big limits due to conservation of étendue. We are assuming (if I remember right) photovoltaic conversion into electricity and/or microwave beams received by rectennas. All that conversion back and forth induces losses, but they are not orders of magnitude large.
In the years since we wrote that paper I have become much more fond of solar thermal conversion (use the whole spectrum rather than just part of it), and lightweight statite-style foil Dyson swarms rather than heavier collectors. The solar thermal...