Jackson Wagner

Engineer working on next-gen satellite navigation at Xona Space Systems. I write about effective-altruist and longtermist topics at nukazaria.substack.com, or you can read about puzzle videogames and other things at jacksonw.xyz

Comments

Which countries will go to war with whom?  It doesn't strike me as plausible that, eg, individual random countries in the tropics would literally declare war on much-richer countries far away.

I think you are conflating the interests of individual citizens in the tropics (who might be motivated to emigrate from, eg, the Middle East to Europe, or from Indonesia to New Zealand, or from Venezuela to Uruguay, just as the poor have always been motivated to move to more prosperous lands) with the diplomatic behavior of their governments -- why would the leaders of a place like Indonesia declare war on a place like New Zealand?  We don't see Central American countries declaring war on the USA today.

None other than Peter Thiel wrote a huge essay about investing while under anthropic shadow, and I wrote a post analyzing said essay!  It is interesting, although pretty abstract in a way that probably makes it more relevant to organizations like Open Philanthropy than to most private individuals.  Some quotes from Thiel's essay:

Apocalyptic thinking appears to have no place in the world of money. For if the doomsday predictions are fulfilled and the world does come to an end, then all the money in the world — even if it be in the form of gold coins or pieces of silver, stored in a locked chest in the most remote corner of the planet — would prove of no value, because there would be nothing left to buy or sell. Apocalyptic investors will miss great opportunities if there is no apocalypse, but ultimately they will end up with nothing when the apocalypse arrives. Heads or tails, they lose. ...A mutual fund manager might not benefit from reflecting about the danger of thermonuclear war, since in that future world there would be no mutual funds and no mutual fund managers left. Because it is not profitable to think about one's death, it is more useful to act as though one will live forever.

Because it is not profitable to contemplate the end of civilization, market prices get distorted. Instead of telling us the objective probabilities of how things will play out, prices reflect probabilities adjusted by the anthropic logic of ignoring doomed scenarios:

Let us assume that, in the event of [the project of civilization being broadly successful], a given business would be worth $100/share, but that there is only an intermediate chance (say 1:10) of that successful outcome. The other case is too terrible to consider. Theoretically, the share should be worth $10, but in every world where investors survive, it will be worth $100. Would it make sense to pay more than $10, and indeed any price up to $100? Whether in hope or desperation, the perceived lack of alternatives may push valuations to much greater extremes than in nonapocalyptic times.
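To make the arithmetic in that passage explicit, here is a minimal sketch (in Python, using Thiel's own illustrative numbers) of the gap between the objective expected value of the share and the survival-conditioned value that market prices can actually reflect:

```python
# Sketch of Thiel's example: compare the "objective" expected value of a
# share with its value conditioned on investor survival, which is all
# that market prices can ever reflect.

p_success = 0.10        # Thiel's ~1:10 chance that civilization broadly succeeds
value_if_success = 100  # $/share in the successful world
value_if_doom = 0       # in the apocalypse, money is worth nothing

# Objective expected value: weight both outcomes by their probabilities.
objective_value = p_success * value_if_success + (1 - p_success) * value_if_doom

# Survival-conditioned value: the doomed branch never shows up in realized
# prices, so surviving investors anchor on the success branch alone.
conditioned_value = value_if_success

print(f"Objective expected value:   ${objective_value:.0f}/share")   # $10
print(f"Survival-conditioned value: ${conditioned_value:.0f}/share") # $100
```

The $90 gap is the anthropic shadow: since the doomed scenario is unobservable to anyone still trading, prices systematically drift away from the objective probabilities.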

See my post for more.

For some more detail on what this plan might look like: my 2nd-place-winning entry in the Future of Life Institute's "A.I. world-building" competition was all about how humanity uses prediction markets and other new institutional designs to increase its level of civilizational adequacy, becoming strong and wise enough to manage the safe development of transformative AI.

See my lesswrong post here (which focuses on the details of how AI development is controlled in my team's fictional scenario), or the whole entry here (which includes two great short stories by a friend of mine, and many more details about how things like futarchy, liquid democracy, network states and charter cities, quadratic funding for public goods, etc, develop over the next few decades).
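For readers unfamiliar with one of those mechanisms, here is a minimal sketch of the standard quadratic funding formula (my own illustration, not part of the contest entry):

```python
# Sketch of the quadratic funding formula (Buterin/Hitzig/Weyl): a project's
# total funding equals the square of the sum of the square roots of its
# contributions, so many small donors attract a larger matching subsidy
# than one large donor giving the same total amount.
import math

def quadratic_match(contributions: list[float]) -> float:
    """Return the matching subsidy owed on top of the raw contributions."""
    total = sum(math.sqrt(c) for c in contributions) ** 2
    return total - sum(contributions)

# 100 donors giving $1 each vs. one donor giving $100:
print(quadratic_match([1.0] * 100))  # large match: 100^2 - 100 = 9900
print(quadratic_match([100.0]))      # no match:    100 - 100 = 0
```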

You are in luck; it would appear that Elizabeth has already produced some significant long-covid analysis of exactly this nature!

You say:

[Under georgism,] there will be more pressure to use [land] in an economically viable way.

And then later you say:

If you want to reduce rents, all the usual methods apply – remove restrictions on land use, encourage higher density housing, and all that jazz.

In my mind (and that of many Georgism advocates), one of the many benefits of Georgism is precisely that the increased pressure to use land in economically optimal ways would create stronger incentives to build higher-density housing, and stronger political motivation to remove economically destructive land-use restrictions.  This is admittedly a more convoluted path to YIMBYism than just advocating for YIMBYism directly.  Nevertheless, Georgism and YIMBYism seem like natural complements, where each encourages the other.
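To illustrate the incentive channel I have in mind, here is a toy sketch -- all numbers are hypothetical -- of why a land value tax penalizes holding land idle in a way an ordinary property tax does not:

```python
# Toy example (all numbers hypothetical) of why a land value tax (LVT)
# pressures owners toward higher-value land use: the tax bill depends only
# on the unimproved land's value, not on what is built, so idle land bleeds
# money while developed land can cover the tax out of its income.

land_value = 1_000_000   # assessed value of the unimproved land ($)
lvt_rate = 0.05          # annual land value tax rate (hypothetical)
annual_tax = land_value * lvt_rate

income_if_idle = 0             # a vacant lot produces nothing
income_if_developed = 120_000  # eg rents from dense housing ($/yr, hypothetical)

print(f"Annual LVT owed either way: ${annual_tax:,.0f}")
print(f"Net cash flow if idle:      ${income_if_idle - annual_tax:,.0f}/yr")
print(f"Net cash flow if developed: ${income_if_developed - annual_tax:,.0f}/yr")
# Unlike a property tax, developing the land does not raise the tax bill,
# so the owner's incentive tilts sharply toward building.
```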

An AI "warning shot" plays an important role in my finalist entry in the FLI's $100K AI worldbuilding contest; but civilization only mounts a good response to the crisis because my story posits that other mechanisms (like wide adoption of "futarchy"-inspired governance) had already raised the ambient wisdom and competence of civilization.

I think a warning shot in the real world would probably push out timelines a bit by squashing the most advanced projects, but eventually other projects would come along (perhaps in other countries, or in secret) and build AGI anyway -- so I'd worry that we'd get longer "timelines" but a lower actual chance of getting aligned AI.  For a warning shot to be net-positive for humanity, it would need to provoke a very strong response, such as international suppression of all AI research (not just cumbersome regulation on a few tech companies) with a ferocity that meets or exceeds how we currently handle the threat of nuclear proliferation.

"We could simulate a bunch of human-level scientists trying to build nanobots."
This idea seems far-fetched:

  • If it were easy to create nanotechnology by just hiring a bunch of human-level scientists, we could do that directly, without using AI at all.
  • Perhaps we could simulate thousands and thousands of human-level intelligences (although of course these would not be remotely human-like intelligences; they would be part of a deeply alien AI system) at accelerated speeds.  But this would probably be more hardware-intensive than just turning up the dial and running a single superintelligence.  In other words, this proposal carries a very high "alignment tax" (see the rough cost sketch after this list).  And even after paying that hefty tax, I'd still worry about alignment problems if I were simulating thousands of alien intelligences at super-speed!
  • Besides all the hardware you'd need, wouldn't this be very complicated to implement on the software side, with not much overlap with today's AI designs?
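To gesture at the "alignment tax" claim in the second bullet, here is a rough back-of-envelope sketch; every parameter is a made-up placeholder, since nobody knows the real compute costs of either approach:

```python
# Back-of-envelope sketch (all parameters hypothetical) for the claim that
# simulating many human-level scientists may cost far more compute than
# running a single more-capable model.

n_scientists = 5_000          # simulated human-level researchers (assumed)
flops_per_scientist = 1e15    # compute per simulated scientist, FLOP/s (assumed)
speedup = 10                  # run each sim at 10x human speed (assumed)

cost_many_sims = n_scientists * flops_per_scientist * speedup

cost_single_superintelligence = 1e18  # one scaled-up model, FLOP/s (assumed)

print(f"Many human-level sims:    {cost_many_sims:.1e} FLOP/s")
print(f"Single superintelligence: {cost_single_superintelligence:.1e} FLOP/s")
print(f"Alignment tax (ratio):    {cost_many_sims / cost_single_superintelligence:.0f}x")
```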


Has anyone done a serious analysis of how much of the world's semiconductor and compute capacity could be disabled via, eg, cruise-missile strikes on fabs plus nationalizing and shutting down supercomputers?  I would be interested to know whether this is truly a path to disabling something like 90% of the world's useful-to-AI-research compute, or whether the number is much smaller because too much random GPU capacity is out in the wild even after you commandeer the TSMC fabs and AWS datacenters.

I was interested to know whether Matrix 4 (especially with its tech-company San Francisco setting) would offer an updated perspective on some AI issues, but alas, in the end the movie seemed to be even less about AI than the original Matrix films.  I nevertheless thought the movie was thematically interesting; see my essay about the film here.

Ostensibly, the plot of The Matrix 4 is about Neo breaking out of a prison of illusion and rediscovering the true reality. But the structure of the movie is the opposite of this! It starts out asking real-world philosophical questions and agonizing over issues of reality, authenticity, and truth. But over time, the movie stops worrying about what’s really real, and instead descends into increasingly fictional themes and decreasingly coherent plot events — an enthusiastic embrace of feelings-world.

Your #2 motivation goes pretty far, so this is actually a much bigger exception to your bullet-biting than you might think.  The idea of "respecting the will of past generations to boost the chances that future generations will respect your will" goes far beyond sentimental deathbed wishes; it touches big parts of how cultural and financial influence is maintained beyond death.  See my comment here.
