Jackson Wagner

Engineer working on next-gen satellite navigation at Xona Space Systems. I write about effective-altruist and longtermist topics at nukazaria.substack.com, or you can read about puzzle videogames and other things at jacksonw.xyz

Socialism / communism is about as abstract as Georgism, and it certainly inspired a lot of people to fight! Similarly, Republican campaigns to lower corporate tax rates, cut regulations, reduce entitlement spending, etc., are pretty abstract (and often actively unpopular when people do understand them!), but have achieved some notable victories over the years. Georgism is also similar to YIMBYism, which has lots of victories these days, even though YIMBYism likewise suffers from being more abstract than conspiracy theories with obvious villains, about people "hoarding" vacant housing or Chinese investors bidding up prices or whatever. Finally, Georgism itself was extremely popular once, so it clearly has the potential! Overall, I don't think being abstract is fatal for a mass movement.

But I also don't think we need some kind of epic Georgist popular revolution to get Georgist policies -- we can get there through small, incremental, technocratic reforms to local property tax laws: getting local governments to use tools like ValueBase (developed by Georgist Lars Doucet) for their property value assessments, winning reforms in a few places, and then hopefully pointing to their success to build momentum elsewhere.

As Lars Doucet tells it, the main problem with historical Georgism wasn't unpopularity (it was extremely popular then!), but the technical infeasibility of assessing land value separately from the value of the buildings on it. Nowadays we have machine learning tools, GIS mapping systems, satellite imagery, successful home-value-estimation companies like Zillow and Redfin, etc., so we can finally implement Georgism on a technical level in a way that wasn't possible in the 1890s. For more on this, see the final part of Lars's epic series of Georgism posts on Astral Codex Ten: https://www.astralcodexten.com/p/does-georgism-work-part-3-can-unimproved?utm_source=url
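
For intuition on how the modern assessment piece can work, here's a toy sketch (synthetic data, a hypothetical additive price model, plain linear regression -- emphatically not ValueBase's actual method): fit a model of sale price on both location features and building features, then zero out the building features to back out a land-only estimate.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical parcel data: "land" features (location) vs. "improvement" features (building).
dist_to_center = rng.uniform(0, 20, n)      # km from downtown
lot_area       = rng.uniform(200, 2000, n)  # m^2
building_sqm   = rng.uniform(50, 400, n)
building_age   = rng.uniform(0, 80, n)

# Hidden ground truth used to generate observed sale prices (numbers purely made up).
land_value        = 300_000 - 10_000 * dist_to_center + 100 * lot_area
improvement_value = 2_000 * building_sqm - 1_000 * building_age
price = land_value + improvement_value + rng.normal(0, 20_000, n)

# Fit on total sale price; the model never sees the land/building split directly.
X = np.column_stack([dist_to_center, lot_area, building_sqm, building_age])
model = LinearRegression().fit(X, price)

# Zero out the building columns to estimate the land-only component.
X_land = X.copy()
X_land[:, 2:] = 0.0
est_land = model.predict(X_land)
print("mean abs error vs. true land value:", round(np.abs(est_land - land_value).mean()))
```

Real assessors obviously face messier data, and a real model wouldn't be linear, but this is the basic trick: you never observe land value directly, yet you can still estimate it from transactions.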

Future readers of this post might be interested in this other LessWrong post about the current state of multiplex gene editing: https://www.lesswrong.com/posts/oSy5vHvwSfnjmC7Tf/multiplex-gene-editing-where-are-we-now

Future readers of this blog post may be interested in this book-review entry at ACX, which is much more suspicious/wary/pessimistic about prion disease generally:

  • They dispute the idea that having the M/V or V/V genotype reduces the odds of getting CJD / mad cow disease / etc.
  • They imply that Britain's mad cow disease problem maybe never really went away, in the sense that "spontaneous" cases of CJD have quadrupled since the 80s, so it seems CJD is being passed around somehow?

https://www.astralcodexten.com/p/your-book-review-the-family-that

What kinds of space resources are like "mice & cheese"?  I am picturing civilizations expanding to new star systems mostly for the matter and energy (turning asteroids & planets into a Dyson swarm of orbiting solar panels and supercomputers on which to run trillions of emulated minds, plus constructing new probes to send onwards to further star systems).

re: the Three Body Problem books -- I think the series imagines that alien life is much, much more common (ie, many civilizations per galaxy) than Robin Hanson imagines in his Grabby Aliens hypothesis, such that new, not-yet-technologically-mature civilizations often pop up near each other at around the same time.  Whereas an important part of the Grabby Aliens model is the idea that the evolution of complex life is spectacularly rare -- which makes humans seem to have evolved extremely early relative to when you might expect, which is odd, but which is then explained by some anthropic reasoning related to the expanding grabby civilizations: all new civilizations arise "early", because by the mid-game everything has been colonized already.  If you think the evolution of complex life on other planets is actually a common occurrence, then there is no particular reason to put much weight on the Grabby Aliens hypothesis.

In The Three Body Problem, Earth would be wise to keep quiet so that the Trisolarans don't overhear our radio transmissions and try to come take our nice temperate planet, with its nice regular pattern of seasons.  But there is nothing Earth could do about an oncoming "grabby" civilization -- a grabby civilization is already speeding towards Earth at near-lightspeed, and wants to colonize every solar system (inhabited or uninhabited, temperate planet with regular seasons or not), since it doesn't care about temperate continents, just raw matter it can use to create Dyson swarms.  The grabby civilizations are already expanding as fast as possible in every direction, coming for every star -- so there is no point trying to "hide" from them.

Energy balance situation:
- the sun continually emits around 10^26 watts of light/heat/radiation/etc.
- per some relativity math at this forum comment, it takes around 10^18 joules to accelerate 1 kg to 0.99c
- so, using just one second of the sun's energy emissions, you could afford to accelerate around 10^8 kg (roughly the mass of a very large cargo ship, like the RMS Titanic) to 0.99c.  Or if you spend 100 days' worth of solar output instead of one second, you could accelerate about 10^15 kg, the mass of Mt. Everest, to 0.99c.  (See the code sketch below this list.)
- of course then you have to slow down on the other end, which will take a lot of energy, so the final size of the von Neumann probe you can deliver to the target solar system will have to be much smaller than the Titanic or Mt. Everest or whatever.
- if you go slower, at 0.8c, you can launch 10x as much mass with the same energy (and you don't have to slow down as much on the other end, so maybe your final probe is 100x bigger), but of course you arrive later -- if you're travelling 10 light years, you show up about 2.4 years later than the 0.99c probe.  If you're travelling 100 light years, you show up about 24 years later.
- which can colonize the solar system and build a Dyson swarm faster -- a tiny probe that arrives as soon as possible, or a 100x larger probe that arrives a couple of years later?  This is an open question that depends on how fast your von Neumann machine can construct solar panels, automated factories, etc.  Carl Shulman, in a recent 80K podcast, figures that a fully-automated economy pushing up against physical limits could double itself at least as quickly as once per year.  So maybe the 0.99c probe would do better over the 100-light-year distance (arriving ~24 years early gives time for ~24 doublings!), but not over the 10-light-year distance (the 0.99c probe would only have doubled itself a couple of times, to ~5x its initial mass, by the time the 0.8c probe shows up with 100x as much mass)
- IMO, if you are trying to rapaciously grab the universe as fast as possible (for the ultimate purpose of maximizing paperclips or whatever), you probably don't hop from nearby star to nearby star at efficient speeds like 0.8c, waiting to set up a whole new Dyson swarm (which probably takes many years) at each stop.  Rather, your already-completed Dyson swarms are kept busy launching new probes all the time, targeting ever-more-distant stars.  By the time a new Dyson swarm gets finished, all the nearby stars have already been visited by probes and are constructing Dyson swarms of their own, so you have to fire your probes not at the nearest stars, but at stars some distance further away.  My intuition is that the optimal way to grab the most energy would end up favoring very fast expansion speeds, but I'm not sure.  (Maybe the edge of your cosmic empire expands at 0.99c, and then you "mop up" some interior stars at more efficient speeds?  But every second you delay in capturing a star, that's a whopping 10^26 joules of energy lost!)
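
Here's a quick back-of-the-envelope script for the arithmetic above (same order-of-magnitude inputs as the bullets, using Shulman's ~1 doubling/year figure, and ignoring deceleration at the destination):

```python
import math

C = 3.0e8          # speed of light, m/s
SUN_WATTS = 1e26   # order-of-magnitude solar luminosity used above

def ke_per_kg(beta):
    """Relativistic kinetic energy (J) needed to accelerate 1 kg to beta*c."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * C ** 2

for beta in (0.8, 0.99):
    e = ke_per_kg(beta)
    print(f"{beta}c: {e:.1e} J/kg, so one second of sunlight launches {SUN_WATTS / e:.1e} kg")

# Arrival delay of the slower probe, and the head start it concedes,
# assuming ~1 doubling per year of the arrived probe's industrial base:
for dist_ly in (10, 100):
    delay_yr = dist_ly / 0.8 - dist_ly / 0.99
    print(f"{dist_ly} ly: 0.99c probe arrives {delay_yr:.1f} years earlier "
          f"(~{delay_yr:.0f} doublings, a {2 ** delay_yr:,.0f}x mass head start)")
```

Running it reproduces the numbers above: ~10^8 kg per second of sunlight at 0.99c, ~10x more at 0.8c, and a ~2.4-year / ~24-year head start over 10 / 100 light years.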

Yes, it does have to be fast IMO, but I think fast expansion (at least among civilizations that decide to expand much at all) is very likely.

Of course the first few starships that a civilization sends to colonize the nearest stars will probably not be going anywhere near the speed of light.  (Unless it really is a paperclips-style superintelligence, perhaps.)  But within a million years or so, even with relatively slow-moving ships, you have colonized thousands of solar systems, built Dyson swarms around every star, have a total population in the bajillions, and have probably developed about all the technology that it is physically possible to develop.  So at some point it's plausible that you start going very close to the speed of light, because you'll certainly have enough energy + technology to do so, and because it might be desirable for a variety of reasons:

- Maybe we are trying to maximize some maximizable utility function, be that paperclips or some more human notion, and want to minimize what Nick Bostrom calls "astronomical waste".
- Maybe we fail to coordinate (via a strong central government or etc), and the race to colonize the galaxy becomes a free-for-all, rewarding the fastest and most rapacious settlers, a la Robin Hanson's "Burning the cosmic commons".

Per your own comment -- if you only colonize at 0.8c so your ships can conserve energy, you are probably actually missing out on lots and lots of energy, since you will only be able to harvest resources from about half the volume that you could grab if you traveled at closer to lightspeed ((0.8/0.99)^3 ≈ 0.53)!

I think part of the "calculus" being run by the AI safety folks is as follows:

  1. There are certainly both some dumb ways humanity could die (e.g., AI-enabled bioweapon terrorism that could have easily been prevented by some RLHF + basic checks at protein synthesis companies), as well as some very tricky, advanced ways (AI takeover by a superintelligence with a very subtle form of misalignment, using lots of brilliant deception, etc.).

  2. It seems like the dumber ways are generally more obvious / visible to other people (like military generals or the median voter), whereas these people are skeptical of the trickier paths (e.g., not taking the prospect of agentic, superintelligent AI seriously, figuring alignment will probably continue to be easy even as AI gets smarter, not believing that you could ever use AI to do useful AI research, etc.).

  3. The trickier paths also seem to require a longer head start: we might need to begin working on them earlier, think about them more carefully, etc.

  4. Therefore, I (one of the rare believers in things like "deceptive misalignment is likely" or "superintelligence is possible") should work on the trickier paths; others (like the US military, or other government agencies, or whatever) will eventually recognize and patch the dumber paths.

re: your comments on Fermi paradox -- if an alien super-civilization (or alien-killing AI) is expanding in all directions at close to the speed of light (which you might expect a superintelligence to do), then you mostly don't see them coming until it's nearly too late, since the civilization is expanding almost as fast as the light emitted by the civilization. So it might look like the universe is empty, even if there's actually a couple of civilizations racing right towards you!

There is some interesting cosmological evidence that we are in fact living in a universe that will eventually be full of such civilizations; see the Robin Hanson idea of "Grabby Aliens": https://www.youtube.com/watch?v=l3whaviTqqg

[spoilers for minor details of later chapters of the book] Isn't the book at least a little self-aware about this plot hole? If I recall correctly, the book eventually reveals (rot13 from here on out)...

gung gur fcnpr cebtenz cyna jnf rffragvnyyl qrfvtarq nf n CE fghag gb qvfgenpg/zbgvingr uhznavgl, juvpu vaqrrq unq ab erny cebfcrpg bs jbexvat (gbb srj crbcyr, gbb uneq gb qbqtr zrgrbef, rirelguvat arrqf gb or 100% erhfrq, rgp, yvxr lbh fnl). Juvyr gur fcnpr cebtenz jnf unccravat, bgure yrff-choyvp rssbegf (eryngrq gb ahpyrne fhoznevarf, qvttvat haqretebhaq, rgp) ner vaqrrq batbvat, nygubhtu gur tbireazragf pregnvayl nera'g gelvat gb fnir uhaqerqf bs zvyyvbaf bs crbcyr (urapr gur qrfver sbe frperpl, V thrff).

Jung'f "fhecevfvat" nobhg gur obbx'f cybg, vf gung (fvapr Arny Fgrcurafba ernyyl jnagf gb jevgr n fgbel nobhg fcnpr, engure guna n fgbel nobhg haqretebhaq ohaxref), gur fcnpr cyna npghnyyl raqf hc fhpprrqvat ng nyy, naq vaqrrq va gur ybat eha raqf hc orvat zhpu zber vasyhragvny bire gur shgher guna gur inevbhf cerfhznoyl-zber-frafvoyr cynaf sbe haqretebhaq ershtrf.

In addition to the researchy implications for topics like deception and superpersuasion and so forth, I imagine that results like this (although, as you say, unsurprising in a technical sense) could have a huge impact on the public discussion of AI (paging @Holly_Elmore and @Joseph Miller?) -- the general public often seems to get very freaked out about privacy issues where others might learn their personal information, demographic characteristics, etc.

In fact, the way people react to privacy issues is so strong that it usually seems overblown to me -- but it also seems plausible that the fundamental /reason/ people are so sensitive about their personal information is precisely that they want to avoid being deceived or becoming easily manipulable / persuadable / exploitable!  Maybe this fear turns out to be unrealistic when it comes to credit scores and online ad-targeting and TSA no-fly lists, but AI might be a genuinely much more problematic technology, with much more potential for abuse here.

So, sure, there is a threshold effect in whether you get value from bike lanes on your complex journey from point A to point G.  But other people throughout the city have different threshold effects:

  • Other people are starting and ending their trips from other points; some people are even starting and ending their trip entirely on Naito Parkway.
  • People have a variety of different tolerances for how much they are willing to bike in streets, as you mention.
  • Even people who don't like biking in streets often have some flexibility.  You say that you personally are flexible, "but for the sake of argument, let's just assume that there is no flexibility".  But in real life, even many people who absolutely refuse to bike in traffic might be happy to walk their bike on the sidewalk (or whatever) for a single block, in order to connect two long stretches of beautiful bike path.

When you add together a million different possible journeys across thousands of different people, each with their own threshold effects, the total utility probably ends up looking much more like a smooth continuum of incremental benefit from each new bike lane added to the city's network, with no killer threshold effects.  This is very different from a bridge, where half a bridge really is useless to every one of the city's residents.
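
Here's a toy simulation of that summing argument (all numbers invented): every rider is strictly all-or-nothing about their own random set of required segments, yet the citywide count of served riders climbs fairly smoothly as segments get built in arbitrary order.

```python
import numpy as np

rng = np.random.default_rng(0)
n_segments = 50     # hypothetical buildable bike-lane segments
n_riders = 10_000   # riders, each with an all-or-nothing threshold

# Each rider needs their own random subset of segments before they'll ride at all.
need_prob = rng.uniform(0.02, 0.2, size=(n_riders, 1))
needs = rng.random((n_riders, n_segments)) < need_prob

built = np.zeros(n_segments, dtype=bool)
served_over_time = []
for seg in rng.permutation(n_segments):  # build one segment at a time, in arbitrary order
    built[seg] = True
    # A rider counts as served once every segment they need has been built.
    served_over_time.append(int(((needs & ~built).sum(axis=1) == 0).sum()))

print(served_over_time)  # rises step by step, with no single citywide cliff
```

Any individual rider's utility is a step function, but the aggregate curve has 10,000 little steps in different places, which is as good as smooth for planning purposes.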

Therefore, I don't think "if you're going to start building [a bike lane network], you better make sure you have plans to finish it".  Rather, I think adding random pieces of the network piecemeal (as random roads undergo construction work for other reasons, perhaps) is a totally reasonable thing for cities to do.

Another example of individual threshold effects adding up to continuous benefit from the provider's perspective: suppose I only like listening to thrash-metal songs on Spotify.  Whenever Spotify adds a song from a genre I don't care for -- pop, classical, doom-metal, whatever -- it provides literally ZERO value to me!  There's a huge threshold effect, where I only care when Spotify adds thrash-metal songs!  But of course, since everyone has different preferences, the overall effect from Spotify's perspective is an incremental improvement to the quality of their product each time they add a song.  (Disclaimer: I am not actually a thrash-metal fanatic.)
