This is doable via a number of different methods; see this overview of such methods.
To clarify, the tax Georgists want (Land Value Tax, or LVT) is a tax on the Economic Rent of the land. While you can find more detailed explanations e.g. here (an excellent overview by Lars Doucet; the whole series is recommended), the tax seems (to my understanding) to wind up being around 3-4% of the land's value per year, after all the math.
As a very basic example of the economic rent of a piece of land (not buildings, just land), adapted from the above:
If a piece of land costs $10,000 to buy, and is leased for $500/year, then an LVT that captures 100% of the land rent is $500/year, which works out to a 5% annual tax on the land value.
While pure Georgism advocates for taxing 100% of that $500/year economic rent, most of the actual proposals top out at 85%, and the more realistic ones are less than that.
Also keep in mind a) that this is a tax on the land value only, not including the value of the house/apartment building/office/etc. on top of it, and b) that this replaces existing property taxes, which are already near-universal and tax both the land and whatever's on top of it.
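To make the arithmetic concrete, here's a minimal Python sketch using the numbers from the example above (the $10,000 land value and $500/year land rent, plus the 85% capture rate mentioned for actual proposals; the function and variable names are just illustrative):

```python
# Minimal sketch of the LVT arithmetic above. The land value, land rent, and
# capture rates come from the example in the text; nothing here reflects any
# specific real-world proposal.

def annual_lvt(land_rent: float, capture_rate: float = 1.0) -> float:
    """Yearly Land Value Tax: the captured share of the land's economic rent."""
    return land_rent * capture_rate

land_value = 10_000   # purchase price of the bare land, excluding buildings
land_rent = 500       # what the bare land leases for, per year

for rate in (1.0, 0.85):  # pure Georgism vs. the top of the realistic range
    tax = annual_lvt(land_rent, capture_rate=rate)
    effective = tax / land_value  # tax expressed as a share of the land's price
    print(f"{rate:.0%} capture: ${tax:,.0f}/yr (~{effective:.2%} of land value)")
```

With those numbers, 100% capture works out to $500/year (5% of the land value) and 85% capture to $425/year (4.25%).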
My understanding is that the point of Georgist policy is that land shouldn't be a stock at all: it should not be an appreciating asset that an individual/firm holds.
This is for a number of reasons, including but not limited to:
Instead, land should be put to productive use, so the owner can generate sufficient wealth to pay the LVT and have a little extra as a profit.
For all your Georgist needs, I recommend the substack Progress and Poverty.
For successful examples of Georgist policy, this article focuses on land policy in Singapore, while this one focuses on Norway.
Hope this helps!
This absolutely might just be me being blind, but in dark mode I'm not seeing a difference between read and unread alerts.
I suspect (without any real evidence) that the publication track record is more important than the grades, if graduate school or a doctorate is the goal. A C-average undergrad with last authorship on a couple of great papers seems to me to look better than a straight-A student without any authorship, although I've no idea if it works that way in practice.
Holy crap this is amazing. Thank you!
My two cents as someone who burned out with a full depressive episode in their junior year of an electrical engineering degree and managed to limp all the way to graduation:
And a couple of general decision-helpers I like to use:
Damn that dirty, unpredictable input.
It somewhat amuses me that the result of an AI attempting to minimize prediction error could plausibly be the equivalent of hiding under the covers for all eternity.
I have no idea how to interpret this. Any ideas?
It seems like we got a variety of different styles, with red, blue, black, and white as the dominant colors.
Can we say that DALLE-2 has a style of its own?
Those are good points, thanks. I suppose in my model of how this sort of thing works out, I hadn't considered that the AGI might just buy us off, so to speak.
Part of this also comes down to what part of the FOOM we're speaking of, and what kind of power the AGI has. If it gets to nanotech, then you're right - it's so powerful that it can neutralize us any number of ways, "war" being only one.
If it isn't at nanotech, though - if the AGI is still just smarter-than-human, but not yet capable of using existing apparatus to achieve virtual omnipotence (Yudkowsky's example is ordering custom proteins over the internet as a path to molecular-scale nanotech) - then it isn't clear to me that the AGI could neutralize humanity's ability to destroy it without getting rid of us altogether.
More saliently, what motive would such an AGI have for keeping us around at all? Genuinely asking - even if the AGI doesn't have specific terminal goals beyond "reduce prediction error in input", wouldn't that still lead to it being opposed to humans if it believed that no trust could exist between them and it?
Regarding the typical CEO, that does seem likely.
Suppose that after two days, the AI has superadvanced nanotech. It can do pretty much as it pleases. The humans all supposedly hate the AI. The AI uses its nanotech to build an immortal utopia for the humans anyway. Maybe humans all realize that actually the AI is aligned. (It has had plenty of opportunity to wipe out humanity and didn't)
I can't tell if you're rejecting my premise by presenting one that you see as equally far-fetched?
My general point is more that, if we consider an AGI without an explicit purpose, its reaction to humanity may be determined (at least in part) by our reaction to it, which is something we can plausibly exert some small measure of control over, and doing so likely won't make anything worse.
If an AGI models humans, via the data it can access on us, as being fundamentally incapable of trusting it, doesn't it have little choice but to act in such a way that neutralizes us?