Is there a worked example of Georgian taxes?

This is doable via a number of different methods; see this overview.

Is there a worked example of Georgian taxes?

Georgist Tax Clarifications

To clarify, the tax Georgists want (Land Value Tax, or LVT) is a tax on the economic rent of the land.  While you can find more detailed explanations e.g. here (an excellent overview by Lars Doucet; the whole series is recommended), the tax seems (to my understanding) to wind up being around 3-4% of the value of the land per year, after all the math.

As a very basic example, here is the economic rent of a piece of land (not buildings, just land), taken from the above:

If a piece of land costs $10,000 to buy, and is leased for $500/year, then an LVT that captures 100% of the land rent is $500/year, which works out to a 5% annual tax on the land value.

While pure Georgism advocates for taxing 100% of that $500/year economic rent, most of the actual proposals top out at 85%, and the more realistic ones are less than that.

Also keep in mind a) that this is tax on the land value, not including the value of the house/apartment building/office/etc. on top of it, and b) this replaces existing property taxes, which already exist everywhere and tax both the land and whatever's on top of it.
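If it helps, here's a minimal Python sketch of that arithmetic (the numbers are the ones from the example above; the function and its names are just my own framing, not anything from the linked series):

```python
# Minimal sketch of the LVT arithmetic above (illustrative numbers only).

def annual_lvt(land_value, annual_land_rent, capture_rate):
    """Annual Land Value Tax that captures `capture_rate` of the land's economic rent."""
    tax = annual_land_rent * capture_rate
    effective_rate = tax / land_value  # the tax expressed as a share of land value
    return tax, effective_rate

# The $10,000 land / $500-per-year rent example, at 100% and at 85% capture:
for capture in (1.00, 0.85):
    tax, eff = annual_lvt(10_000, 500, capture)
    print(f"{capture:.0%} capture: ${tax:,.0f}/year ({eff:.2%} of land value)")
# -> 100% capture: $500/year (5.00% of land value)
# -> 85% capture: $425/year (4.25% of land value)
```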


Stock Versus Flow

My understanding of Georgist policy is that the point is that land shouldn't be a stock at all.  It should not be an appreciating asset that an individual/firm holds.

This is for a number of reasons, including but not limited to:

  • The individual/firm didn't create the land, so why should they exclusively benefit from it?
  • Land held as stock incentivizes land speculation, which increases land prices and has all sorts of negative externalities.
  • Land value generally originates from what the land is near, rather than anything on it; the old real estate adage of location, location, location is literally true.  The individual didn't make the beach or park or subway system the land is near, and yet being near those things is what makes land valuable.

Instead, land should be put to productive use, so the owner can generate sufficient wealth to pay the LVT and have a little extra as a profit.
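To make that concrete, here's a toy sketch (every number here is invented, purely to illustrate the incentive): because the tax is owed regardless of what sits on the land, idle land bleeds money, while productive use can cover the tax and leave a profit.

```python
# Toy illustration (all numbers invented): under an LVT, holding land idle
# is a pure cost, while productive use can cover the tax and leave a profit.

LAND_VALUE = 100_000   # assessed value of the land alone, not the buildings
LVT_RATE = 0.04        # ~4%/year, in line with the 3-4% figure above

annual_tax = LAND_VALUE * LVT_RATE  # owed regardless of how the land is used

for use, annual_income in [("vacant lot", 0), ("parking lot", 3_000), ("apartments", 12_000)]:
    net = annual_income - annual_tax
    print(f"{use}: ${annual_income:,} income - ${annual_tax:,.0f} tax = ${net:,.0f}/year")
```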


Further Reading

For all your Georgist needs, I recommend the substack Progress and Poverty.

For successful examples of Georgist policy, this article focuses on land policy in Singapore, while this one focuses on Norway.


Hope this helps!

LessWrong Now Has Dark Mode

Bug Report


This absolutely might just be me being blind, but in dark mode I'm not seeing a difference between read and unread alerts.

Also true.

I suspect (without any real evidence) that the publication track record is more important than the grades, if graduate school or a doctorate is the goal.  A C average undergrad with last authorship on a couple of great papers seems to me to look better than a straight-A student without any authorship, although I've no idea if it works that way in practice.

LessWrong Now Has Dark Mode

Holy crap this is amazing.  Thank you!

My two cents as someone who burned out with a full depressive episode in their junior year of an electrical engineering degree and managed to limp all the way to graduation:

  • Don't underestimate the magnitude of what can go wrong in your head.  I've got some genetic factors and some childhood stuff that likely contributed, but anxiety and depression can and will cripple you for years if you let them.
  • Get a blood test at your earliest convenience; make sure that you're not low on vitamin B12 or D, or have anything else obviously wrong.  I've heard enough stories like this (and had deficiencies myself) to believe this is high-value.  It's also neither hard nor expensive (just ask your family doctor or the University Health Center; make sure to specify that you're curious about those deficiencies in particular).
  • Don't fall for false dichotomies.  You're clearly smart enough to get creative.
    • One suggestion I've seen in this thread is taking a semester/year off to go work, which your University should be fine with.
    • Another suggestion, if you believe there is inherent value in a degree, is to Goodhart the degree itself.  Read this - you can consciously engage in half-assing your degree with everything you've got.  In other words, diplomas don't include grades.  In other other words, Cs get degrees.  Feel free to put in the absolute minimum that has you passing, take the easiest electives, etc., and spend the time you get back on other things.
    • Once you've had your first job, no one cares where you got your degree from.  Spending $55k per year is absurd - transfer to a cheaper college, or take a look at your local community college.  Most community colleges will have an agreement with four-year colleges for credits to transfer.  Drop out and enroll there, get the freshman/sophomore/core credits over with, then transition to a cheaper/online college for the degree.  You can also take this slowly - one community college class a semester can be done simultaneously with a full-time job, if you're so inclined, and most four-year colleges will have something for people with jobs.
  • Reframe the question: if you've got three years left, each costing $55k, would you willingly take $165k right now to walk away?  Would that $165k be worth more to you than an additional three years of college education?
  • I honestly don't believe that student loan debt is ever worth it, especially not for programmers.  Doctors maybe, but that's about it.  Whatever you do, plan out how you're going to wind up debt-free at the end.  Scholarships are great.  So is working.  Debt isn't.
  • Isn't MIT's entire curriculum online?  Is there anything you can realistically gain from your current university that you can't get yourself in other ways?  Is the alumni network super valuable?  Is there an incubator you could take advantage of?

And a couple of general decision-helpers I like to use:

  • It's twenty years later, and you're looking back on your life.  Which will you more regret not having done: finishing your degree, or striking out on your own?
  • Take out a fair coin.  Heads you get the degree, tails you walk away.  Flip the coin, and pay attention to how you feel.  Are you hoping for a specific result?  Will you be disappointed with a different result?  The face the coin lands on is irrelevant.  In the moment it was in the air, spinning, what did you want it to land on?
  • You are considering Deviating from The Path.  Deviating from The Path invites risk, not only of failure, but of scorn.  Look at the dropout, they'll whisper as you pass.  I guess he didn't know better than us after all.  Too bad, really.  He had a good thing and lost it.  Can you endure that?  Do you care?  Does it matter to you, if that's what people think?  Remember that dissent feels like wearing a clown suit.
  • Humans are loss-averse.  Tip the scales slightly in favor of the riskier option, knowing that you're in a profession where the worst you'll do is still better than most of the people who have ever lived.
  • Do a pre-mortem: you drop out, do your startup, and fail.  You apply to the company you mentioned and are rejected.  You've left college and you've got no job, no startup, no hope.  What went wrong?  What can you do now to lower the probability that it will go wrong, or to soften the blow when it does?
  • Sleep on it.
  • Sleep on it again.
  • Read better advice than mine.

Good luck.

Frankenstein: A Modern AGI

Damn that dirty, unpredictable input.

It somewhat amuses me that the result of an AI attempting to minimize prediction error could plausibly be the equivalent of hiding under the covers for all eternity.

What DALL-E 2 can and cannot do

I have no idea how to interpret this.  Any ideas?

It seems like we got a variety of different styles, with red, blue, black, and white as the dominant colors.

Can we say that DALL-E 2 has a style of its own?

Frankenstein: A Modern AGI

Those are good points, thanks.  I suppose in my model of how this sort of thing works out, I hadn't considered that the AGI might just buy us off, so to speak.

Part of this also comes down to what part of the FOOM we're speaking of, and what kind of power the AGI has.  If it gets to nanotech, then you're right - it's so powerful that it can neutralize us any number of ways, "war" being only one.

If it isn't at nanotech, though - if the AGI is still just smarter-than-human but not yet capable of using existing infrastructure to achieve virtual omnipotence (Yudkowsky's example is ordering custom proteins over the internet to bootstrap molecular-scale nanotech) - then it isn't clear to me the AGI could neutralize humanity's ability to destroy it without getting rid of us altogether.

More saliently, what motive would such an AGI have for keeping us around at all?  Genuinely asking - even if the AGI doesn't have specific terminal goals beyond "reduce prediction error in input", wouldn't that still lead to it being opposed to humans if it believed that no trust could exist between them and it?

Frankenstein: A Modern AGI

Regarding the typical CEO, that does seem likely.

"Suppose that after two days, the AI has superadvanced nanotech. It can do pretty much as it pleases. The humans all supposedly hate the AI. The AI uses its nanotech to build an immortal utopia for the humans anyway. Maybe humans all realize that actually the AI is aligned. (It has had plenty of opportunity to wipe out humanity and didn't.)"

I can't tell if you're rejecting my premise by presenting one that you see as equally far-fetched?  

My general point is more about the idea that, if we consider an AGI without explicit purpose, its reaction to humanity may be determined (at least in part) by our reaction to it, which is something we can plausibly exert some small measure of control over, and trying likely won't make anything worse.

If an AGI models humans, via the data it can access on us, as being fundamentally incapable of trusting it, doesn't it have little choice but to act in such a way that neutralizes us?
