Dagon

Just this guy, you know?

Comments

Dagon

I think massive acceleration in AI is likely after the point when AIs can accelerate labor working on AI R&D.

Fully agreed.  And the trickle-down from AI-for-AI-R&D to AI-for-tool-R&D to AI-for-managers-to-replace-workers (and -replace-middle-managers) is still likely to be a bit extended.  And that path can't be skipped - just like with self-driving cars, the bar for adoption isn't "better than the median human" or even "better than the best affordable human", but "enough better that the decision-makers can't find a reason to delay".

Dagon

Thanks for this - I'm in a more peripheral part of the industry (consumer/industrial LLM usage, not directly at an AI lab), and my timelines are somewhat longer (5 years for a 50% chance), but I may be using a different criterion for "automate virtually all remote workers".  It'll be a fair bit of time (a year or ten - an eternity in the AI frame) between "labs show generality sufficient to automate most remote work" and "most remote work is actually performed by AI".

Dagon

If your improvement can't extract $6.3k from the land, preventing you from investing in that improvement is a feature, not a bug.

OK.  I hate that feature.  Especially since it doesn't prevent imperfect investments; it only punishes the ones that turn out suboptimal, often many years later.

Dagon

You can't sell the improvements if they're tied to land that is taxed higher than the improvements bring in (due to mistakes in the improvement, or a changed environment that has increased the land value while the improvements haven't stayed optimal).  The land is taxed at its full theoretical value, regardless of what the improvements bring in, and the improvements are literally attached to it.
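
A toy calculation (all figures invented, sketched in Python for concreteness) of the dynamic described above: an improvement that pencils out when it's built can end up underwater years later, purely because the assessed land value rose around it.

```python
# Toy illustration of an improvement going "underwater" under a land
# value tax (LVT).  All numbers are invented for illustration.

LVT_RATE = 0.05  # assumed annual tax as a fraction of assessed land value

def annual_surplus(land_value: float, improvement_income: float) -> float:
    """Net annual position: what the improvement earns, minus the LVT bill."""
    return improvement_income - LVT_RATE * land_value

# Year 0: the improvement pencils out when the land is cheap.
print(annual_surplus(land_value=100_000, improvement_income=6_000))  # 1000.0

# Years later: the neighborhood booms and the assessed land value
# triples, but the existing improvement still earns the same income.
print(annual_surplus(land_value=300_000, improvement_income=6_000))  # -9000.0
# The owner can't sell the improvement separately: it's attached to land
# whose tax bill now exceeds what the improvement brings in.
```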

Dagon

Yeah, I understand the theory - I haven't seen an implementation plan that unbundles land and improvements in practice, so it ends up as "normal property tax, secured by land and improvements, just calculated based on land-rent-value".

If you have a proposal where failing to pay the LVT doesn't result in loss of use of the improvements, let me know.

Dagon

"seem like something that should be legal" is not the standard in any jurisdiction I know.  The distinctions between individual service-for-hire and software-as-a-service are pretty big, legally, and make the analogy not very predictive.

I'll take the other side of any medium-term bet about "action will be taken in a hurry" if that action is a lawsuit under current laws.  Action in the form of new laws could happen, but I can't guess well enough to have any clue how or when that would be.

Dagon

No, they're not.  I know of no case where a general-purpose toolmaker is held responsible for misuse of its products.  This is even less likely for software, where it's clear that the criminals are violating their contract and using it without permission.

None of them, as far as I know, publish the specifics of what they're doing.  Which is probably wise - in adversarial situations, telling the opponents exactly what they're facing is a bad idea.  These misuses are easy and cheap enough that "flag suspicious uses" doesn't do much - it's too late by the time the flags add up to any action.

This is going to get painful - these things have always been possible, but have been expensive and hard to scale.  As they become truly ubiquitous, there will be no trustworthy communication channels.

Dagon

Thanks for the discussion.  I think I understand what you're pointing at, but I don't model it as an inverted preference hierarchy, or even a distinct type of preference.  Human preferences are very complicated graphs of long- and short-term intents, both rational, reflective goals and ... illegible desires.  These desires are intertwined and correlated, and change weights (and even composition) over time - sometimes intentionally, often environmentally.  

Calling it an "inversion" implies that one set is more correct or desirable than another, AND that the correct one is subverted.  I disagree with both of these things philosophically and generally, though there are specific cases where I agree for myself, and for most in the current environment.  My intuitions are specific and contextual for those cases, not generalizable.

Dagon

I think this matches my modal expectations - this is most likely, in my mind.  I do give substantial minority probability (say, 20%) to more extreme and/or accelerated cases within a decade; over 2 or 3 decades that flips, with only a minority (say, 20%) going the other direction.

My next-most-likely case is that there is enough middle- and upper-middle-class disruption in employment and human-capital value that human currencies and capital ownership structures (stocks, and to a lesser extent, titles and court/police-enforced rulings) become confused.  Food and necessities become scarce because the human systems of distribution break.  Riots and looting destroy civilization.  Possibly taking AI with it, possibly with the exception of some big data centers whose (human, AI-augmented) staffers have managed to secure them against the unrest - perhaps in cooperation with military units.

Dagon

Hmm.  By "coercion", you include societal and individual judgements, not just actual direct threats.  It's still hard for me to separate (and even harder to privilege) "innate" preferences over "holistic" preferences, which acknowledge that there is a real advantage to existing smoothly in the current society, and which include the contradictory sub-desires of thriving in a society, getting along well with allies, having fewer enemies, etc., as well as the biological urges for (super)stimuli.
