Learned about 'Harberger tax' recently.
The motivation is like: fully private property gives owners strong incentives to invest in their assets, but also lets them hold out and block valuable trades; full common ownership has the opposite problem.
The pitch is like, can we do something in between? Roughly: you publicly self-assess the value of your asset, pay an ongoing tax proportional to that self-assessment, and must sell to anyone who offers the assessed price.
They claim this keeps most of the benefit of investment incentivisation, because things are mostly private in practice, but substantially improves allocative efficiency by lubricating the more valuable trades.
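A minimal sketch of that core mechanism as I understand it (the class name, the 7% rate, and all other specifics are illustrative assumptions, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class HarbergerAsset:
    owner: str
    self_assessed_value: float  # the owner declares this, and it is public

    def annual_tax(self, rate: float = 0.07) -> float:
        # You pay tax in proportion to your own declared valuation...
        return rate * self.self_assessed_value

    def force_sale(self, buyer: str, buyer_new_valuation: float) -> float:
        # ...and anyone may force a sale at exactly that declared price.
        # Under-assess and you risk losing the asset cheaply; over-assess
        # and you pay more tax. That tension is the whole mechanism.
        price_paid = self.self_assessed_value
        self.owner = buyer
        self.self_assessed_value = buyer_new_valuation
        return price_paid
```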
Anyway, mainly it's interesting because getting into the gubbins of particular proposals helps me learn about the relevant dynamics in general, but also I wondered if there's something in this vicinity that could work nicely for AI development and AI deployment. (Like maybe a Harberger tax on compute, or on AI systems, or...)
Being required to sell at exactly the value you place on something seems unlikely to work out well in practice. As a toy example, carabiners are cheap, but if you're using one to hold your weight over a hundred foot drop, its value to you is approximately equal to the value of your life. Depending on any asset the sale of which can be forced is very risky, and assessing all critical components of a business at the entire value of the business seems incoherent.
Many versions of the proposal do not require you to sell immediately; often there's a built-in delay period before the trade occurs.
But more generally, yeah, you don't want to do this with carabiners. But you might want to do it with land.
It seems like it would be pretty hard to define a single reasonable delay period for land sales. An example case where a large delay would be justifiable: a business manufactures cars, and has a single factory that makes some critical component. The value of the business is effectively zero if it can't manufacture that component. But it takes N years and D dollars to get zoning, environmental, safety, etc. approvals for a new factory (and also to actually build it and get it running as smoothly as the last one, and hire competent staff, and lay off the old staff...). Having a variable "reasonable" delay and switching-cost compensation might work, but that sounds like a recipe for endless litigation.
On top of that, land can have network effects within a single owner: if a business builds 5 codependent factories and has to sell one of them, but cannot feasibly replace the one sold with something new nearby, each of the factories would have to be valued at the entire value of the network, multiplying the tax burden by the number of parcels the network is broken into.
I agree that the way we do land now is not good, but at first glance I don't see a way to fix this proposed system without a bunch of patches with their own problems.
Leaving aside delays, this does get at a point I noticed wasn't obviously addressed in that paper: what to do about very seasonal things. The example I thought of (rather less macabre) was an umbrella in a rainstorm. I don't think the scheme is sensibly applicable to most personal property.
Agreed, but the timing issues seem applicable to businesses' property as well. Sudden unpredictable spikes in value (e.g. of a mine for a rare metal that somebody figured out a new use for) could result in a lot of churn, and remove the upside (but not the downside!) variance in asset value.
By this, do you mean something like: when I purchase a mine or whatever, I'm speculatively pricing in some upside (e.g. a new use) which is part of my valuation for it, and if later a marginally more alert person buys me out because of a new actual use before I update my valuation, I fail to realise that value? But if no new actual use comes up, I'm left holding the bag? I agree. And possibly we also agree that's the same issue as the umbrella, where someone noticed it's raining before I did?
A less crazed approach might be more like: a bid above your self-assessed value doesn't force the sale; instead, you can keep the asset, but your assessment (and so your tax) steps up to the bid.
Yup, this makes more sense imo: basically having a right of refusal on the sale, but then paying tax at the now-higher assessment.
Wouldn't the equilibrium here trend towards a bunch of wasted labor where I deliberately lowball the value of the land, and then if someone offers a larger amount, I just say no and start paying tax on the larger amount? I'd have a shot at paying less tax, and lose nothing if I'm called out for it. No downside to me personally, and if this became common, it'd be harder to legitimately buy stuff. Seems like you'd need to pay some sort of fee to the entity credibly offering the larger amount to make calling people out worth it.
Hmm, I don't think so? If you buy land for $X, that's the floor on what you could reasonably assess it at, which is basically the status quo world. So we're in the status quo until someone comes along and bids the price up to their willingness to pay: then the asset either moves to someone who values it more, or you start paying higher taxes on it. I think either branch is preferable to the status quo?
Fair point, if you add that you can't assess it at less than you paid for it, this problem goes away.
Ideally you would want to allow depreciation though, which is a definite phenomenon! (Especially if things are neglected.)
Yeah, there's some design questions. You're right that the upside to the corrective bidders is naively nothing if the owner refuses: they're doing valuable corrective cybernetic labour for free.
Maybe a sensible refinement would be for them to be owed a small fee... or roughly equivalently some (temporary) direct share of the resulting increased Harberger tax.
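A hedged sketch of this refined variant (right of refusal, assessments floored at the depreciated purchase price, and a small fee owed to the corrective bidder). The depreciation rate, fee share, and tax rate are all illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class RefusalHarbergerAsset:
    owner: str
    purchase_price: float
    assessment: float
    years_held: float = 0.0

    DEPRECIATION_RATE = 0.03  # assumed allowable decline per year
    BIDDER_FEE_SHARE = 0.05   # assumed bidder cut of the tax uplift

    def assessment_floor(self) -> float:
        # You can't self-assess below what you paid, less allowed depreciation.
        return self.purchase_price * (1 - self.DEPRECIATION_RATE) ** self.years_held

    def set_assessment(self, value: float) -> None:
        # Owners may revise their assessment down, but never below the floor.
        self.assessment = max(value, self.assessment_floor())

    def receive_bid(self, bidder: str, bid: float, owner_accepts: bool,
                    tax_rate: float = 0.07):
        if bid <= self.assessment:
            return None  # bids at or below the standing assessment do nothing
        if owner_accepts:
            # The asset moves to someone who values it more.
            self.owner, self.purchase_price, self.assessment = bidder, bid, bid
            self.years_held = 0.0
            return None
        # Right of refusal: keep the asset, but the assessment (and hence the
        # tax bill) steps up to the bid, and the bidder is owed a small share
        # of the uplift so corrective bidding isn't unpaid labour.
        annual_fee = self.BIDDER_FEE_SHARE * tax_rate * (bid - self.assessment)
        self.assessment = bid
        return bidder, annual_fee
```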
What constitutes cooperation?
Realised my model pipeline of cooperation (roughly: preference elicitation and aggregation, searching outcome space, negotiation, enforcement) was missing an important preliminary step.
For cooperation to happen, you also need: a coalition of potential cooperators that has been identified and assembled in the first place.
(Could break down further: identifying, getting common knowledge, and securing initial prospective cooperative intent.)
In some cases, 'identifying potential coalitions' might be a large, even dominant part of the challenge of cooperation, especially when effects are diffuse!
That applies to global commons and it applies when coordinating political action. What other cases?
'Identifying potential coalitions' is what a lot of activism is about, and it might also be a big part of what various cooperative memeplexes like tribes, religions, political parties etc are doing.
This feels to me like another important part of the picture that new tech could potentially amplify!
Could we newly empower large groups of humans to cooperate by recognising and fulfilling the requirements of this cooperation pipeline?
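Putting the amended pipeline together as a toy data structure (stage names are my paraphrase; the original stages are the ones listed in the later institution-design note):

```python
# Stage names are paraphrases, not canonical terms.
COOPERATION_PIPELINE = [
    # the newly-recognised preliminary stage, broken down as above
    "identify potential coalitions",
    "establish common knowledge among them",
    "secure initial prospective cooperative intent",
    # the stages the pipeline already covered
    "elicit preferences",
    "aggregate preferences / search outcome space",
    "negotiate",
    "enforce",
]
```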
'Temporary MAP stance' or 'subjective probability matching' are my words for useful mental manoeuvres for research, especially when dealing with confusing or preparadigmatic or otherwise non-crisp domains.
MAP is Maximum A Posteriori, i.e. your best guess after considering the evidence. Probability matching is making actions/guesses in proportion to your estimate of each option being right (rather than always picking the single MAP choice).
By this manoeuvre I'm gesturing at a kind of behaviour where you are quite unsure about what's best (e.g. 'should I work on interpretability or demystifying deception?') and rather than allowing that to result in analysis paralysis, you temporarily collapse some uncertainty and make some concrete assumptions to get moving in one or other direction. Hopefully in so doing you a) make a contribution and b) grow your skills and collect new evidence to make better decisions/contributions next time.
It happens to correspond somewhat to a decent heuristic called Thompson Sampling, which is optimal under some conditions for some uncertain-duration sequential decision problems.
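A small sketch of the manoeuvre as Thompson sampling (the two directions, the Beta priors, and the evidence counts are made-up illustrations):

```python
import random

# Beta posteriors over "this direction pays off", from evidence so far.
posteriors = {
    "interpretability":       [3, 2],  # [pseudo-successes, pseudo-failures]
    "demystifying deception": [2, 2],
}

def choose_direction() -> str:
    # Temporarily collapse uncertainty: draw one sample from each posterior
    # and commit to the argmax. Repeated over time, each direction gets
    # chosen in proportion to the probability that it is the best one.
    draws = {d: random.betavariate(a, b) for d, (a, b) in posteriors.items()}
    return max(draws, key=draws.get)

def record_outcome(direction: str, succeeded: bool) -> None:
    # Fold new evidence back in before the next collapse-and-commit cycle.
    posteriors[direction][0 if succeeded else 1] += 1
```

Always taking the posterior means instead of sampling recovers the single MAP-style choice; the per-cycle sampling is what gives the probability-matching behaviour.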
HT Evan Hubinger for articulating his take on this in discussions about research, and I'm certain I've read others discussing similar principles on LW or EAF but I don't have references to hand.
Here is some reasoned opinion about ML research automation.
Experimental compute and 'taste' seem very close to direct multiplier factors in the production of new insight: roughly, insight production rate ∝ experimental compute × taste.
My model of research taste is that it 'accumulates' (according to some sample efficiency) in a researcher and/or team by observation (direct or indirect) of experiments. It 'depreciates', like a capital stock, both because individuals and teams forget or lose touch, and (more relevant to fast-moving fields) because taste generalises only so far, and the 'frontier' of research keeps moving.
This makes experiments extremely important, both as a direct input to insight production and as fuel for accumulating research taste.[1]
Peak human teams can't get much better research taste in the absence of experimental compute without improving on taste accumulation, which is a kind of learning sample efficiency. You can't do that by just having more people: you have to get very sharp people and a very effective organisational structure for collective intelligence. Getting twice the taste is very difficult!
AI research assistants which substantially improved on experiment design, either by accumulating taste more efficiently or by (very expensive?) reasoning much more extensively about experiment design, could make the non-compute factor grow as well.
You can't just 'be smarter' or 'have better taste' because it'll depreciate away. Reasoning for experiment design has very (logarithmically?[2]) diminishing returns as far as I can tell, so I'd guess it's mostly about sample efficiency of taste accumulation.
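A toy simulation of this capital-stock picture (the structure is the claim above: insight ∝ compute × taste, and taste accumulates from observed experiments while depreciating; every parameter value is a made-up assumption):

```python
def simulate(years: int = 10,
             experiments_per_year: float = 100.0,  # set by the compute budget
             sample_efficiency: float = 0.01,      # taste gained per experiment observed
             depreciation: float = 0.2):           # forgetting + the moving frontier
    taste, insights = 1.0, 0.0
    for _ in range(years):
        insights += experiments_per_year * taste           # multiplier factors
        taste += sample_efficiency * experiments_per_year  # accumulate from observation
        taste *= 1 - depreciation                          # depreciate like capital
    return taste, insights
```

In this sketch taste converges to a ceiling set by experiment throughput, sample efficiency, and depreciation; raising throughput or sample efficiency moves the ceiling, while adding more observers of the same experiments does not.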
[1] (There's some parallelisation discount: k experiments in parallel is strictly worse than k in series, because you can't incorporate learnings.)
[2] A naive model where reasoning for experiment design means generating more proposals from an idea generator and attempting to select the best one has worse-than-logarithmic returns to running longer, for most sensible distributions of idea generation. Obviously reasoning isn't memoryless like that, because you can also build on, branch from, or refine earlier proposals, which might sometimes do better than coming up with new ones tabula rasa.
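A quick Monte Carlo illustrating footnote [2] (the Gaussian idea-quality distribution is my assumption; the closed-form comparison is the standard extreme-value approximation):

```python
import math
import random

def expected_best_of_k(k: int, trials: int = 2000) -> float:
    # Average quality of the best proposal among k i.i.d. Gaussian draws.
    return sum(max(random.gauss(0.0, 1.0) for _ in range(k))
               for _ in range(trials)) / trials

for k in (1, 10, 100, 1000):
    # The best-of-k grows roughly like sqrt(2 ln k): sub-logarithmic returns.
    print(k, round(expected_best_of_k(k), 2),
          round(math.sqrt(2 * math.log(k)), 2))
```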
What constitutes cooperation? (previously)
Because much in the complex system of human interaction and coordination is about negotiating norms, customs, institutions, and constitutions to guide and constrain future interaction in mutually-preferable ways, I think: institution and constitution design deserves special attention.
This is despite it perhaps (for a given 'coordination moment') being in theory reducible to preference elicitation or aggregation, searching outcome space, negotiation, enforcement, ...
There are failure modes (unintended consequences, concentration of power, lost purposes, corruptibility, poor adaptability, plain old inefficacy) and patterns for success (stabilising win-win equilibria, reducing inefficiencies, improving collective intelligence and adaptability) which are specific to this process of negotiating and developing institutions. (There are patterns because the complex system has emergent structure: trust, corruption, coalitions, information propagation, ...)
Said briefly: much (most?) coordination is coordination about future coordination, because a) humans are that type of creature and b) we live in a highly iterated world.
I suspect that this is a bit too much of an analytical and legible framework. The VAST majority of human interaction is not based on explicit rules or negotiated contracts; it's based on socially-evolved heuristics for who to trust to do what under what conditions, and then each individual has variance in their compliance and expectations, almost none of which are ever stated clearly.
I'd love to see 'institution and constitution design' replaced with 'institution and constitution studies'. Coordination is a word that hides a number of important sub-topics about enforcement/agreement for behaviors among misaligned individuals.
Seems right! 'studies' uplifts 'design' (either incremental or saltatory), I suppose. For sure, the main motivation here is to figure out what sorts of capabilities and interventions could make coordination go better, and one of my first thoughts under this heading is open librarian-curator assistive tech for historic and contemporary institution case studies. Another cool possibility could be simulation-based red-teaming and improvement of mechanisms.
If you have any resources or detailed models I'd love to see them!
If you want to be twice as profitable as your competitors, you don’t have to be twice as good as them. You just have to be slightly better.
I think AI development is mainly compute constrained (relevant for intelligence explosion dynamics).
There are some arguments against, based on the high spending of firms on researcher and engineer talent. The claim is that this supports one or both of a) large marginal returns to having more (good) researchers or b) steep power laws in researcher talent (implying large production multipliers from the best researchers).
Given that the workforces at labs remain fairly small, I think the spending naively supports (b) better.
But in fact I think there is another, even better explanation: the production edge that talent buys is small, but at the frontier a small edge is worth a lot, because a slightly-better model isn't commoditised (you can charge a margin on it) and it wins and retains the customer base.
This also explains why it's been somewhat 'easy' (but capital-intensive) for a few new competitors to pop into existence each year, and why firms' revealed-preference savings rate into compute capital is enormous (much greater than 100%!).
We see token prices drop incredibly sharply, which supports the non-commoditised margin claim (though this is also consistent with a Wright's Law effect from (runtime) algorithmic efficiency gains, which should definitely also be expected).
A lot of engineering effort is being put into product wrappers and polish, which supports the customer base claim.
The implications include: headroom above top human expert teams' AI research taste could be on the small side (I think this is right for many R&D domains, because a major input is experimental throughput). So both quantity and quality of (perhaps automated) researchers should have steeply diminishing returns in AI production rate. But might they nevertheless unlock a practical monopoly (or at least an increasingly expensive barrier to entry) on AI-derived profit, by keeping the (more monetisable) frontier out of reach of competitors?