I'm particularly interested in sustainable collaboration and the long-term future of value. I'd love to contribute to a safer and more prosperous future with AI! Always interested in discussions about axiology, x-risks, s-risks.
I enjoy meeting new perspectives and growing my understanding of the world and the people in it. I also love to read; let me know your suggestions! In no particular order, here are some I've enjoyed recently:
Cooperative gaming is a relatively recent but fruitful interest for me. Here are some of my favourites:
People who've got to know me only recently are sometimes surprised to learn that I'm a pretty handy trumpeter and hornist.
What constitutes cooperation? (previously)
Because much in the complex system of human interaction and coordination is about negotiating norms, customs, institutions, and constitutions to guide and constrain future interaction in mutually-preferable ways, I think coordination about future coordination deserves special attention.
This is despite it perhaps being, in theory (for a given 'coordination moment'), reducible to preference elicitation or aggregation, searching the outcome space, negotiation, enforcement, and so on.
There are failure modes (unintended consequences, concentration of power, lost purposes, corruptibility, poor adaptability, plain old inefficacy) and patterns for success (stabilising win-win equilibria, reducing inefficiencies, improving collective intelligence and adaptability) which are specific to this process of negotiating and developing institutions. There are patterns because the complex system has emergent structure: trust, corruption, coalitions, information propagation, and so on.
Said briefly: much (most?) coordination is about coordination because a) humans are that type of creature and b) we live in a highly iterated world.
Ideally you would want to allow depreciation though, which is a definite phenomenon! (Especially if things are neglected.)
Yeah, there are some design questions. You're right: naively, the upside to the corrective bidders is nothing if they get called on it; they're doing valuable corrective cybernetic labour for free.
Maybe a sensible refinement would be for them to be owed a small fee... or, roughly equivalently, some (temporary) direct share of the resulting increased Harberger tax.
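To make that concrete, here's a toy calculation; the rate, valuations, share, and duration are all invented for illustration, not part of any actual proposal:

```python
# Toy numbers (invented) for the corrective-bidder fee idea.
tax_rate = 0.07                 # annual Harberger tax rate (assumed)
old_valuation = 100_000         # owner's stale self-assessment
corrected_valuation = 140_000   # where the corrective bid pushes it
fee_share = 0.25                # bidder's temporary share of the tax *increase*
share_years = 2                 # how long that share lasts

extra_tax_per_year = (corrected_valuation - old_valuation) * tax_rate  # 2,800
bidder_payout = extra_tax_per_year * fee_share * share_years           # 1,400
print(f"Corrective bidder earns {bidder_payout:,.0f} over {share_years} years")
```

So the bidder is no longer working for free, and the payout scales with how badly the asset was under-assessed.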
A less crazed approach might be more like
By this, do you mean something like: when I purchase a mine or whatever, I'm speculatively pricing in some upside (e.g. a new use) which is part of my valuation for it, and if later a marginally more alert person buys me out because of a new actual use before I update my valuation, I fail to realise that value? But if no new actual use comes up, I'm left holding the bag? I agree. And possibly we also agree that's the same issue as the umbrella, where someone noticed it's raining before I did?
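A toy numerical version of that worry, with all the figures invented:

```python
# Toy numbers (invented) for the stale-valuation worry above.
fundamental = 100_000    # value of the mine in its current use
option_premium = 20_000  # speculative upside I price into my declaration
declared = fundamental + option_premium
tax_rate = 0.07

# Branch 1: a new use appears and an alert buyer moves before I re-declare.
new_use_value = 150_000
missed_upside = new_use_value - declared           # 30,000 I fail to realise

# Branch 2: no new use ever appears; I'm left holding the bag,
# paying tax on an option that never pays off.
annual_tax_on_premium = option_premium * tax_rate  # 1,400 per year
print(missed_upside, annual_tax_on_premium)
```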
Leaving aside delays, this does get at a point I noticed wasn't obviously addressed in that paper, which is what to do about very seasonal things. The example I thought of (rather less macabre) was an umbrella in a rainstorm. I don't think the scheme is sensibly applicable to most personal property.
Learned about 'Harberger tax' recently.
The motivation is like: full private property is great for investment incentives (you capture the upside of improving what you own) but bad for allocative efficiency (owners can hold out, so assets get stuck with people who value them less than others would), while full common ownership is roughly the reverse.
The pitch is like, can we do something in between? Owners publicly self-assess a value for what they hold, pay a recurring tax proportional to that self-assessment, and must sell to anyone who offers the declared price.
They claim this keeps most of the benefit of investment incentivisation, because at modest tax rates things remain mostly private in practice, but substantially improves allocative efficiency by lubricating the more valuable trades.
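A minimal sketch of the core rules, to make this concrete; the class, names, and the 7% rate are my own illustration rather than anything from the proposal:

```python
from dataclasses import dataclass

TAX_RATE = 0.07  # illustrative annual rate; the rate is the key tuning knob

@dataclass
class Asset:
    owner: str
    declared_value: float  # owner's public self-assessment

def annual_tax(asset: Asset) -> float:
    # You pay tax on whatever value you declare...
    return TAX_RATE * asset.declared_value

def try_force_buy(asset: Asset, buyer: str, offer: float) -> bool:
    # ...and must sell to anyone who matches your declaration.
    if offer >= asset.declared_value:
        asset.owner = buyer
        asset.declared_value = offer  # buyer's offer becomes the new declaration
        return True
    return False
```

The tax is what disciplines the declaration: declare high and you overpay tax, declare low and someone takes the asset off you cheaply.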
Anyway, mainly it's interesting because getting into the gubbins of particular proposals helps me learn about the relevant dynamics in general, but also I wondered if there's something in this vicinity that could work nicely for AI development and AI deployment. (Like maybe a Harberger tax on compute, or on AI systems, or...)
Yudkowsky's 2008 AI as a Positive and Negative Factor in Global Risk is a pretty good read, both for the content (which is excellent in some ways and easy to critique in others) and for the historical interest: it's useful for litigating the question of what MIRI was aiming at around then, it's interesting how much of the subsequent dynamics Yudkowsky anticipated or missed, and it's interesting to inhabit 2008 for a bit and update on the empirical observations since then.
improving AI strategic competence (relative to their technological abilities) may be of paramount importance (so that they can help us with strategic thinking and/or avoid making disastrous mistakes of their own), but this is clearly even more of a double-edged blade than AI philosophical competence
I think you can get less of the tradeoff here by explicitly and deliberately aiming for AI 'tools' for improving human (group) strategic competence. It sounds subtle, but I think it has quite different connotations and implications for what you actually go and do!
(Interested to know if you have a taxonomy here which you feel is approaching MECE? I think this is more like a survey, which is still helpful.)
Seems right! 'Studies' uplifts 'design' (either incremental or saltatory), I suppose. For sure, the main motivation here is to figure out what sorts of capabilities and interventions could make coordination go better, and one of my first thoughts under this heading is open librarian-curator assistive tech for historic and contemporary institution case studies. Another cool possibility could be simulation-based red-teaming and improvement of mechanisms; a toy sketch of what I mean is below.
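For instance, here's a deliberately crude red-team probe of the Harberger scheme from my shortform above: Monte Carlo the owner's best self-assessment and check whether it's honest. Every parameter (the 7% rate, the buyer-arrival model, the offer distribution) is invented:

```python
import random

def expected_utility(declared, true_value=100.0, tax_rate=0.07,
                     buyer_prob=0.10, trials=40_000, seed=0):
    """One-period toy model: pay tax on your declaration; with some
    probability a buyer arrives with a private value drawn uniform on
    [0, 200] and takes the asset at your declared price."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        payoff = -tax_rate * declared  # tax is paid regardless
        if rng.random() < buyer_prob and rng.uniform(0, 200) >= declared:
            payoff += declared         # forced sale at my own declared price
        else:
            payoff += true_value       # I keep the asset
        total += payoff
    return total / trials

# Grid-search the owner's best declaration. Analytically it's ~80 in this
# setup: the owner shades ~20% below their true value of 100, because the
# tax rate (7%) exceeds the chance of meeting a higher-valuing buyer (5%).
best = max(range(0, 201, 5), key=expected_utility)
print(best)
```

Even this toy surfaces a real design wrinkle (honest self-assessment needs the tax rate to roughly match turnover probability), which is the kind of thing you'd want a red-teaming harness to find automatically.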
If you have any resources or detailed models I'd love to see them!