LESSWRONG

mako yass

R&Ds human systems http://aboutmako.makopool.com

Comments (sorted by newest)
5 · MakoYass's Shortform · 5y · 108 comments
Help me understand: how do multiverse acausal trades work?
mako yass · 7d · 20

I have a strong example for simulationism, but I guess that might not be what you're looking for. Honestly, I'm not sure I know any really important multiversal trade protocols. I think their usefulness is bounded by the generalizability of computation, or by the fact that humans don't seem to want any weird computational properties? Which isn't to say that we won't end up doing any of them, just that it'll be a thing for superintelligences to think about.

In general, I'm not sure this requires avoiding making your AI CDT to begin with; I think it'll usually correct its decision theory later on? The transparent-Newcomb/Parfit's-hitchhiker moment, where it knows it's no longer being examined by a potential trading partner's simulation or reasoning and can start to cheat, never comes. There's no way for a participant to, say, wait for the clones in the other universe to comply and then defect. You never see them comply, you're in different universes; there's no time-relation between your actions! You know they only comply if (they will figure out that) it is your nature to comply in kind.

I do have one multiversal trade protocol that's fun to think about though.

Help me understand: how do multiverse acausal trades work?
mako yass · 9d · 40

You don't need certainty to do acausal trade.

> If it's finite you don't know how many entities there are in it, or what proportion of them are going to "trade" with you, and if it's infinite you don't know the measure (assuming that you can define a measure you find satisfying).

These are baby problems for baby animals. You develop adequate confidence about these things by running life-history simulations (built with a kind of continual, actively reasoning performance-optimization process that a human-level org wouldn't be able to contemplate implementing), or just by surveying the technological species in your lightcone and extrapolating. Crucially, values lock in after the singularity (and intelligence and technology probably converge to the top of the S-curve), so you don't have to simulate anyone beyond the stage at which they become infeasibly large.
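The "you don't need certainty" point can be put as a one-line expectation: commit whenever the probability-weighted gain from a reciprocating counterpart exceeds the cost of committing. A minimal sketch, with all names and numbers hypothetical:

```python
def acausal_trade_ev(p_partner: float, gain: float, cost: float) -> float:
    """Expected value of committing to an acausal trade when you only have
    a probability estimate that a reciprocating counterpart exists.
    Certainty isn't required: commit whenever this is positive."""
    return p_partner * gain - cost

# Even at 10% confidence in a reciprocating counterpart,
# a cheap commitment with a large payoff is worth making.
assert acausal_trade_ev(0.10, 100.0, 1.0) > 0
# At very low confidence, the same commitment isn't.
assert acausal_trade_ev(0.001, 100.0, 1.0) < 0
```

The simulations and surveys above are just ways of tightening the estimate of `p_partner`; the decision rule itself never needed certainty.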

The Transparent Society: A radical transformation that we should probably undergo
mako yass · 10d · 20

Conjecture: absolute privacy plus the absolute ability to selectively reveal any information one has are theoretically optimal; transparency beyond that won't lead to better negotiation outcomes. Discussion of the privacy/coordination tension has previously missed this. Specifically, it has missed the fact that technologies for selectively revealing self-verifying information, such as ZKVMs, suggest that the two are not in tension.
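The simplest flavor of "selectively revealing self-verifying information" is a hash commitment: publish a digest, stay private by default, then reveal exactly what you choose, and the reveal checks itself against the digest. A toy sketch (plain commit-reveal, not a ZKVM; all names here are hypothetical):

```python
import hashlib
import secrets

def commit(value: bytes) -> tuple[bytes, bytes]:
    """Commit to a value without revealing it: publish only the digest."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + value).digest()
    return digest, nonce

def verify(digest: bytes, value: bytes, nonce: bytes) -> bool:
    """Anyone holding the published digest can check a later reveal."""
    return hashlib.sha256(nonce + value).digest() == digest

# I keep my income private, but commit to it publicly.
income = b"income=52000"
digest, nonce = commit(income)

# Later I *choose* to reveal it, and the reveal is self-verifying.
assert verify(digest, income, nonce)
# A forged reveal fails.
assert not verify(digest, b"income=99000", nonce)
```

A ZKVM goes further: it lets you reveal an arbitrary predicate of the committed data (e.g. "income exceeds the threshold") without revealing the data itself, which is exactly the selective-disclosure capability the conjecture relies on.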

As to what's a viable path to a more coordinated world in practice, though, who knows.

Scrying for outcomes where the problem of deepfakes has been solved
mako yass · 10d · 20

> allocate public funding to the production of disturbing, surreal, inflammatory, but socially mostly harmless deepfakes to exercise the public's epistemic immune system

The idea that this needed to be publicly funded is clownish in hindsight.

Makes me think it'd be useful to have some kind of impact-market mechanism for dynamic pricing, where if a public resource is being produced for free, or in sufficient quantities, by private activity, the government's price for credits in that resource goes to zero.
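A minimal sketch of that pricing rule, with everything about it hypothetical: the subsidy price for impact credits falls linearly as private actors supply the public good for free, hitting zero once private activity alone meets the target quantity.

```python
def credit_price(base_price: float, target: float, private_supply: float) -> float:
    """Hypothetical impact-credit pricing rule: the government's price
    shrinks linearly with the privately supplied quantity, and is zero
    once private supply alone meets the target."""
    if target <= 0:
        return 0.0
    shortfall = max(target - private_supply, 0.0)
    return base_price * shortfall / target

# No private supply: full subsidy price.
assert credit_price(10.0, 100.0, 0.0) == 10.0
# Halfway there: half price.
assert credit_price(10.0, 100.0, 50.0) == 5.0
# Target met privately (as with deepfakes): price goes to zero.
assert credit_price(10.0, 100.0, 100.0) == 0.0
```

A real mechanism would presumably need to measure "supply" of something as fuzzy as epistemic inoculation, which is where the hard part lives; the linear schedule is just the simplest shape with the right endpoints.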

Help me understand: how do multiverse acausal trades work?
mako yass · 11d · 20

> Even doing something like that with things bidirectionally outside of your light cone is pretty fraught

Are you proposing that the universe outside of your lightcone might (with non-negligible probability) just not be real?

Help me understand: how do multiverse acausal trades work?
mako yass · 11d · 20

> Specifically, I've heard the claim that AI Safety should consider acausal trades over a Tegmarkian multiverse

Which trades? I don't think I've heard this. I think multiversal acausal trade is fine and valid, but my impression is it's not important in AI safety.

Help me understand: how do multiverse acausal trades work?
mako yass · 11d · 20

Yeah, agents incapable of acausal cooperation are already being selected out: Most of the dominant nations and corporations are to some degree internally transparent, or bound by public rules or commitments, which is sufficient for engaging in acausal trade. This will only become more true over time: Trustworthiness is profitable, a person who can't keep a promise is generally an undesirable trading partner, and artificial minds are much easier to make transparent and committed than individual humans and even organisations of humans are.

Also, technological (or post-biological) eras might just not have ongoing Darwinian selection. Civilisations that fail to seize control of their own design process won't be strong enough to have a seat at the table; those at the table will be equipped with millions of years of advanced information technology, cryptography, and game theory, and perfect indefinite coordination will be a solved problem. I can think of ways this could break down, but they don't seem like the likeliest outcomes.

MakoYass's Shortform
mako yass · 12d · 20

I notice it becomes increasingly impractical to assess whether a preference had counterfactual impact on the allocation. For instance, if someone had a preference for there to be no elephants, and we get no elephants, partially because of that but largely because of the food costs, should the person who had that preference receive less food for having already received an absence of elephants?

MakoYass's Shortform
mako yass · 13d · 20

But not preserving normality is the appeal :/

As an example, normality means a person can, e.g., create an elephant within their home and torture it. Under preference utilitarianism, the torture of the elephant upsets the values of a large number of people, so it's treated as a public bad and has to be taxed as such. Even when we can't see it happening, it's still reducing our U, so a boundaryless preference-utilitarian optimizer would go in there and say to the elephant torturer: "you'd have to pay a lot to offset the disvalue this is creating, and you can't afford it, so you're going to have to find a better outlet (how about a false elephant who only pretends to be getting tortured)".

But let's say there are currently a lot of sadists and they have a lot of power. If I insist on boundaryless aggregation, they may veto the safety deal, so it just wouldn't do. I'm not sure there are enough powerful sadists for that to happen, political discourse seems to favor publicly defensible positions, but [looks around] I guess there could be. But if there were, it would make sense to start to design the aggregation around... something like the constraints on policing that existed before the aggregation was done. But not that exactly.

reallyeli's Shortform
mako yass · 13d · 20

Hmm you're right, that's a distinction.

I guess I glossed over it because in applied conceptual-engineering fields like code (and maybe physics, though that may be more about the fuzziness of the mapping to the physical world, and maybe even applied math sometimes), where plenty of math is done, there are always still lots of situations where the abstraction stops fitting in working memory, because it's grown too complex for most of the people who work with it to fully understand its definitions.

Also maybe I'm assuming math is gonna get like that too once AI mathematicians start to work? (And I've always felt like there should be a lot more automation in math than there is)

Posts

82 · Release: Optimal Weave (P1): A Prototype Cohabitive Game · 1y · 21 comments
24 · I didn't think I'd take the time to build this calibration training game, but with websim it took roughly 30 seconds, so here it is! · 1y · 2 comments
22 · Offering service as a sensayer for simulationist-adjacent beliefs. · 1y · 0 comments
18 · [Cosmology Talks] New Probability Axioms Could Fix Cosmology's Multiverse (Partially) - Sylvia Wenmackers · 1y · 2 comments
64 · All About Concave and Convex Agents · 1y · 24 comments
63 · Do not delete your misaligned AGI. · 1y · 13 comments
38 · Elon files grave charges against OpenAI · 2y · 10 comments
30 · Verifiable private execution of machine learning models with Risc0? · 2y · 2 comments
22 · Eleuther releases Llemma: An Open Language Model For Mathematics · 2y · 0 comments
58 · A thought about the constraints of debtlessness in online communities · 2y · 23 comments
Wikitag Contributions

Eschatology · 5 years ago · (+45)
Updateless Decision Theory · 6 years ago · (+96/-99)
Updateless Decision Theory · 6 years ago · (+2/-7)
Steven Kaas · 6 years ago · (+142)