Doesn't matter until the switch is done.

I am into something that can be called "meta-politics": institutional reform. That is, crafting decision-making algorithms to have good characteristics — incentives, participation, etc. — independent of the object-level goals of politics. I think this is "meta" in a different way than what you're talking about in this article; in short, it's prescriptive meta, not descriptive meta. And I think that makes it "OK"; that is, largely exempt from the criticisms in this article.

Would you agree?

I believe that Bitcoin is a substantial net negative for the world. I think that blockchain itself, even without proof of work, is problematic as a concept — it has some real potential upsides, but also some possibly-intrinsic downsides quite apart from proof of work. I'd like a world where PoW-centric cryptocurrency was not a thing at all (with possible room for PoW as a minor ingredient in things like initial bootstrapping), and where crypto in general was, for now, more an area of research than of investment. I think that as long as >>90% of crypto is PoW, it's better (for me, at least) to stay away entirely than to try to invest in some upstart PoS coin.

#2. Note that even if ETH does switch in the future, investing in ETH today is still investing in proof-of-work. Also, as long as BTC remains larger and doesn't switch, I suspect there will be spillover between ETH and BTC, such that it would be difficult to put energy into ETH without propping up the BTC ecosystem to some degree.

I feel it's worth pointing out that all proof-of-work cryptocurrency is based on literally burning use-value to create exchange-value, and that this is not a sustainable long-term plan. And as far as I can tell, non-proof-of-work cryptocurrency is mostly a mirage or even a deliberate red herring / bait-and-switch.

I'm not an expert, but I choose not to participate on moral grounds. YMMV.

I realize that what I'm saying here is probably not a new idea to most people reading, but it seems clearly enough true to me that it bears repeating anyway.

If anyone wants links to further arguments in this regard, from me rather than Google, I'd be happy to provide.

If we're positing a Grahamputer, then "yeah but it's essentially the same if you're not worried about agents of equal size" seems too loose. 

In other words, with great compute power comes great compute responsibility.

Thanks for pointing that out. My arguments above do not apply.

I'm still skeptical. I buy anthropic reasoning as valid in cases where we share an observation across subjects and time (e.g., "we live on a planet orbiting a G2V-type star", "we inhabit a universe that appears to run on quantum mechanics"), but not in cases where each observation is unique (e.g., "it's the year 2021, and there have been about 107,123,456,789 (plus or minus a lot) people like me ever"). I am far less confident of this than I stated for the arguments above, but I'm still reasonably confident, and my expertise does still apply (I've thought about it more than just what you see here).

Our sense-experiences are "unitary" (in some sense which I hope we can agree on without defining rigorously), so of course we use unitary measure to predict them. Branching worlds are not unitary in that sense, so carrying over unitarity from the former to the latter seems an entirely arbitrary assumption.

A finite number (say, the number of particles in the known universe), raised to a finite power (say, the number of Planck time intervals before dark energy tears the universe apart), gives a finite number. No need for divergence. (I think both of those are severe overestimates of the actual possible branching, but they're reasonable as handwavy demonstrations that finite upper bounds exist.)
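
To spell out the arithmetic as a sketch (here b and n are just stand-ins for the two finite quantities above, not estimates I'm committing to):

```latex
% Rough sketch only: b and n are placeholders, not estimates.
% b = distinguishable configurations per branching step (finite),
% n = Planck intervals remaining before the end (finite).
\[
  \#\{\text{branches}\} \;\le\; b^{\,n},
  \qquad
  b < \infty,\ n < \infty \;\implies\; b^{\,n} < \infty .
\]
```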

I don't think the point you were arguing against is the same as the one I'm making here, though I understand why you think so.

My understanding of your model is that, simplifying relativistic issues so that "simultaneous" has a single unambiguous meaning, total measure across quantum branches of a simultaneous time slice is preserved; and your argument is that, otherwise, we'd have to assign equal measure to each unique moment of consciousness, which would lead to ridiculous "Boltzmann brain" scenarios. I'd agree that your argument is convincing that different simultaneous branches have different weight according to the rules of QM, but that does not at all imply that total weight across branches is constant across time.

I didn't do this problem, but I can imagine I might have been tripped up by the fact that "hammer" and "axe" are tools and not weapons. In standard DnD terminology, these are often considered "simple weapons": distinct from "martial weapons" like the warhammer and battleaxe, but still within the category of "weapons".

I guess that the "toolish" abstractions might have tipped me off, though. And even if I had made this mistake, it would only have mattered for "simple-weapon" tools with a modifier.
