I meant I don't think the CEV of ancient Rome has the same values as ancient Rome.  Looks like your comment got truncated: "what is good if they were just"

Is there a command-line tool for previewing how a "markdown+LaTeX" text file would render as a LW draft post, for those of us who prefer to manipulate text files using productivity tools like (neo)vim and git?

Ah right, because Clippy has less measure, and so has less to offer, so less needs to be offered to it.  Nice catch!  Guess I've been sort of heeding Nate's advice not to think much about this.  :)

Of course, there would still be significant overhead from trading with and/or outbidding sampled plethoras of UFAIs, vs the toy scenario where it's just Clippy.

I currently suspect we still get more survival measure from aliens in this branch who solved their alignment problems and have a policy of offering deals to UFAIs that didn't kill their biological boot loaders.  Such aliens need not be motivated by compassion to the extent that aboriginals form a Schelling bloc, handwave appendagewave.  (But we should still play to win, like they did.)

Paperclips vs. obelisks does make the bargaining harder, because Clippy would be offered fewer expected paperclips.

My current guess is we survive if our CEV puts a steep premium on that. Of course, such hopes of trade ex machina shouldn't affect how we orient to the alignment problem, even if they affect our personal lives. We should still play to win.

Roman values aren't stable under reflection; the CEV of Rome doesn't have the same values as ancient Rome. It's like a 5-year-old locking in what they want to be when they grow up.

Locking in extrapolated Roman values sounds great to me, because I don't expect that to be significantly different from a broader extrapolation. Of course, this is all extremely handwavy, and there are convergence issues of superhuman difficulty! :)

Yes, it would, at least if you mean their ancient understanding of morals.

Not on mobile, in my experience.

I think it would be helpful to note at the top of the post that it's crossposted here. I initially misinterpreted "this blog" in the first sentence as referring to LW.

This idea keeps getting rediscovered; thanks for writing it up! The key ingredient is acausal trade between aligned and unaligned superintelligences, rather than between unaligned superintelligences and humans. Simulation isn't a key ingredient; the more general question is how resources get allocated across branches.

Too much power, I would assume. Yet he didn't kill Bo Xilai.