Luke A Somers

Contra EY: Can AGI destroy us without trial & error?

It seems like you're relying on the existence of exponentially hard problems to mean that taking over the world is going to BE an exponentially hard problem. But you don't need to solve every problem. You just need to take over the world.

Like, okay, the three body problem is 'incomputable' in the sense that it has chaotically sensitive dependence on initial conditions in many cases. So… don't rely on specific behavior in those cases on long time horizons without the ability to do small adjustments to keep things on track.

If the AI can detect most of the hard cases and avoid relying on them, and build in robustness through multiple alternate mechanisms and backup plans, then even just 94% success on arbitrary subproblems could translate into better than that for the overall solution.
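To put toy numbers on the redundancy point (the 94% figure is from the comment above; treating backup mechanisms as independent is an assumption made purely for illustration):

```python
# Toy model, not a claim about real plans: assume each of n independent
# backup mechanisms succeeds with probability p. The overall plan fails
# only if every mechanism fails, so overall success is 1 - (1 - p)**n.
p = 0.94  # per-mechanism success rate, taken from the comment above
for n in (1, 2, 3):
    overall = 1 - (1 - p) ** n
    print(f"{n} mechanism(s): {overall:.4f}")
```

Two independent fallbacks already push a 94% mechanism to roughly 99.6% overall, which is the sense in which per-problem reliability can translate into something better at the plan level.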

Contra EY: Can AGI destroy us without trial & error?

The distribution of outcomes is much more achievable and much more useful than determining the one true way some specific thing will evolve. Like, it's actually achievable in principle, unlike making a specific pointlike prediction of where a molecular ensemble is going to be given a starting configuration (which depends on quantum mechanics, not merely on chaos). And it's actually useful, in that it shows which configurations have tightly distributed outcomes and which don't, unlike that specific pointlike prediction.

A voting theory primer for rationalists

I see. I figured U/A meant something like that. I think it's potentially useful to consider that case, but I wouldn't design a system entirely around it.

A voting theory primer for rationalists

In terms of explaining the result, I think Schulze is much better: you can do it very compactly and with only simple, understandable steps. The best I can see doing with RP is more time-consuming, and the steps have the potential to be more complicated.

As far as promotion is concerned, I haven't run into it; since it's so similar to RP, I think non-algorithmic factors like I mentioned above begin to be more important.

~~~~

The page you linked there has some undefined terms like u/a (it says it's defined in previous articles, but I don't see a link).

> it certainly doesn’t prevent Beatpath (and other TUC methods) from being a strategic mess, without known strategy,

Isn't that a… good thing? With the fog of reality, strategy looking like 60% stabbing yourself, 30% accomplishing nothing, 10% getting what you want… how is that a bad trait for a system to have?

In particular, as far as strategic messes are concerned, I would definitely feel more pressure to use strategy of equivocation in SICT than in beatpath (Schulze), because it would feel a lot less drastic/scary/risky.

A voting theory primer for rationalists

What are the improved Condorcet methods you're thinking of? I do recall seeing that Ranked Pairs and Schulze have very favorable strategy-backfire to strategy-works ratios in simulations, but I can't be sure that's what you mean. If it is, then, approached right, Schulze isn't that hard to work through to demonstrate an election result (Wikipedia now has a worked example).
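For what it's worth, the beatpath computation itself fits in a few lines. This is a minimal sketch on a made-up three-candidate pairwise matrix (the numbers are invented for illustration, not taken from any real election):

```python
# Schulze (beatpath) sketch on a hypothetical 3-candidate election.
# d[i][j] = number of voters who prefer candidate i over candidate j.
d = [
    [0, 20, 26],
    [25, 0, 12],
    [19, 33, 0],
]
n = len(d)

# p[i][j] = strength of the strongest beatpath from i to j.
# Start with direct pairwise victories only.
p = [[d[i][j] if d[i][j] > d[j][i] else 0 for j in range(n)] for i in range(n)]

# Floyd-Warshall-style relaxation: a path's strength is its weakest link,
# and we keep the strongest such path between each pair.
for k in range(n):
    for i in range(n):
        for j in range(n):
            if i != j and j != k and k != i:
                p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))

# A candidate wins if no one has a stronger beatpath against them.
winners = [i for i in range(n)
           if all(p[i][j] >= p[j][i] for j in range(n) if j != i)]
print(winners)  # [0]
```

In this made-up cycle (1 beats 0, 0 beats 2, 2 beats 1), candidate 0 wins: its strongest beatpaths to 1 and 2 have strength 26, beating the 25-strength paths coming back, even though 1 beats 0 head-to-head. Walking an audience through exactly that cycle-resolving step is the compact demonstration I have in mind.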

Global insect declines: Why aren't we all dead yet?

95% of the sperm reaching the endpoint, then, if they're not independent.

Global insect declines: Why aren't we all dead yet?

And, like with sperm, it may be that there were many more insects than needed to fulfill their role? Like, if 20 sperm reach an egg, you can lose 95% of them and end up just as pregnant.

Corrigible but misaligned: a superintelligent messiah

That dialog reminds me of some scenes from Friendship is Optimal, only even more morally off-kilter than CelestAI, which is saying something.

April Fools: Announcing: Karma 2.0

I have no RSS monkeying going on, and Wei Dai and Kaj Sotala have the same font size as you or me.

April Fools: Announcing: Karma 2.0

Instructions unclear, comment stuck in ceiling fan?
