mako yass

interactive system design

Though it could be better, for our purposes.

The line "an animal will gnaw off its own limb" seems a bit muddled. Such an animal would seem quite human to me; it takes real resolve to chew through your own bones to escape otherwise certain death. But the point is clarified afterwards.

Paul never seems to completely sublimate the false pain. His consciousness isn't fully integrated; he's resisting himself. Your goal as a rationalist should not be to overpower your instincts; you should learn to convince them, to earn their trust. He gets close enough to sublimating it that I think it would have worked if we'd turned the equanimity all the way up to 11.

You should heed instincts in their domains of passion, memory, and heuristic, and they should heed reason in its domains: episteme, inference, strategy, planning. It's a bit of a shame that we don't really see much celebration of instinct in Dune. We should think of instinct as a dear old matriarch: frail, maybe a little senile, but she remembers precious recipes; she lived through more catastrophes, and more flourishing, than we can imagine; she saw nearly every face of humanity, and she knows something about the point of it. She can't explain it to you. She no longer understands it in a precise way, and besides, it's too big to be explained. But if you listen to her, keep her around, she can help you figure some of it out.

Huh, I'm guessing that's a limitation of the way it generates things/the way it learned the distribution? I've never seen such a clear illustration of that before. Prediction and action really are distinct tasks?

On reflection, does OpenAI only train it to predict the next word? Wouldn't they also train it to predict the previous word, or words in between?
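
(For what it's worth, GPT-style models are in fact trained only on the forward, next-token objective; predicting a masked-out word from both sides is a different objective, the BERT-style one. A toy sketch of the difference, with tokens as plain strings:)

```python
# Contrast the two training objectives on a tiny token sequence.
tokens = ["the", "cat", "sat", "on", "the", "mat"]

# GPT-style (autoregressive): every position predicts the NEXT token
# given only the prefix that precedes it.
next_token_pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
# e.g. (["the"], "cat"), (["the", "cat"], "sat"), ...

# BERT-style (masked): hide one position and predict it from BOTH sides.
i = 2
masked_input = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
masked_target = tokens[i]
```

The forward-only objective is what makes generation straightforward (sample, append, repeat), but it also means the model never directly practices filling in or reversing text.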

So you think it's computationally tractable? I think there are some factors you're missing. That's a weighted sum of a bunch of vectors assigning numbers to all possible outcomes: either all possible histories-plus-final-states of the universe, or all possible experiences. And there are additional complications with normalizing the utility functions: you don't know the probability distribution over final outcomes (so you can't take the integral of the utility functions) until you already know how the aggregation of normalized, weighted utility functions is going to influence it.
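
(To make the circularity concrete: here's a minimal sketch on a toy finite outcome space, all names and numbers hypothetical. Normalizing each agent's utilities requires an outcome distribution, but the distribution depends on which aggregate utility ends up steering the world; one common dodge is to iterate toward a fixed point, with no general guarantee of convergence.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_outcomes = 3, 5
utilities = rng.normal(size=(n_agents, n_outcomes))  # raw, incomparable scales

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Start from a uniform guess at the outcome distribution.
p = np.ones(n_outcomes) / n_outcomes
for _ in range(100):
    # Normalize each utility function to zero mean, unit variance under p.
    mean = utilities @ p
    var = ((utilities - mean[:, None]) ** 2) @ p
    normed = (utilities - mean[:, None]) / np.sqrt(var)[:, None]
    # Aggregate with equal weights, then let the aggregate steer
    # the outcome distribution (a stand-in for "optimization pressure").
    aggregate = normed.mean(axis=0)
    p_new = softmax(aggregate)
    if np.allclose(p_new, p, atol=1e-10):
        break
    p = p_new
```

The point of the sketch is just the shape of the dependency loop: `normed` depends on `p`, and `p` depends on `aggregate`, which depends on `normed`.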

By the way, I'd love to hear the people giving my comment agreement karma explain what they're agreeing with and how they know it's true, because I was asking a question I don't know the answer to, and I really hope people don't think we know the answer, unless we do, in which case I'd like to hear it.

What does merging utility functions look like, and are you sure it won't look the same as global free trade? Arguably, trade is just a way of breaking down and modularizing a big multifaceted problem across a lot of subagent task specialists (and there's no avoiding having subagents, due to the light-speed limit).
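
(One way to see why the two might coincide, in a hypothetical toy economy of two agents splitting two goods: every maximizer of a merged, weighted utility function lands on the Pareto frontier, which is exactly the set of outcomes voluntary trade can reach.)

```python
from itertools import product

# Hypothetical utilities: agent A mostly values apples, B mostly oranges.
def u_a(apples, oranges):
    return 2 * apples + oranges

def u_b(apples, oranges):
    return apples + 2 * oranges

# All ways to split 10 apples and 10 oranges; (a, o) is agent A's share.
allocations = list(product(range(11), repeat=2))
payoffs = [(u_a(a, o), u_b(10 - a, 10 - o)) for a, o in allocations]

# "Trade" outcomes: the Pareto frontier of the payoff set.
pareto = [p for p in payoffs
          if not any(q[0] >= p[0] and q[1] >= p[1] and q != p
                     for q in payoffs)]

# "Merged utility function" outcomes: maximizers of w*u_a + (1-w)*u_b
# for a few choices of bargaining weight w.
merged = set()
for w in [0.25, 0.5, 0.75]:
    merged.add(max(payoffs, key=lambda p: w * p[0] + (1 - w) * p[1]))

# Every merged-utility maximizer is a point trade could have reached.
assert merged <= set(pareto)
```

The weight `w` plays the role the agents' relative bargaining power plays in trade, which is why merging and trading can be hard to tell apart from the outside.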

Yeah, maybe; it parallels Newcomb's problem. Parents in a filial culture say something like "I choose to feed you and teach you because I can see who you are, and that you will follow through on your duties. If that changes, and I can see that you won't be filial, we owe you nothing." So the child has to internalize filial values: even though being filial in the future doesn't cause parental investment now, being the kind of person who would is thought to.

As far as I can tell, humanity has been very approximately doing this for a long time already, and calling it moral philosophy. This isn't to say that all moral philosophy is a good approach to acausal normativity, nor that many moral philosophers would accept acausal normativity as a framing of the questions they are trying to answer.

Yes, importantly: for lack of this formalism, I get the impression that Kant understood his own Categorical Imperative with less precision than Yudkowsky does, and I see no indication that Yudkowsky ever read Kant. Although this is what moral philosophers have been doing all along, it should be emphasized that they didn't understand how it emerged from decision theory; they've been deeply confused, and most of their work will want to be rewritten in this frame.

The aside about respecting boundaries should probably be removed. You don't justify or motivate boundaries well enough here, and it doesn't really seem to me that you do in the sequence either. Even if it is a useful paradigm, I question whether it has much relevance to acausalism. In my experience, a lot of negotiation theory will seem to an acausalist to be deeply premised on acausal trade, but it turns out the negotiation theory works almost exactly the same in the causal world, and we missed that because our heads aren't in that world any more.

You speak a lot about respecting boundaries. What I need to see, before I'll be convinced you're onto something here, is that you also know when to disregard a spurious boundary. A lot of boundaries in the world have been drawn arbitrarily and incorrectly and need to be violated. Examples include almost all software patents, and national borders that don't correspond to the demographic preference clusters over spatialized law. Some optional ones: racial definitions of cultural groups, or situations where an immense transition in power has occurred, such that there was never a reason for the new powers to ask consent from the old powers. For instance, if RadicalXChange or Network States gave rise to a new political system that was obviously both hundreds of times wealthier and more democratically legitimate than the old system, would you expect it to recognize the US's state borders?

How would your paradigm approach that sort of thing?

And its will follows your will. The question dissolves.

Is it sovereign, if it obeys your programming, which obligates it to empower and integrate you as soon as it can? Yes, no; it doesn't matter.
