Wiki Contributions


Ah, never mind then. I was thinking something like: let b(x,k) = 1/sqrt(2k) when |x| < k, and 0 otherwise,

then define integral B(x)f(x) dx as the limit as k->0+ of integral b(x,k)f(x) dx

I was thinking that then integral (B(x))^2 f(x) dx would be like integral delta(x)f(x) dx.

Now that I think about it more carefully, especially in light of your comment, perhaps that was naive and that wouldn't actually work. (Yeah, I can see now my reasoning wasn't actually valid there. Whoops.)
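A quick numerical check (sketch only; the test function cos is an arbitrary choice with f(0) = 1) shows both why the square looked delta-like and why the construction fails: the squared integral does converge to f(0), but the un-squared integral converges to 0, so the limiting "B" is just the zero distribution.

```python
import math

def integral_b_f(k, f, n=10000):
    # Midpoint-rule integral of b(x,k) * f(x); b vanishes outside [-k, k].
    h = 2 * k / n
    b = 1 / math.sqrt(2 * k)
    return sum(b * f(-k + (i + 0.5) * h) * h for i in range(n))

def integral_b2_f(k, f, n=10000):
    # Same, but for b(x,k)^2 = 1/(2k) on [-k, k].
    h = 2 * k / n
    b2 = 1 / (2 * k)
    return sum(b2 * f(-k + (i + 0.5) * h) * h for i in range(n))

f = math.cos  # test function with f(0) = 1
for k in (1e-1, 1e-3, 1e-5):
    print(k, integral_b_f(k, f), integral_b2_f(k, f))
# As k -> 0+: integral of b^2 * f -> f(0) = 1 (delta-like),
# but integral of b * f -> 0, so B itself is the zero distribution.
```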

Ah well. thank you for correcting me then. :)

I'm not sure the commission/omission distinction is really the key here. This becomes clearer by inverting the situation a bit:

Some third party is about to forcibly wirehead all of humanity. How should your moral agent reason about whether to intervene and prevent this?

Aaaaarggghh! (sorry, that was just because I realized I was being stupid... specifically that I'd been thinking of the deltas as orthonormal because the integral of a delta = 1.)

Though... it occurs to me that one could construct something that acted like a "square root of a delta", which would then make an orthonormal basis (though still not part of the Hilbert space).

(EDIT: hrm... maybe not)

Anyways, thank you.

I meant to reply to this a while back. This is probably a stupid question, but...

The uncountable set that you would intuitively think is a basis for Hilbert space, namely the set of functions which are zero except at a single value where they are one, is in fact not even a sequence of distinct elements of Hilbert space, since all these functions are zero almost everywhere, and are therefore considered to be equivalent to the zero function.

What about the semi-intuitive notion of having the Dirac delta distributions as a basis? I.e., a basis of delta(X - R) functions parameterized by the vector R? How does that fit into all this?

Ah, alright.

Actually, come to think of it, even specifying the desired behavior would be tricky. Like, if the agent assigned a probability of 1/2 to the proposition that tomorrow it would transition from v to w, or held some other form of mixed hypothesis about possible future transitions, what rules should an ideal moral-learning reasoner follow today?

I'm not even sure what it should be doing. Mix over normalized versions of v and w? What if at least one is unbounded? Yeah, on reflection, I'm not sure what the Right Way for a "conserves expected moral evidence" agent is. There are some special cases that seem to be well specified, but I'm not sure how I'd want it to behave in the general case.
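For the special case that does seem well specified (both candidate value functions bounded and already on a common scale), one version of "act on expected future values" can be sketched like this. Everything here is hypothetical: the actions, the credence, and the utility functions v and w are illustrative stand-ins, not anyone's actual proposal.

```python
def expected_utility(action, utilities, credences):
    # Mix candidate utility functions by the agent's credence in
    # ending up holding each one. This assumes every utility is
    # bounded and normalized to a common scale -- exactly the step
    # that becomes ill-defined if one of them is unbounded.
    return sum(p * u(action) for u, p in zip(utilities, credences))

v = lambda a: {"A": 1.0, "B": 0.0}[a]  # current values: prefer A
w = lambda a: {"A": 0.0, "B": 1.0}[a]  # possible future values: prefer B
p_transition = 0.9  # credence that tomorrow v gets replaced by w

actions = ["A", "B"]
best = max(actions,
           key=lambda a: expected_utility(a, [v, w],
                                          [1 - p_transition, p_transition]))
# With high credence in the transition, the agent already acts on w.
```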

Not sure. They don't actually tell you that.

Really interesting, but I'm a bit confused about something. Unless I misunderstand, you're claiming this has the property of conservation of moral evidence... But near as I can tell, it doesn't.

Conservation of moral evidence would imply that if it expected that tomorrow it would transition from v to w, then right now it would be acting on w rather than v (except for being indifferent as to whether or not it actually transitions to w). But what you have here, if I understood you correctly, will act on v until the moment it transitions to w, even though it knew in advance that it was going to transition.
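To make the contrast concrete, here is a toy sketch (entirely hypothetical names and values, not the actual proposal) of the two policies side by side:

```python
def act(utility, actions):
    # Pick the action the given utility function ranks highest.
    return max(actions, key=utility)

v = lambda a: {"A": 1.0, "B": 0.0}[a]  # values before the transition
w = lambda a: {"A": 0.0, "B": 1.0}[a]  # values after the transition
actions = ["A", "B"]

# Policy as I understood the proposal: use the current values v right
# up until the transition, even while knowing the switch to w is coming.
before = act(v, actions)  # acts on v today
after = act(w, actions)   # policy flips at the moment of transition

# A policy that conserved moral evidence would already be acting on w
# today (it fully expects the transition), so nothing would flip.
anticipating = act(w, actions)
```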

Yeah, I found that out during the final interview. Sadly, I learned several days ago that they rejected me, so it's sort of moot now.

Alternatively, you might have alternative hypotheses that explain the absence equally well, but with a much higher complexity cost.
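As a toy illustration (all numbers hypothetical): if two hypotheses predict the observed absence equally well, the likelihoods cancel and the posterior ratio reduces to the prior ratio, so the complexity penalty alone decides.

```python
# Occam-style prior: probability 2^-(description length in bits).
likelihood = {"simple": 0.9, "complex": 0.9}        # both explain the absence
description_length = {"simple": 10, "complex": 25}  # hypothetical bit costs
prior = {h: 2.0 ** -l for h, l in description_length.items()}

unnorm = {h: prior[h] * likelihood[h] for h in prior}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}
# Equal likelihoods mean the posterior ratio equals the prior ratio,
# so the complex alternative ends up ~2^15 times less probable.
```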

Hey there, I'm mid application process. (They're having me do the prep work as part of the application.) Anyways...

B) If you don't mind too much: stay at App Academy. It isn't comfortable but you'll greatly benefit from being around other people learning web development all the time and it will keep you from slacking off.

I'm confused about that. App Academy has housing/dorms? I didn't see anything about that. Or did I misunderstand what you meant?
