Just the facts, ma'am!

The two statements above are (as far as I can tell) facts.

Open & Welcome Thread November 2021

You're probably looking for this (via the old FAQ).

Jacob's Twit, errr, Shortform

(upvoted partially to create an incentive to crosspost good twitter threads on the LW shortform)

Using Brain-Computer Interfaces to get more data for AI alignment

Nice article! I can't find anything I disagree with, and especially like the distinction between enhancement, merging and alignment aid.

Also, good point about the grounding in value learning. Outside of value learning, perhaps one wouldn't need to ground BCI signals in actual sentiments at all? Especially if we decide to focus more on human imitation, the raw signals alone might be enough. Or we could learn to extract some representation of inconsistent proto-preferences from the BCI data and then apply methods to make them consistent (though that might require a much more detailed understanding of the brain).

There's also a small typo where you credit Anders "Samberg" instead of "Sandberg", unless there's two researchers with very similar names in this area :-)

Maybe one could tag posts by EleutherAI with it.

Samuel Shadrach's Shortform

Just FYI: I've become convinced that most online discussions involving a lot of context are much better settled through conversation, so if you want, we could also talk about this over an audio call.

Samuel Shadrach's Shortform

Why do you feel it would be the right decision to kill one? Who defines "right"?

I define "right" to be what I want, or, more exactly, what I would want if I knew more, thought faster and was more the person I wish I could be. This is of course mediated by considerations on ethical injunctions, when I know that the computations my brain carries out are not the ones I would consciously endorse, and refrain from acting since I'm running on corrupted hardware. (You asked about the LCPW, so I didn't take these into account and assumed that I could know that I was being rational enough).

It's been a while since I read Thou Art Godshatter and the related posts, so maybe I'm conflating the message in there with things I took from other LW sources.

Samuel Shadrach's Shortform

I believe that in the LCPW it would be the right decision to kill one person to save two, and I also predict that I wouldn't do it anyway, mainly because I couldn't bring myself to do it.

In general, I understood the Complexity of Value sequence to be saying "The right way to look at ethics is consequentialism, but utilitarianism specifically is too narrow, and we want to find a more complex utility function that matches our values better."

niplav's Shortform

I would like to have a way to reward people for updating their existing articles (and to be rewarded myself for updating my posts), since I believe that's important (and neglected!) work, but afaik there's no way to make the LW interface show me new updates or let me vote on them.

niplav's Shortform

Yesterday I sent 0.00025₿ to Nuño Sempere for his very helpful comments on a post of mine.

I did so as part of a credit-assignment ritual described by Zack M. Davis in order to increase the measure of Nuño Sempere-like algorithms in existence.
