Simulation_Brain

Comments

Superintelligence 19: Post-transition formation of a singleton

Really? Can you say a little more about why you think you have that value? I guess I'm not convinced that it's really a terminal value if it varies so widely across people of otherwise similar beliefs. Presumably that's what lalartu meant as well, but I just don't get it. I like myself, so I'd like more of myself in the world!

How to Beat Procrastination

Perhaps you're thinking of the dopamine spike when reward is actually given? I had thought the predictive spike was purely proportional to the odds of success and the amount of reward, which would indeed change with boring tasks, but not in any linear way. If you're right about that basic structure of the predictive spike, I should know about it for my research; can you give a reference?
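To make that concrete, here's the model I have in mind as a toy sketch (Python, with made-up numbers; nothing here is fitted to data):

```python
# Toy model: predictive dopamine spike as expected reward,
# spike proportional to p(success) * reward magnitude.
# Illustrative numbers only.

def predictive_spike(p_success: float, reward_magnitude: float) -> float:
    """Expected-reward model of the predictive spike."""
    return p_success * reward_magnitude

# On this model, a boring task lowers the spike only through these two
# factors, e.g. via a smaller subjective reward magnitude:
print(predictive_spike(p_success=0.9, reward_magnitude=1.0))  # engaging task
print(predictive_spike(p_success=0.9, reward_magnitude=0.2))  # boring task
```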

Book review: The Reputation Society. Part II

Less Wrong seems like the ideal community to think up better reputation systems. Doctorow's Whuffie is reasonably well thought out, though intended for a post-scarcity economy; its idea of distinguishing right-handed reputation (from people who generally agree with you) from left-handed reputation (from people who generally don't) seems like one useful ingredient. Reducing the influence of those who tend to vote together seems like another potential win.
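As a purely illustrative sketch of that last idea (the threshold and weighting rule are my own invented stand-ins, not a worked-out proposal), one could shrink the weight of voters who form a tightly correlated bloc:

```python
import numpy as np

# Each row of `votes` is one user's votes (+1/-1/0) across a set of items.
# Users whose voting histories correlate strongly with many others are
# down-weighted, so a bloc voting in lockstep counts for little more than
# one independent voter.

def bloc_weights(votes: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    corr = np.corrcoef(votes)                    # pairwise vote correlations
    np.fill_diagonal(corr, 0.0)
    bloc_size = (corr > threshold).sum(axis=1)   # count of highly correlated peers
    return 1.0 / (1.0 + bloc_size)               # shrink weight as the bloc grows

votes = np.array([
    [ 1,  1, -1,  1],   # these two users vote identically...
    [ 1,  1, -1,  1],
    [-1,  1,  1, -1],   # ...this one votes independently
])
weights = bloc_weights(votes)       # -> [0.5, 0.5, 1.0]
item_scores = weights @ votes       # weighted score per item
print(weights, item_scores)
```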

I like to imagine a face-based system: snap an image with a smartphone, and access that person's reputation.

I hope to see more discussion, in particular of VAuroch's suggestion.

AI risk, executive summary

I think the example is weak: the software was not that dangerous; the researchers were idiots who broke a vial they knew was insanely dangerous.

I think it dilutes the argument to broaden it to software in general; it could be very dangerous under exactly those circumstances (with terrible physical safety measures), but the dangers of superhuman AGI are vastly larger IMHO and deserve to remain the focus, particularly of the ultra-reduced bullet points.

I think this is as crisp and convincing a summary as I've ever seen; nice work! I also liked the book, but condensing it even further is a great idea.

The Evil AI Overlord List

"Pleased to meet you! Soooo... how is YOUR originating species doing?..."

That actually seems like an extremely reasonable question for the first interstellar meeting of superhuman AIs.

I disagree with EY on this one (I rarely do). I don't think the possibility is so likely as to ensure that a rational actor behaves Friendly, but I do think that the possibility of encountering an equally powerful AI, one with a head start on resource acquisition, shouldn't be dismissed by a rational actor.

LWers living in Boulder/Denver area: any interest in an AI-philosophy reading group?

I'm game. These are some of my favorite topics. I do computational cognitive neuroscience, and my principal concern with it is how it can/will be used to build minds.

I may be confused, but it seems to me that the issue in generalizing from decision utility to utilitarian utility simply comes down to making an assumption that allows utilities among different people to be compared, putting them on the same scale. I think there's a pretty strong argument that we can do so, springing from the fact that we are all running essentially the same neural hardware. Whatever experiential value is, it's made of patterns of neural firing, and we all have basically the same patterns. While we don't all run our brains exactly the same, the mood- and reward-processing circuitry is pretty tightly feedback-controlled, so saying that everyone's relative utilities are equal shouldn't be too far from the truth.

But that's when one adopts an unbiased view. Neither I nor (almost?) anyone else in history has done so. We consider our own happiness more important than anyone else's; we weight it higher in our own decisions, and that's perfectly rational. The end point of this line of logic is that there is no objective ethics: it's up to the individual.

But there is one ethics that makes more sense than the others when making group decisions, and that's sum utilitarianism. It's the best candidate for an AI's utility function. Approximations must be made, but they will be approximately right, and they can be improved by simply asking people about their preferences.

The common philosophical concern that you can't put different individuals' preferences on the same scale does not hold water when held up against our current knowledge of how brains register value and so create preferences.
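A minimal sketch of the aggregation I have in mind (the zero-mean, unit-variance normalization is my stand-in assumption for "everyone's relative utilities are equal", not a settled answer):

```python
import numpy as np

# Sum utilitarianism with cross-person comparison. Each row holds one
# person's raw utilities over the same options, in whatever private
# units they like. Normalizing each row to zero mean and unit variance
# encodes the assumption that everyone's *relative* utilities count equally.

raw = np.array([
    [10.0,  2.0,  5.0],   # person A's utilities for options 0..2
    [ 0.1,  0.9,  0.5],   # person B uses a different scale entirely
])

norm = (raw - raw.mean(axis=1, keepdims=True)) / raw.std(axis=1, keepdims=True)
social = norm.sum(axis=0)        # sum-utilitarian score per option
print(social, social.argmax())   # choose the option with the highest total
```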

Meetup : Lesswrong Boulder CO

I'm out of town or I'd be there. Hope to catch the next one.

Luck II: Expecting White Swans

Wow, I feel for you. I wish you good luck and good analysis.

Meetup : Meetup Boulder CO

Ha! I was there the week prior. I hope this is going to happen again. Note also that I'm re-launching a defunct Singularity meetup group for Boulder/Broomfield, if anyone is interested.
