All of xelxebar's Comments + Replies

Somehow I get the feeling that most commenters haven't yet read the actual paper. Reading it would clear up a lot of the confusion.

I imagine that a sufficiently high-resolution model of human cognition et cetera would factor into sets of individual equations for calculating variables of interest, similar to how Newtonian models of planetary motion do.

However, I don't see why the equations themselves, sitting on disk or in memory, should pose a problem.

When we want particular predictions, we would have to instantiate these equations somehow, either by plugging x=3 into F(x) or by evaluating a differential equation with x=3 as an initial condition. It would depend on the specifics of the... (read more)
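To make that distinction concrete, here is a minimal sketch in Python. The functions F and dxdt are arbitrary stand-ins of my own, not anything from the paper; the point is only the contrast between direct evaluation and evolving a system from an initial condition:

```python
# Two ways of "instantiating" an equation to get a prediction.
from scipy.integrate import solve_ivp

# Case 1: closed form. "Plugging in x = 3" is just evaluation.
def F(x):
    return x**2 + 1  # stand-in for an already-solved equation

print(F(3))  # -> 10

# Case 2: differential equation. Here x = 3 enters as an initial
# condition, and predictions come from evolving the system in time.
def dxdt(t, x):
    return -0.5 * x  # stand-in dynamics: simple exponential decay

sol = solve_ivp(dxdt, t_span=(0.0, 4.0), y0=[3.0])
print(sol.y[0, -1])  # x(4) ~= 3 * exp(-2) ~= 0.406
```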

Please forgive this post. There are some forgotten escaped characters, and when I went to edit it, I ended up creating a separate post instead.

This may be nitpicky, but I found an erratum in the references: I believe [3] should be 1993 instead of 1995.

That said, three of the links are broken for me ([4], [6], and [7]), and the working links don't currently seem to provide full-text access. So here's an updated references table with a full-text link for each entry, except the book in [3], which has an Amazon link instead:

[1] Desvousges, W., Johnson, R., Dunford, R., Boyle, K. J., Hudson, S., and Wilson, K. N. (1992). Measuring non-use damages using contingent valuation: experimental evaluation ac... (read more)


I don't know about you guys, but being wrong scares the crap out of me. Or, to put it another way, I'll do whatever it takes to get it right. It's a recursive sort of doubt.

This post inspires wtf moments in my brain. Anyone here read Greg Egan's Permutation City?

Now I find myself asking "What is going on such that I feel like there is this quantity, time?" instead of "What is time?"

"If you took one world and extrapolated backward, you'd get many pasts. If you take the many worlds and extrapolate backward, all but one of the resulting pasts will cancel out! Quantum mechanics is time-symmetric."

My immediate thought when reading the above: when extrapolating forward, do we get cancellation as well? Born probabilities?

1 · wizzwizz4 · 3y
We do get some; inside a quantum computer, for example, impossible world-states cancel. But nowhere near as much.
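For a concrete toy version of that cancellation (my own illustration, not wizzwizz4's): applying a Hadamard gate twice returns a qubit to |0> exactly because the two computational paths into |1> carry opposite amplitudes and cancel:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
ket0 = np.array([1.0, 0.0])                   # the |0> state

after_one = H @ ket0       # [0.707, 0.707]: both outcomes have amplitude
after_two = H @ after_one
# The two paths to |1> contribute +1/2 and -1/2 and cancel exactly:
print(after_two)           # -> [1., 0.], i.e. |0> again
```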

I notice that I'm a bit confused, especially when reading "programming a machine superintelligence to maximize pleasure." What would this mean?

It also seems like some arguments are going on in the comments about the definitions of "like", "pleasure", "desire", etc. I'm tempted to ask everyone to pull out the taboo game on these words here.

A helpful direction I see this article pointing toward, though, is how we personally evaluate an AI's behavior. Of course, by no means does an AI have to mimic human internal workings 1... (read more)

You might be interested in the Allais paradox, an example of humans demonstrating behavior that doesn't maximize any utility function. If you're familiar with the von Neumann-Morgenstern characterization of utility functions, this becomes clearer than it would be from just knowing what a utility function is.
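For readers who haven't seen it, here is the standard version of the paradox (the usual textbook payoffs, not numbers from this thread), with a quick brute-force check that no utility assignment rationalizes the common choice pattern:

```python
# Standard Allais gambles (textbook payoffs, not from this thread):
#   A: $1M for sure            B: 89% $1M, 10% $5M, 1% $0
#   C: 11% $1M, 89% $0         D: 10% $5M, 90% $0
# Most people prefer A over B but D over C. Expected-utility
# maximization forbids that pattern for ANY utility assignment, since
#   A > B  <=>  0.11*u(1M) > 0.10*u(5M) + 0.01*u(0)
#   D > C  <=>  0.10*u(5M) + 0.01*u(0) > 0.11*u(1M),
# which contradict each other. A numerical spot check:
import random

def eu(gamble, u):
    """Expected utility of a gamble given as [(prob, outcome), ...]."""
    return sum(p * u[x] for p, x in gamble)

A = [(1.00, "1M")]
B = [(0.89, "1M"), (0.10, "5M"), (0.01, "0")]
C = [(0.11, "1M"), (0.89, "0")]
D = [(0.10, "5M"), (0.90, "0")]

for _ in range(100_000):
    u = {k: random.random() for k in ("0", "1M", "5M")}
    assert not (eu(A, u) > eu(B, u) and eu(D, u) > eu(C, u))
print("no sampled utility function allows both A>B and D>C")
```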

0 · voi6 · 10y
Sorry to respond to this two years late. I'm aware of the paradox and of the VNM theorem. Just because humans are inconsistent/irrational doesn't mean they aren't maximizing a utility function, however.

Firstly, you can have a utility function and just be bad at maximizing it (yes, this contradicts the rigorous mathematical definitions we all know and love, but English doesn't always bend to their will, and we both know what I mean without my having to be pedantic, gentlemen that we are).

Secondly, if you consider each subsequent dollar you attain to be less valuable, the behavior makes perfect sense. This is applied in tournament poker, where taking a 50:50 chance of either going broke or doubling your stack is considered a terrible play: the former outcome guarantees you lose your entire entry fee, while the latter gives you an expected winning value that is less than your entry fee. This can be seen with a simple calculation, or by noting that if everyone plays that aggressively, I can do nothing at all and still make it into the prize pool, because the other players will eliminate each other faster than the blinds eat away at my stack.

But I digress. Let's cut to the chase. You can do what you want, but you can't choose your wants. Along the same lines, a straight man, no matter how intelligent he becomes, will still find women arousing. An AI can be designed with the motives of a selfless, benevolent human (the so-called Artificial Gandhi Intelligence), and this will be enough. Ultimately, humans want to be satisfied, and if it's not in their nature to be permanently so, then they will concede to changing their nature with FAI-developed science.
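That tournament-poker claim really can be seen with a simple calculation. Below is a toy sketch using the standard Independent Chip Model (ICM) of tournament equity; the stack sizes and payouts are made up for illustration, since voi6 gives no numbers:

```python
# Toy ICM calculation: a chip-EV-neutral coin flip loses prize equity.
from itertools import permutations

def icm_equity(stacks, payouts):
    """Expected prize per player under ICM: P(1st) is proportional to
    stack size, then recurse over the remaining finishing places."""
    eq = [0.0] * len(stacks)
    for order in permutations(range(len(stacks))):
        p, remaining = 1.0, float(sum(stacks))
        for player in order:
            p *= stacks[player] / remaining
            remaining -= stacks[player]
        for place, player in enumerate(order):
            if place < len(payouts):
                eq[player] += p * payouts[place]
    return eq

# Three players, $10 buy-ins, payouts $15/$9/$6, equal stacks of 100.
before = icm_equity([100, 100, 100], [15, 9, 6])[0]   # $10.00

# Player 0 takes a 50:50 flip against player 1 for full stacks.
# Win: players 0 and 2 contest 1st/2nd money; lose: bank 3rd place ($6).
win  = icm_equity([200, 100], [15, 9])[0]             # $13.00
lose = 6.0
after = 0.5 * win + 0.5 * lose                        # $9.50

print(before, after)  # the "fair" flip costs $0.50 in prize equity
```

Doubling the stack less than doubles the prize equity, so the chip-EV-neutral flip is negative in dollar terms, which is exactly the diminishing-marginal-value point above.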
0 · CuSithBell · 12y
That's not exactly true. The Allais paradox does help to demonstrate why explicit utility functions are a poor way to model human behavior, though.