I'm posting on behalf of my friend, an aspiring AI researcher in his early 20s who is looking to live with like-minded individuals. He currently lives in Southern California but is open to relocating (preferably within the USA, especially California).

Please message jeffreypythonclass+ea@gmail.com if you're interested!

I find it disappointing that this post has not received more attention. (If it has elsewhere, please let me know.)


Re: Jessica, I find the comparison to chemistry accurate only if we are doing decision theory in order to build useful agents, make helpful predictions, and so on -- just as chemistry was used to "make useful physical materials."

But this does not seem to be the point of most discussion about decision theory. If debates were framed in terms of "what sort of decision theory should we build into the choice-making architecture of an agent if our goal is to ___ (maximize utility, etc.)," then the discussion would look very different.

We wouldn't, for instance, have people arguing about whether we should engage in multiverse-wide cooperation because of evidential decision theory. In my eyes, that discussion looks suspiciously like a debate about some independent fact of the matter concerning rationality, rather than one that treats decision theory as just a useful abstraction.


Re: Guy

Likewise, in my eyes this does not seem like the point of most discussion about decision theory. People do not seem to be merely comparing different formal models, noting when they are equivalent, and so on. Your last sentence seems to move away from this view, since you say that "certain decision theories are just straight up better than others assuming you have the power to implement them." But this talk of being "straight up better" is precisely the sort of realism we are challenging. What does it mean for one decision theory to be better? Don't we need to specify some criterion of evaluation?
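
To make that last question concrete, here is a minimal sketch (my own toy example, not anything from the post) that fixes a criterion of evaluation first -- expected monetary payoff in Newcomb's problem, with the standard stipulated payoffs and an assumed predictor accuracy p -- and only then compares the two policies:

```python
# Toy illustration: "better" is only well-defined relative to an explicit
# criterion. Here the criterion is expected monetary payoff in Newcomb's
# problem, against a predictor with assumed accuracy p.

def expected_payoff(one_box: bool, p: float = 0.99) -> float:
    """Expected payoff of a fixed policy given predictor accuracy p."""
    if one_box:
        # The predictor fills the opaque box ($1M) with probability p.
        return p * 1_000_000
    else:
        # The two-boxer always takes the visible $1k; the opaque box is
        # full only when the predictor errs (probability 1 - p).
        return 1_000 + (1 - p) * 1_000_000

for policy in (True, False):
    label = "one-box" if policy else "two-box"
    print(f"{label}: ${expected_payoff(policy):,.0f}")
# one-box: $990,000
# two-box: $11,000
```

On this criterion and these assumptions, one-boxing scores higher; change the criterion or the assumed accuracy and the ranking can change. That is exactly the sense in which "straight up better" needs a specified standard of evaluation rather than an appeal to some free-floating fact about rationality.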