Comments

Ali · 2mo · 20

I've only read a couple of pieces on corrigibility, but I'm curious why the following scheme wouldn't work, or why I haven't seen any work in this direction (from my admittedly very brief scan).

Suppose that at every time t some decision maker chooses a probability p(t) with which the AI is shut down. Further, suppose the AI's utility in any period is initially U(t). Now scale the AI's utility function to be U(t)/(1 - p(t)). I think this can be quite easily generalized so that future periods of utility are likewise unaffected by an increase in the risk of shutdown (e.g. scale them by the product of (1 - p(τ)) over all intervening time periods).
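To sketch why this should give indifference (a minimal check, under the extra assumption that shutdown at time t yields zero utility for that period and that p(t) < 1):

\[
\mathbb{E}[\text{utility at } t] = (1 - p(t)) \cdot \frac{U(t)}{1 - p(t)} + p(t) \cdot 0 = U(t),
\]

which does not depend on p(t), so changing the shutdown probability leaves the AI's expected utility unchanged.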

In this world the AI should be indifferent to changes in p(t) (as long as p(t) can get arbitrarily close to 1 but never reaches it), and so it should take actions that maximize U while being indifferent to whether humans decide to shut it down.

Ali · 2mo · 32

I don't really understand the concern here. From Google's perspective there was basically a near certainty that the NYT/media/some people on Twitter would have complaints about Gemini regardless of the model's outputs. Their options were either i) have the complaints be about the model apparently offering advice on 'dangerous' topics or perpetuating biases, both of which are conducive to requests for more regulation, or ii) cause some outrage/counter-outrage about 'wokeness', which may drive some clicks but is unlikely to make any serious policymaker want to regulate, and which, once quietly fixed a couple of weeks later, will have caused basically no actual economic or other harms or costs. Further, with ii) you get some evidence/talking points for all the many times in the upcoming years that people push for regulation forcing AI models to censor certain outputs, or for anti-bias/discrimination proposals. It seems to me that ii) is preferable strongly enough that one would want to basically guarantee the model errs towards ii) rather than i) when first releasing it.

Ali · 6mo · 50

Enjoyed reading this (although I'm still making my way through the latter parts); I think it's useful to have a lot of these ideas more formalised.

I'm wondering whether this simple idea would run up against Theorem 2 (or any of the others), or why it wouldn't work.

Suppose that at every time t some decision maker chooses a probability p(t) with which the AI is shut down. Further, suppose the AI's utility in any period is initially U(t). Now scale the AI's utility function to be U(t)/(1 - p(t)). I think this can be quite easily generalised so that future periods of utility are likewise unaffected by an increase in the risk of shutdown.

In this world the AI should be indifferent to changes in p(t) (as long as p(t) can get arbitrarily close to 1 but never reaches it), and so it should take actions that maximise U while being indifferent to whether humans decide to shut it down.
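As a rough numerical check of the multi-period version (just an illustrative sketch: the utilities, the shutdown probabilities, and the assumption that shutdown in a period happens before any utility is received are all made up for the example):

```python
import random

def expected_scaled_utility(utilities, shutdown_probs, n_trials=200_000):
    """Monte Carlo estimate of total realised utility when the period-t utility
    U(t) is rescaled by 1 / prod_{tau <= t} (1 - p(tau)), and the AI is shut down
    (receiving nothing in that or any later period) with probability p(t)."""
    total = 0.0
    for _ in range(n_trials):
        survival_prob = 1.0
        for u, p in zip(utilities, shutdown_probs):
            if random.random() < p:          # decision maker shuts the AI down
                break
            survival_prob *= (1.0 - p)       # product of (1 - p(tau)) so far
            total += u / survival_prob       # rescaled utility for this period
    return total / n_trials

base_utilities = [1.0, 2.0, 0.5]

# Two very different shutdown policies give (approximately) the same expected
# total, equal to sum(base_utilities) = 3.5, so the AI gains nothing by pushing
# p(t) up or down.
print(expected_scaled_utility(base_utilities, [0.1, 0.1, 0.1]))
print(expected_scaled_utility(base_utilities, [0.5, 0.9, 0.2]))
```

Both printed estimates come out near 3.5, which is the point: expected utility is pinned to the unscaled sum regardless of the shutdown policy.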

Ali · 3y · 60

Great read. One thing I'd add to the housing discussion, which I think is sometimes neglected, is that unlike essentially all other assets, everyone is naturally short housing: you're always going to need somewhere to live. As such, buying roughly as much housing as you plan to use in the future (this can also be done by investing disproportionately in housebuilding stocks or by buying shares of various properties) can reduce your exposure to future price increases (which historically have been large, especially if you value living in cities/desirable locations).

Housing could therefore be a sensible investment even if you thought an index fund would yield slightly higher returns. This effect is further heightened by the fact that most individuals have extremely limited access to leverage unless they're buying a house, and by the tax advantages associated with a mortgage.
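A toy illustration of the "naturally short" point (every number here, including the savings, the future house cost, the index-fund return, and the price-growth scenarios, is a made-up assumption purely for the sketch):

```python
# Someone who will need to buy a (say) 500k home in the future is effectively
# short housing. Holding housing exposure today hedges that liability; holding
# an index fund instead leaves the outcome exposed to house-price moves.

def net_wealth(hold_housing_exposure: bool, house_price_growth: float) -> float:
    savings = 500_000                              # assumed current savings
    future_house_cost = 500_000 * (1 + house_price_growth)
    if hold_housing_exposure:
        savings *= (1 + house_price_growth)        # tracks house prices
    else:
        savings *= 1.30                            # assumed index-fund return
    return savings - future_house_cost

for growth in (0.10, 0.60):                        # modest vs. large price rise
    print(f"house prices +{growth:.0%}: "
          f"hedged {net_wealth(True, growth):+,.0f}, "
          f"unhedged {net_wealth(False, growth):+,.0f}")
```

The hedged outcome is the same in both scenarios, while the unhedged one can end up better or worse depending on house prices, which is the sense in which buying housing reduces risk even if its expected return is a bit lower.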

Ali · 4y · 90

Regarding Ozzie's point above about markets where agents can pay to acquire costly information: this is the famous Grossman-Stiglitz paradox (Grossman & Stiglitz, 1980), and indeed their paper concludes that informationally efficient markets are impossible in this setting.

https://en.wikipedia.org/wiki/Grossman-Stiglitz_Paradox