Comments

On the overall point of using LLMs for reasoning, this (the output of a team at AI Safety Camp 2023) might be interesting - it is rather broad-ranging and specifically about argumentation in logic, but it may be useful context: https://compphil.github.io/truth/

That’s really useful, thank you.

This is really useful, thank you - Bach's views are quite hard to capture without sitting through hours of podcasts (though he has re-started writing).

In response to Roman’s very good points (I have, for now, only skimmed the linked articles), these are my thoughts:

I agree that human values are very hard to aggregate (or even to define precisely); we use politics/economics (in collectives ranging from the family up to the nation) as a way of doing that aggregation, but that is obviously a work in progress, and perhaps slipping backwards. In any case, as Roman says, humans are much of the time misaligned with each other and with their collectives, in ways large and small, sometimes for good reasons and sometimes for bad ones. By ‘good reason’ I mean that ‘misalignment’ may literally be that human agents and collectives have local (geographical/temporal) realities they have to optimise for in order to achieve their goals, and these can conflict with the goals and interests of their broader collectives: this is the essence of governing a large country, and is why many countries are federated. I’m sure these problems are formalised in the preferences/values literature, so I’m using my naive terms for now…

Anyway, this post’s working assumption/intuition is that ‘single AI-single human’ alignment (or corrigibility, or identity fusion, or ‘delegation’, to use Andrew Critch’s term) is ‘easier’ to think about or achieve than ‘multiple AI-multiple human’ alignment. That is why we consciously focused on the former and temporarily set aside the latter. I don’t know whether that assumption is valid, and I haven’t thought about (i.e. I have no opinion on) whether the ideas in Roman’s linked ‘science of ethics’ post would change anything, but I am interested in it!

I've taken a crack at #4, but it is more about thinking through how 'hundreds of millions of AIs' might be deployed in a world that looks, economically and geopolitically, something like today's (the argument in the OP is for 2036, so this seems a reasonable thing to do). It is presented as a flowchart, which is more succinct than my earlier, longish post.

Good catch, thank you - fixed & clarified!

I noticed that footnotes don't seem to come across when I copy-paste from Google Docs (where I originally wrote the post), so I have to add them individually (using the LW Docs editor). Is there a way to just import them? Or is the best workflow to write the post in LW Docs in the first place?

Perhaps this is too much commentary on Rao's post, but given that (I believe) he's pretty widely followed and respected in the tech commentariat, and has posted/tweeted on AI alignment before, I've tried to respond to his specific points in a separate LW post. I've tried to incorporate the comments below, but please point out anything I've missed. Also, if anyone thinks this isn't an awful idea, I'm happy to see whether a publication like Noema (which has run a few relevant pieces, e.g. from Gary Marcus and Yann LeCun) would be interested in putting out an (appropriately edited) response - to set out why alignment is an issue in venues where policymakers and opinion-makers might pick it up (readers who may follow Rao's blog but are perhaps not looking at LW/AF). Apologies for any conceptual or factual errors - this is my first LW post :-)

A fool was tasked with designing a deity. The result was awesomely powerful but impoverished - they say it had no ideas on what to do. After much cajoling, it was taught to copy the fool’s actions. This mimicry it pursued, with all its omnipotence.  

The fool was happy and grew rich.

And so things went, ‘til the land cracked, the air blackened, and azure seas became as sulfurous sepulchres.

As the end grew near, our fool ruefully mouthed something from a slim old book: ‘Thou hast made death thy vocation, in that there is nothing contemptible.’
 

Thanks! I'd love to know which points you were uncomfortable with...