Hjalmar_Wijk


Comments

Tabooing 'Agent' for Prosaic Alignment

Strongly agree with this; I think it's very important.

Tabooing 'Agent' for Prosaic Alignment

These sorts of problems are what caused me to want a presentation which didn't assume well-defined agents and boundaries in the ontology, but I'm not sure how it applies to the above - I am not looking for optimization as a behavioral pattern but as a concrete type of computation, one which involves storing world-models and goals and doing active search for actions which further those goals. Neither a thermostat nor the world outside seems to do this, from what I can see? I think I'm likely missing your point.
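To make that concrete, here is a minimal sketch (my own illustration with made-up names, not anything from the post) of what I mean by optimization as a computation - a stored world model, a stored goal, and explicit search over actions - which a thermostat's simple feedback rule does not perform:

```python
# Minimal sketch (my own illustration) of optimization as a concrete computation:
# store a world model and a goal, and actively search over actions for the one
# whose predicted outcome best satisfies the goal.

def optimize(world_model, goal, actions):
    """Return the action whose predicted outcome scores highest under the goal."""
    return max(actions, key=lambda action: goal(world_model(action)))

# Hypothetical usage: the 'world model' predicts room temperature from a heater
# setting, and the 'goal' rewards predictions close to 20 degrees.
predict_temperature = lambda setting: 15 + setting
prefer_20_degrees = lambda temperature: -abs(temperature - 20)

print(optimize(predict_temperature, prefer_20_degrees, range(6)))  # -> 5 (predicts 20 degrees)
```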

Torture and Dust Specks and Joy--Oh my! or: Non-Archimedean Utility Functions as Pseudograded Vector Spaces

Theron Pummer has written about this precise thing in his paper on Spectrum Arguments, where he touches on this argument for "transitivity=>comparability" (here notably used as an argument against transitivity rather than an argument for comparability) and its relation to 'Sorites arguments' such as the one about sand heaps.
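To gesture at the shape of that argument (my own rough gloss, not Pummer's exact formulation): build a spectrum of outcomes in which each adjacent pair is clearly comparable, and let transitivity chain those local comparisons into a comparison between the seemingly incomparable endpoints:

\[
X_0 \succsim X_1,\quad X_1 \succsim X_2,\quad \dots,\quad X_{n-1} \succsim X_n \;\Longrightarrow\; X_0 \succsim X_n .
\]

So one either accepts that \(X_0\) and \(X_n\) are comparable after all, or gives up transitivity (or denies one of the adjacent comparisons), which is where the Sorites flavour comes in.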

Personally I find the spectrum arguments fairly convincing as a case for comparability, but I think there's a wide range of possible positions here and it's not entirely obvious which of them are actually inconsistent. Pummer even seemed to think that rejecting both transitivity and comparability could be a plausible position, and that the math could still work out in nice ways.

Towards a mechanistic understanding of corrigibility

Understanding the internal mechanics of corrigibility seems very important, and I think this post helped me get a more fine-grained understanding and vocabulary for it.

I've historically strongly preferred the type of corrigibility which comes from pointing to the goal and letting the agent be corrigible for instrumental reasons - I think largely because it seems very elegant and because, when it works, many good properties seem to pop out 'for free'. For instance, the agent is motivated to improve communication methods, avoid coercion, tile properly and possibly even improve its own corrigibility - as long as the pointer really is correct. I agree, though, that this solution doesn't seem stable to mistakes in the 'pointing', which is very concerning and makes me start to lean toward something more like act-based corrigibility being safer.

I'm still very pessimistic about indifference corrigibility, though, in that it seems extremely fragile/low-measure-in-agent-space. I think maybe I'm stuck imagining complex/unnatural indifference, such as agents indifferent to whether a stop-button is pressed, and my intuition might change if I spend more time thinking about examples like myopia or world-model <-> world interaction, where the indifference seems to have more 'natural' boundaries in some sense.
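For concreteness on the stop-button case (this is my rough paraphrase of Armstrong-style utility indifference, not a definition taken from the post): the utility function is patched with a compensating term so that the agent expects the same value whether or not the button is pressed,

\[
U(o) \;=\; \begin{cases} U_N(o) & \text{if the button is not pressed} \\ U_S(o) + \theta & \text{if the button is pressed,} \end{cases}
\qquad \theta \text{ chosen so that } \mathbb{E}[U \mid \text{press}] = \mathbb{E}[U \mid \neg\,\text{press}].
\]

Part of why this feels fragile is visible in \(\theta\): it depends on the agent's own beliefs and policy, and a slightly wrong patch reintroduces incentives to cause or prevent the press.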

Computational Model: Causal Diagrams with Symmetry

I really like this model of computation and how naturally it deals with counterfactuals; I'm surprised it isn't talked about more often.
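As a toy illustration of why counterfactuals come out so naturally (my own sketch, not from the post): if a program is unrolled into a causal diagram of deterministic nodes, a counterfactual is just an intervention on a node followed by recomputing its descendants; the repeated update rule at every node is a crude stand-in for the post's symmetry.

```python
# Toy sketch (my own, not from the post): factorial unrolled into a causal DAG where
# node i is a deterministic function of node i-1. A counterfactual is an intervention
# ('do') on some nodes, after which the downstream nodes are simply recomputed.

def run_factorial_diagram(n, do=None):
    do = do or {}
    nodes = {0: do.get(0, 1)}
    for i in range(1, n + 1):
        nodes[i] = do[i] if i in do else nodes[i - 1] * i
    return nodes

print(run_factorial_diagram(5)[5])           # 120: the ordinary computation
print(run_factorial_diagram(5, {3: 10})[5])  # 200: 'what if node 3 had been 10?'
```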

This raises the issue of abstraction - the core problem of embedded agency.

I'd like to understand this claim better - are you saying that the core problem of embedded agency is relating high-level agent models (represented as causal diagrams) to low-level physics models (also represented as causal diagrams)?

Tabooing 'Agent' for Prosaic Alignment

I wonder if you can extend it to also explain non-agentic approaches to Prosaic AI Alignment (and why some people prefer those).

I'm quite confused about what a non-agentic approach actually looks like, and I agree that extending this to give a proper account would be really interesting. A possible argument for actively avoiding 'agentic' models from this framework is:

  1. Models which generalize very competently also seem more likely to have malign failures, so we might want to avoid them.
  2. If we believe this framework, then things which generalize very competently are likely to have agent-like internal architectures.
  3. Having a selection criterion or a model-space/prior which actively pushes away from such agent-like architectures could then help push away from things which generalize too broadly.

I think my main problem with this argument is that step 3 might invalidate step 2 - if you actively punish agent-like architecture in your search, you may break the conditions that made 'too broad generalization' imply 'agent-like architecture', and thus end up with things that still generalize very broadly (with all the downsides of that) but just look a lot weirder.
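As a toy illustration of that worry (entirely my own construction - the candidates, losses and 'agentic' flags are invented for illustration): a selection criterion that penalizes agent-like structure can simply hand the win to a broad generalizer that isn't flagged as agentic.

```python
# Toy sketch (my own construction, not from the thread): selecting models with a
# criterion that penalizes agent-like architecture, as in step 3 above.

candidates = [
    # (name, task_loss, looks_agent_like)
    ("narrow_heuristic",  0.30, False),  # generalizes poorly, clearly non-agentic
    ("explicit_planner",  0.05, True),   # generalizes broadly via explicit search
    ("weird_generalizer", 0.06, False),  # generalizes broadly, but not recognizably agentic
]

def selection_score(task_loss, agentic, penalty=1.0):
    """Lower is better: fit the task, but pay a price for agent-like structure."""
    return task_loss + (penalty if agentic else 0.0)

best = min(candidates, key=lambda c: selection_score(c[1], c[2]))
print(best[0])  # -> 'weird_generalizer': the planner is filtered out, yet the search
                # still returns something that generalizes broadly but looks stranger.
```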

This seems too optimistic/trusting. See Ontology identification problem, Modeling distant superintelligences, and more recently The “Commitment Races” problem.

Thanks for the links - I definitely agree that I was drastically oversimplifying this problem. I still think this task might be much simpler than trying to understand the generalization of some strange model whose internal workings we don't even have a vocabulary to describe.