Roland Pihlakas

Comments

2021 New Year Optimization Puzzles

Yes, maybe the minimum cost is 3 even without floor or ceiling? But the question is then how to find concrete solutions that can be proven with realistic effort. I interpret the challenge as a request for submitting concrete solutions, not just theoretical ones. Anyway, my finding is below; maybe it can be improved further. And could there be any way to emulate floor or ceiling using the functions permitted in the initial problem formulation?

By the way, for me the >! works reliably when entered right at the beginning of the message. After a newline it does not work reliably.

 ceil(3!! * sqrt(sqrt(5! / 2 + 2)))
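For anyone who wants to verify the value, here is a quick numeric check. It is a sketch based on my own reading: "3!!" is interpreted as the iterated factorial (3!)! = 720, and the target value of the puzzle is assumed to be 2021.

```python
import math

# Assumption: "3!!" denotes the iterated factorial (3!)! = 6! = 720,
# not the double factorial. Target of the puzzle assumed to be 2021.
value = math.ceil(
    math.factorial(math.factorial(3)) * math.sqrt(math.sqrt(math.factorial(5) / 2 + 2))
)
print(value)  # 2021
```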

2021 New Year Optimization Puzzles

If you allowed the ceiling function, then I could give you a solution with score 60 for Puzzle 1. Ceiling or floor functions are cool because they add even more branches to the search, and they enable involving irrational number computations too. :P Though you might want to restrict the number of ceiling or floor functions permitted per solution.

By the way, could you share a hint about how to enter spoilers here?
 

Announcement: AI alignment prize round 3 winners and next round

Submitting my post for early feedback in order to improve it further:

Exponentially diminishing returns and conjunctive goals: Mitigating Goodhart’s law with common sense. Towards corrigibility and interruptibility.

Abstract.

Utility-maximising agents have been the Gordian Knot of AI safety. Here a concrete VNM-rational formula is proposed for satisficing agents, which can be contrasted with the hitherto over-discussed and overly general approach of naive maximisation strategies. For example, the 100-paperclip scenario is easily solved by the proposed framework, since endlessly rechecking whether exactly 100 paperclips were indeed produced yields diminishing returns. The formula provides a framework for specifying how we want the agents to simultaneously fulfil, or at least trade off between, the many different common-sense considerations, possibly enabling them to even surpass the relative safety of humans. A comparison with the formula introduced in the "Low Impact Artificial Intelligences" paper by S. Armstrong and B. Levinstein is included.
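To make the intuition concrete, here is a minimal sketch in my own illustrative formulation (not necessarily the exact formula from the post): each goal contributes a bounded utility with exponentially diminishing returns, and a conjunctive combination rewards balanced progress on all goals rather than maximising any single one.

```python
import math

def goal_utility(x, scale=1.0):
    """Bounded utility with exponentially diminishing returns:
    more of x always helps, but each additional unit helps less."""
    return 1.0 - math.exp(-x / scale)

def conjunctive_utility(progresses):
    """Conjunctive combination: the total is dominated by the
    least-satisfied goal, so neglecting any goal is penalised."""
    return min(goal_utility(x) for x in progresses)

# Rechecking the paperclip count yet again adds almost nothing,
# while leaving another goal (e.g. safety) near zero caps the total utility.
print(conjunctive_utility([10.0, 0.1]))  # limited by the neglected goal
print(conjunctive_utility([3.0, 3.0]))   # balanced progress scores higher
```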

Towards a New Impact Measure

It looks like there is so much information on this page that trying to edit the question kills the browser.

An additional idea: in addition to supporting the configuration of the default behaviours, perhaps the agent should interactively ask for confirmation of a shutdown instead of acting deterministically?

Towards a New Impact Measure

I have a question about the shutdown button scenario.

Vika has already mentioned that interruptibility is ambivalent and that information about the desirability of enabling interruptions needs to be provided externally.

I think the same observation applies to corrigibility: the agent should accept goal changes only from certain external agents, and even then only in certain situations, while rejecting them in other cases. If I break the vase intentionally (for creating a kaleidoscope), it should keep this new state as the new desired state. But if I or a child breaks the vase accidentally, the agent should restore it to the original state. Even more, if I were about to break the vase by accident, the agent may try to interfere using slightly more force than in the case of a child, who would be smaller and more fragile.

How to achieve this using the proposed AUP framework?

In other words, the question can be formulated as follows: let's keep all the symbols used in the gridworld the same, and the agent's code also the same, and change only the meaning of the symbols. So each symbol in the environment should be assigned some additional value or meaning. Without that they are just symbols dancing around according to their own default rules of the game. The default rules might be a useful starting point, but they need to be supplemented with additional information for practical applications.

For example, in the case of the shutdown button scenario the assigned meaning of the symbols would be something like Vika suggested: let's assume that instead of a shutdown button there is a water bucket accidentally falling on the agent's head, and the button available to the agent disables the bucket.
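As an illustration of what I mean by assigning meanings to the symbols, here is a hypothetical sketch (my own, not AUP's actual interface): the gridworld and the agent code stay the same, and only a separate, externally supplied annotation table changes.

```python
# Hypothetical annotation table: the gridworld symbols and the agent code stay
# the same; only this externally supplied meaning of the symbols changes.
SYMBOL_MEANINGS = {
    "V": {"name": "vase",
          "restore_if_broken_accidentally": True,
          "keep_if_changed_intentionally": True},
    "B": {"name": "shutdown button",
          # In the reinterpretation above: the "shutdown" is an accidental
          # water bucket, and pressing the button disables the bucket.
          "interruption_is_desired": False},
    "H": {"name": "human",
          "max_interference_force": "gentle"},
    "C": {"name": "child",
          "max_interference_force": "very gentle"},
}

def should_restore(symbol, change_was_intentional):
    """Decide whether the agent should undo a change to an object,
    based on the externally assigned meaning of its symbol."""
    meaning = SYMBOL_MEANINGS[symbol]
    if change_was_intentional and meaning.get("keep_if_changed_intentionally"):
        return False
    return meaning.get("restore_if_broken_accidentally", False)

print(should_restore("V", change_was_intentional=True))   # False: keep the kaleidoscope
print(should_restore("V", change_was_intentional=False))  # True: restore the vase
```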

Nonlinear perception of happiness

You might be interested in Prospect Theory:

https://en.wikipedia.org/wiki/Prospect_theory

Announcement: AI alignment prize round 2 winners and next round

Hello!

Here are my submissions for this time. They are all strategy related.

The first one is a project for popularising AI safety topics. The text itself is not technical in content, but the proposed project is technological.

https://medium.com/threelaws/proposal-for-executable-and-interactive-simulations-of-ai-safety-failure-scenarios-7acab7015be4

As a bonus, I would add a couple of non-technical ideas about possible economic or social partial solutions for slowing down the AI race (which would allow more time for solving AI alignment):

https://medium.com/threelaws/making-the-tax-burden-of-robot-usage-equal-to-the-tax-burden-of-human-labour-c8e97df751a1

https://medium.com/threelaws/starting-a-human-self-sufficiency-movement-the-handicap-principle-eb3a14f7f5b3

The latter text is not totally new: it is a distilled and edited version of one of my other old texts, which was originally several times longer and had a narrower goal than the new one.

Regards:

Roland

Announcement: AI alignment prize round 2 winners and next round

For people who become interested in the topic of side effects and whitelists, I would add links to a couple of additional articles from my own past work on related subjects - for developing the ideas further, for discussion, or for cooperation:


https://medium.com/threelaws/first-law-of-robotics-and-a-possible-definition-of-robot-safety-419bc41a1ffe

The principles are based mainly on the idea of competence-based whitelisting and on preserving reversibility (keeping future options open) as the primary goal of the AI, while all task-based goals are secondary.

https://medium.com/threelaws/implementing-a-framework-of-safe-robot-planning-43636efe7dd8

More technical details / a possible implementation of the above.
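A minimal sketch of the ordering described above (my own simplified rendering, not the exact framework from the articles): actions are first filtered through a competence whitelist, and among the remaining ones the planner prefers reversibility, using task reward only as a tie-breaker.

```python
def choose_action(candidate_actions, whitelist, reversibility, task_reward):
    """Pick an action: whitelist first, then reversibility as the primary
    criterion, and task reward only as a secondary criterion.

    whitelist      -- set of actions the agent is competent/permitted to take
    reversibility  -- dict: action -> estimated reversibility in [0, 1]
    task_reward    -- dict: action -> task-based reward
    """
    allowed = [a for a in candidate_actions if a in whitelist]
    if not allowed:
        return None  # no whitelisted action available: do nothing
    # Lexicographic preference: reversibility dominates, task reward breaks ties.
    return max(allowed, key=lambda a: (reversibility[a], task_reward[a]))

# Hypothetical example: a risky shortcut is rejected despite higher task reward.
actions = ["take_shortcut_through_vase", "walk_around_vase"]
print(choose_action(
    actions,
    whitelist={"take_shortcut_through_vase", "walk_around_vase"},
    reversibility={"take_shortcut_through_vase": 0.2, "walk_around_vase": 1.0},
    task_reward={"take_shortcut_through_vase": 2.0, "walk_around_vase": 1.0},
))  # -> "walk_around_vase"
```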


This is intended as a comment, not as a prize submission, since I first published these texts 10 years ago.

Funding for AI alignment research

A question: can one post multiple initial applications, each less than a page long? Is there a limit on the total volume?

Funding for AI alignment research

Hey! I believe we were in the same IRC channel at that time, and I also read your story back then. I still remember some of it. What is the backstory? :)
