All of martinkunev's Comments + Replies

There seems to be a mistake in section 4.2.
Prevent(C5) is said to be the closure of {{ur}, {nr}, {us}, {ns}} under subsets.
It should be {{nr, us, ns}, {ur, us, ns}, {ur, nr, ns}, {ur, nr, us}}.
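A quick sanity check (a throwaway Python sketch treating ur, nr, us, ns as opaque labels) of what closure under subsets gives for the two families:

```python
from itertools import combinations

def downward_closure(family):
    """All subsets of every set in the family, i.e. closure under taking subsets."""
    closed = set()
    for s in family:
        s = frozenset(s)
        closed.update(frozenset(c) for r in range(len(s) + 1)
                      for c in combinations(s, r))
    return closed

singletons = [{"ur"}, {"nr"}, {"us"}, {"ns"}]
triples = [{"nr", "us", "ns"}, {"ur", "us", "ns"},
           {"ur", "nr", "ns"}, {"ur", "nr", "us"}]

print(len(downward_closure(singletons)))  # 5: the empty set and the four singletons
print(len(downward_closure(triples)))     # 15: every subset of {ur, nr, us, ns} except the full set
```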

I consider watermarking a lost cause. Trying to imitate humans as well as possible conflicts with trying to distinguish AI-generated from human-generated output. The task is impossible in the limit. If somebody wants to avoid watermarking, they can always use an open-source model (e.g. to paraphrase the watermarked content).

Digitally signing content can be used to track the origin of that content (but not the tools used to create it). We could have something like a global distributed database indicating the origin of content and everybody can decide what to trust b... (read more)
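A minimal sketch of the kind of origin registry I have in mind (purely illustrative; the record fields and the register/lookup helpers are made up):

```python
import hashlib
import json
import time

# A toy "origin registry": maps a content hash to a claimed-origin record.
# In the real thing this would be a distributed, append-only database and the
# record would carry a digital signature from the claimed originator.
registry = {}

def content_id(content: bytes) -> str:
    """Identify content by its SHA-256 hash."""
    return hashlib.sha256(content).hexdigest()

def register(content: bytes, origin: str) -> str:
    """Record who claims to have produced this content."""
    cid = content_id(content)
    registry[cid] = {"origin": origin, "timestamp": time.time()}
    return cid

def lookup(content: bytes):
    """Return the origin record, or None if nobody registered this content."""
    return registry.get(content_id(content))

register(b"some article text", origin="example.org/alice")
print(json.dumps(lookup(b"some article text"), indent=2))
print(lookup(b"a paraphrased version"))  # None: any edit changes the hash
```

Note that any edit (e.g. paraphrasing) changes the hash, so this tracks exact content rather than the provenance of derived works.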

I'm unsure whether CoEms as described could actually help in solving alignment. It may be the case that advancing alignment requires enough cognitive capabilities to make the system dangerous (unless we have already solved alignment).

I doubt that a single human mind running on a computer is guaranteed to be safe - such a mind would think orders of magnitude faster (speed superintelligence) and could copy itself. Maybe most humans would be safe. Maybe power corrupts.

In "A money-pump for Completeness" you say "by the transitivity of strict preference"
This only says that transitive preferences do not need to be complete which is weaker than preferences do not need to be complete.

"paying to avoid being given more options looks enough like being dominated that I'd want to keep the axiom of transitivity around"

Maybe off topic, but paying to avoid being given more options is a common strategy in negotiation.

The amortized vs. direct distinction in humans seems related to System 1 vs. System 2 in Thinking, Fast and Slow.

 

"the implementation of powerful mesa-optimizers inside the network quite challenging"

I think it's quite likely that we see optimizers implemented outside the network in the style of AutoGPT (people can explicitly build direct optimizers on top of amortized ones).
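A minimal sketch of what I mean, with a stand-in amortized_propose and an explicit score function (both hypothetical):

```python
import random

def amortized_propose(state: str) -> str:
    """Stand-in for one forward pass of an amortized model (e.g. an LLM sampling a candidate continuation)."""
    return state + random.choice([" A", " B", " C"])

def score(candidate: str) -> float:
    """Stand-in for an explicit objective chosen by the outer loop, not by the network."""
    return candidate.count("A") - 0.1 * len(candidate)

def direct_optimizer(state: str, samples: int = 16, steps: int = 5) -> str:
    """Best-of-n search wrapped around the amortized proposer:
    the outer loop, not the network, does the explicit optimization."""
    for _ in range(steps):
        candidates = [amortized_propose(state) for _ in range(samples)]
        state = max(candidates, key=score)
    return state

print(direct_optimizer("plan:"))
```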

The letters I and l look the same. Maybe use 1 instead of upper case i?

"However, when everyone gets expected utility 1, the expected logarithm of expected utility will have the same derivative as expected expected utility"


Can you clarify this sentence? What functions are we differentiating?
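My best guess at a reading (with $U_i(\theta)$ as hypothetical notation for person $i$'s expected utility under policy $\theta$): $\frac{d}{d\theta}\,\mathbb{E}_i[\log U_i(\theta)] = \mathbb{E}_i\!\left[\frac{U_i'(\theta)}{U_i(\theta)}\right]$, which equals $\mathbb{E}_i[U_i'(\theta)] = \frac{d}{d\theta}\,\mathbb{E}_i[U_i(\theta)]$ exactly when every $U_i(\theta) = 1$. But I'd still like the intended functions spelled out.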

I'm wondering whether this framing (choosing between a set of candidate worlds) is the most productive. Does it make sense to use criteria like corrigibility, minimizing impact and preferring reversible actions (or do we have no reliable way to evaluate whether these hold)?

a couple of typos

(no sub X in Print)  Env := { Print : S → A,  Eval : S × Aₓ → S }

in the second image, in the bottom right S^1_X should be S^1

I'm just wondering what Britney Spears would say when she reads this.

The Games and Information book link is broken. It appears to be this book:
https://www.amazon.com/Games-Information-Introduction-Game-Theory/dp/1405136669/ref=sr_1_1?crid=2VDJZFMYT6YTR&keywords=Games+and+Information+rasmusen&qid=1697375946&sprefix=games+and+information+rasmuse%2Caps%2C178&sr=8-1

johnswentworth (2mo):
Updated, thanks.

To make this easier to parse on the first read, I would add that N is the number of parameters of the NN and that we assume each parameter is binary (instead of the usual float).

"the agent guesses the next bits randomly. It observes that it sometimes succeeds, something that wouldn't happen if Murphy was totally unconstrained"

Do we assume Murphy knows how the random numbers are generated? What justifies this?

Arguably the notion of certainty is not applicable to the real world but only to idealized settings. This is also relevant.

A couple of clarifications if somebody is as confused as me when first reading this.

In ZF we can quantify over sets because "set" is the name we use to designate the underlying objects (the set of natural numbers is an object in the theory). In Peano arithmetic, the objects are numbers, so we can quantify over numbers but not over sets.

Predicates are more "powerful" than first-order formulas, so quantifying over predicates restricts the possible models more than having an axiom for each formula does. Even though every formula defines a predicate, the interpretation of a predicate is determined by the model, so we cannot capture all predicates by having an axiom for each formula.
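For concreteness, assuming the relevant example is induction in arithmetic: the first-order version is a schema with one axiom per formula $\varphi$,

$(\varphi(0) \land \forall n\,(\varphi(n) \to \varphi(n+1))) \to \forall n\,\varphi(n)$,

while the second-order version is a single axiom quantifying over all predicates,

$\forall P\,[(P(0) \land \forall n\,(P(n) \to P(n+1))) \to \forall n\,P(n)]$.

There are only countably many formulas, so the schema constrains the models strictly less than quantifying over all predicates does.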

Eliezer Yudkowsky once entered an empty Newcomb's box simply so he could get out when the box was opened.

or

When you one-box against Eliezer Yudkowsky on Newcomb's problem, you lose because he escapes from the box with the money.

"Realistically, the function UN doesn't incentivize the agent to perform harmful actions."

I don't understand what that means and how it's relevant to the rest of the paragraph.

It would be interesting to see if a similar approach can be applied to the strawberries problem (I haven't personally thought about this).

Referring to all forms of debate, overseeing, etc. as "Godzilla strategies" is loaded language. Should we refrain from summoning Batman because we may end up summoning Godzilla by mistake? Ideally, we want to solve alignment without summoning anything. However, applying some humility, we should consider that the problem may be too difficult for human intelligence to solve.

The image doesn't load.

The notation in Hume's Black Box seems inconsistent. When defining [e], e is an element of a world. When defining I, e is a set of worlds.

In "Against Discount Rates" Eliezer characterizes discount rate as arising from monetary inflation, probabilistic catastrophes etc. I think in this light discount rate less than ONE (zero usually indicates you don't carea at all about the future) makes sense.

Some human values are proxies for things which make sense in general intelligent systems - e.g. happiness is a proxy for learning, reproduction, etc.

Self-preservation can be seen as an instance of preservation of learned information (which is a reasonable value for any intelligent system). Indeed, if the... (read more)

Is the existence of such situations an argument for intuitionistic logic?

"wireheading ... how evolution has addressed it in humans"

It hasn't - that's why people do drugs (including alcohol). What is stopping all humans from wireheading is that all currently available methods work only in the short term and have negative side effects. The ancestral environment didn't allow humankind to self-destruct by wireheading. Maybe peer pressure not to do drugs exists, but there is also peer pressure in the other direction.

TAG (5mo):
Maybe that's how evolution addressed it.

Is it worth it to read "Information Theory: A Tutorial Introduction 2nd edition" (James V Stone)?

https://www.amazon.com/Information-Theory-Tutorial-Introduction-2nd/dp/1739672704/ref=sr_1_2

"There doesn't seem to be anything a sufficiently motivated and resourced intelligent human is incapable of grasping given enough time"

  - a human

 

If there is such a thing, what would a human observe?

"there is some threshold of general capability such that if someone is above this threshold, they can eventually solve any problem that an arbitrarily intelligent system could solve"

This is a very interesting assumption. Is there any research or discussion on this?

"discovering that you're wrong about something should, in expectation, reduce your confidence in X"

This logic seems flawed. Suppose X is whether humans go extinct. You have an estimate of the distribution of X (for a Bernoulli process it would be some probability p). Take the joint distribution of X and the factors on which X depends (p is now a function of those factors). Your best estimate of p is the mean of the joint distribution and the variance measures how uncertain you are about the factors. Discovering that you're wrong about something means be... (read more)
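A toy version of the setup I have in mind (numbers made up): suppose $X$ depends on a single factor $F$ with $P(X) = P(F)\,P(X\mid F) + P(\lnot F)\,P(X\mid\lnot F) = 0.5 \cdot 0.9 + 0.5 \cdot 0.1 = 0.5$. Discovering you were wrong about $F$ moves the estimate to $0.9$ or to $0.1$ depending on which way you were wrong.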

When outside, I'm usually tracking location and direction on a mental map. This doesn't seem like a big deal to me but in my experience few people do it. On some occasions I am able to tell which way we need to go while others are confused.

Given that hardware advancements are very likely going to continue, delaying general AI would favor what Nick Bostrom calls a fast takeoff. This makes me uncertain as to whether delaying general AI is a good strategy.

I expected to read more about actively contributing to AI safety rather than about reactively adapting to whatever is happening.