Paul Colognese

Comments
Secular interpretations of core perennialist claims
Paul Colognese · 10mo

This Goodness of Reality hypothesis is a very strong empirical claim about psychology that strongly contradicts folk psychology,


One way of thinking about the Goodness of Reality hypothesis is that if we look at an agent in the world, its world model and utility function/preferences are fully a property of that agent/its internals rather than reality-at-large. Reality is value-neutral - it requires additional structure (utility function, etc.) to assign value to states of reality (and these utility functions, to the extent that they're real, are parts of reality itself). 
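A minimal sketch of this framing (all names here are hypothetical, chosen just for illustration): world states carry no value labels of their own, and valuation only happens inside an agent, via that agent's utility function.

```python
from dataclasses import dataclass

# Toy model: a world state is just a label, with no intrinsic value.
WorldState = str  # e.g. "sunny", "rainy"

@dataclass
class Agent:
    """An agent bundles a utility function with itself.

    Value judgments are a property of this internal structure,
    not of the states of reality being judged.
    """
    utility: dict  # maps WorldState -> float

    def evaluate(self, state: WorldState) -> float:
        # Valuation happens here, inside the agent.
        return self.utility.get(state, 0.0)

optimist = Agent(utility={"rainy": 0.9, "sunny": 0.5})
pessimist = Agent(utility={"rainy": -0.8, "sunny": 0.1})

# The same state receives different values from different agents:
print(optimist.evaluate("rainy"))   # 0.9
print(pessimist.evaluate("rainy"))  # -0.8
```

The point of the sketch is that the same `WorldState` gets assigned different values by different agents, so "value" never lives in the state itself.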

Also, from the 0th-person perspective/POV of awareness, via meditation practices, one can observe how value judgments are being constructed and go "beyond" value judgments about reality. 

Nitpick: Is reality "Good" or is it beyond good and ... evil?

Explaining the AI Alignment Problem to Tibetan Buddhist Monks
Paul Colognese · 1y

Interesting! I'm working on a project exploring something similar but from a different framing. I'll give this view some thought, thanks!

Anomalous Concept Detection for Detecting Hidden Cognition
Paul Colognese · 1y

Thanks, should be fixed now.

Charbel-Raphaël and Lucius discuss interpretability
Paul Colognese · 2y

Thanks, that's the kind of answer I was looking for.

Charbel-Raphaël and Lucius discuss interpretability
Paul Colognese · 2y

Interesting discussion; thanks for posting!

I'm curious about what elementary units in NNs could be.

the elementary units are not the neurons, but some other thing.

I tend to model NNs as computational graphs where activation spaces/layers are the nodes and weights/tensors are the edges of the graph. Under this framing, my initial intuition is that elementary units are going to be contained in either the activation spaces or the weights.

There does seem to be empirical evidence that features of the dataset are represented as linear directions in activation space.

I'd be interested in any thoughts regarding what other forms elementary units in NNs could take. In particular, I'd be surprised if they aren't represented in subspaces of activation spaces.
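To make the "linear directions in activation space" picture concrete, here is a minimal sketch (synthetic data, NumPy only; the setup is hypothetical, not from any cited experiment) in which a binary dataset feature is written along a fixed direction of a 16-dimensional activation space, and a least-squares linear probe recovers that direction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a 16-dimensional activation space where one
# binary feature of the dataset is represented along a fixed direction.
d = 16
feature_direction = rng.normal(size=d)
feature_direction /= np.linalg.norm(feature_direction)

n = 200
labels = rng.integers(0, 2, size=n).astype(float)  # feature on/off
noise = 0.1 * rng.normal(size=(n, d))
activations = labels[:, None] * feature_direction + noise

# A linear probe (least squares) recovers the direction from data alone:
probe, *_ = np.linalg.lstsq(activations, labels, rcond=None)
probe /= np.linalg.norm(probe)

cosine = abs(probe @ feature_direction)
print(f"cosine similarity with true direction: {cosine:.3f}")
```

If features instead lived in subspaces rather than single directions, the probe would need to be a low-rank map rather than a single vector, which is one way the "elementary units" question could cash out empirically.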

High-level interpretability: detecting an AI's objectives
Paul Colognese · 2y

Thanks for pointing this out. I'll look into it and modify the post accordingly.

High-level interpretability: detecting an AI's objectives
Paul Colognese · 2y

With ideal objective detection methods, the inner alignment problem is solved (or partially solved in the case of non-ideal objective detection methods), and governance would then be needed to regulate which objectives are allowed to be instilled in an AI (i.e., the government does something like outer alignment regulation).

Ideal objective oversight essentially allows an overseer to instill whatever objectives it wants the AI to have. Therefore, if the overseer includes the government, the government can influence whatever target outcomes the AI pursues.

So practically, this means that the governance policies would require the government to have access to the objective detection method results, directly or indirectly through the AI labs. 
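The governance arrangement described above can be sketched as a simple gating loop. Everything here is hypothetical (the whitelist, the function names, and especially `detect_objective`, which stands in for an objective detection method that would in reality have to inspect the model's internals):

```python
# Hypothetical sketch: a regulator maintains a whitelist of permitted
# objectives and gates deployment on the output of an (assumed)
# objective detection method.

ALLOWED_OBJECTIVES = {"assist_user", "summarize_documents"}

def detect_objective(model: dict) -> str:
    """Stand-in for an objective detection method; a real version
    would read the objective off the model's internals, not a field."""
    return model["objective"]

def oversight_check(model: dict) -> bool:
    """Approve deployment only if the detected objective is whitelisted."""
    return detect_objective(model) in ALLOWED_OBJECTIVES

print(oversight_check({"objective": "assist_user"}))         # True
print(oversight_check({"objective": "maximize_paperclips"})) # False
```

The "directly or indirectly through the AI labs" point corresponds to who gets to call `oversight_check`: the government itself, or the labs reporting its results.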

Aligned AI via monitoring objectives in AutoGPT-like systems
Paul Colognese · 2y

Thanks for the response; it's useful to hear that we came to the same conclusions. I quoted your post in the first paragraph.

Thanks for bringing Fabien's post to my attention! I'll reference it. 

Looking forward to your upcoming post.

Towards a solution to the alignment problem via objective detection and evaluation
Paul Colognese · 2y

Interesting! Quick thought: I feel as though it over-compressed the post compared to the summary I used. Perhaps you could tweak things to generate multiple summaries of varying lengths.

Towards a solution to the alignment problem via objective detection and evaluation
Paul Colognese · 2y

Thanks for the feedback! I guess the intention of this post was to lay down the broad framing/motivation for upcoming work that will involve looking at the more concrete details.

I do resonate with the feeling that the post as a whole feels a bit empty as it stands and the effort could have been better spent elsewhere.

Posts

2 · Paul Colognese's Shortform · 2y · 1 comment
20 · Explaining the AI Alignment Problem to Tibetan Buddhist Monks · 1y · 3 comments
24 · Anomalous Concept Detection for Detecting Hidden Cognition · 1y · 3 comments
22 · Hidden Cognition Detection Methods and Benchmarks · 1y · 11 comments
16 · Notes on Internal Objectives in Toy Models of Agents · 1y · 0 comments
15 · Internal Target Information for AI Oversight · 2y · 0 comments
29 · Potential alignment targets for a sovereign superintelligent AI [Question] · 2y · 4 comments
72 · High-level interpretability: detecting an AI's objectives [Ω] · 2y · 4 comments
21 · [Linkpost] Frontier AI Taskforce: first progress report · 2y · 0 comments
27 · Aligned AI via monitoring objectives in AutoGPT-like systems [Ω] · 2y · 4 comments
9 · Towards a solution to the alignment problem via objective detection and evaluation · 2y · 7 comments