Capybasilisk


Comments

Considering that a running AGI could be overseeing possibly millions of different processes in the real world, some resistance to sudden shutdown is actually a good thing. If the AI can see, better than its human controllers can, that a sudden cessation of operations would lead to negative outcomes, we should want it to avoid being turned off.

To use Robert Miles' example, a robot car driver with a big, red, shiny stop button should prevent a child in the vehicle from hitting that button, as the child would not actually be acting in their own long-term interests.

The ARC public test set is on GitHub and almost certainly in GPT-4o's training data.

Your model has trained on the benchmark it’s claiming to beat.

Presumably some subjective experience that's as foreign to us as humor is to the alien species in the analogy. 

As if by magic, I knew generally which side of the political aisle the OP of a post demanding more political discussion here would be on.

I didn't predict the term "wokeness" would come up just three sentences in, but I should have.

The Universe (which others call the Golden Gate Bridge) is composed of an indefinite and perhaps infinite series of spans...

@Steven Byrnes Hi Steve. You might be interested in the latest interpretability research from Anthropic, which seems very relevant to your ideas here:

https://www.anthropic.com/news/mapping-mind-language-model

For example, amplifying the "Golden Gate Bridge" feature gave Claude an identity crisis even Hitchcock couldn’t have imagined: when asked "what is your physical form?", Claude’s usual kind of answer – "I have no physical form, I am an AI model" – changed to something much odder: "I am the Golden Gate Bridge… my physical form is the iconic bridge itself…". Altering the feature had made Claude effectively obsessed with the bridge, bringing it up in answer to almost any query—even in situations where it wasn’t at all relevant.
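For concreteness, here is a minimal sketch of what "amplifying a feature" can look like mechanically: adding a scaled feature direction into the model's residual stream at one layer via a forward hook. It assumes a PyTorch, GPT-2-style transformer and an already-extracted feature direction; the layer layout, names (bridge_feature, layer_idx, scale), and magnitudes are illustrative, not Anthropic's actual SAE setup.

```python
# Minimal sketch of feature amplification ("activation steering").
# Assumptions: a PyTorch, GPT-2-style model exposing model.transformer.h,
# and a precomputed feature direction of shape (d_model,).
import torch

def amplify_feature(model, feature_direction: torch.Tensor,
                    layer_idx: int, scale: float):
    """Register a hook that adds a scaled feature direction to the
    residual stream at one layer, crudely 'clamping' that feature high."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        # Broadcast the (d_model,) direction across batch and sequence dims.
        steered = hidden + scale * feature_direction.to(hidden)
        return (steered,) + output[1:] if isinstance(output, tuple) else steered

    layer = model.transformer.h[layer_idx]  # layer list layout varies by architecture
    return layer.register_forward_hook(hook)

# Hypothetical usage:
# handle = amplify_feature(model, bridge_feature, layer_idx=20, scale=10.0)
# ...generate text and observe the steered behavior...
# handle.remove()  # restore the model's normal behavior
```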

Luckily we can train the AIs to give us answers optimized to sound plausible to humans.

I think Minsky got those two stages the wrong way around.

Complex plans over long time horizons would need to be made over some nontrivial world model.

When Jan Leike (OpenAI's head of alignment) appeared on the AXRP podcast, the host asked how they plan to align the automated alignment researcher. Jan didn't appear to understand the question (which had been the first to occur to me). That doesn't inspire confidence.

Problems with maximizing optionality are discussed in the comments of this post:

https://www.lesswrong.com/posts/JPHeENwRyXn9YFmXc/empowerment-is-almost-all-we-need
