I don't see any recent publications from Paul Christiano related to this, so I guess the problem(s) are still open.

AutoGPT was created by a non-coding VC

It looks like you are confusing AutoGPT with BabyAGI, which was created by Yohei Nakajima, who is a VC. The creator of AutoGPT (Toran Bruce Richards) is a game developer with decent programming (game-development) experience. Even the figure shown here is from BabyAGI (https://yoheinakajima.com/task-driven-autonomous-agent-utilizing-gpt-4-pinecone-and-langchain-for-diverse-applications/).

Editing based on Google confirming this and the 9 karma, but can anyone else confirm this for sure?

Regarding the first one, I am not expecting a single prompt to generate the entirety of enwik8/9. I am more interested in finding a set of prompts, with a lookup table if possible, that can replicate the enwik data.
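To make the idea concrete, here is a toy sketch of the "prompts + lookup table" scheme. Everything here is hypothetical: the generator is a trivial stand-in for a deterministic language model, and the lookup table just records byte positions where the generated text differs from the target.

```python
def toy_generator(prompt: str, length: int) -> str:
    """Stand-in for a deterministic LM: repeats the prompt to `length` chars."""
    return (prompt * (length // len(prompt) + 1))[:length]

def build_lookup(target: str, prompt: str) -> dict[int, str]:
    """Record every position where the generator's output differs from target."""
    generated = toy_generator(prompt, len(target))
    return {i: t for i, (g, t) in enumerate(zip(generated, target)) if g != t}

def replicate(prompt: str, length: int, lookup: dict[int, str]) -> str:
    """Generate from the prompt, then patch mismatches from the lookup table."""
    generated = list(toy_generator(prompt, length))
    for i, ch in lookup.items():
        generated[i] = ch
    return "".join(generated)

target = "the cat sat on the mat"
prompt = "the cat "
lookup = build_lookup(target, prompt)
assert replicate(prompt, len(target), lookup) == target
# The scheme only counts as compression if len(prompt) + len(lookup) is much
# smaller than len(target) -- which is exactly the open question for enwik8/9.
```

With a real model the generator would be far better than prompt repetition, so the hope is that the lookup table shrinks accordingly.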

Thanks for the pointer to the Chinchilla post; I'll look into it.

Ok, cool. Let us know if you do.

This requires hitting a window - our data needs to be good enough that the system can tell it should use human values as a proxy, but bad enough that the system can’t figure out the specifics of the data-collection process enough to model it directly. This window may not even exist.

Are there any real-world examples of this? Not necessarily in the human-values setting.

From a complexity-theoretic viewpoint, how hard could ELK be? Is there any evidence that ELK is decidable?

Silly idea: instead of thought-annotating AI Dungeon playthroughs, we could start by annotating thoughts for Akinator game runs.

Pros: a much easier and faster way to build a dataset, with less ambiguity.

Cons: somewhat more restricted than the original idea.

The proof need not be bogus; it can be a long, valid proof. But since you are describing the problem in natural language, the proof generated by the AGI need not be for the problem you described.

Also, the AGI can generate a long, valid proof that is not for the question you asked, since the assumption is that the problem is described in natural language and it is the AGI's job to understand it, convert it to a formal language, and then prove it.

I think instead of recursively asking for higher-level proofs, shouldn't there be a machine-checkable proof of the correctness of the AGI itself?

Verifying a proof may run in polynomial time, compared to the exponential time of finding one, but that doesn't rule out the possibility of a proof large enough to be hard to check.

There are many algorithms that are polynomial in time but far too slow to run in practice.
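A back-of-the-envelope sketch of this point: even a verifier that runs in time linear in the proof length becomes impractical once the proof is long enough. The throughput figure below is an illustrative assumption, not a measurement.

```python
# Assumed throughput of a step-by-step proof verifier (illustrative only).
CHECKS_PER_SECOND = 10**7

def verify_seconds(proof_steps: int) -> float:
    """Cost of linear-time verification, in seconds."""
    return proof_steps / CHECKS_PER_SECOND

for steps in (10**6, 10**9, 10**12, 10**15):
    days = verify_seconds(steps) / 86400
    print(f"{steps:>16} steps -> {days:12.4f} days")
# At these assumed rates, a 10^15-step proof needs roughly three years of
# checking even at O(n) cost -- "polynomial to verify" is not "cheap to verify".
```

The constants are made up, but the shape of the argument doesn't depend on them: any fixed checking rate is eventually swamped by proof length.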

Certainly such a proof could exist, but it's impossible for an AGI to find it.

Language models, if they are AGI, would surely surpass human-level understanding of language. Humans need language for communication and bookkeeping, and "words" in any language are mostly interesting abstractions from a human point of view. A language model need not have any language of its own, since it doesn't have an internal dialogue the way humans do.

As it reaches a certain level of intelligence, it starts forming increasingly complex abstractions that don't (and won't) have any vocabulary. It would be impossible to interpret its reasoning, and the only option left would be to accept it.

Verifying a proof of the Riemann hypothesis is not harder than generating one. But say you have access to an alleged proof of the Riemann hypothesis that is so long that verifying it is itself extremely hard; then you have no evidence that the proof is correct unless you can show that the AGI generating it is indeed capable of producing such a proof and is being honest with you.
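To make the verify/find asymmetry concrete, here is a toy propositional checker (all details invented for illustration): checking a given derivation is a single polynomial-time pass, while finding a derivation means searching a space of rule applications that grows exponentially with depth.

```python
from itertools import product

def check(axioms: set[str],
          rules: dict[tuple[str, str], str],
          derivation: list[str]) -> bool:
    """Verify each line is an axiom or follows from two earlier lines.

    This pass is polynomial in the derivation length -- but for a long
    enough derivation, even polynomial cost is substantial.
    """
    known = set(axioms)
    for line in derivation:
        if line in known:
            continue
        if any(rules.get((a, b)) == line for a, b in product(known, repeat=2)):
            known.add(line)
        else:
            return False  # unjustified step: reject the whole derivation
    return True

axioms = {"p", "p->q"}
rules = {("p", "p->q"): "q", ("p->q", "p"): "q"}  # modus ponens, both orders
assert check(axioms, rules, ["p", "p->q", "q"])   # valid derivation
assert not check(axioms, rules, ["r"])            # unjustified line rejected
# Finding a derivation of a target formula, by contrast, means searching
# over all sequences of rule applications -- exponential in derivation depth.
```

The checker never has to trust the prover: each step is re-justified from the axioms and rules. The trust problem in the comment above arises only when even this cheap pass is too expensive to run.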

Verifying a proof is so much cheaper than finding it, that if it's expensive to verify a proof, the proof is certainly wrong.

Not much information has been given about that so far; I was curious about that too.