One thing I invite you to consider: what is the least impressive thing that AI would need to significantly increase your credence in AGI soonish?
This is a good question! Since I am unconvinced that the ability to solve puzzles = intelligence = consciousness, I take some issue with the common benchmarks currently being employed to gauge intelligence, so I rule out any "passes X benchmark metric" as my least impressive thing. (As an aside, I think that AI research, as with economics, suffers very badly from an over-reliance on numeric metrics: truly intelligent ...
Thanks for explaining! That was very helpful. My major reasons for doubt come from modules I took as an undergrad in the 2010s on neural networks and ML, combined with having tried extensively and unsuccessfully to employ LLMs to do any kind of novel work (i.e. to apply ideas contained within their training data to new contexts).
Essentially my concern is that I am yet to be convinced that even an unimaginably advanced statistical-regression machine optimised for language processing could achieve true consciousness, largely due to the fact that there...
Thanks for sharing this - it was an interesting read. I'd be interested to learn more about your reasons for believing that AGI is inevitable (or even probable), as this is not a conclusion I've reached myself. It's a (morbidly) fascinating topic, though, so I'd love to learn more (and maybe change my mind).
That is completely correct. To clarify in light of the examples you give, my definition of spontaneity in the context of AI/LLMs means specifically "action whose origin cannot be traced back to the prompt or training data." This is, sadly, difficult to prove, as it would require proving a negative. I'll give some thought to how I might frame this in such a way that it is verifiable in an immutable-goalpost kind of way, but I'm afraid this isn't something I have an ans...