xle

Thanks for explaining! That was very helpful. My major reasons for doubt come from modules I took as an undergrad in the 2010s on neural networks and ML, combined with having tried extensively and unsuccessfully to employ LLMs to do any kind of novel work (i.e. to apply ideas contained within their training data to new contexts).
Essentially my concern is that I have yet to be convinced that even an unimaginably advanced statistical-regression machine optimised for language processing could achieve true consciousness, largely because there is no real consensus on what consciousness actually is.
However, it seems fairly obvious that such a machine could be used to do...
Thanks for sharing this - it was an interesting read. I'd be interested to learn more about your reasons for believing that AGI is inevitable (or even probable), as this is not a conclusion I've reached myself. It's a (morbidly) fascinating topic, though, so I'd love to learn more (and maybe change my mind).
This is a good question! Since I am unconvinced that ability to solve puzzles = intelligence = consciousness, I take some issue with the common benchmarks currently being employed to gauge intelligence, so I rule out any "passes X benchmark metric" as my least impressive thing. (As an aside, I think that AI research, like economics, suffers very badly from an over-reliance on numeric metrics: truly intelligent beings, just like real-world economic systems, are far too complex to be measured by such a small set of statistics -...