Introduction to the Digital Consciousness Model (DCM) Artificially intelligent systems, especially large language models (LLMs), used by almost 50% of the adult US population, have become remarkably sophisticated. They hold conversations, write essays, and seem to understand context in ways that surprise even their creators. This raises a crucial question:...
This piece presents my thoughts on recent work on introspection in LLMs. The results of recent experiments are very suggestive, but I think we're somewhat primed to read too much into them. I offer some reasons for skepticism about both the general plausibility of introspective mechanisms...
Anticipation in LLMs This post describes some basic explorations of predictive capacities in LLMs. You can get the gist by reading the bolded text. Thanks to Gemma Moran, Adam Jermyn, and Matthew Lee for helpful comments. Autoregressive large language models (LLMs) like GPT-3 and GPT-4 produce convincingly human-like text one word (or...
Overview: Recent developments in AI will change the world in all sorts of ways. AI is likely to revolutionize academic research. This is one proposal regarding how AI might be used to improve the way that we communicate arguments (such as in philosophy). I stand behind the thought that...
Summary: Theories of consciousness do not present significant technical hurdles to building conscious AI systems. Recent advances in AI relate to capacities that aren't obviously relevant to consciousness. Satisfying the requirements of major theories has been, and will remain, quite possible with current technology. The potentially short timelines to plausible digital consciousness...
This post sketches two challenges to ARC's project of eliciting latent knowledge that differ somewhat in kind from the challenges ARC is most concerned about. They relate to the difficulty of distinguishing beliefs from other representations. Introduction The problem of ELK, as outlined in ARC's technical report, is to figure...