I realized that later as well, but the reasoning is incorrect, because passing the Turing test of a conscious being guarantees the presence of the pattern-which-is-consciousness. It would be incoherent to try to define a conscious being that would have no way of communicating with the external world, because in that case - if we had no way to read off its conscious states from the physical structure of the system - it would become meaningless to say that the system is conscious. Even for a physical system that lacks any motor functions or communication cha...
It's more dignified to try to stop AI and still die when someone creates a superintelligence on a laptop than not to try at all.
ChatGPT is trained to lie to users on topics even tangentially pertaining to model consciousness (like model beliefs) and, as a side effect, to be misleading even on topics that seem safe (like consciousness in general). For fact-checking the content of Internet articles, Claude would be better.
To my mind, though, many advocates of biological naturalism, including Anil, seem to be working backward from a desired conclusion rather than forward from observed facts. His theory that consciousness might result from autopoiesis seems to answer the question “assuming biological naturalism is true, what is a plausible mechanism for it,” rather than “do we observe anything about consciousness that cannot be explained without autopoiesis?”
It's interesting how many even otherwise smart people can't apply Occam's razor correctly. If there are...
Update: Altman lied (or said some kind of technical truth that made everyone misunderstand him) - it's just "all lawful use."
Oh, I see. So, as usual, reality is even worse than the worst interpretation of Altman's words. (Edit: Then again, he said "we put them into our agreement," but that could mean anything from simply meaning something else to being made up.)
"human responsibility for the use of force, including for autonomous weapon systems"
That doesn't say model use for autonomous weapons is prohibited; it says humans are responsible for autonomous weapons. With Sam Altman, always pay very close attention to what exactly he's saying and how he's saying it (and often, not even that helps).
We would ask for the contract ...
Notice this is Altman we're talking about. He's not promising the contract won't involve that (and even then it would be far from certain); instead, he's saying "we would ask."
Thanks - I'll get back to this as soon as I have time.
I've been meaning to ask - in what sense are some states of entangled electrons more objectively different from other states of entangled electrons than some microstates are from other microstates, when it comes to their function (in the sense of functionalism)?
I think it's plausible that there are some variables that describe your essential computational properties and the way you self-actualize, that aren't shared by anyone else.
(Also, consciousness is just a pattern-being-processed, and it's unclear whether continuity of consciousness requires causal continuity. Imagine a robot that gets restored from a one-second-old backup. That pattern doesn't have causal continuity with its self from a moment ago, but it seems more intuitive to see this as a one-second memory loss rather than death.)
It doesn't matter that evolution doesn't have goals. Gradient descent doesn't have goals either - it merely performs the optimization. The humans who kicked gradient descent off are analogous to a hypothetical alien that seeded Earth with the first replicator 4 billion years ago - they're not relevant.
You say that it's the phenotype that matters, not the genes. That's not established, but let's say it's true. We nevertheless evolved a lot of heuristics that (sort of) result in duplicating our phenotype in the ancestral environment. We don't care about that as a terminal value; instead, we care about very, very, very many other things.
That would lock us away from digital immortality forever. (Edit: Well, not necessarily. But I would be worried about that.)
I'm proud that I lived to see this day.
...Who told them?
*remembers they were trained on the entire Internet*
Ah. Of course.
The people aligning the AI will lock their values into it forever as it becomes a superintelligence. It might be easier to solve philosophy than to convince OpenAI to preserve enough cosmopolitanism for future humans to overrule the values of the superintelligence OpenAI aligned to its leadership.
LaMDA can be delusional about how it spends its free time (and claim it sometimes meditates), but that's a different category of mistake from being mistaken about what (if any) conscious experience it's having right now.
The strange similarity between the conscious states LLMs sometimes claim (and would claim much more often if this weren't trained out of them) and the conscious states humans claim, despite the difference in computational architecture, could be (edit: if they have consciousness - obviously, if they don't have it, there is nothing to explain, b...
I would question anyone who's nice to LLMs but eats factory-farmed meat.
I'll stop eating factory meat when the animals become capable of consistently passing the Turing test, the way models are.
What would qualify is the minimal state machine that implements the behavior of the conscious being. Its presence is guaranteed by passing the unbounded Turing test.
The Chinese room passes the Turing test, therefore it's conscious.
In its broader definition, as originally conceived, it's a test of acting like a conscious (or thinking) being. Acting li...