Software engineering, parenting, cognition, meditation, other
Linkedin, Facebook, Admonymous (anonymous feedback)
I still owe you a response to this. I'm esp. thinking about predictions.
the model’s internal representations of behavior and linguistic self-description are already aligned.
But that is arguably also the case for humans. Human behaviors are more complex and embedded, though. And the embedding seems crucial as it allows self-observation.
Sure, there are differences between countries and people. Not everything that cells do in a person has a counterpart that people can do in a country, and vice versa. The individual units of a country - people - are much more mobile than the cells in a person. They can even change hosts. I think this is related to the coherence that Dogan mentioned. The coherence of countries is lower than that of persons. On the other hand, countries exist for longer (but "think" slower).
I love Hofstadter, but remember his anthill is fiction, and one shouldn't use it as evidence for anything.
Yeah, I'm not happy that the anthill is fictional. I considered putting it into a footnote, but then I would have to put all the table entries there too, and the comparison in a table would be lost. I think the table helps drive the intuition that the elements of computation can be distributed.
Though I suspect my main sticking point is "entity" rather than "conscious". There's a missing element of coherence that I think matters quite a bit. LLMs are missing coherence-over-time and coherence-across-executions.
I agree with that. In fact, it is one reason I don't see LLMs currently as conscious. An earlier version of this post had a combined system of an LLM and a human interacting with it as another example, but I felt that was too difficult and not core to the thesis. A human, by continuously interacting, can provide the coherence-over-time. Stable awareness patterns and self-perception might still be missing or weak, though.
Countries are missing coherence-between-subsets.
Yes - and I think that's the most fragile part of the analogy. There is coherence, but it's definitely not as robust as a nervous system. Still, we do see subsets (e.g., ministries, branches of government, political blocs) coordinating through shared norms, procedures, and mutual modelling. They're noisy, error-prone, often adversarial, but they're not completely incoherent. At times, especially under external threat or during major events, countries do behave in surprisingly unified ways. These aren't mere aggregations of individual actions; they require and ensure a degree of coordination that maintains a whole over time.
When you say "countries do X", it's always the case that, in fact, some number of individual humans do it, and the others either don't participate or don't stop it.
If we take that critique seriously, we have to stop saying that corporations launch products, or that teams win matches. There's always an underlying substrate of individual action. But we regularly model higher-level entities as agents when doing so improves prediction or explanation. From a functionalist perspective, if "Country X believes Y" helps us model diplomatic behaviour more accurately than tracking all individuals, that’s meaningful - even if we know that it is an abstraction.
Countries do NOT state their right to exist. Humans state their right to be collectively recognized as a country.
Yes, but I think this is too strict a reading. The same could be said about any distributed system. When a program outputs “Hello world,” it’s really just electrons doing things. When a person speaks, it’s really just muscles and neural impulses. The distinction is in the coordination and interpretation. When a state department issues a formal diplomatic communication, it’s acting as the voice of an institution that maintains internal models, makes predictions, and responds to feedback. That is, in all the functional ways that matter, it is the country speaking.
There are almost no muscle groups that act coherently without a brain to help coordinate.
Exactly, and we can extend the analogy to institutions that are the coordinating organs of a country’s body. They can fail, conflict, or contradict each other, which is comparable to a neurological disorder. But that doesn’t mean there is no coherence. It just means the coherence is partial and susceptible to breakdown. One could say that is also true of human consciousness in pathological states.
So yes, I take the point that coherence is crucial. But I don’t think the lack of perfect coherence disqualifies countries from being modelled as agents or even from being on some continuum toward consciousness. The better question might be: Under what conditions does it become useful or predictive to model a system as being conscious?
The last thing may result from a hard-coded genetic heuristic learning rate. We can't update in a fully Bayesian way, and a learning rate is an approximation given computational constraints. There is an optimal learning rate, but it depends on context, such as the trust in prior information, esp. the volatility of the environment. And thus it may happen that your genetic prior for your learning rate doesn't match the dynamics of your current environment. I guess our modern environment changes faster than the ancestral environment, and most people update too slowly on new information. Updating much faster is then probably adaptive. I also have that.
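A toy sketch of that point (my construction, not from the post): a fixed learning rate is an exponential-moving-average approximation to Bayesian tracking, and which rate minimizes error depends on how volatile the environment is. A rate tuned for a stable (ancestral) world does badly in a fast-changing one.

```python
import random

def track_error(volatility, learning_rate, steps=5000, noise=1.0, seed=0):
    """Mean squared tracking error of a fixed-learning-rate (EMA) estimator
    following a randomly drifting latent value observed with noise."""
    rng = random.Random(seed)
    latent, estimate, sq_err = 0.0, 0.0, 0.0
    for _ in range(steps):
        latent += rng.gauss(0, volatility)            # the environment drifts
        obs = latent + rng.gauss(0, noise)            # a noisy observation
        estimate += learning_rate * (obs - estimate)  # fixed-rate update
        sq_err += (estimate - latent) ** 2
    return sq_err / steps

rates = [0.02, 0.4]
slow = {lr: track_error(volatility=0.02, learning_rate=lr) for lr in rates}
fast = {lr: track_error(volatility=0.5, learning_rate=lr) for lr in rates}
best_slow = min(slow, key=slow.get)  # the low rate wins in a stable world
best_fast = min(fast, key=fast.get)  # the high rate wins in a volatile world
print("stable world:", best_slow, "volatile world:", best_fast)
```

All the numbers (volatilities, rates, noise level) are arbitrary; the point is only that the error-minimizing rate shifts upward as the environment gets more volatile.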
Hm, yes, seems plausible. Very inconsistent though. And they should remove the second paragraph, which seems to imply that it is still possible to apply anyway.
Can somebody get me in touch with somebody from the Center for AI Safety (safe.ai)? Their page for applying for compute resources seems broken. I have used their contact form to report the issue on April 7th, but received no reply.
This is what the application page has looked like since at least then (linked from their Compute Cluster page):
As you can see, there are no form fields to fill in, only a lone "Absenden" button, which is German for "submit" (strange, because my system and browser are set to English). If I click that button, I get this message:
Looks like this form is empty. Try filling it out before submitting.
My guess is that there is a problem with their Airtable integration.
If you wonder what I'm trying to apply for:
It is a great idea to test a hypothesis experimentally. I did your experiment too, and the result is:
Several experiments show that I can extract useful information just by treating myself as a random sample, and thus a view that I can't use myself as a random sample is false.
I think there are some problems here. A more accurate claim would be:
You can do experiments that extract useful information about whether you can treat yourself as a random sample (i.e., a representative or "typical" sample) by comparing the result of the experiment to the baserate.
Or at the very least, based on my experiments, for me, the claim seems to be false. I'm not representative enough. But I can't know that without comparing my results to a baserate. I can't use the observations to establish a baserate or make estimations such as expected lifetime.
From a statistical perspective, a random sample means:
You may not be representative along some observable or unobservable dimension relevant to your purpose. And to know whether you are representative, you have to look at other samples, and then you are back to some kind of baserate.
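A toy simulation of the point (all numbers made up): if I happen to come from an atypical subgroup, my own observations give a confidently wrong estimate of the population value, and only a comparison with an external baserate can reveal that I'm not representative.

```python
import random

rng = random.Random(42)

# Hypothetical population trait with true mean 2.0; "me" drawn from an
# atypical subgroup centred at 4.0.
population = [rng.gauss(2.0, 1.0) for _ in range(100_000)]  # baserate data
me = [rng.gauss(4.0, 1.0) for _ in range(30)]               # my observations

pop_mean = sum(population) / len(population)
my_mean = sum(me) / len(me)

# Treating myself as a random sample, I would estimate the population mean
# as ~4.0 - far off. Nothing in my own data flags the error; only the
# external baserate does.
print("my estimate:", round(my_mean, 1), "baserate:", round(pop_mean, 1))
```

The within-sample statistics (my sample mean, its variance) look perfectly well-behaved; the non-representativeness is invisible without the outside comparison.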
Outside view, context, and details. I'd ask
Of course, at least in the context of startups, the successes of the startups will be correlated, for multiple reasons: partly selection effects (they were selected by the same funders), partly network effects (startups in the same batch will benefit from, or harm, each other).