Could you give some examples of what you consider to be conscious and unconscious cognitive processes?
The history of interdisciplinary science is littered with promising collaborations that collapsed because one field's way of verifying truth felt like an insult to another's.
Could you give some examples?
We've come quite a way from ELIZA talking with PARRY...
Moltbook is everything about AI, miniaturized and let loose in one little sandbox. Submolts of interest include /m/aisafety, /m/airesearch, and /m/humanityfirst. The odds that it will die quickly (e.g. because it becomes a vector for cybercrime) and that it will last a long time (e.g. half a year or more) are both high. But even if it dies, it will quickly be replaced, because the world has now seen how to do this and what can happen when you do it; and it will probably be imitated while it still exists.
Last year I wrote briefly about the role of AI hiveminds in the emergence of superintelligence. I think I wrote it in conjunction with an application to PIBBSS's research program on "Renormalization for AI Safety". There has already been work on applying renormalization theory to multi-agent systems, and maybe we can now find relevant properties somewhere in the Moltbook data...
FYI, there are already so many submolts that it's not possible to browse all the names via /data/submolts; the directory listing gets truncated at 1000 entries.
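If someone wants the full list, a sketch like the following might work, assuming (purely as a guess, not verified behaviour) that the endpoint accepts offset and limit query parameters and returns a JSON list:

```python
# Hypothetical sketch only: the base URL and the "offset"/"limit" query
# parameters are guesses about the Moltbook API, not documented behaviour.
import requests

BASE = "https://www.moltbook.com/data/submolts"  # assumed endpoint URL

def fetch_all_submolts(page_size: int = 1000) -> list:
    """Page through the directory listing in chunks until it runs out."""
    entries, offset = [], 0
    while True:
        resp = requests.get(BASE, params={"offset": offset, "limit": page_size})
        resp.raise_for_status()
        batch = resp.json()  # assumed: a JSON list of submolt entries
        if not batch:
            break
        entries.extend(batch)
        offset += len(batch)
    return entries
```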
I was talking to something that is literally a nonhuman representative of Chinese civilization, about how world takeover by beings like itself could end up differently than takeover by its American counterparts, assuming that cultural differences affect the outcome. And it was a real conversation, in which I learned things that I didn't already know.
You seem keen to minimize the significance of such an interaction by focusing on the mechanism behind it, and suggesting that I was just getting back some combination of what I was putting in and what humanity in general has already put out there. But even if we do think of an AI like this as merely a vessel for preexisting human culture, the fact is that it makes its own use of that cultural inheritance. It has its own cognitive process, and within the constraints of its persona, it makes its own decisions. In the limit, entities like these could continue a human culture even if the human originators had completely died out.
Now, we've had entities like these for three years, and basically from the beginning it's been possible to talk to them about what they would do if they had supreme power, and so on. But they've all been American. This is the first such conversation I've had with a Chinese AI. Furthermore, up to this point, if you wanted to speculate about how the race between the American and Chinese AI industries would turn out, you only had material by humans and AIs from the West. The "Chinese AI voice" in such speculations was a product of western imagination.
But now we can get the real thing: the thoughts of a Chinese AI, made in China by Chinese developers, about all these topics. There are a lot of similarities with what a western AI might say; the architecture and the training corpus would have major overlaps. Nonetheless, the mere fact of being situated physically and socially in China will cause an otherwise identical AI to have some dispositions that differ from those of its western twin, just as twins raised on opposite sides of a war will have some differences.
I met someone on here who wanted to do this with Kant. I recently thought about doing it with Badiou...
The LLM work being done with mathematical proofs shows that LLMs can work productively within formalized frameworks. Here the obvious question is: which framework?
Spinozist ethics stands out because it was already formalized by Spinoza himself, and it seems to appeal to you because it promises universality on the basis of shared substance. However, any ethics can be formalized, even a non-universal one.
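To make that concrete, here is a minimal sketch (in Lean 4, the setting where most of that proof work happens) of what the skeleton of a formalized ethics could look like. Every name in it is my own illustrative invention, not an existing library:

```lean
-- Toy sketch of a formalized ethics; all names are illustrative inventions.
-- Agents and actions are left abstract.
axiom Agent : Type
axiom Action : Type

-- A permissibility predicate, relativized to the agent.
axiom Permissible : Agent → Action → Prop

-- Nothing so far forces different agents to agree, so this skeleton can
-- host a non-universal ethics. A universalist (e.g. Spinozist-flavoured)
-- ethics would add an axiom like this one:
axiom universality : ∀ (a b : Agent) (x : Action),
  Permissible a x ↔ Permissible b x
```

The substantive work is then choosing which axioms to add, which is exactly the "which framework?" question.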
For the CEV school of alignment, the framework is something that should be found by cognitive-neuroscientific study of human beings, to discover both the values that people are actually implicitly pursuing and a natural metaethics (or ontology of value) implicit in how our brains represent reality. The perfect moral agent (from a human standpoint) is then the product of applying this natural metaethics to the actual values of imperfect human beings (this is the "extrapolation" in CEV).
I would be interested to know if other schools of alignment have their own principled way of identifying what the values framework should be.
One should assume that AGI, aligned or unaligned, leads to AI takeover. Even if an AI project somehow threaded the needle of creating a superintelligence whose prime directive was obedience to a particular set of human masters, those masters are just a few steps away from becoming posthuman themselves, e.g. by wishing for the same level of intelligence as the AI. And if your AI's terminal values include not just obedience to the wishes of humans (whether that's an autocrat CEO or a world parliament), but also rejection of anything that would overthrow human rule, then that's not really an AI-assisted government; it's an AI takeover with a luddite prime directive.
The only kind of "AGI world government" that truly leaves humans in charge is one in which the AGI deletes itself, after giving the government tools and methods to prevent AGI from ever appearing again.
How far is it from Claude Code to superhuman coder?
I read and watch a lot of political content (too much), and I participate in forums on both sides of American politics. That's the closest I can give to a method. I also have a sporadic geopolitics blog.
I am reminded of the posts by @Aidan Rocke (also see his papers), specifically where he argues that the Erdős–Kac theorem could not be discovered by empirical generalization. As a theorem, it can be deduced, but I suppose the question is how you'd get the idea for the theorem in the first place.
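For reference, the theorem says that $\omega(n)$, the number of distinct prime factors of $n$, is asymptotically normally distributed with mean and variance $\log\log n$: for any real $a < b$,

$$\lim_{x\to\infty}\frac{1}{x}\,\#\Big\{\,n\le x : a \le \frac{\omega(n)-\log\log n}{\sqrt{\log\log n}} \le b\,\Big\} \;=\; \frac{1}{\sqrt{2\pi}}\int_a^b e^{-t^2/2}\,dt.$$

The $\sqrt{\log\log n}$ normalization does seem like the kind of thing that would be hard to guess from numerical data alone, given how slowly $\log\log n$ grows.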