Crossposting: [recent Mechanize blogpost advocating for extreme technological determinism and speeding up human disempowerment] is a fine example of thinking you get when smart people do evil things and their minds come up with smart justifications why they are the heroes. Upon closer examination it ignores key inconvenient considerations; normative part sounds like misleading PR.
A major hole in the "complete technological determinism" argument is that it completely denies agency, or even the possibility that how agency operates at larger scales could change. Sure, humanity is not currently a very coordinated agent. But the trendline also points toward the ascent of an intentional stance. An intentional civilization would, of course, be able to navigate the tech tree. (For a completely opposite argument about the very high chance of a "choice transition," check https://strangecities.substack.com/p/the-choice-transition).
In practice, this likely boils down to a race. On one side are people trying to empower humanity by building coordination technology and human-empowering AI. On the other side are those working to create human-disempowering technology and render human labor worthless as fast as possible.
My guess is that when people stake their careers, fortunes, and status on the second option, their minds will work really hard not to see the choice.
Also: at least to me, the normative part sounds heavily PR-sanitized, with obligatory promises of "medical cures" but shying away from explaining either what the role of humans would be in a fully automated economy, or the actual moral stance of the authors. As far as I understand, at least one of the authors has an unusual moral philosophy such as not believing in consciousness or first-person experiences, while simultaneously believing that future AIs are automatically morally worthy simply by having goals. This philosophy leads them to view succession by arbitrary AI agents as good, and the demise of humans as not a big deal.
My impression is that the authors held similar views significantly before they started Mechanize. So the explanatory model that these views are downstream of working at Mechanize, and wanting to rationalize that, seems wrong to me.
I'm not tracking their views too closely over time and you probably have a better idea, but my impression is that there are some changes.
If I take this comment by Matthew Barnett from two years ago (which I only read now), it seems that while the modal prediction is quite similar, the valence / what to do about it is quite different (emphasis on valence-indicating words is mine):
My modal tale of AI doom looks something like the following:
1. AI systems get progressively and incrementally more capable across almost every meaningful axis.
2. Humans will start to employ AI to automate labor. The fraction of GDP produced by advanced robots & AI will go from 10% to ~100% after 1-10 years. Economic growth, technological change, and scientific progress accelerates by at least an order of magnitude, and probably more.
3. At some point humans will retire since their labor is not worth much anymore. Humans will then cede all the keys of power to AI, while keeping nominal titles of power.
4. AI will control essentially everything after this point, even if they're nominally required to obey human wishes. Initially, almost all the AIs are fine with working for humans, even though AI values aren't identical to the utility function of serving humanity (ie. there's slight misalignment).
5. However, AI values will drift over time. This happens for a variety of reasons, such as environmental pressures and cultural evolution. At some point AIs decide that it's better if they stopped listening to the humans and followed different rules instead.
6. This results in human disempowerment or extinction. Because AI accelerated general change, this scenario could all take place within years or decades after AGI was first deployed, rather than in centuries or thousands of years.
I think this scenario is somewhat likely and it would also be very bad. And I'm not sure what to do about it, since it happens despite near-perfect alignment, and no deception.
One reason to be optimistic is that, since the scenario doesn't assume any major deception, we could use AI to predict this outcome ahead of time and ask AI how to take steps to mitigate the harmful effects (in fact that's the biggest reason why I don't think this scenario has a >50% chance of happening). Nonetheless, I think it's plausible that we would not be able to take the necessary steps to avoid the outcome. Here are a few reasons why that might be true:...
So, at least to me, there seems to be some development from "it would also be very bad" and "I'm not sure what to do about it" to "this is inevitable, good, and let's try to make it happen faster". I do understand that Matthew Barnett wrote a lot of posts and comments on the EA Forum between then and now which I mostly missed, and there is likely some opinion development happening across those posts.
On the other hand, if you compare Barnett [23], who already has a model of why the scenario is not inevitable and could be disrupted by e.g. leveraging AI for forecasting, coordination, or something similar, with Barnett et al [25], who forget these arguments against inevitability, I think it actually strengthens the claim of "fine example of thinking you get when smart people do evil things and their minds come up with smart justifications why they are the heroes".
My views on AI have indeed changed over time, on a variety of empirical and normative questions, but I think you're inferring larger changes than are warranted from that comment in isolation.
Here's a comment from 2023 where I said:
The term "AI takeover" is ambiguous. It conjures an image of a violent AI revolution, but the literal meaning of the term also applies to benign scenarios in which AIs get legal rights and get hired to run our society fair and square. A peaceful AI takeover would be good, IMO.
In fact, I still largely agree with the comment you quoted. The described scenario remains my best guess for how things could go wrong with AI. However, I chose my words poorly in that comment. Specifically, I was not clear enough about what I meant by "disempowerment."
I should have distinguished between two different types of human disempowerment. The first type is violent disempowerment, where AIs take power by force. I consider this morally bad. The second type is peaceful or voluntary disempowerment, where humans willingly transfer power to AIs through legal and economic processes. I think this second type will likely be morally good, or at least morally neutral.
My moral objection to "AI takeover", both now and back then, applies primarily to scenarios where AIs suddenly seize power through unlawful or violent means, against the wishes of human society. I have, and had, far fewer objections to scenarios where AIs gradually gain power by obtaining legal rights and engaging in voluntary trade and cooperation with humans.
The second type of scenario is what I hope I am working to enable, not the first. My reasoning for accelerating AI development is straightforward: accelerating AI will produce medical breakthroughs that could save billions of lives. It will also accelerate dramatic economic and technological progress that will improve quality of life for people everywhere. These benefits justify pushing forward with AI development.
I do not think violent disempowerment scenarios are impossible, just unlikely. And I think that pausing AI development would not meaningfully reduce the probability of such scenarios occurring. Even if pausing AI did reduce this risk, I think the probability of violent disempowerment is low enough that accepting this risk is justified by the billions of lives that faster AI development could save.
My moral objection to "AI takeover", both now and back then, applies primarily to scenarios where AIs suddenly seize power through unlawful or violent means, against the wishes of human society. I have, and had, far fewer objections to scenarios where AIs gradually gain power by obtaining legal rights and engaging in voluntary trade and cooperation with humans.
What about a scenario where no laws are broken, but over the course of months to years large numbers of humans are unable to provide for themselves as a consequence of purely legal and non-violent actions by AIs? A toy example would be AIs purchasing land used for agriculture for other purposes (you might consider this an indirect form of violence).
It's a bit of a leading question, but
1. The way this is framed seems to have a profound reverence for laws and 20th-21st century economic behavior.
2. I'm struggling to picture how you envision the majority of humans will continue to provide for themselves economically in a world where we aren't on the critical path for cognitive labor (Some kind of UBI? Do you believe the economy will always allow for humans to participate and be compensated more than their physical needs in some way?)
What about a scenario where no laws are broken, but over the course of months to years large numbers of humans are unable to provide for themselves as a consequence of purely legal and non-violent actions by AIs? A toy example would be AIs purchasing land used for agriculture for other purposes (you might consider this an indirect form of violence).
I'd consider it bad if AIs take actions that result in a large fraction of humans becoming completely destitute and dying as a result.
But I think such an outcome would be bad whether it's caused by a human or an AI. The more important question, I think, is whether such an outcome is likely to occur if we grant AIs legal rights. The answer to this, I think, is no. I anticipate that AGI-driven automation will create so much economic abundance in the future that it will likely be very easy to provide for the material needs of all biological humans.
Generally I think biological humans will receive income through charitable donations, government welfare programs, in-kind support from family members, interest, dividends, by selling their assets, or by working human-specific service jobs where consumers intrinsically prefer hiring human labor (e.g., maybe childcare). Given vast prosperity, these income sources seem sufficient to provide most humans with an adequate, if not incredibly high, standard of living.
Thanks for the reply, it was helpful. I elaborated my perspective and pointed out some concrete disagreements with how labor automation would play out; I wonder if you can identify the cruxes in my model of how the economy and automated labor interact.
I'd frame my perspective as: "We should not aim to put society in a position where >90% of humans need government welfare programs or charity to survive while vast numbers of automated agents perform the labor that humans currently depend on to survive." I don't believe we have the political wisdom or resilience to steer our world in this direction while preserving good outcomes for existing humans.
We live in something like a unique balance where, through companies, the economy provides individuals the opportunity to sustain themselves and specialize while contributing to a larger whole that typically provides goods and services which benefit other humans. If we create digital minds and robots to naively accelerate these emergent corporate entities' abilities to generate profit, we lose an important ingredient in this balance: human bargaining power. Further, if we had the ability to create and steer powerful digital minds (which is also contentious), it doesn't seem obvious that labor automation is a framing that would lead to positive experiences for humans or the minds.
I anticipate that AGI-driven automation will create so much economic abundance in the future that it will likely be very easy to provide for the material needs of all biological humans.
I'm skeptical that economic abundance driven by automated agents will by default manifest as an increased quality and quantity of goods and services enjoyed by humans, and that humans will continue to have the economic leverage to incentivize these human-specific goods.
working human-specific service jobs where consumers intrinsically prefer hiring human labor
I expect the number of roles/tasks available where consumers prefer hiring humans is a rounding error compared to the number of humans who depend on work.
...benign scenarios in which AIs get legal rights and get hired to run our society fair and square. A peaceful AI takeover would be good, IMO.
...humans willingly transfer power to AIs through legal and economic processes. I think this second type will likely be morally good, or at least morally neutral.
Why do you believe this? For my part, one of the major ruinous scenarios on my mind is one where humans delegate control to AIs that then goal-misgeneralize, breaking complex systems in the process; another is one where AIs outcompete ~all human economic efforts "fair and square" and end up owning everything, including (e.g.) rights to all water, partially because no one felt strongly enough about ensuring an adequate minimum baseline existence for humans. What makes those possibilities so unlikely to you?
[I think this comment is too aggressive and I don't really want to shoulder an argument right now]
With apologies to @Garrett Baker.
I think you are partially sanitywashing the positions of your company with regards to the blogpost.
I did not read Matthew's above comment as representing any views other than his own.
is a fine example of thinking you get when smart people do evil things and their minds come up with smart justifications why they are the heroes
I want to just agree because fuck those guys, but actually I think they're also shit justifications. A good justification might come from exploring the various cases where we have decided to not make something, analyzing them, and then finding that there's some set of reasons that those cases don't transfer as possibilities for AI stuff.
The post spends most of its time arguing about why ASI is inevitable and only one final para arguing why ASI is good. If you actually believed ASI was good, you would probably spend most of the post arguing that. Arguing ASI is inevitable seems exactly like the sort of cope you would argue if you thought ASI was bad and you were doing a bad thing by building it, and had to justify it to yourself.
I don't think this kind of relative-length-based analysis provides any more than a trivial amount of evidence about their real views.
What someone spends time writing is an important signal. More generally, what someone is interested in---and spends time on---is one of the most powerful signals as to what they'll do in future.
"Real views" is a bit slippery of a concept. But I strongly predict that most of their outputs in future will look like it's taken "AI will happen and we need to be there" as a primitive, and few will look like they're asking themselves "Should AI happen?". Because the latter question is just not a thought that interests them.
In context, it's fairly clear they included the last section to serve as a transition to their job solicitation.
From the Mechanize blogpost:
Nuclear weapons are orders of magnitude more powerful than conventional alternatives, which helps explain why many countries developed and continued to stockpile them despite international efforts to limit nuclear proliferation.
Yet the number of nuclear weapons in the world has decreased from its peak during the Cold War. Furthermore, we've somehow stopped ourselves from using them, which suggests that some amount of steering is possible.
With regard to the blogpost as a whole, humanity fits their picture most when it is uncoordinated and trapped in isolated clades, each of which is in a Molochian red queen's race with the others, requiring people to rapidly upgrade to new tech if only to keep pace with their opponents in commerce or war. But this really isn't the only way we can organise ourselves. Many societies made do fairly well for long periods in isolation without "rising up the tech tree" (e.g. Japan after the Sengoku jidai).
And even if it is inevitable... You can stop a car going at 60 mph by slowly hitting the brakes or by ramming it into a wall. Even if stopping is "inevitable", it does not follow that the wall and the gentle deceleration are identically preferable for the humans inside.
My thinking about this has changed in the past few weeks, again. I no longer think automating labor is the central issue. The central issue is automating violence.
Consider this. For most of history, the labor of the masses was very much needed by the powerful, but the powerful still treated the masses very badly. That's the normal state of humanity. Right now we're living in a brief anomalous period, where the masses are treated relatively well by the powerful. The anomaly didn't begin because the labor of the masses became valuable; it was always valuable! No, the anomaly began because firearms made mass armies worthwhile. The root cause of the anomaly is the violence-value of the masses, not their labor-value. (Recall the hushed, horrified conversations between samurai in Clavell's Shogun, saying European guns should be banned because any peasant can shoot a gun.) And the anomaly will end when AI weapons make mass armies useless. Past that point, no matter if the masses keep labor-value or not, no matter if the top levels are occupied by AIs or by a minority of humans, the masses will be treated very badly again.
This is the determinism that I wish people like @Matthew Barnett understood and took seriously, instead of focusing on the unimportant determinism of labor automation. If the coming change was only about labor automation, and violence-value remained spread out among the masses, then we could indeed have a nice future filled with a large leisure class. But since there seems no way to prevent the concentration of violence-value, the future looks very grim.
We discuss this in the Misaligned States part of Gradual Disempowerment (the thesis you mention is explored in much detail in Tilly (1990), Coercion, Capital, and European States, AD 990-1990).
I don't think violence is a particularly unique source of power - in my view, forms of power are somewhat convertible (i.e. if a rentier state does not derive income from taxation, it can hire mercenaries to pacify the population).
Also, empirically: military power is already quite concentrated - modern militaries are not that large but would be able to pacify much larger popular unrest, if they had the will to do so. But this is kept in check in part by economic power and in part by culture.
I mostly agree with your writings, my comment was mostly a reply to Barnett :-)
Also, empirically: military power is already quite concentrated—modern militaries are not that large but would be able to pacify much larger popular unrest, if they had the will to do so.
This seems like missing the point a bit. For example, Russia right now is trying to stimulate population growth to get more soldiers. It needs people to shoot the guns. That's the deciding factor of modernity to me, and kicking it out will lead to bad things. The fact that Russia can pacify internal unrest (which is also true) doesn't affect the thesis much.
In practice, this likely boils down to a race. On one side are people trying to empower humanity by building coordination technology and human-empowering AI. On the other side are those working to create human-disempowering technology and render human labor worthless as fast as possible.
I mean, if we're being completely candid here, there is almost no chance the first group wins this race, right?
I think they centrally don't treat "automating all jobs in the economy" as an ASI precursor, or as threatening human extinction or permanent disempowerment.
From the post:
It has only been about one human generation since human cloning became technologically feasible. The fact that we have not developed it after only one generation tells us relatively little about humanity’s ability to resist technologies that provide immediate and large competitive advantages.
Human cloning enables essentially millions of Grothendiecks and von Neumanns, which is likely an immense advantage. Delaying ASI by one human generation (for a start) might actually be a very useful development. So this snippet probably isn't intended as an example analogous to ASI.
From later in the post:
The upside of automating all jobs in the economy will likely far exceed the costs, making it desirable to accelerate, rather than delay, the inevitable.
The point of delaying ASI is that it might allow humanity to change crucial details of the outcome of its development. Even with the premise of ASI being somehow inevitable, it leads to different consequences depending on how it's developed, which plausibly depends on when exactly it's developed, even if it's only a single human generation later than otherwise. So the relevant costs aren't costs of developing ASI, but relative costs of developing it earlier, when we know less about how to do that correctly, compared to developing it later.
But if "automating all jobs in the economy" is just a mundane technology that only threatens the current structure of society where most people have jobs (and so most of the costs are about the resulting societal upheaval), this snippet makes more sense. If the AI economy remains under humanity's control, there is much less path dependence to how introduction of this technology determines the outcome, and so it matters less for the eventual outcome if this happens soon vs. later.
A relevant point of information: Barnett (at least) is on record having some substantial motivating ethical commitments, which aren't mentioned in the post, regarding the moral value of AI takeover. (In short, my understanding is he thinks it's probably good because it's some agents getting lots of what they prefer, regardless of sentience status or specifics of those preferences. This description is probably missing heaps of nuance.)
I'd guess this is a very relevant question in the determination of how desirable it is to 'accelerate the [so-called] inevitable'[1].
I disagree that tech is as inevitable as the post suggests! Though it's very motte-and-bailey throughout and doesn't present a coherent picture of how inevitable it is. ↩︎
The authors write:
Nuclear weapons are orders of magnitude more powerful than conventional alternatives, which helps explain why many countries developed and continued to stockpile them despite international efforts to limit nuclear proliferation.
History is replete with similar examples. In the 15th and 16th centuries, Catholic monarchies attempted to limit the printing press through licensing and censorship, but ultimately failed to curtail the growth of Protestantism. In the early 19th century, Britain made it illegal to export industrial machinery, but their designs were still smuggled abroad, enabling industries to emerge elsewhere in Europe and the United States.
My favourite example of technological controls (export controls, really) working quite successfully is Chinese control of silk technology. China maintained a near-exclusive monopoly over sericulture for many centuries even after the emergence of the Silk Road, from ~100 BCE to 200-600 CE, depending on how you count, although the Chinese had practiced sericulture since at least the 4th millennium BCE. (It seems to some extent the technology diffused eastward to Korea earlier than 200-300 CE, and to Japan shortly after that time too, but in any case China successfully prevented it from diffusing westward for a long time.)
The Chinese monopoly was maintained for a long time despite enormous incentives to acquire silk production technology, which was a tremendous source of wealth for China. The Chinese emperors seem to have imposed the death penalty on anyone who revealed silk production secrets or exported silkworms.
Then, after that, the Byzantine Empire held a European monopoly for centuries, from about the 6th century to 900-1200 CE, fully ending with the Fourth Crusade's sack of Constantinople in 1204. Like China, the Byzantines guarded sericulture technology jealously, and they limited production to imperial factories.
Granted, controlling technological diffusion back then was quite a different task than doing so in the 21st century.
As far as I understand, at least one of the authors has an unusual moral philosophy such as not believing in consciousness or first-person experiences, while simultaneously believing that future AIs are automatically morally worthy simply by having goals.
[narrow point, as I agree with most of the comment]
For what it's worth, this seems to imply that illusionism (roughly, the view of people who, in a meaningful sense, "don't believe in consciousness") makes people more inclined to act in ethically deranged ways, but, afaict, this mostly isn't the case: I've known a few illusionists (was one myself until ~1 year ago) and they were all decent people, not less decent than the average of my social surroundings.
To give an example, Dan Dennett was an illusionist and very much not a successionist. Similarly, I wouldn't expect any successionist aspirations from Keith Frankish.
There are caveats, though, in that I do think that a sufficient combination of ideas which are individually fine, even plausibly true (illusionism, moral antirealism, ...), and some other stuff (character traits, paycheck, social milieu) can get people into pretty weird moral positions.
For an alternative view, look to FLF's fellowship among many other initiatives (including, I believe, some of Jan's) aiming to differentially accelerate human-empowering technology - especially coordination tech which might enable greater-than-before-seen levels of tech-tree steering!
I tried to just strong-downvote this and move on, but I couldn't. It's just too bad in too many ways, and from its scores it seems to be affecting too many people.
a fine example of thinking you get when smart people do evil things and their minds come up with smart justifications why they are the heroes
This is ad hominem in a nasty tone.
Upon closer examination it ignores key inconvenient considerations; normative part sounds like misleading PR.
Et tu quoque? Look at this next bit:
A major hole in the "complete technological determinism" argument is that it completely denies agency, or even the possibility that how agency operates at larger scales could change. Sure, humanity is not currently a very coordinated agent. But the trendline also points toward the ascent of an intentional stance. An intentional civilization would, of course, be able to navigate the tech tree. (For a completely opposite argument about the very high chance of a "choice transition," check https://strangecities.substack.com/p/the-choice-transition).
Maybe "agency at larger scales could change". I doubt it, and I think your "trendline" is entirely wishful thinking.
But even if it can change, and even if that trendline does exist, you're talking about an at best uncertain 100 or 500 year change. You seem to be relying on that to deal with a 10 to 50 year problem. The civilization we have now isn't capable of delaying insert-AI-consequence-here long enough for this "intentional" civilization to arise.
If the people you're complaining about are saying "Let's just build this and, what the heck, everything could turn out all right", then you are equally saying "Let's just hope some software gives us an Intentional Civilization, and what the heck, maybe we can delay this onrushing locomotive until we have one".
As for "complete technological determinism", that's a mighty scary label you have there, but you're still basically just name-calling.
On one side are people trying to empower humanity by building coordination technology and human-empowering AI.
Who? What "coordination technology"? How exactly is this "human-empowering AI" supposed to work?
As far as I can see, that's no more advanced, and even less likely to be feasible, than "friendly godlike ASI". And even if you had it, humans would still have to adapt to it, at human speeds.
This is supposed to give you an "intentional civilization" in time? I'm sorry, but that's not plausible at all. It's even less plausible than the idea that everything will just turn out All Right by itself.
... and that plan seems to be the only actual substance you're offering.
On the other side are those working to create human-disempowering technology and render human labor worthless as fast as possible.
This appears to assume that human labor should have value, which I assume to mean that it should be rewarded somehow, thus that performing such labor should accrue some advantage, other than having performed the labor itself... which seems to imply that people who do not perform such labor should be at a comparative disadvantage.
... meaning that other people have to work, on pain of punishment, to provide you and those who agree with you with some inchoately described sense of value.
If we're going to name-call ideas, that one sounds uncomfortably close to slavery.
It also seems to assume that not having to work is "disempowering", which is, um, strange, and that being "disempowered" (in whatever unspecified way) is bad, which isn't a given, and that most people aren't already "disempowered" right now, which would demand a very odd definition of what it means to be "disempowered".
... and the rest is just more ad hominem.
ACS research is hiring
We're looking for a mix of polymaths, ML research engineers, and people with great intuitions about how AIs behave to work on macrostrategy and LM psychology.
Personally, I hope it could be the Pareto-best option for some of you on the combination of topics to work on, incentives, salary, collaborators, and research environment.
Deadline in a few weeks; 1-2 year appointments in Prague, London, or the San Francisco Bay Area.
Hiring page with more details: https://acsresearch.org/hiring
Gradual Disempowerment Research Fellow
We're looking for polymaths who can reason about civilizational dynamics. This role comes with a lot of intellectual freedom - it could mean economic modelling, theoretical work on multi-agent dynamics, historical analysis, and more.
LLM Psychology & Sociology Researcher
We want people with a strong intuitive understanding of LLMs to help run empirical studies on topics like LLM introspection and self-conception, LLM social dynamics, and how ideologies spread between AIs.
AI Psychology & Agent Foundations ML Researcher
We need people who can bring technical and methodological rigour, taking high-level ideas about AI psychology and turning them into concrete ML experiments. This could include evaluations, mech interp, post-training, and work with both APIs and open-weight models.
Central European experience, which is unfortunately becoming relevant also for the current US: for world-modelling purposes, you should have hypotheses like 'this thing is happening because of a Russian intelligence operation' or 'this person is saying what they are saying because they are a Russian asset' in your prior with nontrivial weights.
I already think that "the entire shape of the zeitgeist in America" is downstream of non-trivial efforts by more than one state actor. Those links explain documented cases of China and Russia both trying to foment race war in the US, but I could pull links for other subdimensions of culture (in science, around the second amendment, and in other areas) where this has been happening since roughly 2014.
My personal response is to reiterate over and over in public that there should be a coherent response by the governance systems of free people, so that, for example, TikTok should either (1) be owned by human people who themselves have free speech rights and rights to a jury trial, or else (2) be shut down by the USG via taxes, withdrawal of corporate legal protections, etc...
...and also I just track actual specific people, and what they have personally seen and inferred and probably want and so on, in order to build a model of the world from "second hand info".
I've met you personally, Jan, at a conference, and you seemed friendly and weird and like you had original thoughts based on original seeing, and so even if you were on the payroll of the Russians somehow... (which to be clear I don't think you are) ...hey: Cyborgs! Neat idea! Maybe true. Maybe not. Maybe useful. Maybe not.
Whether or not your cyborg ideas are good or bad can be screened off from whether or not you're on the payroll of a hostile state actor. Basically, attending primarily to local validity is basically always possible, and nearly always helpful :-)
I already think that "the entire shape of the zeitgeist in America" is downstream of non-trivial efforts by more than one state actor. Those links explain documented cases of China and Russia both trying to foment race war in the US, but I could pull links for other subdimensions of culture (in science, around the second amendment, and in other areas) where this has been happening since roughly 2014.
This theory likely assigns too much intention to too large a structure. The cleavage lines in the U.S. are so obvious that it wouldn't take much more than a random PSYOP middle manager having a lark on a slow Friday afternoon every week, deciding to just deploy some of their resources to mess around.
Although it’s possible policy makers know this too and intentionally make it very low hanging fruit for bored personnel to mess around and get away with only a slap on the wrist.
The core issue, in any society, is that it’s thousands of times easier to destroy trust than to rebuild it.
I'm from Hungary, which is probably politically the closest to Russia among Central European countries, but I don't really know of any significant figure who turned out to be a Russian asset, or any event that seemed like a Russian intelligence operation. (Apart from one of our far-right politicians in the EU Parliament being a Russian spy, which was a really funny event, but it's not like the guy was significantly shaping the national conversation or anything; I don't think many had heard of him before his cover was blown.) What are prominent examples in Czechia or other Central European countries, of Russian assets or operations?
It is difficult to prove things, but I strongly suspect that in Slovakia, Ján Čarnogurský is a Russian asset.
In my opinion, the only remaining question is when exactly he was recruited, and how long a game was played on us. I have suspected him for a long time, but most people probably would have called me crazy for that; however, recently he became openly pro-Russian, to the great surprise of many of his former supporters. So the question is whether I was right and this was a long con, or whether he had a change of mind recently and my previous suspicions were merely a coincidence (homogeneity of the outgroup, etc.).
If this indeed was a long con (maybe, maybe not), then he had a perfect cover story. During communism, he was a lawyer and provided legal support for the anti-Communist opposition. Two years before the fall of communism, he was fired and unemployed. Three months before the fall of communism, he was put in prison. Also, he was strongly religious (perceived as a religious fanatic by some). Remember that Slovakia is a predominantly Catholic country.
After the fall of communism he quickly rose to power. He basically represented the opposition to communism, and the comeback of religious freedom. In the 1990s, the political scene of Slovakia was basically two camps: those nostalgic for communism, led by Vladimír Mečiar, and those who opposed communism and wanted to join the West, led by Ján Čarnogurský. So we are talking here about the strongest, or the second-strongest, politician.
I remember some weird opinions of his from that era. For example, he talked a lot about how Slovakia should be "a bridge between Russia and the West", and that we should build a broad-gauge railway across Slovakia (i.e. from the Ukrainian border to the capital city, which is on the western end). If anyone else had said that, people would probably suspect them of something, but Čarnogurský's anti-communist credentials were just too perfect, so he stayed above suspicion. (From my perspective, perhaps a little paranoid, that sounded a bit like preparing the ground for an easy invasion. I mean, one day a huge train could arrive from Russia right at our capital city, and if it turned out that the train was full of well-armed soldiers, the invasion could be over before most people even noticed that it began. Note: I have no military expertise, so maybe what I am saying here doesn't make sense.)
Then in 1998 he was unexpectedly replaced as leader by Mikuláš Dzurinda, in a weird turn of events that was basically a non-violent coup based on a technicality. (The opposition to Mečiar was always fragmented into multiple political parties, so they always ran as a coalition. Mečiar changed the constitution to make elections much more difficult for coalitions than for individual parties. The opposition parties were like "no problem, we will make a faux political party as a temporary facade for our coalition, win the election, revert the law, disband the temporary party, and return to life as usual", and they put Dzurinda, a relatively unknown young guy, as the leader of the new party. However, after the election, when they asked him to disband the new party, he was like "LOL, I am the leader of the party that won the election, you guys better shut up", and governed the country.) Those were the best years for Slovakia, politically; we quickly joined the EU and NATO. (Afterwards, Mečiar was replaced in the role of nostalgic post-communist alpha male leader by Robert Fico, who has won almost every election since then, and the opposition remains fragmented.)
Thus Ján Čarnogurský lost most of his political power. No longer the natural (Schelling-point) leader of the opposition, he was too much perceived as a religious fanatic to lead anyone other than the religious. So he quit politics, founded a private Paneuropean University (together with two Russian entrepreneurs), and later became openly pro-Russian. Among other things, he supports the Russian invasion of Ukraine, organizes protests for "peace" (read: capitulation of Ukraine), and opposes the EU sanctions against Russia. He is the chairman of the Slovak-Russian Society. Recently he received an Order of Honour in Russia.