I wrote a previous blog takedown of Malatora, which was somewhat like shooting fish in a barrel. In the spirit of moving on to harder targets, I now turn my meager attentions and talents to the Terasem Movement. This is a marginally more difficult movement to vivisect, but ultimately there are many problems with it. I should say that I am on the whole much more sympathetic to the Terasem Movement than to Malatora, though that isn’t a difficult hurdle to clear. Let’s get right down to it:

Addressing Terasem

General Impressions

There are many different components to the Terasem movement. There’s a kind of dopey, new-agey religiosity that combines less-than-fully-researched physics with some Carl Sagan tropes and some general scientistic philosophical incompetence, and throws in a dash of instrumentalist reinterpretation of tradition for good measure. Taken together, the overall result is not great: Terasem lacks the ethos and justification of real science while simultaneously stripping religion of its force through instrumentalization. This is not a strong foundation for any sort of normative system, and most of Terasem’s subsequent problems stem from a mixture of intentional and unintentional sophistry that isn’t quite enabled by this weak foundation, but certainly isn’t helped by it; it is the flaw the sophistry tries hardest to brush under the carpet. It’s best to describe Terasem as “science-adjacent.” By this I don’t mean that it’s antithetical or exclusionary to science, and indeed many of its members seem very well credentialed. But it’s not science, at all, and its use and misuse of technological terminology makes my bullshit detector explode. It’s possible I am just not educated enough to understand certain terms and concepts, so I will mostly leave this alone and operate as if everyone is engaged in good faith.

Besides the religious component, there are various philosophical leitmotifs and idées fixes which are regrettable, most especially in the frequent appearance of contradiction. I understand that a large movement can’t be entirely consistent, but the differences that are glossed over are more substantive than what is actually explicated, and I will show a few examples of this later. I also get the impression that many of these people believe that simply defining enough terms with sufficient detail and clarity suffices to establish an argument, to solve a problem, or to make a problem disappear. I don’t know why they think this. I suspect it goes back to the science-adjacent attitude: scientific statements are analytic statements, therefore surely enough analytic statements clustered together is sufficient for any purpose, regardless of whether they correspond to anything, connect together in a substantive form, or even mean much of anything. Maybe this is inspired by early Wittgenstein fetishism? But that’s probably giving too much credit.

Lastly, there is a political component, which again consists mostly of establishing definitions, followed by a bizarre mixture of semantic and normative arguments for why these definitions should be adopted (again without any apparent normative foundation), and roughly nothing else. These are definition fetishists in a major way, and that alone suffices to show that most of their writing is empty or weak, but let’s try to crack things open and break them down a bit more anyway.

Philosophical issues

Since these issues are all over the place, I will respond to them individually, and few connections between them may be apparent. I will use list form so as not to imply connections that aren’t there, but I will organize the list roughly in terms of how the concepts segue into one another.

  • The strawmanning of opponents as believers in “élan vital”, followed by a demonstration of Terasem’s own belief in élan vital

Essentially all philosophical arguments about consciousness are dismissed with a handwave in which Terasem accuse Emergentists, Dualists, and seemingly anyone who has ever put forward an argument of any complexity about consciousness of viewing it as a kind of “special sauce” poured over the brain. Ok. It seems uncontroversial that the brain is the source of consciousness, but there is still the question of how it is the source of consciousness, and dismissing any discussion of that question with a handwave is unproductive in the extreme.

But it’s worse than this. This is, as far as I can tell, the official Terasem position on consciousness:

“Consciousness is not a big problem. I think consciousness is simply what it feels like to have a cortex.”

What is it that feels in this construction? The cortex? Shouldn’t the second sentence then read “I think consciousness is simply what it feels like to be a cortex”? Fine, but what is it about the cortex that makes it cause consciousness? Is it the substance of the cortex? Terasem elsewhere seem to advocate the position that what matters to consciousness is just information, and they make little explicit distinction between whether they mean information statically or in its sequential operations. But this is unhelpful as well. If they mean statically, then it is hard to see why a book, for instance, isn’t afforded the same moral status as a human being (and absurdly, in the movie 2B, a storage device containing the “mindfile” of a major character is treated as having personhood, in the same sense as the character himself). If they mean dynamically, then various other problems emerge.

If a cortex can be constructed as a Turing machine, then it can be constructed in various ways: not just as a modern electrical computer, but also as a water-based computer, or even potentially a large collection of abacuses. It could even be some god-awful Rube Goldberg contraption involving all of these elements, plus smoke signals, carrier pigeons, and American Sign Language exchanges between trained chimps. If it is enough that the structure of the cortex is encoded in these operations, then consciousness should be present in some sense in all of them, but that seems plainly absurd. So there must be something special about the human cortex, not just in the sense that it is a cortex, but in the specific form of its construction. But this is much closer to the concept of élan vital than any emergentism.
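To make the multiple-realizability point concrete, here is a minimal sketch of my own (nothing like it appears in Terasem’s material): one abstract machine, a trivial parity checker, realized on two different “substrates.” Everything the abstract machine specifies is preserved in both realizations, which is exactly why the abacus and smoke-signal versions would have equal standing.

```python
# A minimal sketch (mine, not Terasem's) of multiple realizability:
# the same abstract machine implemented two very different ways.
# Both compute the parity of a bit string; nothing about the abstract
# machine privileges one realization over the other.

def parity_transition_table(bits: str) -> str:
    """Realization 1: an explicit state-transition table (a lookup structure)."""
    table = {("even", "0"): "even", ("even", "1"): "odd",
             ("odd", "0"): "odd", ("odd", "1"): "even"}
    state = "even"
    for b in bits:
        state = table[(state, b)]
    return state

def parity_arithmetic(bits: str) -> str:
    """Realization 2: arithmetic, with no table anywhere in sight."""
    return "odd" if sum(int(b) for b in bits) % 2 else "even"

# The two realizations are indistinguishable at the level of the
# abstract machine's behavior:
assert all(parity_transition_table(s) == parity_arithmetic(s)
           for s in ["", "0", "1", "1101", "111000111"])
```

If encoding the structure of the operations were sufficient for consciousness, both realizations would have to be equally conscious, and so would every Rube Goldberg variant.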

Now, this “something special” can presumably be replicated in silicon; that’s a basic premise of the Terasem movement, so that gets us a little further back into philosophical territory. But Terasem seem to think that merely replicating the structure, input, output, and data of a human mind in silicon would be enough to create consciousness. This is potentially absurd, and it raises other questions: for instance, at what point does consciousness emerge as each of these components is added? If it’s just the structure, then presumably any computer with an architecture equivalent to a cortex is already in some sense conscious whenever it performs any operations, and not just when it is performing characteristically human operations. Then there is the question of information processing: if a cortex is conscious, is it conscious of the meaning of the data propagated through it in the form of computer code? If so, at what level? Or is it conscious in some other sense, by some as yet unknown mapping of the physical processes of the cortex to discrete experiences of phenomenal consciousness? If so, the simple fact that a silicon cortex takes certain input, processes data perhaps in ASCII format (built up from binary) in a way dictated by assembly code (built on top of complex digital circuits), and produces certain output tells us nothing about what the conscious experience inside that cortex is, EVEN WHEN THE BEHAVIOR OF THE SYSTEM EXACTLY IMITATES THE BEHAVIOR OF A SPECIFIC HUMAN. And this is the biggest single problem with Terasem’s concept of consciousness as mere data processing on a specific architecture. Consciousness may be data processing, but that does not mean conscious experience on an architecture corresponds to data on an architecture. To assume so is to make data into the élan vital. And this does not, in turn, mean that consciousness is not data. It just means that data is not consciousness.
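To illustrate the behavioral point, here is a small sketch in the spirit of the classic “giant lookup table” thought experiment (again my construction, not Terasem’s): two systems with identical input/output behavior over a domain, one computing its answers, the other merely replaying a recording.

```python
# A sketch of my own, assuming nothing from Terasem: two systems with
# exactly identical input/output behavior but radically different
# internals. If behavior is all we can observe, it cannot tell us which
# internal process (if either) corresponds to a conscious experience.

def responder_computed(prompt: str) -> str:
    """'Thinks' by running a (toy) computation over the input."""
    return f"I count {len(prompt.split())} words."

# Record the computed system's behavior over some domain of inputs...
domain = ["hello there", "one", "a b c d"]
recording = {p: responder_computed(p) for p in domain}

def responder_lookup(prompt: str) -> str:
    """Replays the recording: a pure lookup table, no computation at all."""
    return recording[prompt]

# Behaviorally indistinguishable on the recorded domain:
assert all(responder_computed(p) == responder_lookup(p) for p in domain)
```

Behavior alone cannot distinguish the two, let alone tell us what, if anything, either one experiences.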

  • The soul

There’s a lot of denigration of the concept of the soul, in the same manner as there’s a general denigration of philosophy. But the concept of the soul has pragmatic implications regardless of any higher metaphysical questions. In practice, the soul is whatever it is that people think is important about themselves, even when it isn’t real, isn’t yet actual, or can’t ever be actualized. Expected-value calculations and even vaguer notions of “potential” inform the concept of the soul, in such a way that a person’s future is considered a part of them that can be damaged, healed, augmented, or amputated. As Kierkegaard put it, ‘The most painful state of being is remembering the future, particularly the one you’ll never have.’ This could only be true if the totality of a person’s being extended across time, like a thread. And of course this is a very classical notion, with Clotho, Lachesis, and Atropos representing the most famous example. If string theory is true, then we are literally energy travelling down strings, and thus it’s possible to map these very metaphysical concepts and their implications to physical processes, in much the same way as it’s possible to map classical deterministic concepts to the implications of general relativity.

The soul is an important concept for this reason: it refers to the contingent but persistent conditions of human existence under which a person is willing to believe in the possibilities of meaning and value. Normally these conditions are contingent only on life and death, but there’s no reason that contingency can’t itself be contingent on fleeting historical circumstances. Since these conditions are contingent, it’s very important that we don’t separate people from their souls, regardless of what a soul “actually is”.

  • The irrelevance of vegetarianism in the context of eternity

There is only one kind of pain that really matters in the context of eternity, and that is the pain of separation from oneself, from some substantive circumstance or possibility without which one can’t construct meaning or value, whether in absolute or strong relative terms. I think it’s great that the Terasem movement care so much about animals. There is rightly a recognition that animal consciousness is on a continuum with human consciousness. The idea of there being something substantively different between human and animal consciousness is implausible, and without a difference in substance between the two, the notion of qualitative differences between man and animal becomes less plausible in turn. The notion of an “I-concept” or ego is sometimes put forward, but some animals have this, and some human beings do not, including, if we take people’s mystical self-accounts seriously, many high-functioning human beings. No one seriously advocates the position that it is justifiable to kill and eat Buddhists.

However, pain without lasting deprivation can be compensated trivially. It’s just a simple utilitarian calculation of offsetting disutility with utility. This doesn’t, of course, make disutility good or ideal. We would prefer to do away with factory farming and other practices that harm animals, because harm is always bad. But if we’re really talking about eternity, or at least k-large lifespans, for every living thing, then even the worst suffering of animals (including human animals) should disappear as an epsilon beneath the steadily increasing pile of utility they accrue after resurrection. This is true in all cases except those where a being is permanently separated from something they consider essential to them, something for which they would sooner suffer forever than be deprived of.
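One way the offset argument might be formalized (the notation is mine, not Terasem’s): treat a past harm as a fixed quantity of disutility and let utility accrue at some positive rate over a k-large lifespan.

```latex
% Fixed disutility D > 0 from a past harm, utility accruing at rate u > 0
% over a lifespan of length T: the harm's share of the total vanishes.
\[
\lim_{T \to \infty} \frac{D}{D + uT} = 0
\]
% A permanent deprivation, by contrast, grows with lived time, D(T) = dT
% with d > 0, and its share never vanishes:
\[
\lim_{T \to \infty} \frac{dT}{dT + uT} = \frac{d}{d+u} > 0
\]
```

Nothing here depends on the exact rate u; it only requires that utility keeps accruing while the harm stays finite, which is exactly the distinction between compensable pain and permanent separation drawn above.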

  • The Transbemon identity argument rests on treating “everyone wants to be themselves” as an axiom

To a certain very substantial extent, I don’t want to be myself. Argument refuted (and it shouldn’t be hard to find countless other cases: whenever we hear someone say “I wish I had never been born,” that is a refutation of this position). So this is a problem that would need to be rectified in technical terms before the scheme could be made uniformly justifiable.

This calls back to the previous entry, and raises an important point. To an extent, wanting to be yourself and not wanting to be yourself are the same thing in this context. That is, the pain in question is the pain of being yourself in the permanent mode of not being yourself. Since this is still a mode of being oneself, it can lead to what Kierkegaard calls a demonic disposition, in which one’s pain over these circumstances becomes the only affirmation of self that remains, and hence what one pushes into in order to affirm oneself. But this is plainly disutilitarian and less than ideal. We would prefer people to be themselves in the mode of being themselves, and so in a sense the Transbemon identity argument is refuted in practical terms even while a semantic argument remains that it is correct; i.e., it is only true that some people want to be a non-extant and non-actualizable version of themselves, hence that they still want to be themselves. But pragmatically this is still a refutation, since Terasem think it’s enough just to naively duplicate the information or facticity of a life.

  • No continuity of consciousness with any of the proposed technologies

This is a very simple criticism. Terasem only promotes methods and techniques that don’t preserve continuity of consciousness: mind uploading in various forms, cryonics, reconstruction from mindfiles derived from secondhand information. Now, these may produce a person who is “the same” in terms of identity, and I see no reason to object to that. But without continuity of consciousness, there is still death. If the goal is to preserve the value of a person, their ability to create value and meaning, their identity, or any number of other things, then these methods may be helpful. They minimize the consequences of death for the universe, or for a society. They do absolutely nothing about the consequences of death for the individual. Which means Terasem is inherently non-individualistic in this sense, and anyone who goes in for these technologies must possess the uncommon combination of egoism and selflessness necessary to justify it to themselves.

  • Sophistry over continuity of consciousness

Everything Terasem says about the perpetuation of identity is a sophistry that tries to link identity and information with consciousness. Consciousness is not information, or even identity. People change identity over time, sometimes spontaneously, sometimes with discontinuities between which no identity is present. Often this is precipitated by drugs or trauma, but it occurs, and yet we wouldn’t call people “unconscious” at any point during it. Information is not consciousness. It’s possible for consciousness to be unrepresentable in some sense. For instance, if the brain is a Turing machine, Gödel’s incompleteness theorems apply to it. If the encoding of the brain is incomplete in a way that doesn’t correspond to the way in which the mathematical language we use to describe it is incomplete, then there are things happening in the brain that go unrepresented. You might say that the information is still there, but it isn’t accessible. More fundamentally, I call back to the earlier argument: while consciousness may be data, data isn’t consciousness.

  • Belief in ignorance as a motive for all human evil (as demonstrated in the movie 2b)

It is perhaps unfair to use an entertainment as a basis for judging a movement, but the film was produced on behalf of the movement, and the argument is so crystal clear and so fallacious that it has to be addressed. The idea that people only do wrong out of ignorance is not justified. I will not make the customary counterargument from free will, since it is unlikely to mean much in this context. It is a very classical idea, certainly, and can be found among the Stoics, in Socrates, and in many other venerable places. But I will simply state that it is empirically false. For this we need only a single case of any potency. Rather than go for a big fish like a dictator, I think it suffices to point to internet trolls, whose motto is often literally “for the lulz.” It may be true that they lack a moral education, but a moral education in this sense is more a matter of conditioning than of a classical lack of information. This is not, precisely speaking, a problem of ignorance. It is more like feralness.

  • “Synthesis of lots of people, result of many individuals leading to creation of one new and unique individual”

This line also comes from the movie, in which the synthetic human (in Terasem parlance, a “Transbemon”) explains that this is the process by which they were created. However, the idea also appears in the Terasem literature, so it represents a concept they explicitly promote. What exactly is meant by it? It is somewhat unclear. Are all these individuals merged together without loss of any individual stream of consciousness? If so, this is not necessarily a moral problem, though it certainly raises the possibility of moral objection; but it gets into very serious philosophical territory concerning the nature of identity and consciousness, territory which is characteristically avoided by the writers. In place of engagement, Terasem simply puts forward their own concept of identity as data or information, which is very vague and rife with issues, as I have already shown.

  • “To exist is to be something, as distinguished from the nothing of non-existence, it is to be an entity of a specific nature made of specific attributes. Centuries ago, the man who was—no matter what his errors—the greatest of your philosophers, has stated the formula defining the concept of existence and the rule of all knowledge: A is A. A thing is itself. You have never grasped the meaning of this statement. I am here to complete it: Existence is Identity, Consciousness is Identification.”

This nice little injection of Ayn Rand predictably runs afoul of modern philosophy. Descartes’ Cogito, “I think, therefore I am,” was divided by Sartre into a Synthetic Cogito and an Analytic Cogito. Roughly, the Analytic Cogito can be expressed as “consciousness is aware,” while the Synthetic Cogito can be expressed as “I am conscious, therefore I am aware.” The “I” is a specific structure of consciousness, and as a specific structure, its nature cannot be analytically deduced. Of course, even the Synthetic Cogito is still analytic if you make no claims at all about the content of “I,” but in that case it collapses, in a sense, into the Analytic Cogito itself. If you can’t even define “I,” then you can’t build anything on it; it is superfluous, and you might as well stick with “consciousness is aware.”

  • “Software is the new soul”

No. See previous. It may be possible to encode the soul in software. Software is not a soul.

  • “Bemon”/“Bemes”

This is more of a terminology quibble. “Bemon” comes from “bemes,” which are basically just qualia reframed in non-metaphysical, reductively physical terms. This is not entirely true, since the concept of bemes also includes the overhead and so on of these qualia, which makes the concept both more useful and less precisely abstract than qualia. By “bemes” is meant something like “qualia packets,” in which various metainformation and associations are encoded along with the core data that would be analogous to reductive qualia. That’s all fine. But I would personally rather abuse the term qualia in physicalist terms than invent an entirely new one, because inventing a new term leaves too many useful associations behind.

  • Purely pragmatic interest in religion

Kind of a glass-houses situation here, but there are major practical problems with this besides the loss of normative force. Terasem propose dissociating the action of a ritual from its benefits, e.g., seeking to isolate the benefits of prayer and meditation and to re-engineer humans to achieve these without the rituals, which are respected as primitive means to a broader end. But you can’t always dissociate the action from the benefit in religion, and you can’t take this reasoning to its limit; otherwise, why do anything? This is not just facetious, it is a real problem. When it’s possible to dissociate action from reward, why not just receive pure benefits and never act? There are two classes of response: pragmatic responses in turn, about the necessity of action for self-preservation (which have limited scope and can only be used to argue for a small range of activity), and Nietzschean, vitalistic, or essentialist arguments about either the spiritual value of action or the essential character of man. These are among the issues I am most focused on in trying to construct the CTCR and the dragonsphere.

  • “If will, awareness and self are, in most part, illusions that we construct because of our evolutionary heritage, and our limitations; then maybe, once we get smarter and more aware, we’ll get rid of them.”

Only drug addicts and philosophical illiterates believe this. It is analogous to a person sawing at the branch they are currently sitting on. In order for illusions to exist, there must be something to perceive them. “Consciousness is aware” is our Analytic Cogito; fundamentally there is no consciousness without awareness, and so this is the stupidest part of the utterance. “Self” is perhaps a sort of metastructure built on top of base consciousness, in which no identity is fundamentally present. That’s not, precisely speaking, “illusion.” It’s higher-level structure. But I suppose some lunatics might want to do away with it if it suits them. More power to them, but I’ll keep my self, thanks.

Moral issues

There are only a handful of these, but they are significant. I continue the list form:

  • “Synthesis of lots of people, result of many individuals leading to creation of one new and unique individual”

I have two objections, one of which is very tentative. Obviously, if there is no continuity of consciousness when the many individuals are merged, then every creation of a synthetic human or “Transbemon” constitutes a miniature, individualized holocaust. That would be bad. The second objection concerns the duty to maximize utility, which for pragmatic reasons requires the existence of many subjects: there is an upper bound to the amount of utility an individual can receive, therefore more individuals is better. Thus, in general, anything that decreases the number of individuals in existence is potentially negative in utilitarian terms even when no other aspect of the procedure is negative. I am not a Utilitarian, and I am not sure I really believe this argument even as I articulate it, but I feel the need to articulate it anyway for the sake of completeness.
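For what it’s worth, the pragmatic half of this tentative objection fits in one inequality; the notation is mine, not anything from the Terasem literature.

```latex
% With n individuals, each capped at a per-individual utility U_max,
% the attainable total scales with headcount:
\[
U_{\mathrm{total}} \;\le\; n \, U_{\max}
\]
% so merging m individuals into one lowers the attainable ceiling by
% (m - 1) U_max, even if the merge itself harms no one.
```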

  • Collective consciousness

I don’t really care for the idea of collective consciousness for the most part. It creeps me out. By collective consciousness, Terasem mostly means “no barriers to communication of thoughts”. Here is a quote to demonstrate:

“We are at the cusp of a complete borderless meshing of all of our minds, and the ability to reach one’s thoughts and for one’s thoughts to reach others thoughts without ever slowing down to the speed of text is upon us.”

This destroys the immense utility of privacy, and that is the basis of my moral objection to it. More pragmatically, there’s an episode of the anime Kino’s Journey that touches on this, in which people use nanotechnology to become permanently psychic, in a way that can’t be turned off. They expect paradise, but the result is a hell that leads everyone to retreat into the countryside, where they each live alone, supported by machines. Existing communication is built on lies, obfuscations, and omissions. It is the result of millions of years of evolution and is perhaps the most finely tuned machine ever to exist. It seems irresponsible to short-circuit it because of wishy-washy collectivist pretensions.

  • “Save all kind consciousness”

This is non-universalist in the wrong way; yet Terasem express universalist pretensions elsewhere, so it isn’t even consistent; and it is coupled with the idea of curing the “bads”. I have a strong commitment to the eventual resurrection of everyone, but I don’t believe everyone should be treated equally. For instance, Hitler still needs to be punished in some way, not just cured. Do we invent hell for him? I don’t think eternal hell is appropriate under any circumstances, but there are certainly people who will insist that nothing less would be appropriate for a Hitler. We will have to answer them, or at least stop them, but what is the compromise? K-large hell?

‘The king said, “The third question is, How many seconds does eternity have?”
The little shepherd boy said, “The Diamond Mountain is in Lower Pomerania, and it takes an hour to climb it, an hour to go around it, and an hour to go down into it. Every hundred years a little bird comes and sharpens its beak on it, and when the entire mountain is chiseled away, the first second of eternity will have passed.”‘

When conscious experience is measured on such a scale, it will actually be possible to torture someone for lengths of time corresponding to a kalpa, and Hitler would seem an intuitive candidate for this torture; it accords with a moral intuition that is quite prevalent in the modern world, second only to the concept of eternal hell. So the question becomes, “Why not just let this moral intuition guide us?”

Several reasons. First of all, what are we answering? I have already advanced the argument that the only kind of pain that matters in the context of eternity is the kind that results in permanent deprivation. Does that mean we should punish Hitler, not with eternal hellfire, but with permanent deprivation of some sort? That punishment, maximized to eternity, is only appropriate if Hitler himself permanently deprived others of their own essential value. Now, certainly of all the things Hitler did, some must have deprived others of what they viewed as their essential value. Viktor Frankl describes no shortage of suicides and even deaths by despair on this basis, and the Holocaust is well documented, so this is not difficult to prove. But the focus should be on repairing the damage done by Hitler, not on punishment: firstly because no amount of punishment could restore the untreatable damage, and secondly because any damage that can be treated doesn’t warrant eternal punishment. So how should Hitler and others of his kind be punished? I can only say, “proportionally to whatever damage remains.”

  • Enhancement of the great apes and uplifting of dolphins

Animals can’t consent. They can’t even consent in principle. This is important, since it means there is no ethical non-consent workaround of the form “consent would emerge if the question could be posed, and this suffices in place of actual consent.” While as a gnostic of sorts I find the idea of playing the snake in the garden appealing, there is nothing morally wrong with being an animal, and being made into a person introduces the possibility of unique forms of harm. In fact, being made into a person is already potentially a form of harm in the very sense that it represents becoming yourself in the mode of not being yourself, if you consider your ancestral ignorance to be your essential self. Many people want to return to the garden of ignorance, and in some the inclination is strong enough that it runs into Kierkegaardian demonism. It doesn’t seem entirely moral to uplift animals, and the arguments for doing so (improved functionality in a modern world, mostly) seem entirely insufficient in a context of eternity, artificial virtual environments, and methods of establishing continuity of consciousness from the past into those environments (methods which are not explicated anywhere in the Terasem movement, but have been explicated elsewhere, both on this blog and in other, better writings). We have infinitely more right, justification, and incentive to uplift stupid people than animals. Terasem views intelligence as a fundamental good, but utility is not a function of intelligence, so there is little utilitarian argument for uplifting. There is something creepy about the idea of being eternally maintained at an arrested level of development, however, so in my view the solution is one I’ve already proposed for other problems elsewhere: artificial reincarnation, coupled with synthetic evolution. An eternal zoo is immoral, but so is a non-teleological process of uplifting a creature from animal to person.

Practical issues

  • Too much bloviating about values, not enough meaningful meta-framework for governance

“The most basic principles of Terasem: Far reaching ethical principles to guide us and optimize our future, as computers become self-conscious”

Terasem talks a lot about what they think is good. They don’t have a strong foundation for it, and by itself it is entirely insufficient just to say what is good. You have to propose mechanisms by which to attain, maintain, and support what you consider good. Anything else is Aesop’s Mice in Council: an ideal with no mechanism for achieving it.

  • “Win win trumps zero sum”

In a fundamental way, zero-sum contests are the only thing that binds people together. This does not just mean materially zero-sum contests. The most important contests to people are generally status contests, which are by nature mostly zero-sum, albeit with many alternative hierarchies capable of arising spontaneously. Without zero-sum contests, there is little basis for interaction between people.

  • Parenthood as IP based on property right incentives for creation and care

Newton and Leibniz invented calculus independently. Calculus is an external meme. Consciousness is a recurring natural pattern. Even Rothbard recognized the absurdity of patenting memes, given that two people can independently come up with an idea even as complex as calculus; but patenting a recurring natural pattern is far more incoherent still. It was an interesting idea, because the incentives associated with intellectual property are real, but it requires much more exploration than it actually received, and it would need to answer many of the traditional arguments against intellectual property, not merely to the usual degree but to a much stronger one.

  • Too much focus on definitions

No definition that requires the conjunction of specific scientific and philosophical knowledge with specific non-universal values will ever influence the average person, and as such most of this writing has no rhetorical force or practical power.

  • The fixation on rules ignores the fact that behavior is controllable in virtual environments

That is, it shouldn’t even be possible to break rules unless it is sometimes to our advantage to allow it.
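As a sketch of what I mean (a hypothetical construction of mine, not a Terasem proposal), consider a toy virtual environment in which illegal actions are rejected or absorbed before they can ever produce a state, so that “rule-breaking” is simply not a thing that can happen:

```python
# A minimal sketch of rule enforcement by construction in a virtual
# environment: the environment's step function is the only way to act
# on the world, so no rule can be "broken" and no punishment machinery
# is needed.

from dataclasses import dataclass

@dataclass(frozen=True)
class World:
    positions: dict  # agent name -> (x, y)

LEGAL_MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def step(world: World, agent: str, move: str) -> World:
    """Advance the world by one action. Anything outside LEGAL_MOVES is
    rejected outright; moves off the map are absorbed as no-ops, so the
    boundary behaves like a law of physics rather than a punishable rule."""
    if move not in LEGAL_MOVES:
        raise ValueError("not an action in this universe")
    x, y = world.positions[agent]
    dx, dy = LEGAL_MOVES[move]
    nx, ny = x + dx, y + dy
    if not (0 <= nx < 100 and 0 <= ny < 100):
        return world  # the wall simply does not yield
    return World({**world.positions, agent: (nx, ny)})
```

In an environment like this, “don’t leave the map” never needs to be stated as a rule, let alone enforced after the fact; the question of violation doesn’t arise.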

  • Article: “THE MEME OF ALTRUISM AND DEGREES OF PERSONHOOD”

This is perhaps the single stupidest article in the Terasem archive. It denigrates selfishness and puts forward a naive belief in equality which I have addressed as senseless elsewhere. It doesn’t understand the role or function of selfishness, or the limitations and problems that prevent equality from being attainable. And it doesn’t understand how technology is liable to exacerbate rather than alleviate these issues.

  • Practical problems with Cryonics

Cryonics doesn’t preserve continuity of consciousness. Furthermore, while the crystallization problem was solved, the neurotoxicity problem has not yet been overcome. New techniques may help with this but aren’t on the market: that is, it is now practically possible to preserve a human connectome, but the techniques for doing so are not available to existing cryonics institutions. Time seems liable to solve this issue, but it is presently still an issue.

  • Too rule focused and not sufficiently tactically focused

It is fine to come up with ideas for rules, but rules are meaningless without power. In the absence of access to state power, there is only a kind of grassroots or distributed power. This means that more potent organizing and a substantive praxis are of higher priority than intellectualizing.

Spiritual objections

“That which is above is like that which is below”

Terasem’s spiritual inclinations are derived from Octavia Butler’s Earthseed, which summarizes its own fictional religion’s core tenet as “to shape God.” This isn’t an entirely terrible thought, and it’s not entirely vacuous either: it is reminiscent of the concept of Baphomet from Chaos Magick and other occult traditions, including Biblical tradition (how many prophets in the Old Testament succeeded in changing God’s mind?). But there is no normative framework in it, and it is also a bit vague. What is God? The universe?

I am a chaos magician. Fundamentally, that makes me a special kind of existentialist: a very egotistical one. But Chaos Magick theory is much more robust than Terasem’s spiritual views. It is hard to respond to Terasem’s spiritual inclinations, since they are a mixture of new-age pablum and old ideas with little total system or connection between them. They are much like Terasem in general: a very loose collection of non-connecting, incomplete thoughts. Now, Peter J. Carroll’s Chaos Magick theory is New-Agey in its own right, but it is certainly not pablum, and it is very systematic. My own Chaos Magick practices are perhaps even more systematic. I must document my strong preference for them over the spirituality of Terasem.

Sympathetic aspects of Terasem

One of the biggest problems with Terasem is that its content varies in quality from very bad to very good. Sometimes a lot of thought has been invested, as in “Strategies of Personality Transfer,” which tries, with an unknowable degree of success, to create a proper taxonomy of the methods of documenting all elements of consciousness. Sometimes there are articles like “THE MEME OF ALTRUISM AND DEGREES OF PERSONHOOD” which are nothing but sentiment wrapped up in pretense.

Some real insight exists at times, such as when it is pointed out, accurately, that personhood and property are not mutually exclusive, or when, in “A Proactive-Pragmatic Approach to the Legal Status of Cyberminds,” Max More, Ph.D., explicitly calls out the other Terasem members by saying:

“That implication really flows out of a view which I think is quite popular in transhuman and transbeman circles—even among some philosophers—that informational continuity is what matters, not structural or even functional continuity.”

Time scanning is a good idea. The Order of Cosmic Engineering seemed pretty cool, but also looks tragically defunct: its website is down. There is a recurring idea in both journals about redownloading digital information into meat bodies on other planets. But why? Either avatars are conscious or they’re not. If not, then some sort of actual virtual consciousness should be pursued; only if that is impossible does the fixation on meat bodies make sense, and if it is impossible, the rest of Terasem’s ideas are substantially undermined. But there is also this agreeable chestnut:

“If we live in a synthetic reality, then in a certain sense, we cannot even rule out the supernatural, or miracles. The simulators, the system admins, cannot violate their laws of physics, but they can violate our laws of physics if they want. It seems that the supernatural, which we have kicked out of the back door of superstition, may come back through the main door of science.”

Lastly, there is Yudkowsky. Appearing only once, as far as I can tell, in either journal, Yudkowsky has the strongest voice and raises the best points, and he seems to be the only person of the whole lot who believes in the existence of honest-to-god problems and not just puzzles to be defined away.

He raises some very sane points about what he calls the “crime of designing a broken soul.” Here are some quotes that define some of the essential boundaries of his argument:

“Naturally, darkness is carved into our genes by eons of blood, death and evolution. We have an obligation to do better by our children than we were done by.”
“Nor is it ethical to make something exactly resembling a human, if you have other options.”
“Open-ended aspirations should be matched by open-ended happiness.”
“If it is not ethical to build the human, what is it ethical to build? What is right and proper in the way of creating a new intelligent species?”

Naturally, I disagree with Yudkowsky on much of this: I don’t view a holistic human as characteristically bad, or the human condition as innately bad. I say this as a therian, a self-identified dragon, however. Maybe that is to my credit, or maybe it is to my discredit, but I certainly would not have dragons exist without many of the attributes Yudkowsky undoubtedly laments in humans. It is not these attributes themselves that are bad. Even the limited capacity for happiness is a safety mechanism: it prevents people from pursuing what is good to them to an extent that is harmful. And even if these dangers could be taken away, which I don’t believe they entirely can be, the functionally essential character of a being is informed by what they do with the boundaries of their existence. Without boundaries, there is no character to being, and this would be a great loss. I believe Yudkowsky also jokes elsewhere about AIs hacking their own reward functions. But this is precisely what prayer and meditation are in a human context, whatever else they are, along with exercise and other meditative activity. On the whole, humans seem well calibrated to me as a species. I am sympathetic to Yudkowsky’s points, but I see nothing wrong with creating other intelligent species using humans as a template.

Overall, there are some standout thinkers in the Terasem movement. Beyond that, it’s good that the others are thinking about these issues, but on average they are not tremendous intellects. This makes the exceptions stand out more brightly, but it does not make for a very convincing, credible, or effective movement.

Superficial impressions of the 2B movie

I was forced to pay $5 for this god-awful film, so I am compelled to review it even though I have already said pretty much everything that actually needs to be said about Terasem. This is not a good movie. It is not a good story. It does not have good ideas. It is only slightly less bad than my own fictional writing, and that is a very, very low bar.

The movie begins with the quote, “Everything is theoretically impossible until it’s done.” In the context of a collection of impotent and second-rate futurist theorists, this is intensely ironic: everything in Terasem is an attempt at theory! There is a fixation on fairy tales in the movie, which acts as a sort of heavily bowdlerized occultism, to a degree that strips the occultism of any of its content, and even of most of its aesthetic power. The main characters are a total bum of a vlogger, presented as an outcast with integrity; a Randian-style business-overman; and a Transbemon named Mia.

The vlogger hates the business-overman. The business-overman is in turn facing a 30,000-year sentence for illegal human experimentation. This is potentially very unsympathetic, but without explanation we don’t really know whether it’s justified. The business-overman has Mia kill him, because he has made a mindfile of himself with which he intends to demonstrate the conquest of death. The movie also makes a good deal of noise about the lack of rights of Transbemons, but it seems to me that if you were facing a 30,000-year sentence, the last thing you’d want is to be immortal and treated in accordance with human law.

There’s a lot of very bad sci-fi writing: terms like “telluric barrier” and “redundant metastructure.” “Firewall” and “distributed virtualization” might have made more sense in context, but what do I know, I’m just a former IT and CS student. At one point we get the line “Even his closest coworkers say his work was beyond their level of understanding,” which is simultaneously a jerking off of the business-overman and a really terrible sci-fi handwave to avoid explaining technical concepts. In general, there are far too many informed attributes: things we would never know about the characters except that we are explicitly told they apply.

Let’s talk about Mia. Mia is plainly fetishized, in a way that is very disturbing. She has the weirdest, dopiest machine emoting, which, coupled with the descriptions of her attributes, is not flattering to the Terasem movement. We’re told she was created as essentially human, “minus the genetic markers for cruelty, greed, and hatred.” But she still kills a man, so obviously these are just propensities that have been removed, not actual abilities. Even so, Mia proceeds through the film like a lobotomized child. So the Terasem movement’s ideal human is, effectively, a lobotomized child. Right.

Before being shot, the business-overman gives a grand speech. He tells his broad audience, including people in Times Square, that humanity mustn’t cling to the past. He speaks of necessary sacrifice, by which he means the humans that have to be left behind. He responds to accusations of playing God by saying, “Who’s playing?” Then he says farewell, and Mia shoots him. Ok.

It seems to me that as long as you are playing God, forcibly dragging the human race along with you would be a much better thing to do than treating it, as Nick Land once said regarding AI’s eventual departure from the human race, like the disposable booster stage of a rocket reaching orbit. Coercion would be much better. I will explain why in my conclusion, but the short of it is: if you really believe your solution is compatible with the preservation of the fullness of humanity, prove it.

There’s a dopey cyberpunk detective chase culminating in an all-revealing guerrilla broadcast via the vlogger and a trial for Mia (no shortage of tropes here), and the whole thing wraps up on a cliffhanger. Ok.

One of the only genuinely interesting things (and perhaps I should put “interesting” in quotes, because I mean it in a very clinical way) is that Mia says Transbemons remember being born, and she waxes very poetic about it. This is clearly an attempt to create new tradition or culture, but it is supremely hamfisted. Still, the attempt stands out: it demonstrates that the people who made the film recognize the importance of various human things at the same time as it demonstrates their complete fucking ignorance of them.

One positive thing that can be said about all of this is that the low fidelity is actually good, because it means these very real issues will sneak up on people. That gives a lot of agency to people of action, including our hypothetical business-overman, who in our timeline probably won’t be punished for human experimentation, because there won’t be laws defining digital beings as human in time to prevent his experimentation. To put it another way, the verisimilitude of these issues to the average idiot is entirely contingent upon their fulfillment, which means the public is unlikely to exert any pressure on the development of these technologies until they are already well past their nascent stage.

Conclusion

What do we really know about consciousness, phenomenologically? We know the Cartesian Cogito, which can be divided into synthetic and analytic components, and we know (at least for pragmatic purposes) that there is continuity of perception over time. Possibilities like Buddhist sudden-appearance offer metaphysical alternatives to continuity of perception, as do other arguments, but these also sit outside the context of actually bothering to do anything, so they can mostly be ignored.

Already this is infinitely more knowledge about consciousness than Terasem has, and it requires only reading the first 50 pages or so of Being and Nothingness. If the core of Terasem is Rothblatt and co., or their wishy-washy new-age spiritual nonsense, or even the hypothesis that information = consciousness, then it must be said: they don’t have theory, they don’t even have narrative. They have a few ideas that they think should serve to structure a narrative that they believe will subsequently become real, even though they can’t credibly articulate it. Nowhere is this more apparent than in their god-awful movie, which is compelling neither in its narrative nor in the ideas presented. It has no verisimilitude because these people don’t have a theory of verisimilitude, just a deck of definitions that they shuffle about endlessly.

They want to make people who are less than completely human and contextualize this as “better,” even though it’s more like lobotomization. They have no idea what the things they want to get rid of even do, or what function they serve, but they still want to dispose of them. This is terrifying. The biggest risk I can think of regarding these projects, bigger even than AI alignment (because at least unaligned AI would potentially survive us and become valuable to itself in the absence of humans, as in Land’s teleology), is to go and change everything before preserving everything and understanding all its interrelated mechanisms. And yet, even down to Yudkowsky, the urge to change everything as soon as possible is ubiquitous. Is there no voice for tradition in consciousness? Then I will have to be that voice, even as it undermines my own plans and aspirations for the draconic race.

Counterproposal: CLASSIFIED

Comments

I have questions which this post seems like it should answer, but (on a skim) doesn’t:

  1. What in the world is the “Terasem Movement”?
  2. Why do we care about the “Terasem Movement”?
  3. Why did you decide to write this… takedown? analysis?
  4. (bonus question) Who are you? (I don’t mean in, like, a philosophical sense, but this is your first contribution of any kind to Less Wrong, so it seems like a tiny bit of introduction is in order, and your blog does not have an easily findable About page)

Edit: Apologies if some of these questions are answered somewhere in your post (which I did not read in its entirety), but it is over 7.6k words long…

The Terasem Movement is a sort of new-agey technophile organization that Yudkowsky once wrote for; I would have thought it would be known here already (and this also sort of answers why you should care about it).

I wrote this post because someone told me to!

I am the legendary dragon Alephwyr, whose name is Alephwyr!

The Terasem Movement is a sort of new-agey technophile organization that Yudkowsky once wrote for

Citation?

I wrote this post because someone told me to!

Who? Why?

https://en.wikipedia.org/wiki/Terasem_Movement

My internet boyfriend. Because he thinks I am philosophically competent and wants me to engage more with others, ostensibly.

https://en.wikipedia.org/wiki/Terasem_Movement

That page does not mention Eliezer Yudkowsky.

To clarify, what I would like a citation for is the claim that Eliezer “once wrote for” them. Could you link to, or at least cite, what he wrote for them, and when?

http://www.terasemjournals.org/PCJournal/PC0102/yudkowsky_01a.html

Thanks!

The linked page says:

This article was adapted from a lecture given by Eliezer Yudkowsky at the 1st Annual Colloquium on the Law of Transhuman Persons on December 10, 2005 at the Space Coast Office of Terasem Movement, Inc. in Melbourne Beach, FL.

As I understand it, for quite a few years Eliezer appeared at various conferences / conventions / colloquia as a paid speaker (though it seems he no longer does that). It seems likely that this is what happened here—Terasem Movement, Inc. paid him to speak at their colloquium, and then (presumably with his permission) posted an adapted version of his lecture on their site.

This may seem like a quibble, but to say that Eliezer “wrote for” this organization implies a rather closer relationship than the scenario I outline.

That's fair. I still think the post is relevant though.

As far as I can tell, actually, there is no real reason for us (i.e. the Less Wrong commentariat) to care about these Terasem people. They seem to be weird, and rather confused about some things. That is hardly an exclusive crowd. (And “they once paid Eliezer Yudkowsky to speak at their conference” is not an interesting connection.)

I don’t say this to pick on you, by the way; it’s just that I think it’s important for us not to get distracted by analyzing what every group of cranks (or even every group of AI cranks) out there thinks, says, and does.

Well, that's sensible enough, and I can only rebut it, not refute it. My counterargument is basically this:

1. At the speed at which technology is going forward, it seems entirely possible that the opinions of cranks will eventually drive real world actions of some sort, and so engaging with them ahead of those actions might be a good thing.

2. Without airing dirty laundry, it's impossible to know how prevalent crank-ish ideas are in a community.