There are no known structures in Conway's Game of Life that are robust. Even eaters, which are used to soak up excess gliders, only work when struck from specific directions.
If you had a Life board which was extremely sparsely populated, it's possible that a clever agent could send out salvos of gliders and other spaceships in all directions, in configurations that would stop incoming projectiles, inform it about the location of debris, and gradually remove that debris so that it would be safe to expand.
At a 50% density, the agent would need to start with a fairly large safe space around it; otherwise it would be overwhelmed. I can't imagine even the best sensing/manipulating technology in Life is capable of picking its way through even mostly static garbage at any more than a glacial pace.
Basically you'd have to send out a probe, wait for the echo (or lack of echo), and from that recalculate the probabilities of all the different configurations of still lifes, oscillators, spaceships, puffers, and so on that the probe could've hit, and how those configurations would've been altered, destroyed, or (in most cases) expanded by collision with your probe. Then work out another probe to send, and repeat the process, until eventually you had a good enough estimate of what you were dealing with that you could send probes calculated to get rid of it, along with all the additional garbage you generated in the process of probing it.
It is unknown whether robust structures can exist in Life, even ones that are incredibly intelligent, incredibly large, and incredibly slow, but I would speculate that they can.
However, it's also possible that there are far simpler robust expanding patterns, in which case larger, slower structures such as intelligent agents would be hopelessly overwhelmed.
Great post.
The physics of GoL determine the technology available to an intelligent life form in that universe, and the limits of that technology may not be sufficient to ensure the eternal survival of that life form. But even if that is the case in the GoL universe, the same might not be true in our universe.
Therefore, it is possible to create an AI in it. If you created a 3^^3 by 3^^3 Life board, setting the initial state at random, presumably somewhere an AI would be created.
I'd be a little surprised to find an AI (and a surrounding environment to support its development) present in a board that small. 3^^3 = 7,625,597,484,987. The specification of an AI, plus a surrounding environment that is conducive to life and that is a source of negentropy compatible with the AI, is really rather complex.
One extra caret (3^^^3) and you could definitely expect AIs to exist. I have speculated on the subject from time to time; I find it a little amusing that all you need to do to create intelligent life is set the screen resolution of ADOM high enough and get as far as the Big Room!
Whether they take over the entire board seems to depend on whether it is possible to create an unstoppable wave in Conway format that expands completely independently of what patterns exist in its path. My guess would be that AIs would take over huge swaths of the board but be stopped by defensive patterns created by other AIs and by the occasional freak defensive structure.
It is also plausible that any sufficiently large randomized Conway board would turn out to converge to a barren 'all off' state. This is something we could determine theoretically from Conway's Life physics if we were sufficiently intelligent (and interested). In fact, if I ever FOOM I intend to spend some time analyzing the problem. (Why not?)
A problem with Conway's Game of Life is that it is very hard to defend yourself against attack from the rest of the game board. You can put out an eater barrier, but those things aren't impermeable, and you have to take it down if you want to expand. So most self-reproducing creatures in an otherwise random GOL field might find themselves shot to pieces, and quickly disintegrate.
You can put out an eater barrier, but those things aren't impermeable
Do we have a proof for that? Or a reason to have high confidence that such a thing is not possible? How about preemptive strikes? That is, an expanding field that obliterates all dangerous things in its path.
I found it a funny thought experiment to imagine that this conversation happens between Game of Life agents discussing the Standard Model. The kind of fool-proof defenses you are discussing might be impossible in our world, too. But according to this analogy, it might be possible in the Game of Life world that after a tumultuous initial period, the world becomes a safer place, and agents with 99.9% effective defenses emerge.
Eater barriers only ever really defend against a few types of collision. That's just a function of how eaters work - they only ever eat a small subset of things that can be projected at them.
I don't think anyone has ever created an impermeable wall in the GOL. I am pretty sure that such a thing is impossible - though AFAIK, that has never been proved.
Taboo 'AI' in your question. Are you looking for: A) A self-modifying structure that contains an internal representation of the surrounding gameboard and a planning algorithm that uses that representation to achieve goals according to some utility function. B) A structure that can pass the Turing test with questions and answers encoded in the game space? C) Something else?
I suspect you meant A. Rephrasing your question gives you an idea what subquestions to look into, such as the degree to which a contained internal representation is possible given the rules of the game and the size of the board.
If the board is 3^^^3 per side and set up randomly, then it almost certainly would be instantiated with googolplexes of Turing-complete simulations of our entire universe by complete chance alone (similar to Boltzmann brains), and there would be vastly more universes very much like ours.
Most of these universes would be wiped out quickly by local disturbances before they got very far, but still vast numbers would have enough clear or static space around them to permit reasonable durations. What's a reasonable size and duration: 10^150? 10^(10^150)? The size of 3^^^3 absolutely dwarfs these.
I think some of these comments are failing to account for how much space (3^^^3)^2 actually is. For any universe the size of ours, it is practically infinite.
There would also be completely alien structures, more "natively" suited to GOL physics. These could be organisms with cells 10^100 by 10^100 units wide if necessary. They would notice individual attacking gliders as much as we notice a single high-energy photon or alpha particle.
If you created a 3^^3 by 3^^3 Life board, setting the initial state at random, presumably somewhere an AI would be created.
It certainly wouldn't be "artificial" in any strongly meaningful sense of the term.
An AI would likely be made out of small, stupid, stable configurations.
Quarks => Atoms => Molecules => Polymers => Cells => Bigger Cells => Brains => Living Things
An intelligent creature could be HUGE.
"An intelligent creature could be HUGE."
and some other post:
"It'd take a long time to do anything."
I think a lot of people are mistaking a "cell" in this universe for being comparable to... I guess a "cell" in biology, or something, and comparing each step in the program to... I don't know what people are thinking exactly here (a second?). But I think for comparison to our physics (and any kind of biology or intelligence that we would care about), each cell is a quark, and each tick is equivalent to a Planck time. This doesn't make the sentient creatures we're hypothesizing big and slow, it just means they're the same size and speed that we are.
Does my post give you the impression that I was mistaking a cell for being something like a cell in our universe?
No, I was mostly agreeing with you, except to reframe the comparison as "an intelligent creature would be the same size that they are in our universe. As far as we're concerned, we're normal sized, and quarks are small."
I also address the "it's so SLOW!" comment I saw a few other places. There was no single comment that made the most sense to reply to, and I figured I might as well tie the comments together since I saw them as related.
Related noob question:
Are there patterns in Life that can reproduce themselves reliably but imperfectly? (i.e. does life exist in Life?)
EDIT: Disregard this comment. I found what I was thinking of and it isn't in any way a replicator. It was this: http://www.conwaylife.com/wiki/Gemini
My memory is evidently playing tricks with me. I could have sworn I heard about someone constructing a replicator maybe about a year or 2 ago, and that it was all over the internet. But now when I try to look things up what I find agrees with you that no replicator has yet been constructed. Does anyone know what I might have been thinking of?
(Of course, now we're discussing replicators, whereas the question was about imperfect replicators, but that's in much the same boat: definitely possible, almost certainly not yet constructed.)
Note that for a Life board, what one means by "random" is not at all obvious. How likely is each cell to start alive? This percentage could drastically alter the behavior of the board.
There's a more general question that you seem to be implicitly asking: Is there a Life configuration which will have a reasonably high probability of expanding across most random fields? My initial conjecture is something like "no, unless the initial probability of a cell in any given spot is very low". The rough intuition here is that even if one has a functionally smart configuration in Life, it will not be able to see what is happening far away from it until it is too late. Essentially, observation and the bad things happening to it proceed at about the same speed.
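One way to get a feel for how initial density matters is to run random soups at a few densities on a small wrapped board and see how much stays alive. This is only an illustrative sketch (the board size, step count, densities, and toroidal wrapping are all my own arbitrary choices, and a small torus says nothing rigorous about infinite boards):

```python
import random
from collections import Counter

def step(live, n):
    """One Life generation on an n x n toroidal board; live is a set of (r, c)."""
    counts = Counter(((r + dr) % n, (c + dc) % n)
                     for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, k in counts.items()
            if k == 3 or (k == 2 and cell in live)}

def final_density(p, n=32, steps=200, seed=0):
    """Fraction of cells alive after `steps` generations of a density-p soup."""
    rng = random.Random(seed)
    live = {(r, c) for r in range(n) for c in range(n) if rng.random() < p}
    for _ in range(steps):
        live = step(live, n)
    return len(live) / (n * n)

for p in (0.05, 0.5, 0.95):
    print(p, round(final_density(p), 3))
```

Random soups at many starting densities tend to settle toward a similar sparse ash of still lifes and blinkers, which is one reason the starting percentage matters less for "what debris looks like" than for "how much clear space an agent starts with".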
Random
If the board is big enough, I don't know that it matters what the chance of any given cell starting alive or dead is.
Sight in Life
I'm not sure if this is possible, but I can imagine a Life Entity that generates and fires radar projectiles which also continuously generate their own projectiles firing back at the entity. (The entity has some kind of membrane that easily absorbs the returning projectiles.) If the radar particle collides with something, it is disrupted and stops sending particles back. The entity is able to observe events in the distance by measuring the lack of returning particles. (Returning particles might also have some kind of interactions with each other that provides additional information)
I haven't spent a whole lot of time thinking about Life, so I have no idea if this actually is workable. But it's what first occurs to me.
If the board is big enough, I don't know that it matters what the chance of any given cell starting alive or dead is.
What matters isn't just what the local configuration looks like but how well it can expand. Expanding into empty space is easy. Expanding into densely populated space is much tougher.
Re the radar idea: radar (and most real-world senses, for that matter) works because it gives information much faster than the environment moves. In the case of real-life radar, or vision, this is as fast as the laws of physics allow. Unfortunately, in Life, the equivalent speed of light is not a speed at which stable objects can self-propagate: spaceships can move at most c/2. Worse, some very simple configurations reach this speed. This means that if one runs into a bad configuration, one will likely get that data at the same time that the disruptive c/2 stuff is coming in. Configurations that can move themselves and leave trails move much slower than c/2 and only send back messenger gliders every few moves. Finally, even if one did get data back, since most complicated configurations are only barely stable, it is unlikely one could get any information other than that something occurred around a specific area and disrupted your configuration.
I disagree with your final point, that it is unlikely that it would be possible to get information other than "something occurred at around a specific area;" however I agree with your other points.
A detailed computer is likely to be very unstable. Even if it could collect lots of information about the environment reliably at a distance, it would not be able to retrieve or utilize that information quickly.
In a large enough board, however, we would expect to see a computer that was essentially isolated except for a few small stable patterns which it would be able to interact with via radar.
Presumably a sufficiently intelligent AI would be able to better configure the patterns of life, perhaps using some kind of protective membrane.
However, the chance of creating a Life board of such a size and distribution that one would expect to see only ONE such AI, and not a thousand or more (which would likely have different goals and levels of intelligence), is vanishingly small. So rather than expecting one type of complex computation to take over the board, I might expect in the limit (i.e. on a 3^^^3 board, after about 3^^^3! time steps) to see extended wars between the most intelligent AIs.
Unless it turns out that there is an unstoppable self-propagating "virus" in the game of life, which would be a somewhat depressing thing to learn!
There is another cellular automaton called the Cyclic Cellular Automaton (warning: Java applet). There, these viruses emerge naturally, without any effort; they are called daemons. When several of them meet, a pretty pattern emerges. I always use these to help visualize agents burning the cosmic commons at the speed of light.
presumably somewhere an AI would be created
I would be surprised if only one was created. You would probably get wars going on between the vastly different Artificial Intelligences that arose in there. At some point, one of them would probably "win", although whether we could tell would depend on what that particular AI's terminal values were. And of course, it would probably take a really long time.
I incidentally thought about this long before I got sold on this site. Just wanted to mention what came to mind: Evoloop (IIRC minimum-size loops couldn't be beaten, though when I checked Wikipedia again there might be more to it; anyway, suppose a less strategic game where there's no conservation of resources and the best option is just to try to expand, then the meta devolves into spamming), and Oskar's Belgian maze, where trying to probe for information could get you into a deeper mess.
If the number of bits were infinite, then there would be an isomorphism from the board into the Rado graph, and since the Rado graph contains every finite graph, the probability of the board containing every possible finite computation would be 1.
The number of bits contained in our cosmological horizon is much, much less than the number of bits contained in 3->3->3 cells, and since Life is Turing-complete, I suspect that the probability of such a board containing an AI is similarly incalculably close to 1.
I suspect also that you could make 1 - P(AI in life) as small as you want by adjusting the size of the board.
There are three very important distinctions between "bits contained in our cosmological horizon" and "number of bits contained in 3->3->3 cells" of Conway's Life: 2D vs. 3D, the complexity of the "rules" governing interaction between the "bits", and the number of possible states of each "bit". This almost certainly (as in "I am very sure, but a formal proof eludes my math skills") means that 3->3->3 cells is far too few to simulate even a small portion of our universe.
I tried for a couple of hours to show this by looking at a number of "natural" states and symbols for a 2D CGoL and possible 3D analogues. I also tried an approach of modelling the growth of a 2D vs 3D machine as a function of computing power. I ran into conceptual difficulties both times, but not before forming an impression that n->n->n notation will be inadequate to compare the two (need to extend to n->n...->n->n... busy beaver, anyone?). Maybe someone with advanced math background can push this further...
2D vs 3D, the complexity of the "rules" governing interaction between the "bits" and the number of possible states of each "bit"
Actually, none of these matter. The number of possible states of each bit is exactly 2, both for our universe and for (the simplest form of) Life. And the fact that both are Turing-complete means that, whatever the rules governing the interactions are, for every possible computation there is (at least) one state of the world that performs that computation. This also makes the distinction between the 2D and 3D geometry of the space moot; indeed, if we are to believe the holographic principle, our universe too is completely describable as a 2D entity. This is not necessary, however, since the count of the bits contained in our cosmological horizon can be based on the Bekenstein bound alone.
As for the actual number of bits in our universe, I don't really want to do the math, although I read somewhere that it is on the order of 10^120. But even if it's closer to 10^150, that wouldn't matter at all, since that number is far less than the bits contained in a cellular automaton of 3->3->3 cells, let alone the bits contained in a board of 3->3->3 squared.
the fact that both are Turing complete means that, whatever the rules governing the interactions are, for every possible computation there is (at least) a state of the world that performs that computation
Certainly, for an infinite board. But a 3->3->3 board is infinitely smaller than that. What is in question is what portion of universe such as ours can be simulated on such a board...however:
none of these matter
I now agree - with a caveat that one allows arbitrarily long time for the simulation. My earlier remarks were based on an implicit assumption that the computation time for the 2D machine simulating a 3D machine stays constant as the 3D machine size grows.
William Poundstone's "The Recursive Universe" reports Conway's persuasive analysis of alive, intelligent patterns in Life. Conway thinks that living patterns would emerge, survive, propagate, and evolve in huge random fields with very low density (say, one live cell per billion).
OTOH ... whatever the initial condition is, sooner or later we will have a cycle. We always get a cycle in the GoL, eventually. It might be very trivial, an empty board. It might be a longer, complex cycle, but that is what always happens, sooner or later, in the Game of Life.
We may stare at the empty plane and ask ourselves if this is the graveyard of a superintelligence, one that lived here and conquered the plane for a brief time, then vanished in a collapse. Several gliders and roses could be everything that remained, as some dry fossils.
Everything is uniquely defined by the initial pattern.
We may stare at the empty plane and ask ourselves if this is the graveyard of a superintelligence, one that lived here and conquered the plane for a brief time, then vanished in a collapse. Several gliders and roses could be everything that remained, as some dry fossils.
Or, we could find that the playing field stabilizes to something that can easily be interpreted as a superintelligence's preferred state - perhaps with the field divided into subsections in which interesting things happen in repeated cycles, or whatever.
Interesting question: why does this (intuitively and irrationally) seem to me like a sadder fate than something like heat death?
Because it takes the meaning out of the accomplishment? In this scenario, there might be something interpretable as a superintelligence that exists at some point before the scenario settles into repeating, but the end state still seems more to be caused by the initial state than by the superintelligence.
Alternately, it could be because you value novelty, and the repeating nature of the stabilized field precludes that in a way that's more emotionally salient than heat death.
the end state still seems more to be caused by the initial state than by the superintelligence
But this is true of the heat death of the universe, too, eventually...
The only long-term scenario I know that avoids both this outcome and heat death involves launching colonies into the "dust" as Greg Egan described. Unfortunately the assumptions required for that to work may well turn out to be false. There's no known law of nature saying we can't be trapped in the long term. And if we are trapped, I think I prefer a repeating paradise to heat death.
I don't know. Heat death seems a lot sadder to me, in part because I know that there's at least one universe where it will probably happen. Maybe you are just more used to the notion of heat death and so have digested that sour grape but not this one?
I wonder why a rational consequentialist agent should do anything but channel all available resources into the instrumental goal of finding a way to circumvent heat death. Mixed strategies are obviously suboptimal, as the expected utility of circumventing heat death is infinite.
A glider isn't a cycle. It translates itself. A glider gun isn't a cycle either, since it creates arbitrarily many gliders. So I think that it's possible not to end in a cycle in interesting ways as well.
On a higher level, since Life is Turing-complete, it's perfectly possible that the game state ends in an infinite computation of Pi to higher and higher precision, and never repeats as a result (or, you know, anything else could happen).
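The glider point is easy to check directly: on an unbounded board, the glider's live-cell set after its period of four generations is an exact translated copy of itself, so at fixed coordinates it never revisits a state. A minimal sketch using a sparse set-of-cells representation (my own illustrative code, not from the thread):

```python
from collections import Counter

def step(live):
    """One Life generation on an unbounded board; live is a set of (r, c)."""
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, k in counts.items()
            if k == 3 or (k == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the glider is the same shape shifted one cell
# diagonally: a translation, not a fixed-position cycle.
dr = min(r for r, _ in state) - min(r for r, _ in glider)
dc = min(c for _, c in state) - min(c for _, c in glider)
assert (dr, dc) != (0, 0)
assert state == {(r + dr, c + dc) for (r, c) in glider}
```

Since the pattern drifts one cell per period forever, its trajectory through absolute board states is infinite, which is exactly why the "everything cycles" claim needs a finite board.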
But GoL on a finite board has only finitely many possible states, and must therefore end up in a cycle.
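The finite-board argument is pure pigeonhole: an n x n board has at most 2^(n^2) states, so the deterministic trajectory must eventually revisit one and loop. A small sketch that finds the cycle empirically (the toroidal wrapping and the tiny 8x8 board are my own assumptions for the demo; a bounded board with dead edges is finite too, so the same argument applies):

```python
import random
from collections import Counter

def step(live, n):
    """One Life generation on an n x n torus; live is a frozenset of (r, c)."""
    counts = Counter(((r + dr) % n, (c + dc) % n)
                     for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return frozenset(cell for cell, k in counts.items()
                     if k == 3 or (k == 2 and cell in live))

rng = random.Random(1)
n = 8
state = frozenset((r, c) for r in range(n) for c in range(n)
                  if rng.random() < 0.5)

seen = {}  # state -> first generation at which it appeared
t = 0
while state not in seen:   # must terminate: at most 2**(n*n) distinct states
    seen[state] = t
    state = step(state, n)
    t += 1

cycle_length = t - seen[state]
assert cycle_length >= 1
print("entered a cycle of length", cycle_length, "after", seen[state], "steps")
```

In practice a random small soup falls into its cycle (often a still life or short oscillator) long before the astronomical pigeonhole bound; the bound only guarantees that it must happen eventually.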
Everything is uniquely defined by the initial pattern.
Uniquely? GoL isn't reversible, unlike Critters or ... (the 1D cellular automaton that defeats the original point of the Socolar-Taylor tiling by having all 2^3 rules rather than just the six from rotations). In reversible automata information can't be destroyed, in which case boards containing intelligent agents won't become more common than otherwise.
sooner or later we will have a cycle.
(1) You have to be careful in specifying exactly what 'cycle' means here. Perhaps you mean something like: "The configuration can be partitioned into non-interacting subsets each of which is either a 'still life', an 'oscillator', or a 'spaceship'."
(2) If we're talking about an infinite game of life board, randomly populated, then it won't generally be true that 'sooner or later we have a cycle' even in the above sense.
(3) Even if there are only finitely many cells switched on, if the board is infinite then the configuration may never begin 'cycling'. Proof sketch: One can build a Universal Turing Machine in the game of life. Therefore, one can build a dovetailer which executes all possible computations in parallel. Therefore, if this was destined to 'cycle' after a finite time then we could solve the halting problem. ETA: Simpler proof: A glider gun!
(4) So your result about cycling really only applies to a finite game of life board.
Conway’s Game of Life is Turing-complete. Therefore, it is possible to create an AI in it. If you created a 3^^3 by 3^^3 Life board, setting the initial state at random, presumably somewhere an AI would be created.
I don't think Turing-completeness implies that.
Consider the similar statement: "If you loaded a Turing machine with a sufficiently long random tape, and let it run for enough clock ticks, an AI would be created." This is clearly false: Although it's possible to write an AI for such a machine, the right selection pressures don't exist to produce one this way; the machine is overwhelmingly likely to just end up in an uninteresting infinite loop.
Likewise, the physics of Life are most likely too impoverished to support the evolution of anything more than very simple self-replicating patterns.
If AI is possible in Life then a sufficiently large random field will almost certainly contain one. Whether it will have enough of an advantage to beat the simple self replicators and crystalline growing patterns for dominance of the entire field is another question.
Consider the similar statement: "If you loaded a Turing machine with a sufficiently long random tape, and let it run for enough clock ticks, an AI would be created." This is clearly false
That is not the same thing at all, though.
Conway’s Game of Life is Turing-complete. Therefore, it is possible to create an AI in it. If you created a 3^^3 by 3^^3 Life board, setting the initial state at random, presumably somewhere an AI would be created. Would this AI somehow take over the whole game board, if given enough time?
Would this be visible from the top, as it were?
EDIT: I probably meant 3^^^3, sorry. Also, by generating at random, I meant 50% chance on. But any other chance would work too, I suspect.