To me this is a good example of an overly theoretical discussion, and as the saying goes: "In theory, there is no difference between theory and practice. But in practice, there is."
My counterargument is a different one, and I kind of already have to interrupt you right at the start:
> If there is no death, [...]
Putting "immortal animals" into any search engine gives lots of examples of things that get pretty close. So we can talk about reality; there is no need to restrict ourselves to thought experiments. So the first question cannot be: "Why is the counterargument wrong?"
Instead it should be: "Why are there no immortal living beings that dominate? Why are all of them more or less unimportant in the grand scheme of things?"
And the answer is pretty obvious, I think: because being immortal is simply unfavorable. It may be due to evolutionary bottlenecks, or energetic ones (the inefficiency of repair vs. reproduction, for example), or a myriad of others. I don't think you get to simply state that being immortal is obviously better when all of the observable evidence (as opposed to theoretical arguments) points in the opposite direction.
So what is clear is that if you want to be immortal, you have to pay some kind of tax, some extra cost. And if you cannot, you will be of marginal importance, just like all other (near-)immortal beings.
Incidentally, I think this is why, in my experience, only rich people ever talk about immortality. To them it is clear that they will always be able to pay this overhead, so they simply don't worry about it.
I would actually be interested to know whether I am mistaken on that last point. Please speak up if you are a person who is strongly interested in immortality and not rich (for example: when you went to school, you knew you were obviously different because your parents couldn't afford X). I would really like to learn what you see differently.
The most powerful one is probably The Financial System: composed of stock exchanges, market makers, large and small investors, (federal reserve) banks, etc.
I mean that in the sense that an anthill might be considered intelligent, while a single ant will not.
Most of the trading is done algorithmically, and the parts that are not might as well be random noise for the most part. The effects of the financial system on the world at large are mostly unpredictable and often very bad.
The financial system is like "hope" according to one interpretation of the myth of Pandora's box: hope escaped (together with all the other evil forces) when the box was opened and was released upon the world. But humanity mistook hope for something good and clings to it, while in fact it is the most evil force of them all.
Okay, I may have overdone it a little bit there, but I hope you get the point.
Very good idea. I did not do it. My argument would be that the impetus is not my own; it is external: your written word.
What stops you from making increasingly outlandish claims? ("Your passphrase is actually this (e.g. illegal/dangerous/lethal) action, not a simple thought.") Where do you draw the line?
Just as a point of reference, as a kid I regularly thought thoughts of the kind: "I know you're secretly spying on my thoughts but I don't care lalalala....." I never really specified who "you" was, I just did it so I could catch "them" unawares, and thereby "win". Just in case.
The difference is hard to define cleanly, but back then I was of the opinion that I did it of my own free will. (Nowadays, with nonstop media having the influence it has, I would be less sure. Also I'm older, and a lot less creative.)
Just for completeness: I found [this paper](http://dx.doi.org/10.1016/j.neuron.2021.07.002), where they try to simulate the output of a specific type of neuron, and for best results they require a DNN of 5-8 layers (with widths of ~128).
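To make the reported shape concrete, here is a minimal sketch of a network of those rough dimensions in plain numpy. Everything here is illustrative: the weights are random placeholders, and the input size is my own placeholder, not the paper's actual input dimensionality.

```python
import numpy as np

def make_dnn(depth=7, width=128, n_inputs=100, rng=None):
    """Random weights for a fully connected ReLU net of roughly the
    reported shape (5-8 hidden layers, width ~128).
    n_inputs is a placeholder, not the paper's input dimensionality."""
    rng = rng or np.random.default_rng(0)
    sizes = [n_inputs] + [width] * depth + [1]
    return [(rng.standard_normal((m, n)) * (2.0 / m) ** 0.5, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    """Pass one input vector through: ReLU on hidden layers, linear output."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:          # all but the output layer
            x = np.maximum(x, 0.0)
    return x

layers = make_dnn()
out = forward(layers, np.zeros(100))     # dummy "synaptic input" vector
n_params = sum(W.size + b.size for W, b in layers)
```

Even at this toy scale, the point stands that a single biological neuron apparently needs tens of thousands of artificial parameters to imitate.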
> We Live in a Post-Scarcity Society
Do you mean "we Americans"? Or "we, the people living on the East or West Coast"? Because it certainly is not true on a national or worldwide level.
For example, in a "magical post-scarcity society", you would probably be okay being (born as) really anybody. You shouldn't care as much as you might have during medieval times, for example.
How about right now? Do you care? Are you willing to trade places? I certainly am not.
Furthermore, you picked the worst possible timing for this post. One should not characterize a society based on a single point in time (particularly at the height of a peak). It would be more robust to pick a time of great stress to pass judgement (cf. how the character of some people changes strongly the worse times get).
For example, would you be indifferent to your geographical or social position this coming winter? (I'm asking because prices for everything are on the rise. Particularly interesting for this discussion are the prices of natural gas, electricity, and fertilizer.)
Can't live in a post-scarcity society without heating and food...
Anyway, I guess we'll see in one year's time how our respective positions aged and/or changed.
I guess that would be one way to frame it. I think a simpler way to think of it (or the way my simpler mind thinks of it) is that for a given number of parameters (neurons), more complex wiring allows for more complex results. The "state space" is larger, if you will.
3 + 2, 3 × 2, and 3² are simply not the same.
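The point that wiring, not neuron count, drives the size of the state space can be made concrete with a toy count: the number of possible wiring diagrams among even a handful of neurons grows combinatorially, far faster than the number of neurons itself. (The directed-graph model below is my own simplification for illustration.)

```python
def n_wirings(n):
    """Number of directed graphs on n labelled neurons (no self-loops):
    each of the n*(n-1) possible connections is either present or absent."""
    return 2 ** (n * (n - 1))

# n_wirings(2) == 4, n_wirings(3) == 64,
# n_wirings(4) == 4096, n_wirings(5) == 1048576
counts = {n: n_wirings(n) for n in (2, 3, 4, 5)}
```

Going from 2 to 5 neurons multiplies the neuron count by 2.5 but the number of possible wirings by over 250,000.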
From my limited (undergraduate-level CS) knowledge, I seem to remember that typical deep neural networks use a rather small number of hidden layers (maybe 10? certainly fewer than 100? Please correct me if I am wrong). I think this choice is rationalized as: "This already does everything we need, and requires less compute."
To me this somewhat resembles a Chesterton's fence (or rather its inverse). If we were to use neural nets of sufficient depth (>10e3 layers), we might encounter new things, but before we get there, we will certainly realize that we still have a ways to go in terms of raw compute.
First of all, kudos to you for making this public prediction.
To keep this brief: 1 (95%), 2 (60%), 3 (75%), 4 (<<5%), 5 (<<1%)
I don't think we are in a hardware overhang, and my argument is the following:
Our brains are composed of ~10^11 neurons, and our computers of just as many transistors, so to a first approximation, we should already be there.
However, our brains have approximately 10^3 to 10^5 synapses per cell, while transistors are much more limited (I would guess maybe 10 on average?).
So even assuming that one transistor is "worth" one neuron, we come up short on connectivity.
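The shortfall above is easy to put in numbers. All figures are the rough orders of magnitude from the text (the transistor fan-out of 10 is my guess, as stated), not measurements:

```python
# Back-of-envelope version of the connectivity argument.
neurons               = 1e11   # ~neurons in a human brain
transistors           = 1e11   # ~transistors in a large modern chip
synapses_per_neuron   = 1e4    # middle of the 10^3-10^5 range
fanout_per_transistor = 10     # guessed average connectivity

brain_edges = neurons * synapses_per_neuron        # ~1e15 connections
chip_edges  = transistors * fanout_per_transistor  # ~1e12 connections
shortfall   = brain_edges / chip_edges             # ~1000x short
```

On these assumptions, matching the neuron count still leaves us roughly three orders of magnitude short on connections.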
I remember learning that a perceptron with a single hidden layer of arbitrary width can approximate any function, and thereby any perceptron of finite width but more hidden layers. (I think this is called the "universal approximation theorem"?)
After reading your post, I kept trying to find some numbers of how many neurons are equivalent to an additional layer, but came up empty.
I think the problem is basically that each additional layer contributes superlinearly to "complexity" (however you care to measure that). Please correct me if I'm wrong; I would say this point is my crux. If we are indeed in a territory where available transistor counts are comparable to a "single-hidden-layer-perceptron brain equivalent", then I would have to revise my opinion.
I'm personally very interested in this highly parallel brain architecture, and if I could, I would work on ways to investigate/build/invent similar structures. However, besides self-assembly (as in living, growing things), I don't yet see how we could build things of similar complexity in a controlled way.
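The width side of the universal approximation theorem can be illustrated numerically: a single hidden layer fits a smooth target better and better as it gets wider. This is a toy sketch with random hidden weights and a least-squares readout; the sine target and the widths are my own choices, and it says nothing about trainability or efficiency, only about representational capacity.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)[:, None]   # sample points
target = np.sin(x).ravel()             # smooth target function

def fit_error(width):
    """Worst-case error of a width-`width` single-hidden-layer net:
    random tanh features, output weights solved by least squares."""
    W = rng.standard_normal((1, width))
    b = rng.standard_normal(width)
    H = np.tanh(x @ W + b)                          # hidden activations
    a, *_ = np.linalg.lstsq(H, target, rcond=None)  # readout weights
    return np.max(np.abs(H @ a - target))

# Error should shrink as the single hidden layer gets wider.
errs = [fit_error(w) for w in (2, 10, 50)]
```

Of course, this only shows that width *suffices*; it doesn't tell us how many extra neurons a missing layer costs, which is exactly the number I couldn't find.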
I have a similar objection. Not particularly because they are Refugees, but instead because they are Foreigners.
I actually quite liked the presented idea, but I think it is very heavily slanted towards the political left (which I would generally favor if forced to choose). Still, some concepts from the political right are important here, particularly culture and personal responsibility.
Summarized very briefly (and therefore certainly wrong):
Culture, according to the left, is something imposed from above that can be exchanged like a pair of socks, while the right thinks that culture is a shared consensus that depends on all the participants (and particularly on their relative numbers; an example would be the "culture" of burning cars in Sweden, which is a quite recent occurrence).
By personal responsibility, on the other hand, I mean that you certainly bias your sample of immigrants depending on how you select them. Are they all people looking for handouts? Or are they people who want to actively change their surroundings? (In the latter case, they may stubbornly refuse to leave their own country, for example. The opposite might just as well be true if they define "their surroundings" as limited to their close family, kids, etc.)
In any case, I think these ideas at least have to be taken into account, otherwise this all sounds like some unfinished idea of a utopian fairytale.
> Probably most ambitious people are starved for the sort of encouragement they'd get from ambitious peers
If you were to substitute "intelligent" for "ambitious", I would agree. Some kind of dialog is needed to flourish, and a dialog between equals is strongly preferred. Put another way: when training, it makes no sense to train with too little weight.
> The smartest people tend to be ambitious.
I strongly disagree. Assuming a certain selection bias in the choice of examples, this is just a tautology: highly visible people are highly visible. Successful people are visible. Stupid people are on average less successful. Non-ambitious people are less visible. Some counterexamples would be Grigori Perelman or Steve Wozniak (I know basically nothing about these people and am willing to be proven wrong).
> we would task AGI with optimizing
I see, that kind of makes sense. I still don't like it though, if that is the only process to optimize.
For me, in your fictional world, humans are to AI what in our world pets are to humans. I understand that it could come about, but I would not call it a "Utopia".
> this was assuming a point in the future when we don't have to worry about existential risk
This is kind of what I meant before. Of course you can assume that, but it is such a powerful assumption that you can use it to derive nearly anything at all (just like in math if you divide by zero). Of course optimizing for survival is not important if you cannot die by definition.