I'd say this is the point at which one starts looking into current state-of-the-art psychology (and some non-scientific takes too) to begin understanding all the variability in human behavior and cognition, and the kinds of advantages and disadvantages each variant provides from different perspectives: the individual, the sociological, the evolutionary.
Much of that disappointment is resolved by doing so. Some of it deepens. The overall effect is a net positive, though.
Unfortunately, they aren't rational. I developed this theme a bit more in another reply, but to put it simply, in the US GAI is being pursued by insane individuals. No rational argument can stop someone who believes as they do, and the other sides will try to protect themselves from them.
Admittedly, nuclear weapons are not a perfect analog for AI, for many reasons, but I think the analogy is a reasonable one.
We've had extreme luck when it comes to nuclear weapons. We not only had several close calls that were de-escalated by particularly noble individuals doing the right thing; back when the USSR had barely developed its first bombs and the US alone had a whole stockpile of warheads, we also had the good luck of US leadership being somewhat moral and refusing to turn nukes into a regular weapon, after which MAD forced everyone to stay restrained even when one side asked nicely whether it could bomb a third party. Were it not for that long sequence of good luck after good luck, we'd now be living in an annihilated world, or at the very least a post-apocalyptic one.
With this in mind, I wanted to ask out of curiosity, what % risk do you think there needs to be for annihilation to occur?
I have no idea, really. All I can infer is that it's unlikely any major power will stop trying to achieve GAI unless:
a) Either a massive accident caused by a misaligned not-quite-GAI-yet happens, one whose sheer, absolute horror puts the Fear-of-God into our civilian and military leaders for a few generations;
b) Or a long sequence of reasonably severe accidents happens, each new one worse than the last, with AI companies repeatedly and consistently failing to fix the underlying causes, in turn making military leaders deeply wary of deploying advanced AI systems and prompting civilian leaders to enact restrictions on what AI is allowed to touch.
Absent either of those, I doubt the pursuit of GAI will stop no matter what X-risk analysts say. At the very least, I myself cannot imagine any argument that'd convince, say, the CPC to stop their research while those spearheading it on the other side are massively powerful nutjobs. And conversely, what argument could be provided that'd stop someone who believes in this? So neither will stop, which means GAI will happen. And then we'll need to count on luck again, this time with:
i) Either GAI going FOOM as Yudkowsky believes, but for some reason continuing to like humans enough not to turn us into computronium;
ii) Or Hanson being right and FOOM not happening, followed by:
ii.1) Either things being slow enough to "merely" lead to (a) or (b) above;
ii.2) Or things being so immensely slow we can actually fix them.
I have no opinion on whether FOOM is or isn't likely. I've read the entire discussion and all I know is both sets of arguments sound reasonable to me.
I’m assuming that - and please correct me if I’m misinterpreting here - “extinguish” here means something along the lines of, “remove the ability to compete effectively for resources (e.g. customers or other planets)” not “literally annihilate”.
I wish that were the case, but my reference scenario is a paranoid M.A.D. mentality coupled with Total War unbounded by moral constraints; that is, all sides thinking all the other sides are X-risks to them.
In practice things tend not to get that bad most of the time, but sometimes they do, and much of military preparation concerns mitigating these perceived X-risks. The idea is that if "our side" becomes so powerful it can in fact annihilate the others, and in consequence the others submit without resisting, then "our side" may be magnanimous towards them, conditional on their continued subservience and submission. But if they resist to the point of becoming an X-risk towards us, then removing them from the equation entirely is the safest defense against the X-risk they pose.
A global consensus on stopping GAI development due to its X-risk to all life requires a prior global consensus, by all sides, that none of the other sides is an X-risk to any of them. Once everyone agrees on this, all of them together agreeing to deal with a global X-risk becomes feasible. Before that, it happens only if they all see that global X-risk as more urgent and immediate than the many local-to-them X-risks.
Unfortunately, those in positions of power won't listen. From their perspective it's simply absurd to suggest that a system that currently directly causes, at most, a few dozen induced-suicide deaths per year may explode into the death of all life. They have no instinctive, gut feeling for exponential growth (a few dozen deaths doubling yearly passes a million in about 15 years, and a billion in about 25), so it doesn't exist for them. And even if they acknowledge there's a risk, their practical reasoning runs more along arms-race lines:
"If we stop and don't develop AGI before our geopolitical enemies because we're afraid of a tiny risk of an extinction, they will develop it regardless, then one of two things happen: either global extinction, or our extinction in our enemies' hands. Which is why we must develop it first. If it goes well, we extinguish them before they have a chance to do it to us. If it goes bad, it'd have gone bad anyway in their or our hands, so that case doesn't matter."
Which is to say, they won't care until they see thousands or millions of people dying due to rogue GAIs. Then, and only then, would they start thinking in terms of maybe starting talks about perchance organizing an international meeting to perhaps agree on potential safeguards that might start being implemented after the proper committees are organized and the adequate personnel selected to begin defining...
But obviously, factory-farmed animals feel more pain than crickets. The question is just: how much more?
This paper is far from a complete answer, but it may help:
This isn't a dichotomy. We can farm animals while making their lives reasonably comfortable. Their moments of pain would be few up until they reach slaughter age, and the slaughter itself can be made stress-free and painless.
Here in Brazil, for example, we have huge ranches where cattle move around freely. Cramming them all into a tiny area to maximize productivity at the cost of making their lives extremely uncomfortable, as in the US factory-farm system, may happen here too, but it's unusual enough that I'm not personally aware of it. The US could do it the same way, as it isn't like the country lacks territory where cattle could roam freely; but since this isn't required by law, and factory farming is more profitable, free roaming is rare there, with the end result that free-range meat sells at a much higher premium than it should.
Brazilian chickens, on the other hand, are typically crammed together the same as in the US, unless one opts to buy eggs from small family-owned farms, which mostly let them roam freely.
A few remarks that don't add up to either agreement or disagreement with any point here:
Considering rivers conscious hasn't been a difficulty for humans, as animism is a baseline impulse that develops even in the absence of theism, and it takes effort, at either the individual or the cultural level, for people to learn not to anthropomorphize the world. As such, I'd suggest that a thought experiment allowing for the possibility of a conscious river, even one composed of atomic moments of consciousness arising from strange flows through an extremely complex network of pipes, taps back into that underlying animistic impulse, and so will only seem weird to those who've previously managed to suppress it, whether through effort or nurture.
Conversely, just as one can learn to suppress one's animistic impulse towards the world, one can also suppress one's animistic impulse towards oneself. Buddhism is the paradigmatic example of that effort. Most Buddhist schools of thought deny the reality of any kind of permanent self, asserting that the perception of an "I" emerges from atomistic moments of experience as an effect of their interactions, not as their cause or as a process parallel to them. From this perspective we may have a "non-conscious in itself" river whose pipe flows, interrupted or otherwise, cause the emergence of consciousness, in exactly the same way human minds do, and in no way differently.
But even those Buddhist schools that do admit a "something extra" at the root of the experience of consciousness consider it a form of matter that binds to ordinary matter and, operating with it as a single organic mixture, gives rise to those moments of consciousness. This might correspond, or be analogous on some level, to Searle's symbols, at least going by the summarized view presented in this post. Now, irrespective of whether such symbols are reducible to ordinary matter, if they can "attach" to the human brain's matter to form, er, "carbon-based neuro-symbolic aggregates", nothing in principle (that I can imagine, at least) prevents them from attaching to any other substrate, such as water pipes, at which point we'd have "water-based pipe-symbolic" ones. Such an aggregate might develop a mind of its own, even a human-like mind, complete with a self-delusion that similarly takes its emergent self to be essential.
As such, it'd seem to me that, without a fully developed "physics of symbols", such speculations may go either way and don't really help solve the issue. A full treatment of the topic would need to expand on all such possibilities, and then analyse them from perspectives such as the ones above, before properly contrasting them.
Where is all the furry AI porn you'd expect to be generated with PonyDiffusion, anyway?
From my experience, it's in Telegram groups (maybe Discord ones too, but I don't use Discord myself). There are furries who love to generate hundreds of images around a certain theme, typically on their own desktop computers where they have full control and can tweak parameters until they get exactly what they wanted. They share the best ones, sometimes with the recipes. People comment, and quickly move on.
At the same time, when someone gets something with meaning attached, such as a drawing they commissioned from an artist they like, or that someone gifted them, it carries more weight, both for them and for friends who share in their emotional attachment to it.
I guess the difference is similar to the one many (a few? most?) notice between a handcrafted and an industrialized good: even if the industrialized one is better by objective parameters, the handcrafted one is perceived as qualitatively distinct. So I can imagine a scenario in which there are automated, generative websites for quick consumption -- especially video, as you mentioned -- and Etsy-like made-by-a-real-person premium ones, with most of the associated social status geared towards the latter.
A smart group of furry advertisers would look at this situation and see a commoditize-your-complement play: if you can break the censorship and everyone switches to the preferred equilibrium of AI art, that frees up a ton of money.
I don't know about sex toys specifically, but something like that has been attempted with fursuits. There are cheap knockoff Chinese fursuit sellers on sites such as Alibaba, and there must be a market for those somewhere, otherwise they wouldn't be advertised, but I've never seen anyone wearing one at either the big cons or the small local meetups I've attended, nor have I heard of anyone who does. As with handcrafted art, it seems furries prefer handcrafted fursuits made either by the wearers themselves or by artisan fursuit makers.
I suppose that might all change if the fandom grows to the point of becoming fully mainstream. If at some point there are tens to hundreds of millions of furries, most of whom carry furry-related fetishes (sexual or otherwise), real industries might form around us to the point of breaking through the traditional handcraft focus. But I confess I have difficulty even visualizing such a scenario.
Hmm... maybe a good source for potential analogies would be the Renaissance Faire scene. I don't know much about it, but it's (as far as I can gather) more mainstream than the Furry Fandom. Do you know if such commoditization happens there? That might be a good model for what's likely to happen with the Furry Fandom as it mainstreams further.
I've been reading about the MBTI for a while. Not in extreme depth, but also not via the simplifications provided by corporate heads; deeply enough, at least, to understand the basics of the Jungian psychology on which the MBTI is based. So what I'll say is likely to differ significantly from what you learned in this course.
So, the most important thing: the (real) MBTI's four letters do not represent extremes on four different axes. That they do is one such simplification.
The core of the Jungian hypothesis on personality is that there are eight distinct cognitive functions, that is, eight basic ways the mind processes and organizes external and internal information.
These come from two opposite pairs of basic functions, Sensing vs. Intuition and Thinking vs. Feeling, each of which may operate in either an Extraverted or an Introverted mode: four basic functions times two modes yields the eight. Notice that it isn't that Introversion and Extraversion form an axis; rather, say, "Introverted Thinking" and "Extraverted Thinking" are two very distinct modes of Thinking, to the point they cannot be considered the same cognitive process at all.
Jung considered every person to have all eight cognitive functions operating in them, but at very different weights, with one dominant. In his system, I'd be someone who uses Introverted Thinking as the default cognitive function almost 24/7, deviating from it only when specific circumstances demand. So, for him, there were eight personality types, one for each possible dominant function.
Myers and Briggs studied his work on the topic and thought it incomplete. They hypothesized that specifying a single dominant cognitive function wasn't enough to properly describe how a person functions. In their view, it was also necessary to take into account the cognitive function used secondarily. In my case, that secondary function is Extraverted Intuition.
Hence, for Myers and Briggs, my personality is defined as being primarily an Introverted Thinker, who uses Extraverted Intuition to fill the gaps where Introverted Thinking doesn't cut it. And that's it.
What are the four letters then?
They're a needlessly convoluted way to say the exact same thing.
In the MBTI system, the two letters in the middle state what my two main cognitive functions are. Since I use Intuition and Thinking, they're "NT". But that doesn't say which of the two is my main function and which is the secondary, nor which is Introverted and which Extraverted. That's what the other two letters encode. The "I" at the beginning says that my main function, whether it's the Thinking or the Intuition, is of the Introverted type. And the final letter says whether that "I" applies to the "N" or to the "T": a "J" means the judging function (T or F) is the Extraverted one, a "P" means the perceiving function (N or S) is. In my case the fourth letter is "P", so my "N" is Extraverted, meaning the "I" applies to the "T", which is therefore my main function.
Yes, that's completely nuts. It'd be much, much easier to use something like "IT/EN".
And this brings up another aspect of their system, alluded to above: they consider that the main and secondary cognitive functions always have opposite attitudes, one Introverted and the other Extraverted. Hence, specifying that my main function is Introverted Thinking automatically implies that the secondary one, Intuition, is Extraverted.
There are a few more details. Basically, the third and fourth most-used cognitive functions follow from the first two: the tertiary is the opposite of the auxiliary, and the inferior is the opposite of the dominant, flipping both the function and its attitude. In my case, my third and fourth most-used cognitive functions would be, respectively, Introverted Sensing (the opposite of Extraverted Intuition) and Extraverted Feeling (the opposite of Introverted Thinking). The other four fall behind at positions five to eight. The full sequence is my so-called "cognitive stack".
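Since the decoding is purely mechanical, it may be easier to see as code. Here's a minimal sketch in Python of the rule as I understand it, assuming the common stacking convention I described above (interpretations of the tertiary and inferior do vary); all names are mine, purely for illustration:

```python
# A minimal sketch of the four-letter decoding rule, assuming the
# common stacking convention described above.

def cognitive_stack(mbti: str) -> list[str]:
    """Decode a four-letter MBTI type into its four-function stack."""
    attitude, p_func, j_func, lifestyle = mbti.upper()

    # J/P says which of the two middle functions is Extraverted:
    # 'J' -> the judging one (T/F), 'P' -> the perceiving one (N/S).
    extraverted = j_func if lifestyle == "J" else p_func
    introverted = p_func if extraverted == j_func else j_func

    # The first letter says whether the dominant function is the
    # Introverted or the Extraverted one; the other is the auxiliary.
    if attitude == "I":
        dominant, auxiliary = introverted + "i", extraverted + "e"
    else:
        dominant, auxiliary = extraverted + "e", introverted + "i"

    # Tertiary opposes the auxiliary; inferior opposes the dominant
    # (opposite function, opposite attitude).
    flip_function = {"N": "S", "S": "N", "T": "F", "F": "T"}
    flip_attitude = {"i": "e", "e": "i"}

    def opposite(fn: str) -> str:
        return flip_function[fn[0]] + flip_attitude[fn[1]]

    return [dominant, auxiliary, opposite(auxiliary), opposite(dominant)]

print(cognitive_stack("INTP"))  # ['Ti', 'Ne', 'Si', 'Fe']
print(cognitive_stack("ESFJ"))  # ['Fe', 'Si', 'Ne', 'Ti']
```

Note how the four letters never appear in the output: they're just an encoding of the stack, which is the actual content.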
TL;DR then: the four letters are not axes. They're a very, very confusing way of saying that, out of the eight cognitive functions Jung identified, I prioritize them in this specific sequence: by default, most of the time, I use the first one, then the others at lower and lower priority, following that order. There are (presumably) 16 standard stacks, plus maybe several non-standard pathological ones, and all the four MBTI letters tell you is which of the 16 standard stacks applies in my case.
This, fundamentally, is the reason why the MBTI doesn't correlate well, or at all, with the Big Five: the MBTI has no axes in a traditional psychometric sense. It's an ordinal hierarchy of preferred cognitive processes, not a cardinal set of values.
And the easiest way, by far, to identify one's MBTI type is to simply read detailed descriptions of the eight cognitive functions. One of them almost always pops out as "yeah, that's how I think most of the time", with another popping out as "yeah, I also use this one a lot; not as much as that one, but still a lot", while the other six clearly read as things one rarely uses.
Now, is any of this scientific? I don't know. I have read many attempts at determining this, but all of them assume the four letters represent four axes that can then be psychometrically evaluated, which has absolutely nothing to do with what Jung was talking about. And I'm not aware of any psychological study on the validity, or lack thereof, of his hypothesis about the eight cognitive functions themselves (maybe there are some?), much less, assuming they're valid, of Myers and Briggs' specific assertion that they almost always come in those 16 stacks (maybe they do, maybe they don't, maybe they vary over time, etc.).
For my own anecdotal case, I find that Introverted Thinking coupled with Extraverted Intuition, as described by Jung, covers a lot of how I function. Not everything, far from it, but a lot. So it's useful. More than that, I cannot really say.
Hope this helps!