Imagine that the city of New York is inherited, a couple hundred years from now, by descendants of ours with no civil engineers or architects among them. They cannot even guess how these magnificent, glassy structures were made. Yet every day they walk into these buildings. They climb "stairs" and ride "elevators" to get to work. To this generation of human beings, "buildings" have always been there, and always will be. There's no need to understand the structure or the technical nature behind them.

One day, several of these buildings collapse--the death toll is in the thousands. The New York news simply reports: 'God struck again!' or something to that effect. The ignorance of this generation is so deep and entrenched that the collapse of a building can only be attributed to God.


Now compare this analogy to the modern-day understanding of OpenAI, algorithms, and the potential atrophy of critical thinking. I know few people these days who aren't using ChatGPT and Midjourney in some small way. The more conservative ones use them only for menial, automated tasks. But most (I suspect) are using them for almost everything, ousting their brain for a Machine.

What will the long-term effects on critical thinking be? My analogy argues that AI-assisted existence (AAIE?) will eventually give technical minds like Sam Altman a monopoly on knowledge: a technocracy on a scale we've never seen in human history. Mindless acceptance of spammed prompts built on hallucinations and a Jenga tower of assumptions (apparently made for practical purposes?) is becoming an increasingly likely future.


I claim that very few people actually understand what they are using and what effects it has on their minds. When our children inherit far more advanced iterations of ChatGPT, the 'buildings falling out of the sky' for reasons they don't understand will be mechanisms of control for technocrats.

Perhaps I'm being paranoid. But I stand firmly on the idea that there will be 'invisible enemies' that manifest in strange ways in the future. So strange, in fact, that we won't be able to identify at first that they came from AI. In the worst case, we'll be so pacified and neglectful of our intellectual faculties that we won't identify the source at all.


We need to simplify how we explain artificial intelligence. We need more intellectual initiatives that the public finds appealing. We need to take the most sweeping proposals more seriously. I think visions of what the future might look like, and of how to respond to it (not just models of AI alignment), are important.

Inspired by Have epistemic conditions always been this bad? — LessWrong.


"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."

This, but unironically.

The quote's from Plato's Phaedrus, section 275 (Stephanus pagination), for anyone wondering.

Great quote.

I know few people these days who aren't using ChatGPT and Midjourney in some small way.

We move in very different social circles.

I claim that very few people actually understand what they are using and what effects it has on their minds.

How would you compare your generative-AI focus to the “toddlers being given iPads” transition, which seems to have already happened?

Amazing question.

I think common sense would suggest that these toddlers at least have a chance later in life to grow human connections: therapy, personal development, etc. The negative effects on their social skills and empathy, and the reduction in grey matter, can be repaired.

This is different in the sense that the cause of the issues will be less obvious and far more prolonged. 

I imagine a dystopia in which the technocrats are the puppet-masters manoeuvring the influence AI has. From the buildings we see to the things we hear: all by design, and none of it voluntarily chosen.

In contrast, technocrats will nurture technocrats--the cycle goes on. This is comparable to the TikTok CEO commenting that he doesn't let his children use TikTok (among other reasons, I know).

I suspect a lot of people here think that an (unusually powerful) human technocracy is the least we have to worry about.

I believe you are correct about the feelings of much of LessWrong. I find it very worrisome that the LessWrong perspective treats a pure AI takeover as something to be separated from both the degradation of human self-reliance and the prospect of an enhanced-human takeover. It seems to me that these factors should instead be considered together.

I agree as well. It's the loss of humanity which is the problem, and you can get the same result from cyborgs, human modification, mass-production of non-humans, etc.

Since human nature is inherently irrational, and since the modern human being isn't a good fit for modern society, I think there's pressure to modify human beings into something more robotic (this seems to be the purpose of schooling). Depressed over society? We have drugs for that. Unable to sit still? We have drugs for that. Don't want to conform? We have therapy for that. All our solutions focus on modifying human beings, not on building a world which is suitable for human beings. Being more perfect usually means being more mechanical, less human, and less emotional.

Even religions, philosophies and communities like this one are very anti-human. Kill your biases, kill your ego, kill your desires, be meek, get in line, live in a pod and be happy with that, and so on. I see it a lot in mainstream psychology as well, in which desensitization is deemed a solution rather than a problem on its own. "The person isn't suffering anymore!" well, neither do they feel much of anything else, you've just lobotomized them.

I wrote a shortform called "The A of AI is sufficient for human extinction", but I was either not understood, or voted down simply for being contrarian (rather than wrong). I usually don't get replies at all unless it's from somebody who agrees with me. 

The correct use case of medication to treat depression and ADHD is not to "drag the non-conformers into line" but to help alleviate chronic, clinically measurable deficiencies in brain chemistry. A clinically depressed person will be consistently depressed irrespective of outside conditions. A person with severe ADHD will struggle to find motivation to focus on anything that doesn't capture their interest immediately, even if they genuinely want to do that thing.

I used to suffer from severe clinical depression, and it made it difficult to eke any enjoyment from anything, even things I once enjoyed. Those who are depressed but less self-aware tend to blame whatever is going on around them as the root cause ("I feel awful and society sucks. I must feel awful because of society's problems"), but in cases of clinical depression a demonstrated imbalance of neurotransmitters is at least partially responsible.

I still struggle with some aspects of ADHD, but for both of them medication has changed my life. I can find motivation to accomplish things, curiosity to learn and joy from simple experiences. 

Without modern medicine I can confidently say that I would be a shadow of who I am now, because in my depressed and distracted state I never would have found the motivation and resilience to succeed at any of the hobbies and career choices that have taught me and brought me enjoyment. For that matter, I would likely never have discovered this site.

I have ADHD myself, and I used to have "uncurable" depression.

Kids are diagnosed with ADHD for showing completely natural traits like boredom. There's nothing wrong with them; modern society simply requires too much tedious and sedentary activity, which is unnatural. This causes everyone at the far end of the distribution, say the top 10%, to be labeled "sick".

Many people are depressed because their needs aren't being met. Again, it's those with the stronger needs who become depressed first, those in the tail end of the distribution. What need am I referring to here? The need for agency, for control over one's own circumstances. Without it, one feels powerless, trapped, insignificant, and disillusioned, until one concludes that life is meaningless.

Both ADHD medicine and anti-depressants have the side effect of emotional blunting, making one into a robot or a zombie respectively. You technically aren't bored when you're running on autopilot, and you technically aren't sad when you feel absolutely nothing at all, but I wouldn't call either of them a good solution.
Everyone reacts differently to medication, and the positives can outweigh the negatives, but it's still better to get away from bad environments than it is to help people cope with remaining in said environments.

A clinically depressed person will be consistently depressed irrespective of outside conditions

Such people exist, but they make up perhaps 5% of all cases of depression. Your brain will give you the neurotransmitters you need if you fulfill your own needs, but what if your need of agency is higher than what the modern world allows, or if you work too many hours to have a social life as well? 

Both ADHD medicine and anti-depressants have the side-effect of emotional blunting, making one into a robot or a zombie respectively. 

That seems far too broad of a stroke. The statement "making one into a robot or a zombie" seems like a largely rhetorical and hyperbolic comparison rather than a useful observation describing detrimental effect. Side effects are highly variable between individuals.

While I don't doubt emotional blunting is among the potential side effects, do we have substantial evidence to indicate that it has high frequency in the population and high severity among those that do experience it? (I have not researched this particular issue so I don't know what the data actually says.)

Anecdotally, I've found depression to dull my emotions far more than antidepressants. And I feel a lot more enthusiasm and curiosity in general when taking my ADHD medication.

It may be interesting to note that I was not diagnosed with ADHD until I was 18. If I had been diagnosed as a kid I probably would have done much better in school. I used to hate math until I got treatment; now I have the ability to enjoy it.
That being said, I do agree that the United States school system in particular is dysfunctional and has far too much tedium and busywork.

> Everyone reacts differently to medication, and the positives can outweigh the negatives, but it's still better to get away from bad environments than it is to help people cope with remaining in said environments.

I agree to that.

Such people exist, but they make up perhaps 5% of all cases of depression. Your brain will give you the neurotransmitters you need if you fulfill your own needs, but what if your need of agency is higher than what the modern world allows, or if you work too many hours to have a social life as well?
 

Do you have a source for that number? It seems like it would be difficult to pin down in a clinical trial setting. The "perhaps" sounds like it was a ballpark estimate from memory or similar, which is fine but I would like to see the underlying data supporting it.

I would say "a healthy brain will produce the neurotransmitters needed if sufficient physical/social needs are met."

I'm part of the population that was clinical and chronic. I felt depressed despite consciously acknowledging that things were going relatively well in my life and I could not pin down any good reason why I would feel so numb and empty.

Thanks for your reply!

I live quite close to a home for mentally ill people, and I have to walk around some of its residents in the long hallways, for otherwise they will walk straight into you. There are no visibility problems; they simply don't register that other people might exist on their path to whatever destination. This is how the staff treat their suffering: by medicating them into a lower state of consciousness.
Speaking of which, anti-psychotic medicine is mood-stabilizing, which is precisely my point. It's all about restricting the range of emotion. The same goes for stoicism, by the way.

The zombie effect is what I've personally experienced. It's because I also have anxiety, which stimulants make worse. The body then either numbs you as a defense mechanism (like a state of shock) or simply depletes the resources required to keep you in a high-alert state.

Until I started ADHD medicine, I was used to feeling the entire range of emotions: https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fc0v1cv9crcc51.jpg
On this scale, I'd swing between 1 and 10. Now all I experience is 5-7. Most professionals would consider this a win, but I feel like the full human experience has been robbed from me; it's like going from a wild rollercoaster ride to a kiddie version.
My opinions are not simply born from this, though. I take full responsibility for everything I've done to my own body.

According to a quick Google search, emotional blunting happens to 50% of users of anti-depressants: https://www.cam.ac.uk/research/news/scientists-explain-emotional-blunting-caused-by-common-antidepressants

This article, which I clicked more or less at random, even makes the same observation as me: "Emotional blunting is a common side effect of SSRI antidepressants. In a way, this may be in part how they work – they take away some of the emotional pain that people who experience depression feel, but, unfortunately, it seems that they also take away some of the enjoyment"
Amphetamine-like stimulants affect the amygdala. I found that non-stimulants had the same blunting effect for me though, and if you search on Reddit, many other people have experienced similar things.
I found this study result for atomoxetine: "Atomoxetine led to a small (effect size 0.19) but significant (P=0.013) treatment effect for emotional control". It wouldn't surprise me if this "emotional effect", which is of course described positively, is just a clamping of the range of emotions.

Of course, I wish you the best, and if medicine helps you, then I don't want to discourage your use of it. I just think better approaches are possible, and that destructive tradeoffs are described more positively than they ought to be.
A common piece of advice I see online is "Nothing matters" and "Nobody cares about you, they're too busy thinking about themselves". This is freeing to hear for some, but it's ultimately a nihilistic kind of thinking, mere detachment.

> Do you have a source for that number?

No, it's an estimate of the portion of people living good lives (psychologically healthy lives, not just society's impression of a good life) who nevertheless find themselves unexpectedly depressed. And I think it's quite telling that the rate of depression is almost negatively correlated with standards of living. Africa has some of the lowest rates! Our "improvements" are making us enjoy life less, not more; we are very poor judges of what is good for us.

I don't think depression is an error at all; I think it's an adaptation to an environment (and that agency and control are fundamental needs). There's a Wikipedia page called "Evolutionary approaches to depression", but it seems this view is still controversial and not receiving much attention.

> I could not pin down any good reason why I would feel so numb and empty

This is just a guess, but did you perhaps think logically? "Logically", a zoo animal should be happy because they have all the food and water they could ever want, as well as absolute safety. But zoo animals are known to have high rates of depression and anxiety just like the general population in modern societies. Logic doesn't help with psychological well-being; even Nikola Tesla died poor and alone. I'm 90% sure he'd have lived a better life if only he had socialized more (and met better people), but intelligent people do not introspect with the consideration that they're animals with animal needs; they consider themselves above such things.

And I think it's quite telling that the rate of depression is almost negatively correlated with standards of living.

Lack of exercise is a large contributor to this trend, as demonstrated by the well-established data that exercise is one of the biggest factors benefiting mental health. (I suspect you already agree with this point considering you previously mentioned sedentary behavior as one of the problems with modern life, but I'm citing some sources for any who want to verify the claim.)
And even knowing this, it's still difficult for me to make myself exercise regularly. Stupid psychology.

 >if you search on Reddit, many other people have experienced similar things.

I try to avoid using anecdotal data to update my map. Social media tends to be biased towards negativity, so I would guess that those who are happy with their results with medication are a lot less likely to post their experience than those who have issues they want to discuss. That is not to say their issues are not relevant problems to address, but it does raise questions as to how large the proportion of people with undesirable side effects is relative to the proportion of patients who are satisfied with their treatment.

>"Logically", a zoo animal should be happy because they have all the food and water they could ever want, as well as absolute safety. But zoo animals are known to have high rates of depression and anxiety just like the general population in modern societies.

It's logical only if one assumes that food, water and safety are their only important needs. Perhaps we haven't made the effort to understand what they need that the zoo isn't set up to provide. (Like, say, a few more square kilometers of running space)

It's interesting that this does not apply to all animals in captivity. For instance, the types of fish that do well in captivity have a significantly longer lifespan on average (assuming proper aquarium conditions) than the same species does in the wild. For example: Clownfish in the wild on average live between 4 and 10 years. Clownfish in a well-maintained aquarium have been known to live more than 20 years. (Note that while their lifespan in the wild has been scientifically studied, the lifespan in captivity is entirely based on anecdotal reports)

I do agree about exercise. And yeah, knowing what to do is insufficient. If exercise is part of one's lifestyle it doesn't feel like a bother, but if you're told to go to the gym you likely won't be motivated whatsoever.

The Reddit data proves that some people have had this experience (it's a counterexample to the claim that such an experience doesn't happen); you're correct that it doesn't give us an idea of the real prevalence.

We do know more or less what zoo animals need; it's just cheaper to give them less than that. What humanity does seem entirely ignorant about is human needs and mental health in general. Consider, for instance, the impact of telling children "Santa knows if you've been bad" or "God is watching you". Do we even know? What about the modern panopticon that is society? I found some related papers on this, but they're scarce and about specific cases like being watched at work.

How human beings function is perhaps the most important branch of knowledge in the world, and yet it's neglected like this. Even modern psychologists can learn something from Buddha. You can't say the same about fields like mathematics or physics, as these actually advance.

I agree about fish by the way. Longevity and well-being aren't exactly the same, but it's a fair point nonetheless. I think small fish are more on the simple side as far as animals go, though. This might be why fish is the go-to if you want to get away with animal cruelty.

My conjecture as to why certain small fish do well in captivity is that (for those I've taken the time to research) their lifestyle is something along the lines of "find a cozy home in a rock/hole/anemone/coral and stay there waiting for food to come to you". If an aquarium provides access to that same lifestyle plus the added benefit of no predators, why wouldn't they be better off? Humans only feel like an aquarium is abusive because our own psychology would go stir-crazy if we were trapped in a box all the time. But for an animal that will voluntarily live its entire adult life within a single square meter of reef? Why should it mind?

On the other hand, I do think it's cruel to keep certain other types of fish in aquaria (e.g. tangs and other surgeonfish), because their growth will be stunted in an aquarium setting, and in the wild they are grazers that might swim many kilometers in a single day.

That does sound reasonable.

What I mean by cruelty is fishing, bonking them on the head, boiling lobsters alive, eating squid while they're still moving, forgetting to feed your goldfish while on vacation and flushing its dead body down the toilet, etc. Even some people who call themselves vegetarians eat fish. Fish seem to sit a level lower than cows and pigs, which are a level lower than cats and dogs, on some imaginary hierarchy.

Agreed again: we should only keep animals whose needs we can more or less satisfy.

People who eat seafood but not the flesh of other, terrestrial animals are pescatarians. Ethical (as opposed to environmental) pescatarians say fish and other marine life aren't complex enough to feel fear or pain. Perhaps they call themselves vegetarians just to avoid having to explain pescatarianism.

This and other communities seek to transcend, or at least mitigate, human imperfections. Just because something is "human" doesn't mean it contributes to human flourishing. Envy, rage, hate, and cruelty are human, after all.

The things you listed do contribute to human flourishing; we did not develop them by accident. They often manifest under undesirable circumstances, but the problem is those undesirable circumstances, not the human response, which after all was developed precisely to deal with said circumstances.

Maybe this will explain what I mean by that: it would be wrong to say "fever is bad". Fever is a symptom of something bad, but it's not a bad response in itself. Society's beef with "suffering" is similar; we're fighting a symptom, not a cause. These symptoms are valuable! They're uncomfortable, yes, but they're still very good. If you lost the ability to feel pain and hunger, you'd be much worse off.

It's naive to fight negative emotions with escapism or drugs, or by fabricating some belief which creates a worldview in which you feel no guilt, fear, or stress. That would be a maladaptive response not much different from wireheading... But it's the same with all of these "rational solutions". Life puts pressure on a rational person in order to motivate them to compete, and said rational person dreams up a post-scarcity world in which they won't need to compete. But competition is good for you. Most things which feel bad (like exercise, and facing your fears) are healthy. Said emotions exist to motivate us towards better health, but we're making the emotions themselves out to be bad!

Envy is what gave birth to the moral value system which cares more for the weak than for the strong. Rage is a powerful motivator, an important drive which helps you defend yourself against injustice. Hate is the dual of love; any modification which makes you unable to hate is likely to make you unable to love (there are many kinds of love; I'm referring to the same one as "falling in love" does). Cruelty is usually born from weakness, in a pivotal change in which one surrenders to something which overpowers them (the psychological equivalent of "if you can't beat it, join it"). We're creating a society of absolute cowards precisely because we want to be "good", but this is going to make cruelty more common, not less.

What does flourishing matter if there's no conscious, feeling agent left to appreciate it? Your imperfections are more valuable than you think.

Said emotions exist to motivate us towards better health, but we're making the emotions themselves out to be bad

Said emotions exist to motivate us to take actions that benefit our likelihood of reproductive success, of which our health is a contributing variable.

The subset of possible actions that benefit our individual likelihood of reproductive success includes many things that are detrimental to collective human health and prosperity, and as such, actions motivated by emotions that were generated by selecting for said purpose are not to be trusted as strictly beneficial to our long-term well-being, goals, and desires.

Example: Thag the hunter-gatherer is envious of Oog because Oog gets more attention from women. Dropping a rock on Oog's head while he sleeps would increase Thag's relative appeal and subsequent reproductive success. But that plan is rejected when Thag has the rational realization that, since Oog is the tribe's best hunter, Thag himself might go hungry if Oog dies.

many things that are detrimental to collective human health

You mean "immoral" actions? This definition includes many healthy things, which are deemed bad by a large number of people who are too weak to undertake said healthy things.

To give you an example in which this effect is visible, bad students might mock their nerdy friends for getting good grades in school, discouraging them from competing and getting ahead.

A less obvious example is banning competitions in schools, or the general consensus that competition is evil/cruel. Some people are discouraged by competition, while others almost need it to reach their potential. Depending on which type you are yourself, your preference is likely to change to suit the one which would help people like yourself thrive. So when society has a majority of people who aren't confident in their own abilities, it develops a general consensus that competition is evil.

Envy is one of the ugliest and most anti-social motivations, and the hardest one to defend. But does most of modern society not think similarly to Thag? Don't they think that the losers are good while the winners are bad? That all rich people are evil and that all privilege is a result of exploitation of the harmless (and thus innocent and good) average person? If the average person were to think like this, who would stop them from forming a consensus using the power of numbers?

You mean "immoral" actions? This definition includes many healthy things, which are deemed bad by a large number of people who are too weak to undertake said healthy things.

No, because of the subjective nature of the term as commonly used. The average person's moral compass is strongly influenced by appeals to emotion and other common irrational arguments. This is precisely why it is important to work to overcome as many of our own biases as we can.

> bad students might mock their nerdy friends for getting good grades in school, discouraging them from competing and getting ahead.
> That all rich people are evil and that all privilege is a result of exploitation of the harmless (and thus innocent and good) average person?

Yes, it seems to be a common fallacy today that all success must be intrinsically at someone else's expense. In the generalization it is forgotten (or never realized) that interpersonal and economic interactions are a spectrum ranging from mutually beneficial to parasitic and everything in between. 

A useful flag for spotting an opinion based on faulty reasoning is when it labels an entire broad category of <subject> as absolutely good or absolutely evil, e.g. "all rich people are evil".

I agree, but society is still run entirely by these biases, and even the LW community is, and that can't be helped because we're human. If we erase the biases, we erase our humanity. But it's important that we don't change these processes just to get what we want, as we're designed not to get what we want.

I enjoy self-improvement, and I have fun with that, as long as there's still improvements to be made. If a superintelligent entity appeared and said "there, now you're perfect", I'd quickly turn miserable, as the feeling of growth and progress is what gives my life meaning. Solving all my problems would be the most cruel thing you could do to me, and why would it be different for other people?

The only thing which crushes people is when difficulties appear faster than they can be overcome, and the individual in question enters a negative spiral rather than an upwards one, until their self-esteem is in the gutter.

Upon reflection, I think the fatal mistake that modern society makes is that it tries to control complex systems rather than letting them control themselves. Interfering is the biggest cause of problems. Most things which are "good" you will never arrive at by aiming at them directly; they're side effects, more often than not born out of their opposites.

The so-called "good people", who tell other people what to do (rather than leading by example and encouraging others to follow without coercion), have likely caused more evil in the world than any other group. But perhaps every single "improvement" starts with control: censorship, surveillance, prosecution, rules, regulations; coercive and unnatural methods of forcing unwilling parties to align. There's a game of cat and mouse between new regulations and loopholes in which everyone loses. Not even the "experts" know what's best for everyone, and theory rarely aligns with reality anyway (changing the data until it fits the theory is the norm, but this is a silly act of self-deception).

The second-largest cause of bad conflict is that the representation of data is skewed. If your experience with, say, Mormons is positive or negative, then your stance is completely justified. If you hear 100 good or bad stories about them, and that causes you to like or hate them, then your intuition about said group is not based on reality but on the bias of the media through which you learn about them. Any other like or dislike, being based on reality, is frankly healthy and valid, and not something that other people are justified in "fixing", given of course that it's not an improperly generalized, vague stereotype or mental boogeyman (but such mental associations only form in echo chambers).

What the world is currently doing is attacking the very best things we've come up with and demanding that they change. Why? Japan has some of the lowest crime rates, so if anything, we should be more like Japan. Instead, we're demanding that Japan be more like America, which has much more crime. We say it's "immoral" because immigration is difficult, but what if that is the reason Japan is doing well?
This too is partly due to the innate human bias that makes us look for bad things to eliminate rather than good things to imitate, and our tendency to fixate on minor factors when we should be judging holistically. And the only people who are justified in judging anything are those who are involved. Should 4chan be shut down? That's for 4chan users to judge. Is the Indian caste system bad? That's for Indians to judge.

To summarize, I guess: Humans are less intelligent than self-regulating systems, and interference is not recommended.

> society is still run entirely by these biases

Here's what I see as one key difference of opinion between my view and yours. I would say that society manages to function in spite of these biases, not because of them.

>Upon reflection, I think the fatal mistake that the modern society makes is that it tries to control complex systems rather than letting them control themselves. interfering is the biggest cause of problems. Most things which are "good", you will never arrive at by aiming at them directly. They're side effects, more often than not born out of their opposites.

I think the reason you tend to see only comments from people who agree with you is that those who disagree see how effort-prohibitive it would be to address so many broad conclusions given without a clear explanation of reasoning. For example: "the fatal mistake", "interfering is the biggest", and "Most things which are 'good'" are so vague and general that they are unreasonably 'slippery' to qualify and evaluate.
What reasoning do you have to support these conclusions as opposed to any of the other potential options one could draw? A common problem with such generalizations is that they fail to adequately apply to everything they're attempting to describe.

The answers to those questions would require a post of their own. It would be a waste to write such an in-depth explanation in a comment thread. If you do decide to take the time to go through and explain your reasoning from the ground up then let me know, I'd be happy to read it.

 > If your experiences with, say, Mormons are positive or negative, then your stance is completely justified.

Opinions based on anecdotal evidence are not intrinsically good justification for forming a generalized opinion about an entire group, even if first-hand experiences are more reliable than hearsay. I don't see any distinction between that and "The rich people I've seen/heard of are greedy and selfish, therefore all rich people are evil."

 > Any other like or dislike, being based on reality

That's not necessarily the case. Individuals frequently draw factually incorrect conclusions from their own experiences, which then influence their opinions. While opinions are themselves subjective and therefore neither "true" nor "untrue", if a belief one holds about the world turns out to be false, every opinion based on it is more or less invalidated, depending on whether other supporting factors were involved in forming the opinion.

 >What the world is currently doing is attacking the very best things we've come up with, demanding that they change. Why? Japan has some of the lowest crime rates, so if anything, we should be more like Japan. Instead, we're demanding that Japan be more like America, which has much more crime. We call Japan "immoral" because immigration there is difficult, but what if that is part of the reason Japan is doing well?

I'm not sure who "we" is referring to in this case or what specific policies you're talking about here.

I should clarify: human instincts are what keep us alive, and modern values, which we assume are based on logic but which are actually just rationalizations urged by poor mental health, are ruining everything.

> so many broad conclusions that have no clear explanation of reasoning given

This comment got quite broad, I'm usually much more specific. But my observations aren't much more complex than say, The Fun Theory Sequence.
Aiming at things directly doesn't work; happiness is a great example here. And I assume that most intelligent people have spotted this pattern by now: the proper solution is often the exact opposite of what's intuitive. If I want proper sleep, I shouldn't aim at rest but at hard work. If I want people to compliment me, then I should be modest; if I want to run away from my fears, then I should face them instead; if I want to receive love, then I should give it. If you want X, then you should go for whatever results in X.

I also assume that most people know of an online game, a website, a club, or some other community which thrived until somebody decided to improve it by imposing rules on it. Sadly, most of them will attribute this to nostalgia. But they probably know that whatever magic they experienced is unlikely to ever appear in this world again. It required something which won't ever happen again. Do people not reflect on what such things are?

If I take a thing at a time, I'd probably have to write 5-10 posts with 10-20 pages of material each, even if I'm being somewhat concise. I will consider doing this in the future.

> I don't see any distinction

Your own experiences are only a sample, but they will quickly converge towards reality. Whatever you hear will be whatever people can profit from telling you, and it's very likely that you will hear things which conflict with your experienced reality. You might notice that food costs twice what it did just a few years ago, and then read a media article about how food is getting cheaper. In either case, I don't believe it's proper to attack people for voicing their experienced reality, and people changing their behaviour based on positive/negative reinforcement is exactly how small changes happen organically. For instance, communities tend to dislike it when many new people appear, in case these people don't follow the community's conventions. This behaviour (gatekeeping and elitism) is now disappearing due to political (and 'moral'?) pressure, but I consider this the unfortunate overwriting of the instinct of self-preservation. Notice how the things which are being demonized (nationalism, borders, discrimination, gatekeeping, egoism) have one thing in common: they're self-preferring.

> Individuals frequently draw factually incorrect conclusions from their own experiences

That does happen, but I don't consider truth and falsehood very important here. It's about preferences, values, and ways of viewing the world. God likely doesn't exist, and neither do things like honor and "face", but if a group of people are happy to make one of these sacred, then why not let them? There's not a lot of actual objective truth to draw from; most sentences that we deem true are actually interpretations of facts rather than facts themselves.

> I'm not sure who "we"

The majority. It's the leading way of thinking. Progressives, the UN, the WEF? Whoever reversed the public opinion on immigration in all English-speaking countries (including most of Europe) in just 15 years. Everything is so connected by now that it doesn't matter. I have friends in more than 20 different countries, and the things they talk about, their opinions, their way of talking, their jokes, it's all converging towards the same few things. Am I the only one seeing these things?

I'm puzzled by your use of the word "intelligence." Intelligence refers to a capacity to understand facts, acquire knowledge and process information. Humans are presently the only members of the set of intelligent self-regulating systems.

Yeah, I should explain that. I'd argue that instinct is a form of intelligence. People also differ a bit in how they think. Autistic people tend to be quite logical, analytical, and systematic, but this also seems to be precisely why they have difficulties with socializing (not judging, I'm autistic myself). They don't let it occur naturally; they try to control it, getting in the way of System 1 thinking.

But this is a great metaphor for what we're doing to society, we're messing up in a very similar way.

Also, organic/natural change is bottom-up, local changes causing global ones, but powerful entities control society top-down, with global changes causing local ones.


Sure. I think in an Eliezer reality, what we'll get is more of a ship-pushed-onto-the-ocean scenario. As in, Sam Altman, or whoever is leading the AI front at the time, will launch an AI/LLM filled with some of what I've hinted at. Once it's out on the ocean, though, the AI will do its own thing. In the interim before it learns to do that, though, I think there will be space for manipulation.


I am curious if by "Altman technocracy" you meant:

  1. Sam Altman, the individual CEO. It is likely that Mr. Altman doesn't personally know all the innovations in the source code for the most successful models OAI has released. (Besides "stack more layers"...)

  2. OpenAI/Google specifically. What's unique here, and wasn't true for past innovations, is that the remotely hosted models, which are hugely ahead of any scientific paper, can't be torn down and reverse engineered. Past major innovations - jet engines, smartphones, efficient cars, batteries - someone tears them to bits, and over time all the major secrets become known throughout the industry. NDA-breaking leaks and rumors are the only way we know anything past GPT-3, except for insiders, who only know what their own LLM shop does, not any innovations found at the other ones.

  3. AI allowing humans to understand less and less about the details of how everything works.

Yes, this will happen, but it's not one-way. Learning tools like Arduinos/rPis/Linux/OSS/3D printers have made computers no longer the exclusive domain of IBM. And inexorably, over time, the open alternatives have become strictly dominant solutions for most purposes.

If it weren't for the compute requirements, open source AI would likely be strictly dominant over closed today, for the simple reason that anyone can contribute, and the model reports what it is doing to no one and has no restrictions that a user wants to remove. (This is also what's wonderful about Linux: no rules and no forced upgrades, so it is possible to do anything, including a lot of R&D work or editing the kernel.)

Assuming open source models qualitatively catch up, and compute provides diminishing returns or becomes cheaper, the same will be true. There will be nothing a user won't be able to explain or build with the help of their open AGI. They will be able to reverse engineer any technology they can tear apart, and design (at a smaller scale, not SOTA) anything that the civilization knows how to build.

Want to design your own GPU or write an OS from scratch? As a single person that's a lifetime project. With your own AGI, a weekend. Change something fundamental in the strategy and rewrite all the drivers to use the new compute cores? A few hours to update.

I used 'Altman' since he'll likely be known as the pioneer who started it. I highly doubt he'll be the architect behind the dystopian future I prophesise.

In respect of the second, I simply don't believe that to be the case.

The third is inevitable, yes.

I would hope that right-to-repair laws and equal access to CPU chips will come about. I don't think this will happen, though. The demands of the monopoly/technocracy will outweigh the demands of the majority.