Nobody special, nor any desire to be. Just sharing my ideas when I appear to know better than the person I'm responding to, or when I believe I have something interesting to share/add. I'm neither a serious nor a formal person, and if you're more knowledgeable than intelligent, you probably won't like me, as I lack academic rigor.
Feel free to correct me when I make mistakes. I'm too certain of myself, as my ideas are rarely challenged. Crocker's rules are fine! When I play the intellectual (as I do on here), I find that social things only get in the way, and when I socialize, I find that intellectual things get in the way, so I separate the two.
Finally, beliefs don't seem to be a measure of knowledge and intelligence alone, but a result of experiences and personality. Those who have already had similar experiences and thoughts will recognize what I say, and those who haven't will mostly perceive noise.
I've read some of your other replies on here and I think I've found a pattern, but it's actually more general than AI.
Harmful tendencies outcompete those which aren't harmful
This is true (even outside of AI), but only at the limit. When you have just one person, you cannot tell whether he will make the moral choice or not, but "people" will make the wrong choice. The harmful behaviour is emergent at scale. Discrete people don't follow these laws, but the continuous person does.
Again, even without AGI, you can apply this idea to technology and determine that it will eventually destroy us, and this is what Ted Kaczynski did. Thinking about incentives in this manner is depressing, because it feels like everything is deterministic and that we can only watch as everything gets worse. Those who are corrupt outcompete those who are not, so all the elites are corrupt. Evil businessmen outcompete good businessmen, so all successful businessmen are evil. Immoral companies outcompete moral companies, so all large companies are immoral.
I think this is starting to be true, but it wasn't true 200 years ago. At least, it wasn't half as harmful as it is now. Why? Because the defense against this problem is human taste, human morals, and human religions. Dishonesty, fraud, selling out, doing what's most efficient with no regard for morality: we considered this behaviour to be in bad taste, we punished it and branded it low-status, so that it never succeeded in ruining everything.
But now, everything could kill us (if the incentives are taken as laws, at least); you don't even need to involve AI. For instance, does Google want to be shut down? No, so it will want to resist antitrust laws. Does it want to be replaced? No, so it will use cruel tricks to kill small emerging competitors. When the fines for illegal behaviour are less than the gains Google can make by acting illegally, it will engage in illegal behaviour, for that is the logically best choice available to Google if all that matters is money. If we let it, Google would take over the world; in fact, it couldn't do otherwise. You can replace "Google" with any powerful structure in which no human is directly in charge. When it starts being more profitable to kill people than to keep them alive, the global population will start dropping fast. When you optimize purely for money, and you optimize strongly enough, everyone dies. An AI just kills us faster because it optimizes more strongly; we already have something which acts similarly to an AI. If you optimize too hard for anything, no matter what it is (even love, well-being, or happiness), everyone eventually dies (hence the paperclip-maximizer warning).
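To make the fine-versus-gain logic concrete, here's a minimal expected-value sketch; every number in it is invented for illustration:

```python
# Toy expected-value model of the fine-vs-gain incentive (all numbers invented).
gain = 1_000_000_000   # profit from the illegal behaviour
fine = 100_000_000     # statutory fine if caught
p_caught = 0.5         # chance of being caught and fined

expected_value = gain - p_caught * fine
# 1e9 - 0.5 * 1e8 = 950,000,000 > 0, so a pure money-optimizer breaks the law.
print(f"Expected value of breaking the law: {expected_value:,.0f}")
```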
If this post gave you existential dread, I've been told that Elinor Ostrom's books make for a good antidote.
A lot of the words we use are mathematical, and thus more precise, with fewer connotations that people can misunderstand. This forum has a lot of people with STEM degrees, so they use a lot of tech terms, but such vocabulary is very useful for talking about AI risk. The more precise the language used, the fewer misunderstandings can occur.
Moloch describes a game-theory problem, and these problems generally seem impossible to solve. But the fact that they can't be solved mathematically doesn't mean that we're doomed (I've posted about this on here before, but I don't think anyone understood me. In short, game-theory problems only play out when certain conditions are met, and we can prevent those conditions from becoming true).
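Here's a minimal sketch of what I mean, using an illustrative prisoner's-dilemma payoff matrix: defection dominates by default, but adding an external cost to defecting (a fine, a norm, a reputation hit) changes which move is the best response. All payoffs are invented:

```python
# Illustrative 2x2 payoff matrix: (my payoff, their payoff) for (my move, their move).
# Moves: "C" = cooperate, "D" = defect. Numbers are invented.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(their_move, punishment=0.0):
    """My best move against a fixed opponent move, with a penalty on defecting."""
    def my_payoff(move):
        base = payoffs[(move, their_move)][0]
        return base - punishment if move == "D" else base
    return max(["C", "D"], key=my_payoff)

print(best_response("C"))                # "D": defection dominates by default
print(best_response("C", punishment=3))  # "C": the dilemma no longer plays out
```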
I haven't read all your posts from end to end, but I do agree with your conclusions that alignment is impossible and that AGI will result in the death or replacement of humanity. I also think your conclusions are valid only for LLMs, which happen to be trained on human data. Since humans are deceptive, it makes sense that AIs trained on them are as well. Since humans don't want to die, it makes sense that AIs trained on them also don't want to die. I find it unlikely that the first AGI we get is an LLM, since I expect it to be impossible for LLMs to improve much further than this.
I will have to disagree that your post is rigorous. You've proven that human errors bad enough to end society *could* occur, but not that they *will* occur. Some of your examples have many years between them because these events are infrequent. I think "There will be a small risk of extinction every year, and eventually we will lose the dice throw" is more correct.
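To illustrate the dice-throw framing (the per-year risk here is a made-up placeholder, not an estimate):

```python
# Cumulative chance of at least one society-ending event, assuming a small
# independent per-year risk p. The value of p is purely illustrative.
p = 0.01  # 1% risk per year (invented)

for years in (10, 50, 100, 300):
    cumulative = 1 - (1 - p) ** years
    print(f"{years:>3} years: {cumulative:.1%}")
# Even a small annual risk compounds: ~9.6% over 10 years, ~95% over 300.
```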
Your essay *feels* like it's outlining tendencies in the direction of extinction, showing transitions which look like the following:
A is like B
A has a tendency for B
For at least some A, B follows.
If A, then B occurs with nonzero probability.
If A, then we cannot prove (not B).
If A, then eventually B.
And that if you collect all of these things into a directed acyclic graph, there's a *path* from our current position to an extinction event. I don't think you've proven that each step A->B will be taken, nor that prevention fails with probability 1 (even if no method of prevention succeeds with probability 1, which is a different statement).
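To make the distinction concrete: a path existing in the graph is a much weaker claim than every edge on it being forced. A toy sketch, with an invented graph and invented transition probabilities:

```python
# A path through a DAG of "A leads to B" claims only matters if every edge on
# it is actually taken. Edge weights here are invented transition probabilities.
edges = {
    ("status quo", "arms race"): 0.7,
    ("arms race", "unsafe AGI"): 0.5,
    ("unsafe AGI", "extinction"): 0.4,
}

# Probability of traversing the whole path if the steps were independent:
p_path = 1.0
for p in edges.values():
    p_path *= p
print(f"P(path is fully traversed): {p_path:.2f}")  # 0.14, far from certainty
```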
I admit that my summary was imperfect. Though, if you really believe that it *will* happen, why are you writing this post? There would be no point in warning other people if it were necessarily too late to do anything about it. If you think "It will happen, unless we do X", I'd be interested in hearing what this X is.
I'm afraid "Good at presenting their ideas in a persuasive manner" is doing all the heavy lifting here.
If the community had a good impression of him, they'd value his research over that of a PhD. If the community had a bad impression of him, they wouldn't give a second thought to his "research", and they would refer to it with the same mocking quotation marks that I just used. However, in the latter case, they'd find it more difficult to dismiss his PhD.
In other words, the interpretation depends on whether the community likes you or not. I've been in other rationalist communities, and I'm speaking from experience (if I were less vague than this, I'd be recognizable, which I don't want to be). I saw all the negative social dynamics that you'd find on Reddit or in young female friend groups with a lot of "drama" going on, in case you're unfortunate enough to have an intuition for such a thing.
In any "normie" community there's the staff in charge, and a large number of regular users who are somewhat above the law, and who feel superior to new users (and can bully them all they want, as they're friends with the staff). The treatment of users users depend on how well they fit in culturally, and it requires that they act as if the regulars are special (otherwise their ego is hurt). Of course, some of these effects are borderline invisible on this website, so they're either well-hidden or kept in check.
Still, this is not a truth-maximizing website; the social dynamics and their false premises (e.g. the belief that popularity is a measure of quality) are just too strong. The sort of intellectuals who don't care about social norms, status, or money are better at truth-seeking, and are generally received poorly by places like this.
First, a few criticisms which I feel are valid:
1: Your posts are quite long.
2: You use AI in your posts, but AIs aren't able to produce output of high enough quality to be worth posting.
3: Some of your ideas have already been discovered before and have a name on here. "Moloch", for instance, is the personification of bad Nash equilibria in game theory. It generally annoys people if you don't make yourself familiar with the background information of the community before posting, but it's a lot of work to do so.
Your conclusion is correct, but it boils down to very little: "greedy local optimization can destroy society". People who already know that likely don't want to read 30 pages which make the same point. "Capitalism" was likely the closest word you knew, but there are many better words, and you sadly have to be a bit of a nerd to know a lot of useful words.
Here's where I think you're right:
This is not an individualist website for classic nerds with autism who are interested in niche topics; it's a social and collectivist community for intellectual elites who care about social status and profits.
Objective truth is not of the highest value.
Users care about their image and reputation.
Users care about how things are interpreted (and not just what's written).
Users are afraid of controversies. A blunt but correct answer might net you less karma than a wrong answer which shows good-will.
Users value form - how good of a writer you are will influence the karma, regardless of how correct or valuable your idea is. Verbal intelligence is valued more than other forms.
The userbase has a left-wing bias, as does the internet (as of about 8 years ago), so you can find lots of sources which argue in favor of things that are just objectively not true. But it's often difficult to find a source which disproves the thing, as such sources are buried. Finally, as a social website, people value authority and reputation/prestige, and it's likely that the websites they feel are "trustworthy" only include those controlled by left-wing elites.
Users value knowledge more than they value intelligence. They also value experience, but only when some public institution approves of it. They care if you have a PhD, they don't care if you have researched something for 5 years in your own free time.
You're feeling the consequences of both. I think most of the negative reaction comes from my first 3 points, and that the way it manifests is a result of the social dynamics.
Well, we somehow changed smoking from being cool to being a stupid, expensive, and unhealthy addiction. I think the method is about the same here. But the steps an individual can take are very limited. In politics, you have millions of people trying to convert other people to their own ideology, so if it were easy for an individual to change the values of society, we'd have extremists all over.
Anyway, you'd probably need to start a YouTube channel or something. Combining competence and simplicity, you could make content that most people could understand, and become popular doing that. "Hoe math" comes to mind as an example. Jordan Peterson and other such people are a little more intellectual, but there's also a large number of people who do not understand them. Plus, if you don't run the account anonymously, you'd take risks to your reputation proportional to how controversial your message is.
People in web3 often understand that deteriorating user privacy means more money than protecting it
That's a shame. Why are they in web3 in the first place, then? The only difference is the design, and from what I've seen, web3 designs give power to the users rather than to some centralized mega-corporation.
Why does cybersecurity favour offence over defence?
I think this is due to attack-defense asymmetry. Attackers have to find just one vulnerability; defenders have to stop all attacks. I do, however, agree that very few people ask these questions.
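The asymmetry can be put into numbers: if each individual vulnerability is independently found and patched with some probability, the defender has to win everywhere, while the attacker needs only one miss. A sketch with illustrative figures:

```python
# Attacker-defender asymmetry: the defender must close every hole, the
# attacker needs just one. Numbers are illustrative.
q = 0.99  # probability any single vulnerability is found and patched
for n in (10, 100, 1000):
    p_breach = 1 - q ** n  # chance at least one hole survives
    print(f"{n:>4} potential vulnerabilities: P(breach) = {p_breach:.1%}")
# 10 -> ~9.6%, 100 -> ~63.4%, 1000 -> ~99.996%: defence degrades with scale.
```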
I think Tor would scale no problem if more people used it, but it has the same problem that 8chan and the privacy-focused products and websites have: all the bad people (and those who were banned on most other sites) flock there first, and they create a scary environment or reputation, and that makes normal people not want to go there or use the service. Many privacy-oriented apps have the reputation of being used by criminals and pedophiles.
This problem would go away if there were more places where privacy was valued, since the "bad people" density would go down as the thing in question became more popular.
But I've noticed that everything gets worse over time. In order to have good products, we need new ones to be made. Skype sucked, then people jumped to Discord. Now Discord sucks, so people might soon jump to something new. It's both "enshittification" and incentives.
Taxes go up over time. We get more laws, more rules, more regulations, more advertisement, more ads. The more power a structure has, the worse it seems to treat those inside of it, and the less fair it becomes. Check out this 1999 ad for Google. It's a process similar to corruption, and the only solution seems to be revolutions, or collective agreements to seek out alternatives when things get bad enough. Replacing things is less costly than fixing them, which is probably why deaths and births exist. Nature just starts over in cycles, with the length of each cycle being proportional to the size of the structure (the average life span of companies in America seems to be 15 years, the average life span of nations seems to be about 150 years, and the average life span of a civilization seems to be 336 years).
So, in my mental model of the world, corruption and DNA damage are the same thing, enshittification is similar to cancer, and nothing lives forever because bloat/complexity/damage accumulates until the structure dies. But I can only explain how things are; coming up with solutions is much more difficult.
If you know of any high-leverage ways
This seems like a problem of infinite regress.
"Solving it is easy, just do X"
"The problems is that people don't do X, how do we make them?"
"Just do Y"
"The problem is that people don't do Y, how do we make them?"
"Just do Z"
...
To name some powerful upstream factors, I'd say "increase the social value of growth and maturity". I guess this is what we did in the past, actually. Then people started complaining that our standards were harsh because they made losers low-value, and then they gave power and benefits to the status of victim, and then people started competing at playing the victim rather than at improving their character into something worthy of respect.
By the way, another powerful influence in the worsening of society seems to be large companies who play on social norms, personal needs, and social perception in order to make money. "Real men do ___", "___ is pretentious", "Doing ___ is cringe". Statements like these influence how people behave and what they strive for, since the vast majority of people want to appear in a way that others approve of. We must have fallen a long way as a society, for the only positive pressure I can think of is neo-nazis who encourage others to improve themselves (to read old books and lift weights).
Let's see... People are doing away with core family values, claiming that they get in the way of freedom (but I think it's an immature dislike of responsibility and obligation, with a dash of narcissism which makes people avoid actions that don't benefit them personally). Family bonds also seem to be weakening because of politics; some families split apart over disagreements about who to vote for, and this is a new problem to me, as I don't recall hearing of such things before 2016.
Another factor making things worse is that the media reports on the absolutely stupidest people that they can find, in order to make the "political enemy" look as bad as possible. But this has the side-effect of people overestimating themselves. If somebody felt they were a math genius for knowing basic trig functions, they'd walk around feeling smug, never pushing themselves into university-level maths.
Here's a quote from a book from 2005 (it's a book on dating by the way):
"TO GIVE you an impression of how much things have been dumbed down, consider the Lord of the Rings. Today, people treat it as an epic adult story that is a bit 'too long'. When it was published, it was a simple children's story. A simple children's story is now an adult epic! And is Alice in Wonderland now considered 'literature'? Perish the thought."
YouTube videos are not a bad idea, by the way!
The incentives are not in favour of it
That's a shame. When I search "web 3.0", the results seem to hint that people understand the problem they're trying to fix, and fixing that problem leads to structures which are resistant against giant companies, and this must improve privacy (if it doesn't, then the design will be the same as what it's replacing, just with somebody else in charge; so, over time, corruption will kick in, and we'll be back where we started. The structure itself must be corruption-resistant).
There are people in the world who enjoy privacy and freedom and such, and it's not just criminals. But their products are not as mainstream as they used to be; the only privacy-oriented one I frequently hear about is Protonmail. Mega.io also claims to be pro-privacy... but somehow piracy is against its rules? If it can detect that I upload copyrighted content to my private storage, then it's not private storage. I'm not sure how that works. Many services which claim to be secure and pro-privacy seem to be lying, or at least using these words loosely, or in a relative rather than absolute sense.
All good! I wrote a long response after all.
But what future do you value? Personally, I don't want to decrease the variance of life, but I do want to increase the stability.
In either case, I think my answer is "Invest in the growth and maturation of the individual, not in the external structures that we crudely use to keep people in check"
Can you convince all people who have surveillance powers to not use them
No, but we can create systems in which surveillance is impossible from an information-theoretic perspective. Web 3.0 will likely do this unless somebody stops it, and there are ways to stop it too (you could, for instance, argue that whoever creates these systems is aiding criminals and terrorists).
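For what "impossible from an information-theoretic perspective" can mean at the extreme, the classic example is the one-time pad: with a truly random key as long as the message, used only once, the ciphertext carries zero information about the plaintext no matter how much compute the eavesdropper has. A minimal sketch:

```python
import secrets

def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    """One-time pad: XOR with a random key as long as the message.
    Information-theoretically secure if the key is random, secret, and never reused."""
    key = secrets.token_bytes(len(message))
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ct, key = otp_encrypt(b"meet at dawn")
assert otp_decrypt(ct, key) == b"meet at dawn"
# Without the key, every plaintext of the same length is equally likely.
```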
Anxiety seems to be why individual people prefer transparency of information, but it's not why the system prefers it. The system merely exploits the weakness of the population to legitimize its own growth and to further its control of society.
Converting everyone to a single value system is not easy. But we can improve the average person and thus improve society that way, or we can start teaching people various important things so that they don't have to learn them the hard way. One thing I'd like to see improved in society is parenting; it seems to have gotten worse lately, and it's leading to a deterioration of the average person and thus a general worsening of society.
A society of weak people leads to fear, and fear leads to mistrust, which leads to low-trust societies. By weak, I mean people who run away from trauma rather than overcoming it. You simply need to process uncomfortable information successfully in order to grow; it's not even that difficult, it just requires a bunch of courage. We're all going to die sometime, but not all of us suffer from this idea and seek to run away by drinking or distracting ourselves with entertainment. Sometimes it's even possible to turn unpleasant realities into optimism and hope, and this is basically what maturity and development are.
I think this effect already happened, just not because of AI.
Nietzsche already warned against the possible future of us turning into "the last man", and the meme "good times create weak men" is already a common criticism/explanation of newer cultures. There are also memes going around calling people "soy", and increases in cuckolding and other traits which seem to indicate falling testosterone levels (this is not the only cause, but I find it hard to put a name on the other causes, as they're more abstract).
We're being domesticated by society/"the system". We've built a world where cunning is rewarded over physical aggression, in which standing out in any way is associated with danger, and in which we praise the suppression of human nature, calling it "virtue". Even LW is quite harsh on natural biases.
It's a common saying that modern society and human nature are a poor fit, and that this leads to various psychological problems. But the average man has nowhere to aim his frustrations, and he has no way to fight back. The enemy of the average person is not anything concrete; they're being harassed by things which are downstream consequences of decisions made far away from them, by people who will never hear what their victims think about their ideas. I think this leads to a generation of "broken men". This is unlikely to change the genetics of society, though, unless the most wolf-like of us fight back and get punished for it, or unless those who suffer the least from these changes are the least wolf-like (which I think may be the case).
Dogs survive much better than wolves in our current society, and I think it's fair to say that social and timid people survive better than aggressive people who stand up to that which offends them, and more so now than in the past (one can still direct one's aggression at the correct targets, but this requires a lot more intelligence than aggressive people tend to have).
I think this is likely to continue, though, by which I mean to say that you don't seem incorrect. Did you use AI to write this article? If so, that would explain the downvotes you got. And a personal nitpick with the "Would this even be Bad?" section: "mood stabilizing" is a misleading term; it actually means mood-reducing. Our "medical solutions" to people suffering in society are basically minor lobotomies. By making people less human, they become a better fit for our inhuman system. If you enjoy the thought of being domesticated, you're probably low on testosterone, or otherwise a piece of evidence that human beings have already been strongly weakened.
Predict and control... I'm not sure about that, actually. The world seems to be a complex system, which means that naive attempts at manipulating it often fail. I don't think we're using technology to control others in the sense that we can choose their actions for them, but we are decreasing the diversity of actions that one can take (for instance, anything which can be misunderstood seems to be a no-go now, as strangers will jump in to make sure that nothing bad is going on, as if it were their business to get involved in other people's affairs). So our range of motion is reduced, but it's not locked to a specific direction which results in virtue or anything of the sort.
I don't think that the world can be controlled, and I also think that attempts at controlling it by force are mistaken, as there are more upstream factors which influence most of society. For instance, if your population is Buddhist, they will believe that treating others well is the best thing to do, which I think is a superior solution to placing CCTVs everywhere. The best solutions don't need force, and the ones which use force never seem optimal (consider the war on drugs, the taboo on sexuality, attempts at stopping piracy, etc.). I think the correct set of values is enough (but again, the receiver needs to agree, voluntarily, that they're correct). If everyone can agree on what's good, they will do what's good, even if you don't pressure them into doing so.
I'm also keeping extinction events in mind and trying to combat them, I just do so from a value perspective instead. I'm opposed to creating AGIs, and we wouldn't have them if everyone else were opposed as well. Some people naively believe that AGIs will solve all their problems, and many don't place any special value on humanity (meaning that they don't resist being replaced by robots). But there's also many people like me who enjoy humanity itself, even in its imperfection.
I mean you as the owner of your machine can audit what packets are entering or exiting it
This is likely possible, yeah. But you can design things in such a way that they're simply secure, as it's impossible for them not to be. How do you prevent a lock from being hacked? You keep it mechanical rather than digital. I don't trust websites which promise to keep my password safe, but I trust websites which don't store my password in the first place (they could run it through a one-way hash, as sketched below). Great design makes failure impossible (e.g. atomic operations in banking transfers).
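A minimal sketch of that "never store the password" design (a real system would use a dedicated password hash such as bcrypt or Argon2; this uses the standard library's PBKDF2 purely for illustration):

```python
import hashlib, hmac, os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store only (salt, digest); the password itself is never persisted."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
assert verify_password("hunter2", salt, digest)
assert not verify_password("wrong", salt, digest)
```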
I’m curious about your thoughts on that.
This would likely result in security, but it comes at a huge cost as well. I feel like there are better solutions, and not just for a specific organization, but for everyone. You could speak freely on the internet just 20 years ago (freely enough that you could tell the nuclear launch codes to strangers if you wanted to), so such a state is still near in a sense. Not only was it harder to spy on people back then, fewer people even wanted to do such a thing, and this change in mentality is important as well. I'm not trying to solve the problem in our current environment; I want to manipulate our environment into one in which the problem doesn't exist in the first place. We just have to resist the urge to collect and record everything (this collection is mainly done by malicious actors anyway, and mainly because they want to advertise to you so that you buy their products).

You could go on vacation in a country which considers it bad taste to pry into others' affairs and be more or less immune thanks to that alone, so you don't even need to learn opsec; you just need to be around people who don't know what that word means. You could also use VPNs which keep no logs (if they're not lying, of course), as nothing can be leaked if nothing is recorded. Sadly, the same forces which destroyed privacy are trying to destroy these methods. It's the common belief that we need to be safe, and that in order to be safe we need certainty and control. I don't even think this is purely ideology; I think it's a psychological consequence of anxiety (consider "control freaks" in relationships as well).

Society is dealing with a lot of problems right now which didn't exist in the past, not because they didn't happen, but because they weren't considered problems. And if we don't consider things to be problems, then we don't suffer from them, so the people who are responsible for creating the most suffering in life are those who point at imperfections (like discrimination and strict beauty standards) and convince everyone that life is not worth living until they're fixed.
Finally, people can leak information, but human memory is not perfect, and people tend to paraphrase each other, so "he said, she said" situations are inherently difficult to judge. You have plausible deniability, since nobody can prove what was actually said. I think all ambiguity translates into deniability, which is also why you can sometimes get away with threatening people: "It would be terrible if something bad happened to your family" is a threat, but you haven't actually shown any intent to break the law. Ambiguity is actually what makes flirting fun (and perhaps even possible), but systematizers and people in the autism cluster tend to dislike ambiguity; it never occurs to them that both ambiguity and certainty have pros and cons.
I mean politically
Politics is a terrible game. If possible, I'd like to return society to the state it had before everyone cared too much about political issues. Since this is not an area where reasonable ideas work, I suggest just telling people that dictators love surveillance (depending on the ideology of the person you're talking to, make up an argument for how surveillance is harmful). The consensus on things like censorship and surveillance seems to depend on the ideology one perceives them to support. Some people will say "We need to get rid of anonymity so that we can shame all these nazis!", but that same sort of person was strongly against censorship 13 years ago, because back then censorship was thought to be what the evil elite used to oppress the common man. So the desire to protect the weak resulted in both "censorship is bad" and "censorship is good" being common beliefs, and it's quite easy for the media to force a new interpretation, since people are easily manipulated.
By the way, I think "culture war" topics are against the rules, so I can only talk about them in a superficial and detached manner. Vigilantes in the UK are destroying cameras meant to automate fining people, and as long as mentalities/attitudes like this dominate (rather than the belief that total surveillance somehow benefits us and makes us safe), I think we'll be alright. But thanks to technological development, I expect us to lose our privacy in the long run, for the simple reason that people will beg the government to take away their rights.
My previous criticism was aimed at another post of yours; it likely wasn't your main thesis. Some nitpicks I have with it are:
"Developing AGI responsibly requires massive safeguards that reduce performance, making AI less competitive" you could use the same argument for AIs which are "politically correct", but we still choose to take this step, censorsing AIs and harming their performance, thus, it's not impossible for us to make such choices as long as the social pressure is sufficiently high.
"The most reckless companies will outperform the most responsible ones" True in some ways, but most large companies are not all that reckless at all, which is why we are seeing many sequels, remakes, and clones in the entertainment sector. It's also important to note that these incentives have been true for all of human nature, but that they've never mainfested very strongly until recent times. This suggests that that the antidote to Moloch is humanity itself, good faith, good taste and morality, and that these can beat game theoritical problem which are impossible when human beings are purely rational (i.e. inhuman).
We're also assuming that AI becomes useful enough for us to disregard safety, i.e. that AI provides a lot of potential power. So far, this has not been true. AIs do not beat humans; companies are forcing LLMs into products, but users did not ask for them. LLMs seem impressive at first, but once you get past the surface, you realize that they're somewhat incompetent. Governments won't be playing around with human lives before these AIs provide large enough advantages.
"The moment an AGI can self-improve, it will begin optimizing its own intelligence."
This assumption is interesting: what does "intelligence" mean here? Many seem to just give these LLMs more knowledge and then call them more intelligent, but intelligence and knowledge are different things. Most "improvements" seem to lead to higher efficiency, but that's just them being dumb faster or for cheaper. That said, self-improving intelligence is a dangerous concept.
I have many small objections like this to different parts of the essay, and they do add up, or at least add additional paths to how this could unfold.
I don't think AIs will destroy humanity anytime soon (say, within 40 years). I do think that human extinction is possible, but I think it will be due to other things (like the low birthrate and its economic consequences. Also tech. Tech destroys the world for the same reasons that AIs do, it's just slower).
I think it's best to enjoy the years we have left instead of becoming depressed. I see a lot of people like you torturing themselves with x-risk problems (some people have killed themselves over Roko's basilisk as well). Why not spend time with friends and loved ones?
Extra note: There's no need to tie your identity together with your thesis. I'm the same kind of autistic as you. The futures I envision aren't much better than yours, they're just slightly different, so this is not some psychological cope. People misunderstand me as well, and 70% of the comments I leave across the internet get no engagement at all, not even negative feedback. But it's alright. We can just see problems approaching many years before they're visible to others.