I think the key is to realize that Chapman is trying to promote Buddhism without saying so explicitly, presenting it as if it were a spontaneous answer to questions that people are asking today.
In the old texts, Buddha promoted his way as a middle path between two (strawman?) alternatives. Chapman reiterates the same idea in modern words, calling those alternatives "eternalism" and "nihilism".
It is an old rhetorical trick. In Western philosophy, it is called "thesis - antithesis - synthesis", or as I call it: "strawman - opposite strawman - look I am the only reasonable person in this room".
Chapman seems very confident that he knows the answers to most if not all questions of ethics (and he contemptuously dismisses most moral philosophers and their work),
I would say that Chapman seems very confident that Buddha knew the answers to everything. And of course, Buddha couldn't know about the moral philosophy that happened later, so there is neither agreement nor disagreement.
I think Less Wrong needs an emoji for "yet another sneaky attempt to promote Buddhist dogma".
One problem with winning is that you need to be more specific: "winning at what?" And if you try to write down the list (for a human being; let's ignore the AI for a moment), it turns out to be quite long.
To win at life, you probably want to be rich, but you also want to be fit, you want to be smart... but the time you spend earning money is the time you can't spend exercising, and the time you spend exercising is the time you can't spend learning, and you should also spend some time socializing, thinking strategically about your plans, maybe meditating, you should definitely get enough sleep, and if you want to eat healthy food that is not too expensive, that probably means you should learn to cook... and soon the list is too long. The day only has 24 hours, so it takes a lot of discipline to accomplish all of this without burning out, even under optimal conditions (physical health, mental health, supportive family, some safety net).
It is much easier to win at one specific thing, for example to be an excellent student, while your parents take care of the money and food and strategic planning. Then you can spend 8 hours on the project, and the remaining 8 hours having fun (which is important for your mental health, and makes it all sustainable).
Some people get great at one thing by sacrificing everything else. For example, they create big successful companies and make tons of money... but their partner divorces them, their kids don't talk to them, and at some moment their health collapses and they die. Or they spend their life in poverty, focusing obsessively on their art that will enter the textbooks one day... but again, their family suffers, etc.
Alternatively, you can try the middle way, where you try to get good-but-not-great at everything. That's kinda where I am: somewhat above average in most things, excellent in nothing. I am not even sure how I feel about it: when I look at all kinds of problems that people around me have, I am happy that I am not them; when I think about my ambitions, I feel like I wasted my entire life.
Now, instead of an individual human, consider a group. By the level of seriousness, there are two basic kinds of groups: hobbies and jobs. Hobbies are what people do in their free time, after they have spent most of their energy on their jobs, families, etc. Some people are obsessed with their hobbies, but that doesn't necessarily translate into quality; people who have both the obsession and the quality are rare. People with priorities other than their hobby often disappear from the group when something with a higher priority appears in their private life; and even before that, they often don't have enough energy left for the group activities, so the group productivity is low.
To succeed, most groups need to become jobs: at least some members need to get paid decent money for working for the group. (Not necessarily all members, not even most of them; some groups are okay with two or three paid people who coordinate dozens of volunteers.) This gives you members who can devote 8 hours a day to advancing the group goals, sustainably. On the other hand, in addition to the intrinsic group goals, you now also have a new task: to secure money for these members (and to do the accounting, etc.), which can actually cost you a large fraction of this extra time (applying for grants, preparing documentation for donors, even more complex accounting, etc.). You also need to recruit new members, solve conflicts between existing members, take care of your reputation (PR), etc.
And this all doesn't happen in a vacuum: if you have goals, you probably also have enemies -- people whose goals oppose yours (no matter how good and prosocial your goals are; some people probably benefit from the existing problems and would hate to see them fixed), or simply people who compete for the same resources (apply for the same grants, recruit members from the same population), or even people who hate you for no good reason, just because something about you rubs them the wrong way. (And this all optimistically assumes that you have never done anything wrong; no mistake ever. Otherwise, also include people who want to punish you; some of them quite disproportionately.) Also, people who see that you have resources and would like to take them away from you, by theft or blackmail.
The goal of the group can require many different tasks to be done: research what causes the problem, research how to fix the problem, do the things that you are allowed to do, lobby for changing the rules so that you can do more, explain the situation to people so that you get them on your side (while your enemies are trying to turn them against you). Short-term tasks vs long-term strategies. Again, your time and resources are limited, the more you spend on X, the less you can spend on Y.
...oh my, I make it sound so complicated as if nothing can ever succeed. That wouldn't be exactly true. But the filters along the way are brutal. You need to do many things right and you need to get lucky. Most projects fail. Most successful projects succeed small. Many good projects fall apart later, or get subverted.
I am trying to offer a "glass half full" or maybe even "glass 90% full" perspective here. Sure, nature doesn't grade you on a curve. The sperm that only gets 99.99% of the way to the egg is wasted. From that perspective, we probably lose, and then we probably all die. But I don't think that we are losing because we keep making obvious stupid mistakes. I think we are actually doing surprisingly many things right. It's just that the problem is so difficult that you can do many things right and still lose in the end. :( Because no matter how many filters you have already passed, the next filter still eliminates a majority of contestants. And we still have at least three more filters ahead of us: (1) the major players need to actually care about alignment, (2) they need to find a way to cooperate, and eliminate those who don't, (3) and if they try to align the AI, they have to actually succeed. Each one of these alone sounds unlikely.
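Just to illustrate the arithmetic with made-up numbers (not a real estimate of anything): if each of those three filters independently had, say, a 30% chance of going well, then the chance of passing all three would be about 0.3 × 0.3 × 0.3 ≈ 0.03, i.e. roughly 3%. Several "maybe" steps multiplied together quickly become a small number, even when no single step looks hopeless.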
But also, for full perspective, let's look back and see how many filters we have already passed. A decade and a half ago, you get one smart guy called Eliezer, worrying about a thing that no one else seems to care about. And his goal is to convince the entire planet to do it right, otherwise we all die (but at that moment, he seems to be the only one who believes that). At what odds would you bet your money that, starting from there, a few years later there will be a global community, a blog that publishes research on that topic (and many other things, often unrelated) almost every day, there will be books, academic courses, and organizations focused on that idea, politicians will discuss it on TV... and the "only" remaining problem will be that the most advanced tech companies on the planet will merely pay lip service to his ideas instead of seriously following them? Yep, even that last point is sufficient to kill us all, but still, isn't it impressive how far we actually got, despite the odds?
don't spend 3+ years on a PhD (cognitive rationality) but instead get 10 other people to work on the issue (winning rationality). And that 10x's your efficiency already.
This seems to assume that there is a pool of extremely smart and conscientious and rational people out there, with sufficient mathematical and technical skills, willing to bet their careers on your idea if you explain it to them the right way... and you only need to go there and recruit 10 of them for the cause.
I think that such people are rare, and I suspect that most of them have already heard about the cause. Workshops organized by CFAR (1, 2) are at least partially about recruiting for the cause. Books like Superintelligence can reach more people than individual recruitment. (Also, HP:MoR.)
I think that the pyramid strategy (don't work on the cause, instead recruit other people to work on the cause) would seem fishy to the people you are trying to recruit. Like, why would I bet my academic career on a field where no one wants to work... not even you, despite caring a lot and having the skills? Actually doing the PhD and writing a few papers will help to make the field seem legitimate.
To pick an extreme example, who do you think has more capacity to solve alignment, Paul Christiano, or Elon Musk?
Have you seen what Elon Musk has been doing with Grok recently? He definitely has the resources, but I don't know if there is a person on this planet who can make Elon Musk listen to them and take alignment seriously. Especially now that his brain is drunk with politics.
(This is like discussing that e.g. Putin has enough money so that he could feed all the starving kids in Africa. Yeah, he probably does, but it's irrelevant, because this is never going to happen anyway.)
As far as I can tell cognitive rationality helps but winning seems to be mostly about agency and power really. So maybe LW should talk more about these (and how to use them for good)?
Sure, agency and power are good. If you think there is a low-hanging fruit we should pick, please explain more specifically. Agency, we have discussed a lot already (1, 2, 3), but maybe there is an important angle we have missed, or something that needs repeating. Power is a zero-sum game that many people want to play, so I doubt there is any low-hanging fruit there.
There is a guy called SBF who seemed to try this way really hard, and although many people admired him at the time, it didn't end well, and probably did a lot of harm. (Also, the Zizians were quite agenty.)
tl;dr -- be specific; if you think we are making trivial mistakes, you are probably wrong
What if, instead of a flash of memories, the brain at death enters a recursive simulation of life
Excuse me, but is there actually any reason to consider this hypothesis? I don't have much experience with dying, but even the "flash of memories", despite being a popular meme, seems to have little evidence behind it (feel free to correct me if I am wrong). So maybe you are looking for an explanation of something that doesn't even exist in the first place.
Assuming that the memories are flashing, "recursive simulation" still seems like a hypothesis needlessly more complicated than "people remember stuff". Remembering stuff is... not exactly a miraculous experience that would require an unlikely explanation. Some situations can trigger vivid memories, e.g. sounds, smells, emotions. There may be a perfectly natural explanation why some(!) people would get their memories triggered in near-death situations.
Third, how would that recursive simulation even work, considering what we know about physics? Does the brain have enough energy to run a simulation of an entire life, even at low resolution? What would it even mean to run a simulation: is it just remembering everything vividly as if it were happening right now, or do you get to make different choices and then watch decades of your life in a new timeline? Has anyone even reported something like this happening to them?
tl;dr -- you propose an impossible explanation for something that possibly doesn't even exist. why?
The value of a generalist with shallow knowledge is reduced, but you get a chance to become a generalist with relatively deep knowledge of many things. You already know the basics, so you can start the conversation with LLMs to learn more (and knowing the basics will help you figure out when the LLM hallucinates).
If I understand it correctly, the argument against bio doom is that humans can defend themselves against viruses in the air using air filtering, etc.?
Well, in order for that to work, those humans would need to be prepared. Yes, there will be many preppers. Possibly many more than today, because if the technology and economy advance, prepping should be cheaper. Still, that would be less than 1% of the population, I guess. I mean, it's still only 2027, right? Half of the population is probably still busy debating whether AI has a soul, or whether it is capable of creating real art. And the other half is sexting their digital boyfriends and girlfriends...
This seems to belong to the category of "problems that you could solve in 5 minutes of thinking, and yet it somehow seems plausible that a vastly superhuman intelligence capable of managing planetary economy and science would be unable to come up with a solution". The obvious solution is "strategic preparation + multiple lines of attack".
Strategic preparation includes:
Multiple lines of attack: if you can release the deadly virus all around the world at the same time, you might simultaneously also put poison in the drinking water, switch all domestic appliances to killer mode, etc. And immediately release the drones to kill the survivors.
If someone still survives, hidden somewhere in a bunker, that's no big deal. The moment they try to do anything, they will reveal themselves, and get a bomb thrown at them. If they somehow keep surviving underground, undetected, for decades... who cares. It's not like they can build a technology comparable to the one outside, without getting detected.
The most optimistic outcome is that a group of futuristic hyper-preppers survives; their bodies are covered by the latest defensive technology, they produce/recycle their own food and water and air, they even have a smaller aligned/obedient AI, etc. Well, if they are visible, they get a nuke. If they hide underground or fly to the Moon... good luck building an alternative stronger economy, because they will need it to win the war.
Similar here. For me, the greatest benefit is having someone I can discuss the problem with. A rubber duck, Stack Exchange, pair programming -- all in one. As a consequence, not only do I implement something, but I also understand what I did and why. (Yeah, in theory, as a senior developer, I should always understand what I do and why... but there is a tradeoff between deep understanding and time spent.)
So, from my perspective, this is similar to saying that writing automated tests only slows you down.
More precisely, I do find it surprising that developers were slowed down by using AI. I just think that in the longer term it is worth using it anyway.
It is the curse of being human (although for most humans the stakes are much lower). It is also one of the main objections against consequentialism as a practical guide to everyday action -- often, we have no idea how things will turn out. Even the drowning child you save may grow up to be the next Hitler.
I heard that when people are in therapy, their self adapts to the school of psychotherapy. For example, you start getting Freudian dreams if you are in Freudian therapy, but you start getting Jungian dreams instead if you are in Jungian therapy.
This seems to support the hypothesis that when we think we have discovered something deep inside us, often we have actually constructed it to fit our preconceptions.
(I suspect that Buddhism also mostly works this way. When Buddhists say that they can verify the truth of all Buddha's words by introspection... on one hand, yes, they can; on the other hand, if they instead believed in Jesus, they could verify that just as well. Asking yourself is like asking an LLM: whatever you already believe, it will confirm.)
The rare part is the common knowledge and normalization
Trying to suggest that someone else's bad mood might be caused by their period would be considered horribly sexist by most people. So you can only hope that they might notice it themselves... or very gently and non-specifically point towards the general idea of hangriness and hope that they can connect the dots...
And this is more likely to work if the concept is frequently used common knowledge.
The existing system -- rewards writing a lot of content that is barely worth reading.
Your proposal -- if you happen to write something extraordinarily good on your first attempt, such that you don't trust yourself to clear such a high bar again on average... it incentivizes you to never post anything again.
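A toy example, assuming the proposal effectively scores authors by their average karma per post: if your first post got 100 karma and you expect the next one to get around 40, then posting it would drag your average from 100 down to 70, so under that metric the "rational" move is to stay silent, even though the 40-karma post would still have been a net positive for the readers.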
I spent some time thinking about this, writing, and deleting again... I think the key problem is that an optimal equation should also somehow include how many people saw the comment. Like: "three people saw that comment, three people upvoted" sounds great; "a hundred people saw that comment, five upvoted" sounds like mostly a waste of time. You can't derive this from votes alone.
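A minimal sketch of what I mean, using a hypothetical "views" count that the current system doesn't give us: something like score ≈ upvotes / views. Then 3/3 = 1.0 looks great, while 5/100 = 0.05 looks like mostly a waste of readers' time, even though 5 > 3 in raw karma. (In practice you would want some correction for small view counts, e.g. a lower confidence bound instead of the raw ratio, but that's the basic idea.)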