
JohnBuridan's Comments

mingyuan's Shortform

I think about the mystery of spelling a lot. Part of it is that English is difficult, of course. But still, why does my friend who reads several long books a year fail so badly at spelling? He has struggled since 2nd and 3rd grade, when his mom would take extra time out just to make sure he learned his spelling words well enough to pass.

I have never really had a problem with spelling, and I seem to use many methods when I think about spelling explicitly: sounding it out, picturing the word, remembering it as a chunk, recalling the language of origin to figure out diphthongs. I notice that students who are bad at spelling frequently have trouble learning foreign languages; maybe the correlation points to a common cause?

Causal Abstraction Intro

Strongly agree that causal models need lots of visuals. I liked the video, but I also realize I understood it because I already know what Counterfactuals and Causal Inference is. I think that is actually a fair assumption given your audience and the goals of this sequence. Nonetheless, I think you should provide some links to the required background information.

I am not familiar with circuits or fluid dynamics, so those examples weren't especially elucidating to me. But I think that as long as a reader understands one or two of your examples, it is fine. Part of making this judgment depends upon your own personal intuition about how labor should be divided between author and reader. I am fine with high labor, and making a video is, IMO, already quite difficult.

I think you should keep experimenting with the medium.

We run the Center for Applied Rationality, AMA

What have you learned about transfer in your experience at CFAR? Have you seen people gain the ability to transfer the methods of one domain into other domains? How do you make transfer more likely to occur?

We run the Center for Applied Rationality, AMA

I'm sure the methods of CFAR have wider application than to Machine Learning...

The Tails Coming Apart As Metaphor For Life

“The Tails Coming Apart as a Metaphor for Life” should be retitled “The Tails Coming Apart as a Metaphor for Earth since 1800.” Scott does three things: 1) he notices that happiness research is framing-dependent, 2) he notices that happiness is a human-level term, but not specific at the extremes, and 3) he considers how this relates to deep-seated divergences in moral intuitions becoming ever more apparent in our world.

He hints at why moral divergence occurs with his examples. His extreme case of hedonic utilitarianism, converting the entire mass of the universe into nervous tissue experiencing raw euphoria, represents a ludicrous extension of the realm of the possible: wireheading, methadone, subverting factory farming. Each of these is dependent upon technology and modern economies, and presents real ethical questions. None of these were live issues for people hundreds of years ago. The tails of their rival moralities didn’t come apart – or at least not very often or in fundamental ways. Back then Jesuits and Confucians could meet in China and agree on something like the “nature of the prudent man.” But in the words of Lonergan that version of the prudent man, Prudent Man 1.0, is obsolete: “We do not trust the prudent man’s memory but keep files and records and develop systems of information retrieval. We do not trust the prudent man’s ingenuity but call in efficiency experts or set problems for operations research. We do not trust the prudent man’s judgment but employ computers to forecast demand,” and he goes on. For from the moment VisiCalc primed the world for a future of data aggregation, Prudent Man 1.0 has been hiding in the bathroom bewildered by modern business efficiency and moon landings.

Let’s take Scott’s analogy of the Bay Area Transit system entirely literally and ask the mathematical question: when do parallel lines come apart or converge? Recall Euclid’s Fifth Postulate, the one amounting to the claim that parallel lines never meet. For almost two thousand years no one could prove it from the other postulates. It turned out to be neither true nor false: parallel lines come apart or converge in most spaces. Alas, only on a flat plane in ordinary Euclidean space, ℝ³, do they obey Euclid’s Fifth and stay equidistant.
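To make the geometric point concrete, here is a minimal sketch of parallel lines failing to stay equidistant once the space is curved (the sphere, the Earth-sized radius, and the meridian setup are my own illustration, not anything from Scott's post): two meridians both cross the equator at right angles, so at the equator they are locally parallel, yet the gap between them shrinks to nothing at the pole.

```python
import math

# Gap between two "parallel" meridians on a sphere, measured along the
# circle of latitude. Both meridians cross the equator at right angles.
R = 6_371.0                        # sphere radius in km (Earth-sized, purely illustrative)
delta_lambda = math.radians(1.0)   # 1 degree of longitude between the meridians

for lat_deg in (0, 30, 60, 89.9):
    gap = R * delta_lambda * math.cos(math.radians(lat_deg))
    print(f"latitude {lat_deg:5.1f} deg: gap ≈ {gap:6.2f} km")

# The gap shrinks from ~111 km at the equator to ~0 near the pole: on a
# positively curved surface, lines that start out parallel converge. On a
# saddle-shaped (hyperbolic) surface the analogous gap grows instead.
```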

So what is happening when the tails come apart in morality? Even simple technologies extend our capacities, and each technological step extending the reach of rational consciousness into the world transforms the shape of the moral landscape which we get to engage with. Technological progress requires new norms to develop around it. And so the normative rules of a 16th century Scottish barony don’t become false; they become incomparable.

Now the Fifth postulate was true in a local sense, being useful for building roads, cathedrals, and monuments. And reflective, conventional morality continues to be useful and of inestimable importance for dealing with work, friends, and community. However, it becomes the wrong tool to use when considering technology laws, factory farming, or existential risk. We have to develop new tools.

Scott concludes that we have mercifully stayed within the bounds where we are able to correlate the contents of rival moral ideas. But I think it likely that this is getting harder and harder to do each decade. Maybe we need another, albeit very different, Bernhard Riemann to develop a new math demonstrating how to navigate all the wild moral landscapes we will come into contact with.

We Human-Sized Creatures access a surprisingly detailed reality with our superhuman tools. May the next decade be a good adventure, filled with new insights, syntheses, and surprising convergences in the tails.

Voluntourism

Well said, but there are some things I think must be added. I think it is right to compare voluntourism to regular tourism and to judge it on its goal of increasing "local" cooperation. By your account, voluntourism should have the twin effects of increasing GDP (or the general success and efficient cooperation) of the members of the church group by a few percentage points and increasing the level of donations over many years to the voluntoured location.

When doing the cost-benefit math for these voluntourism projects, we should actually write off the cost of travel, because in our "voluntourism" model we assume the travel was going to happen anyway. If that's the case, then voluntourism is almost by definition a net positive. So I agree we shouldn't be too negative about it.

Nonetheless, I don't think we should call voluntourism effective altruism. For something to be called effectively altruistic, we should be forced to take the costs of the program into account, and the cost of a week and a half in Haiti is $2,000 per person. If we assume that a person experiences a financial gain of 2% per year because of the increased group cohesion back in the States, that person would have to be making $100k per year just for that gain to cover the cost of the trip (2% of $100k = $2,000). If the person makes more than that and donates the additional gains to the poor of Haiti, then it pays off positively both for him and for the people of Haiti.
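For what it's worth, here is that break-even arithmetic spelled out in a few lines of Python (the $2,000 trip cost and the 2% cohesion gain are just the assumptions stated above, not measured quantities):

```python
# Back-of-the-envelope break-even calculation for the voluntourism trade-off.
# Both inputs are assumptions from the comment above, not measured data.
trip_cost = 2_000          # USD: roughly a week and a half in Haiti, per person
cohesion_gain_rate = 0.02  # assumed yearly financial gain from increased group cohesion

# Income at which the yearly gain exactly covers the cost of the trip.
break_even_income = trip_cost / cohesion_gain_rate
print(f"Break-even annual income: ${break_even_income:,.0f}")  # -> $100,000

# Above this income, the surplus gain can itself be donated, so the trip pays
# off for both the traveler and the recipients; below it, the trip cost is
# never recouped within the year.
```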

I think that under these assumptions, voluntourism only reaches the threshold of being effective when very rich people are doing it. When you are giving tens to hundreds of thousands of dollars away per year anyway, voluntouring does not make a big percentage difference to your budget, but it will likely help you give to more effective causes.

Tabletop Role Playing Game or interactive stories for my daughter

A friend gave me Kingdom (http://www.lamemage.com/kingdom/) a few years ago, and I thought it was quite good for just this purpose. He used it for his kids; I'm planning on using it for mine and to generate stories for them.

I like that it is a roleplaying system in which mind modelling takes high priority. It is story-driven, communication-oriented, and based upon having a community. I really like it. There are no dice. Here are some examples of play:

https://www.meetup.com/Story-Games-Seattle/messages/boards/thread/48963126

https://web.archive.org/web/20190212225741/https://plus.google.com/116492722966699295346/posts/1ci5W1FnAQw

https://rpggeek.com/thread/1259330/cactus-flats


Is Rationalist Self-Improvement Real?

There are so many arguments trying to be had at once here that it's making my head spin.

Here's one. What do we mean by self-help?

I think by self-help Scott means becoming a psychologically well-adjusted person. But what I think Jacobian means by "rationalist self-help" is coming to a gears-level understanding of how the world works as an aid to becoming well-adjusted. So while Scott is right that we shouldn't expect rationalist self-help to be significantly better than other self-help techniques for becoming a well-adjusted person, Jacobian is right that rationalist self-help is an attempt to become both a well-adjusted person AND a person who participates in developing an understanding of how the world works.

So perhaps you want to learn how to navigate the space of relationships, but you also have this added constraint that you want the theory of how to navigate relationships to be part of a larger understanding of the universe, and not just some hanging chad of random methods without satisfactory explanations of how or why they work. That is to say, you are not willing to settle for unexamined common sense. If that is the case, then rationalist self-help is useful in a way that standard self-help is not.

A little addendum: this is not a new idea. Socrates thought of philosophy as a way of life, and tried to found a philosophy which would not only help people discover more truths, but also make them better, braver, and more just people. Stoics and Epicureans continued the tradition of philosophy as a way of life. Since then, there have always been people who have made a way of life out of applying the methods of rationality to normal human endeavors, and human society has pretty much always been complicated enough to marginally reward them for the effort.

Conversational Cultures: Combat vs Nurture (V2)

In the Less Wrong community, Anti-Nurture commenters are afraid of laxity with respect to the mission, while Anti-Combat commenters are afraid of a narrow dogmatism infecting the community.

Conversational Cultures: Combat vs Nurture (V2)

I read this post when it initially came out. It resonated with me to such an extent that even three weeks ago, I found myself referencing it while counseling a colleague on how to deal with a student whose heterodoxy had caused the colleague to make isolated demands for rigor of that student.

The author’s argument that Nurture Culture should be the default still resonates with me, but I think there are important amendments and caveats that should be made. The author said:

"To a fair extent, it doesn’t even matter if you believe that someone is truly, deeply mistaken. It is important foremost that you validate them and their contribution, show that whatever they think, you still respect and welcome them."

There is an immense amount of truth in this. Concede what you can when you can. Find a way to validate the aspects of a person’s point which you can agree with, especially with the person you tend to disagree with most or are more likely to consider an airhead, adversary, or smart-aleck. This has led me to a great amount of success in my organization. As Robin Hanson once asked pointedly, “Do you want to appear revolutionary, or be revolutionary?” Esse quam videri.

Combat Culture is a purer form of the Socratic Method. When we have a proposer and a skeptic, we can call this the adversarial division of labor: you propose the idea, I tell you why you are wrong. You rephrase your idea. Rinse and repeat until the conversation reaches one of three stopping points: aporia, agreement, or an agreement to disagree. In the Talmud example, both are proposers of a position and both are skeptics of the other person's interpretation.

Nurture Culture does not bypass the adversarial division of labor, but it does put constraints on it - and for good reason. A healthy Combat Culture can only exist when a set of rare conditions is met. Ruby's follow-up post outlined those conditions. But Nurture Culture is how we still make progress despite real-world conditions like needing consensus, not everyone being equal in status or knowledge, or some people having more skin in the game than others.

So here are some things from the original article that I would give more emphasis, after about a hundred iterations of important disagreements at work and in intellectual pursuits since 2018:

1. Nurture Culture assumes knowledge and status asymmetries.

2. Nurture Culture demands a lot of personal patience.

3. Nurture Culture invites you to consider what even the most wrong have right.

4. Sometimes you can overcome a disagreement at the round table by talking to your chief adversary privately and reaching a consensus, then returning for the next meeting on the same page.

While these might be two cultures, it's helpful to remember that there are cultures to either side of these two: Inflexible Orthodoxy and Milquetoast Relativism. A Combat Culture can harden into a dominant force that owns the culture through weaponized arguments, while a Nurture Culture can become so limply committed to its nominal goals that no one speaks out against anything that runs counter to the mission.
