All Comments

We run the Center for Applied Rationality, AMA

I don’t think this works.

A carpenter might say that his knowledge is trade knowledge and not scientific knowledge, and, when challenged to provide some evidence that this supposed “trade knowledge” is real and worth something, may point to the chairs, tables, cabinets, etc., which he has made. The quality of these items may easily be examined by someone with no knowledge of carpentry at all. “I am a trained and skilled carpenter, who can make various useful things for you out of wood” is a claim which is very, very easy to verify.

But as I understand it, CFAR has considerable difficulty providing, for examination, any equivalent of a beautifully-made oak cabinet. This makes claims of “trade knowledge” rather more dubious.

Propagating Facts into Aesthetics

In addition to modifying the perceived beauty or distastefulness of a given concept, there are knobs you can turn related to the concepts themselves: nudging, splitting, merging, or even destroying (and assigning all remaining aesthetic value to other, related concepts).

Vaccine... Help? Deprogramming? Something?

I'd recommend, for each argument, finding someone who makes that argument online and posting it to Skeptics Stack Exchange. I used to do that years ago and found people were very helpful in doing research and finding good sources on a wide variety of topics.

We need to revisit AI rewriting its source code

I'm confused. I read you as suggesting that self-modifying code has recently become possible, but I think that self-modifying code has been possible for about as long as we have had digital computers?

What specific things are possible to do now that weren't possible before, and what kind of AGI-relevant questions does that make testable?

Vaccine... Help? Deprogramming? Something?

Vaccines sometimes kill people. Several serious diseases that once killed many more people are, we're told, a much smaller risk now. At some point, you'd think people would want to selfishly avoid vaccinating so much. And that's what we see happening. There's a lot of rationalization going on.


Vaccine... Help? Deprogramming? Something?

Yeah… that is not what I mean at all. You want a site? What about this one, or SSC? I hardly think you need any research papers or meta-analyses (although you can most certainly find them).

Instead, if what you need is to "beat" your uncle by telling him, “You see… I've got this paper right here, Golden et al., which indicates that the aluminum and thimerosal content within vaccines is not harmful at all...”, then you need something else, and that is not the solution to your problem. If what you are involved in is a domination game, right here, right now, in the middle of Christmas, then the solution is to pass! And of course, to vaccinate your children, and persuade everyone to vaccinate their children (or, you know… give them a pass on the gene pool? — I joke, of course.)

For next year your uncle will come and say, “The earth? Yeah, it's flat.” You will go wide-eyed, you will shrug and say, “No, uncle, not again!” And then you will have arrived at the right solution.

Vaccine... Help? Deprogramming? Something?

Google scholar + Sci Hub should get you 95% of what you need.

We run the Center for Applied Rationality, AMA

(I'm reminded of an old LW post, that I can't find, about Eliezer giving some young kid (who wants to be a writer) writing advice, while a bunch of bystanders signal that they don't regard Eliezer as trustworthy.)

You're thinking of You're Calling *Who* A Cult Leader?

And from that exploration, this IDC-process seems to work well, in the sense of getting good results.

An important clarification, at least from my experience of the metacognition, is that it's both getting good results and not triggering alarms (in the form of participant pushback or us feeling skeevy about doing it). Something that gets people to nod along (for the wrong reasons) or has some people really like it and other people really dislike it is often the sort of thing where we go "hmm, can we do better?"

Values Assimilation Premortem

Welcome!

Not sure how relevant my advice can be, because I was never in your position. I was never religious. I grew up in a communist country, which is kinda similar to growing up in a cult, but I wasn't a true believer in that either.

My prediction is that in the process of your change, you will fail to update on some points, and overcompensate on other points. Which is okay, because growing up happens in multiple iterations. What you do wrong in the first step, you can fix in the second one. As long as you keep some basic humility and admit that you still may be wrong, even after you got rid of your previous wrong ideas. Your current position is the next step in your personal evolution; it does not have to be the final step.

Here are some potential mistakes to avoid:

  • package fallacy: "either the Christianity I grew up in is 100% correct, or rationalism as I understand it today is 100% correct", or "either everything I believed in the past was 100% correct, or everything I believed in the past was 100% wrong". Belief packages are collections of statements, some of them dependent on each other, but most of them independent. There is nothing wrong with choosing A and rejecting B from package 1, and choosing X and rejecting Y from package 2. Each statement is true or false individually. You can apply this to religious beliefs, political beliefs, beliefs of rationalists, etc. (This does not imply the fallacy of grey; some packages contain more true statements than others. You can still sometimes find a gem in a 90% wrong package, though.)
  • losing your cool. What is true is already true; and it all adds up to normality. Don't kill yourself after reading about quantum immortality, don't freak out after reading about the basilisk, don't do anything crazy just because the latest article on LW or someone identifying as a rationalist told you so. Don't burn bridges. Do reductionism properly: after learning that the apple is actually composed of atoms, you can still eat it and enjoy its taste. Evolution is a fact, but the goals of evolution are not your goals (for example, evolution doesn't give a fuck about your suffering).
  • identification and tribalism. "Rationalists" are a tribe; rationality is not. Rationality does not depend on what rationalists believe; the entire point of rationality is doing things the other way round: changing your beliefs to fit the facts, not ignoring facts to fit in better. What is true is true regardless of what rationalists believe.
There's also a larger meta-issue here. I have a lifelong wholeness project of fighting perfectionism. It's so ingrained in me that I'm pretty confident that fight will be lifelong for me. In that vein, this whole exercise could be seen as just another attempt to Do it Right The First Time™ and Never Make a Mistake®. So I do need to give myself a little freedom to screw this up, or I will really screw it up the way that I screwed up every relationship I never had before this. (Yes, I actually never dated anyone before this. I blame it on fear, shame & perfectionism + Evangelical sexual ethics taken a bit too far.)

Go one step more meta, and realize that perfectionism itself is imperfect (i.e. does not lead to optimal outcomes in life). Making conclusions before gathering data is a mistake. It is okay to do the right thing, as long as it is actually the right thing instead of something that merely feels right (such as following the perfectionist rituals even when they lead to suboptimal outcomes). Relax (relaxation improves your life, how dare you ignore that).

Copying your partner's opinions feels wrong, but hey, what can I do here? Offer you my opinion to copy instead? Heh.

If it's an issue that I don't have strong priors on and is not likely to significantly influence any major decisions I make with regard to her, I might as well just go with the flow and not complicate things unnecessarily.

You might also adopt the position "I don't know". It is a valid position if you really don't know. Also, the point of having opinions is that they may influence your decisions. If something is too abstract to have an impact on anything, ignoring it may be the smart choice.

Vaccine... Help? Deprogramming? Something?

What about the idea that claims without evidence can be dismissed without evidence?

Evidence for safety of vaccines? Billions of safe vaccinations in the past, widely accepted as safe, solid scientific basis, etc.

Evidence for your Uncle's claims?

Vaccine... Help? Deprogramming? Something?

Do you know any sites where I can find research papers? (Or at least the names and authors; libgen is a thing, after all.)

Funk-tunul's Legacy; Or, The Legend of the Extortion War

Based on the quote from Jessica Taylor, it seems like the FDT agents are trying to maximize their long-term share of the population, rather than their absolute payoffs in a single generation? If I understand the model correctly, that means the FDT agents should try to maximize the ratio of FDT payoff : 9-bot payoff (to maximize the ratio of FDT:9-bot in the next generation). The algebra then shows that they should refuse to submit to 9-bots once the population of 9-bots gets low enough (Wolfram|Alpha link), without needing to drop the random encounters assumption.


(It still seems like CDT agents would reason the same way, given the same goals, though?)
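To make the ratio argument concrete, here is a toy sketch under assumed payoffs: a demand-10 game with three types (CDT agents who always yield to 9-bots, 9-bots who always demand 9, and FDT agents who choose whether to yield), with uniformly random encounters. These assumptions are illustrative and may not match the commenter's exact Wolfram|Alpha model, but under them a refusal threshold does fall out of maximizing the payoff ratio:

```python
# Toy sketch (assumed payoffs, not necessarily the commenter's exact model):
# demand-10 game; CDT agents always yield, 9-bots demand 9, FDT agents
# choose whether to yield to 9-bots. Encounters are uniformly random.
def payoffs(c, f, n, fdt_yields):
    """Per-encounter expected payoffs, with population shares c + f + n = 1."""
    fdt = 5 * (c + f) + (1 if fdt_yields else 0) * n  # 5-5 split vs. non-9-bots
    ninebot = 9 * c + (9 * f if fdt_yields else 0)    # 9-bot vs. 9-bot gets 0
    return fdt, ninebot

def should_refuse(c, f, n):
    """Refuse iff refusing gives a higher FDT : 9-bot payoff ratio."""
    fy, ny = payoffs(c, f, n, fdt_yields=True)
    fr, nr = payoffs(c, f, n, fdt_yields=False)
    return fr * ny > fy * nr  # cross-multiplied to avoid dividing by zero

# The cross-multiplication reduces to: refuse iff n < 5 * f * (c + f) / c,
# i.e. once the 9-bot share of the population drops low enough.
print(should_refuse(c=0.6, f=0.3, n=0.1))    # True: refuse
print(should_refuse(c=0.5, f=0.05, n=0.45))  # False: keep yielding for now
```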

Internal empowerment, over internal alignment

I think this is great advice. I find, in myself and others, that a common source of psychological shadow is the blocking out of parts of the self in a failed attempt to achieve an end, an attempt that is ultimately counterproductive even if it occasionally works in limited circumstances.

Vaccine... Help? Deprogramming? Something?

Right. You are looking for reasons why your uncle and his claims, such as:

My relative claims that aluminum and thimerosal content within vaccines can cause serious negative side effects...

are wrong; but you are not going to find them here, that is my point. What you are going to find is how to judge scientific consensus (and when to trust it), and if you read that article, then you will understand. This is not even a trolley problem, as Viliam has suggested; those do not happen in real life; we do not live in that inadequate a world. There are inadequate parts of this world, but this is not one of them.

Vaccine... Help? Deprogramming? Something?

I do agree that separate vaccines may have different safety concerns.

The ones my relative seems to dislike most are the DTaP, measles, and flu vaccines. They seem to think these ones in particular are more dangerous/less effective (especially concerning effectiveness in the case of the flu vaccine).

This part is a value debate, not a factual debate. Vaccination is a form of trolley problem: we sacrifice the few people who get an adverse reaction to the vaccine, to save health and lives of the majority. Makes sense statistically; also makes you mad when it is your child thrown under the trolley. (The converse point is that when everyone else vaccinates their kids and you do not, you are free-riding on other people's sacrifice, and your ethical concerns seem to them like self-serving bullshit.)

This is true. I think my relative is partially mad at the whole trolley problem thing, and partially mad that individuals maybe "could be saved" if family history were taken into account, but aren't, because of a "corrupt medical system".

(Because many babies have a minor reaction; they may be crying for a day or for a week. Are we talking about that, or about something more serious?)

My nephew had seizures after, I think, the DTaP, but I'm not sure. I'm not sure if that is statistically relevant anyway. The family member in question seems to think that minor reactions might be indicative of future major reactions from different shots, or from booster shots for the same disease.


We run the Center for Applied Rationality, AMA

Actually, I think this touches on something that is useful to understand about CFAR in general.

Most of our "knowledge" (about rationality, about running workshops, about how people can react to x-risk, etc.) is what I might call "trade knowledge": it comes from having lots of personal experience in the domain, and building up good procedures via mostly trial and error (plus metacognition and theorizing about what the noticed problems might be, and how to fix them).

This is distinct from scientific knowledge, which is built up from robustly verified premises, tested by explicit attempts at falsification.

(I'm reminded of an old LW post, that I can't find, about Eliezer giving some young kid (who wants to be a writer) writing advice, while a bunch of bystanders signal that they don't regard Eliezer as trustworthy.)

For instance, I might lead someone through an IDC-like process at a CFAR workshop. This isn't because I've done rigorous tests (or know of others who have done rigorous tests) of IDC, or because I've concluded from the neuroscience literature that IDC is the optimal process for arriving at true beliefs.

Rather, it's that I (and other CFAR staff) have interacted a lot with people who have a conflict between beliefs / models / urges / "parts", in addition to spending even more time engaging with those problems in ourselves. And from that exploration, this IDC-process seems to work well, in the sense of getting good results. So I have a prior that it will be useful for the nth person. (Of course, sometimes this isn't the case, because people can be really different, and occasionally a tool will be ineffective, or even harmful, despite being extremely useful for most people.)

The same goes for, for instance, whatever conversational facilitation acumen I've acquired. I don't want to be making a claim that, say, "finding a Double Crux is the objectively correct process, or the optimal process, for resolving disagreements." Only that I've spent a lot of time resolving disagreements, and, at least sometimes, at least for me, this strategy seems to help substantially.

I can also give theoretical reasons why I think it works, but those theoretical reasons are not much of a crux: if a person can't seem to make something useful happen when they try to Double Crux, but something useful does happen when they do this other thing, I think they should do the other thing, theory be damned. It might be that that person is trying to apply the Double Crux pattern in a domain that it's not suited for (but I don't know that, because I haven't tried to work in that domain yet), or it might be that they're missing a piece or doing it wrong, and we might be able to iron it out if I observed their process, or maybe they have some other skill that I don't have myself, and they're so good at that skill that trying to do the Double Crux thing is a step backwards (in the same way that there are different schools of martial arts).

The fact that my knowledge, and CFAR's knowledge, in these domains is trade knowledge has some important implications:

  • It means that our content is path-dependent. There are probably dozens or hundreds of stable, skilled "ways of engaging with minds." If you're trying to build trade knowledge, you will end up gravitating to one cluster and building out skill and content there, even if that cluster is a local optimum and another cluster is more effective overall.
  • It means that you're looking for skill, more than declarative third-person knowledge and that you're not trying to make things that are legible to other fields. A carpenter wants to have good techniques for working with wood, and in most cases doesn't care very much if his terminology or ontology lines up with that of botany.
    • For instance, maybe to the carpenter there are 3 kinds of knots in wood, and they need to be worked with in different ways, but he's actually conflating 2 kinds of biological structures in the first type, and the second and third type are actually the same biological structure, but flipped vertically (because sometimes the wood is "upside down" from the orientation of the tree). The carpenter, qua carpenter, doesn't care about this. He's just trying to get the job done. But that doesn't mean that bystanders should get confused and think that the carpenter thinks that he has discovered some new, superior framework of botany.
  • It means that a lot of content can only easily be conveyed tacitly, and in person, or at least, making it accessible via writing, etc. is an additional hard task.
    • Carpentry (I speculate) involves a bunch of subtle tacit, perceptual maneuvers, like (I'm making this up) learning to tell when the wood is "smooth to the grain" or "soft and flexible", and looking at a piece of wood and knowing that you should cut it up top near the knot, even though that seems like it would be harder to work around, because of how "flat" it gets down the plank. (I am still totally making this up.) It is much easier to convey these things to a learner who is right there with you, so that you can watch their process, and, for instance, point out exactly what you mean by "soft and flexible" via iterated demonstration.
    • That's not to say that you couldn't figure out how to teach the subtle art of carpentry via blog post or book, but you would have to figure out how to do that (and it would still probably be worse than learning directly from someone skilled). This is related to why CFAR has historically been reluctant to share the handbook: the handbook sketches the techniques, and is a good reminder, but we don't think it conveys the techniques particularly well, because that's really hard.

Vaccine... Help? Deprogramming? Something?

I have read this article. And my default position right now, if no one replied to this post, is that my relative is crazy and vaccines are ridiculously safe, based mostly on what everyone here, and across the internet, and all the medical professionals who know more than me or my relative, think.

What I'm looking for now is why everyone I trust intellectually believes what they do, what are the knockdown arguments against the antivax crowd?

What spiritual experiences have you had?

Note that I would not usually describe this as a spiritual experience.

Vaccine... Help? Deprogramming? Something?

Given the plenty of debate out there right now on this very subject, I don't think it very wise to start laying out claims here left and right, especially those about your relative (who cares about those, right?). I recommend a particular article about how to deal with such stuff:

The Control Group Is Out Of Control

Bayesian statistics, alone among these first eight, ought to be able to help with this problem. After all, a good Bayesian should be able to say “Well, I got some impressive results, but my prior for [parapsychology] is very low, so this raises my belief in [parapsychology] slightly, but raises my belief that the experiments were confounded a lot.”

You don't have to become an anti-vaxxer just by hearing about some convincing evidence (which may be right or not), but instead become a bit more skeptical on the subject, that is, until you become better informed. That is, also, until you can better differentiate anecdotal from scientific evidence. If we cannot take this into consideration, and if things have to be either white or black, then we are in for a very wild ride.

What spiritual experiences have you had?

(note: on LessWrong I believe you should be able to move comments and answers back and forth yourself)

What spiritual experiences have you had?

What exactly constitutes a “spiritual experience” or “perception” or what have you? That is—what, specifically, are you asking about? (I don’t think I’ve ever had any “spiritual experience”, but perhaps this is a mere difference of terminology…?)

EDIT: Ah, I just realized this was a question and I posted this as an answer and not a comment. Is it possible for a moderator to change it?

Defining "Antimeme"

Words can't be defined arbitrarily, so I am going to examine your definition first.

First, I am not sure what exactly counts as "mainstream", and why it is even important. What you describe seems like a relationship between a meme and a culture, whether large or small. So you could have "anti-memes of antimemes", as Isnasene describes. Or you could have a polarized society with two approximately equally large cultures, each of them having their own "anti-memes". Or a small minority, such as a cult, that strongly ignores the surrounding culture.

What did you mean by "mainstream knowledge"? Is it something most people sincerely believe, or just something they profess? They may react differently. Sincerely believing people may listen to arguments when they have the proper form; but you can't convince a person whose "belief" is simply an expression of belonging to a team.

A symbiotic war half-meme encourages you to attack its parity inverse as "wrong". The meme in a meme-antimeme pair nudges you to dismiss its antimeme as "unimportant" or invisibly ignore it altogether.

I am thinking now about "culture wars" where attacking other people's opinion as wrong has gradually changed into "no-platforming". I wonder whether there is a spectrum where sufficiently "no-platformed" opinions change into "unimportant" when the side defending them is completely defeated.

Also, I am afraid that the actual usage of the word "anti-meme" would be to defend ideas from valid criticism. ("You only disagree with me because this is an anti-meme that threatens your ego!")

The example of Lisp is a good one here: we have a decades long holy war where one side shouts "Lisp is superior (and so am I by recognizing this fact)!", the other side goes "where are the libraries? where are the tools? where are solutions to problems X, Y, and Z?", but the former side goes "la la la, I can't hear you over the sound of how Lisp is superior!". Then suddenly someone with a good object-oriented background fixes the usual problems with Lisp, creating Clojure, and -- lo and behold! -- suddenly the mainstream is happy with the result.

That is, focusing too much on how your idea is an "anti-meme" makes you blind to its actual flaws.

Expected utility and repeated choices

The intuitive result you would expect only holds for utility functions which are linear in x (I believe...), since we could then apply the utility function at each step and it would yield the same value as if applied to the whole amount.

Another case would be if you were to receive your utility immediately after playing each game (like in a reinforcement learning algorithm). In that case the utility function is also applied to each outcome separately and would yield the result you would expect.

Also: (b) has a better EV in terms of raw $, and due to the law of large numbers we would expect the actual amount of money won by repeatedly playing (b) to approach that EV. So for many games we should expect any monotonically increasing utility function to favor (b) over (a) as the number of games approaches infinity. The only reason your U favors (a) over (b) for a single game is that it is risk-averse, i.e. sub-linear in x. As the number of games approaches infinity, the risk of choosing to play (b) becomes less and less, until it is the choice between (essentially) winning $0.50 for sure or $0.67 for sure in every game. If you think about it in these terms, it becomes more intuitive why the behaviour you observed is reasonable.

In other words: Yes! You do have to think about the amount of games you play if your utility function is not linear (or you have a strong discount factor).
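A toy simulation of this (with made-up gambles, since the post's actual numbers aren't quoted in this comment): let (a) pay $0.50 for sure and (b) pay $1.00 with probability 2/3 (EV ≈ $0.67), and use the risk-averse utility U(x) = √x. Then (a) wins for a single game, but (b) overtakes as the number of games grows:

```python
import math
import random

def expected_utility(payoff_fn, n_games, n_trials=20_000):
    """Monte Carlo estimate of E[U(total winnings)] over n_games plays."""
    total_u = 0.0
    for _ in range(n_trials):
        winnings = sum(payoff_fn() for _ in range(n_games))
        total_u += math.sqrt(winnings)  # risk-averse (concave) utility
    return total_u / n_trials

# Hypothetical gambles (the original post's numbers aren't quoted here):
a = lambda: 0.5                                      # $0.50 for sure
b = lambda: 1.0 if random.random() < 2 / 3 else 0.0  # EV ~ $0.67

for n in (1, 10, 100):
    print(n, expected_utility(a, n), expected_utility(b, n))
# n=1: (a) wins (0.707 vs. ~0.667); by n=10 the law of large numbers has
# already concentrated (b)'s total near its higher EV, and (b) wins.
```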

Vaccine... Help? Deprogramming? Something?

This probably needs to be discussed for each vaccine separately. I am not an expert, but I can easily imagine a world where vaccine A contains harmful content, and vaccine B does not; or where vaccine C needs to be taken at very young age (e.g. because the disease is extra dangerous for the babies), but vaccine D does not. I can imagine some vaccines being harmful for people with specific genes.

Any of these claims about a specific vaccine can be right or wrong, and proving them right or wrong for a specific vaccine X does not tell us whether they are right or wrong for a different vaccine Y. So the claim of your relative about a specific vaccine can be correct, or can be complete bullshit, or anything in between (e.g. kinda true, but the risk in real life is negligible).

They also believe it's an affront to freedom in general to force vaccinations.

This part is a value debate, not a factual debate. Vaccination is a form of trolley problem: we sacrifice the few people who get an adverse reaction to the vaccine, to save health and lives of the majority. Makes sense statistically; also makes you mad when it is your child thrown under the trolley. (The converse point is that when everyone else vaccinates their kids and you do not, you are free-riding on other people's sacrifice, and your ethical concerns seem to them like self-serving bullshit.)

So... which vaccine specifically are we talking about, and what specifically is the "history of reactions in a family"? (Because many babies have a minor reaction; they may be crying for a day or for a week. Are we talking about that, or about something more serious?)

Note: I am not an expert, so even if you give me these answers, I can't help you. But the data will probably be necessary for any expert who happens to join this debate.

What spiritual experiences have you had?

The closest experience that comes to mind was in an undergraduate tutoring session for a first-year mathematics module, where "just for fun" at the end of the session we were taken along a path of derivations from the subject matter we'd just covered, up into some more abstract math, and then back down into something more concrete and familiar that had (until that point) always seemed like an entirely separate area of mathematics.

For a brief moment it was like everything fell into place, and I was face to face with the infinite / eternal / perfect structure of the universe. But then the session ended and the spell broke, and I realised I couldn't quite remember it all well enough to recreate what had just happened.

But there's no experience I can report that ever made me suspect the involvement of the supernatural or the divine.

agai's Shortform

Trying to find Katja Grace's account, which she mentions she has here, for a PM conversation. If someone (or she herself) would PM me, that would be awesome.

[This comment is no longer endorsed by its author]

NaiveTortoise's Short Form Feed

Hmm. It may actually be possible to regenerate the motor neurons (or repurpose the already existing ones somehow). I'm not sure of the exact differences between them.

Somehow the action I would expect to help is for the person's limbs to be moved by others/machines as if they were acting themselves, because I think the body can adapt somehow?

Difficult to be specific without reading a lot of biology here though.

agai's Shortform

I have two default questions when attempting to choose between potential actions: I ask both "why" and "why not?".

Five Planets In Search Of A Sci-Fi Story

" So another research program was started, and the result were fully immersive, fully life-supporting virtual reality capsules. Stacked in huge warehouses by the millions, the elderly sit in their virtual worlds, vague sunny fields and old gabled houses where it is always the Good Old Days and their grandchildren are always visiting. "


Is this a reference to the Futurama episode with the Death Star-type thing with all the old people in it?

Vaccine... Help? Deprogramming? Something?

My relative claims that aluminum and thimerosal content within vaccines can cause serious negative side effects (I think this is probably false)

They also claim that the vaccination schedule is too quick, and seem to have some level of moral indignation at the speed and age of vaccination; they want a slower vaccination schedule at a higher age.

As well as family screening for vaccine-related issues, i.e. "If your family has a history of reactions, they should wait until an older age and slow down the vaccination schedule." They REALLY don't like the number of vaccines given and consider it to be excessive.

They also believe it's an affront to freedom in general to force vaccinations.

I think that's most of their arguments, I might edit in more if I can remember them.

We need to revisit AI rewriting its source code

In practice, self-modification is a special case of arbitrary code execution; it's just running a program that looks like yourself, with some changes. That means there are two routes to get there: either communicate with the internet (to, e.g., pay Amazon EC2 to run the modified program), or use a security vulnerability. In the context of computer security, preventing arbitrary code execution is an extremely well-studied problem. Unfortunately, the outcome of all that study is that it's really hard: multiple vulnerabilities are discovered every year, with little prospect of that ever stopping.
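To illustrate the "special case of arbitrary code execution" point, here is a minimal toy sketch (an illustration, not anything from the post): a script that reads its own source, edits one constant, writes out the modified copy, and runs it. Ordinary file I/O plus process spawning is all it takes, which is why preventing self-modification reduces to the (hard) problem of preventing arbitrary code execution.

```python
# Toy self-modification: run a program that looks like yourself, with changes.
import subprocess
import sys

GENERATION = 0  # the one "parameter" this program rewrites in its successor

def spawn_successor():
    with open(__file__) as f:
        source = f.read()
    # "Self-modify": bump the generation counter in our own source text.
    new_source = source.replace(
        f"GENERATION = {GENERATION}",
        f"GENERATION = {GENERATION + 1}",
    )
    with open("successor.py", "w") as f:
        f.write(new_source)
    subprocess.run([sys.executable, "successor.py"])

if __name__ == "__main__":
    print(f"generation {GENERATION}")
    if GENERATION < 3:  # stop after a few generations
        spawn_successor()
```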

Vaccine... Help? Deprogramming? Something?

You could start by writing out the exact argument(s) made by your relative. How can we respond to a claim we have never heard? (Or did you just want very general pro-vaccination statements? I am sure Google can help with this.)

Propagating Facts into Aesthetics

Promoted to curated: I think this post is pointing at something that I expect will turn out to be obviously really important in a few years. I also think it's written in a really example-heavy way that allows people to engage with it, whereas most writing in this space usually stays abstract and as such often lacks grounding and concreteness. 

What are you reading?

What is your verdict?

I'm currently reading through his blog Metamoderna and feel like there are some similarities to rationalist thoughts on there (e.g. this post on what he calls "game change" and this post on what he calls proto-synthesis).

What spiritual experiences have you had?

Alas, no-one can see another's experiences, nor show them their own. All I can see is the words that they use, and "oneness with everything", "the presence of the divine", and "self falling away" are not words that I would use to describe any of my own experiences. Neither do any of my experiences seem to be the sort of thing that the OP asks for, but I thought it worthwhile adding the data point.

Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think

It seems to me like 'intent to inform' is worth thinking about in the context of its siblings; 'intent to misinform' and 'intent to conceal.' Cousins, like 'intent to aggrandize' or 'intent to seduce' or so on, I'll leave to another time, tho you're right to point out they're almost always present, if just by being replaced by their reaction (like self-deprecation, to be sure of not self-aggrandizement).

Quakers were long renowned for following four virtues: peace, equality, simplicity, and truth. Unlike wizards, they have the benefit of being real, and so we can get more out of their experience of having to actually implement those virtues in a sometimes hostile world that pushes for compromises. So I pulled out my copy of Some Fruits of Solitude by William Penn, and the sections on Truth and Secrecy are short enough to quote in full (including Justice, which is in the middle):

Truth

144. When you speak, be sure to speak the truth, for misleading is halfway to lying, and lying is the whole way to Hell.

Justice

145. Don't believe anything against another unless you have good grounds. And don't share anything that might hurt another, unless it could cause greater hurt to others to keep it secret.

Secrecy

146. It is wise not to try to find out a secret, and honest not to reveal one.

147. Only trust yourself, and no one will betray you.

148. Excessive openness has the mischief of treachery, though not the malice.

One of the bits that fascinated me when I first read it was 146, which pretty firmly separates 'intent to inform' from 'honesty'; there is such a thing as ownership of information, and honesty doesn't involve giving up on that, or giving up on having secrets yourself.

What's also interesting to me is that several of them embody bits of information about the social context. 145, for example, is good advice in general, but especially important if you have a reputation for telling the truth; then you become a target for rumor-starters, as people would take seriously stories you repeat even if they wouldn't take them seriously from the original source. It also covers situations like Viliam's, drawing a line that determines which negative beliefs should be broadcast. And in 146 again, being able to respond to questions about secrets with "I'd rather not say" relies on a social context where people think it wise to not press for further details (because otherwise you encourage a lie, instead of a straightforward "please direct your attention elsewhere.").

What spiritual experiences have you had?

Almost half a century ago, when I was 16 or 17 and still believed in God, I went to a synagogue with some people of my church for some kind of exchange thing. The service was quite boring though, as everything was in Hebrew and I didn't understand a thing. But there was nothing to be done except sitting still trying to be respectful. So I guess I fell into some kind of meditative state, and I don't remember anything about that, but just afterwards I felt that I had been in the presence of God, and a sense of great gratitude. I'm still surprised I didn't convert to Judaism on the spot; maybe I would have if I weren't so shy...

What spiritual experiences have you had?

Now that there are a couple other answers, I'll talk about one of my own, keeping in mind I now think of the kind of thing I'm going to describe as a part of my typical field of awareness that I can choose to pay attention to or not.

Once during sesshin (a multi-day period of dedicated practice in zen focused on meditation, literally a gathering of mind), after a particularly rough morning where I was very restless and was expending a lot of effort just to stay on the cushion, my teacher gave a talk on the teaching poem "Trust In Mind". We had lunch, a one-hour break period, then returned for the afternoon meditation period around 1500. My body hurt, so I swallowed my pride and sat in a chair rather than on a cushion on the floor, the first time I had ever done that in the zendo.

Between surrendering to the pain in my body, my pride, and my teacher's encouragement to trust, I suddenly found myself giving everything over. My big notion of self fell away and my awareness opened in a deep way that is hard to put in words. This had happened to me before, but only in little flashes. This time it persisted, lasting for hours, giving me the opportunity to be with and explore my experience.

Eventually I "forgot" how to remain in that state and my big sense of self returned a few days later after the sesshin ended and I got tangled up in my "regular" life, but I was transformed in some subtle ways by the experience such that I now had a trust in the world to be just as it is in a way I didn't before.

Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think

Let me argue for intentionality in communication. If your intent is to inform and communicate a fact, do so. If your intent is to convince someone to undertake an action, do so. If your intent is to impress people with your knowledge and savvy, do so. If your intent is to elicit ideas and models to see where you differ, do so.

One size does not fit all situations or all people. Talking is an act. Choose the mechanisms that fit your goals, like you do in all actions.

Humans aren't perfect, and most humans are actually pretty bad at both giving and receiving "honest" communication. Attempting to communicate with them on the level you prefer, rather than the level they're ready for, is arrogant and unhelpful.

Humans are neither naturally nor necessarily aligned with your goals (in fact, nobody is fully aligned, though many of us are compatible if you zoom out far enough). It's an important social fiction to pretend they are, in order to cooperate with them, but you don't have to actually believe this falsehood.

2019 AI Alignment Literature Review and Charity Comparison

See also My current thoughts on MIRI's "highly reliable agent design" work by Daniel Dewey (Open Phil lead on technical AI grant-making).

From the "What do I think of HRAD?" section:

... This reduces my credence in HRAD being very helpful to around 10%. I think this is the decision-relevant credence.

What spiritual experiences have you had?

Great question! I will just say that I had such an experience, but don’t know how to share it in a way that feels adequate for me.

Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think

I wouldn't mind removing hyperboles from socially accepted language. Don't say "everyone" if you don't mean literally everyone, duh. (I suppose that many General Semantics fans would agree with this.)

For me a complicated question is one that compares against an unspecified standard, such as "is this cake sweet?" I don't know what kinds of cakes you are used to eating, so maybe what's "quite sweet" to me is "only a bit sweet" to you. Telling literal truths, such as "yes, it has a nonzero amount of sugar, but also a nonzero amount of other things", will not help here. I don't know exactly how much sugar it contains. So "it tastes quite sweet to me" is the best I can do here. Maybe that should be the norm.

I agree about the "nearest unblocked strategy". You make the rules; people maximize within the rules (or break them when you are not watching). People wanting to do X will do the thing closest to X that doesn't break the most literal interpretation of the anti-X rules (or break the rules in a deniable way). -- On the other hand, even trivial inconveniences can make a difference. We are not discussing superhuman AI trying to get out of the box, but humans with limited willpower who may at some level of difficulty simply give up.

The linked article "telling truth is social aggression" ignores the fact that even in competition, people form coalitions. And if you have large numbers of players, the math is in favor of cooperation, at least on a relatively small scale. If your school grades on a curve, it discourages helping your classmate without getting anything in return. But mutual cooperation with one classmate still helps you both against the rest of the class. The same is true about helping people create better models of the world, when the size of your group is tiny compared to the rest of the population.

The real danger these days usually isn't the Gestapo, but thousands of Twitter celebrities trying to convert parts of your writing taken out of context into polarizing tweets, and journalists trying to convert those tweets into clickbait, where the damage caused to you and your family is just an externality no one cares about. This is the elephant in the room: "I personally don't disagree with X; or I disagree with X but I think there is no great harm in discussing it per se... but the social consequences of me being publicly known as 'person who talks about X' are huge, and I need to pick my battles. I have things to protect that are more important to me than my mere academic interest in X." Faced by: "But if you lie about X, how can I trust that you are not lying about Y, too?"

jacobjacob's Shortform Feed

I made a Foretold notebook for predicting which posts will end up in the Best of 2018 book, following the LessWrong review.

You can submit your own predictions as well.

At some point I might write a longer post explaining why I think having something like "futures markets" on these things can create a more "efficient market" for content.

We run the Center for Applied Rationality, AMA

I think he's bad at this.

You can see this in some aspects of his companies.

High micromanagement. High turnover. Disgruntled former employees.

(Feedback Request) Quadratic voting for the 2018 Review

The Hugos use EPH for nominating finalists, then IRV to choose winners from among those finalists. Those are entirely separate steps. I was talking about the former, which has no IRV involved.

What spiritual experiences have you had?

None. Not just "none that I would be willing to talk about in public", but no "spiritual" experiences at all.

The scare quotes are because I do not know what people are intending to point to when they use the expression, or elaborate upon it. Whatever they are pointing to within themselves, when I take hold of the words that they use, they do not point to anything within me.

Link: Does the following seem like a reasonable brief summary of the key disagreements regarding AI risk?

Valid. I was primarily summarizing the risk part though, rather than the solutions.

agai's Shortform

I actually think that 2020 could be the year of the Linux desktop

Linux has had the advantages it has for twenty years...so why now?

Meta-Honesty: Firming Up Honesty Around Its Edge-Cases

I don’t really stand by the last half of the points above, i.e. the last ~third of the longer review. I think there’s something important to say here about the relationship between common knowledge and deontology, but I didn’t really say it; I said something else instead. I hope to get the time to try again to say it.

Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think

In situations where others can hurt you, clever solutions like "no comment - because this is the situation where in some counterfactual world I would prefer to be silent" result in you getting hurt.

(A few weeks ago, everyone in the company I am working for got a questionnaire from management where they were asked to list the strengths and weaknesses of their colleagues. Cleverly refusing to answer, beyond plausible excuses such as "this guy works on a different project so I haven't really interacted with him much", would probably cost me my job, which would be inconvenient in multiple ways. At the same time, I consider this type of request deeply repulsive -- on Monday I am supposed to be a good team member who enjoys cooperation and teambuilding, and on Tuesday I am asked to snitch on my coworkers -- from my perspective this would hurt my personal integrity much more than mere lying. Sorry, I am officially a dummy who never notices a non-trivial weakness in anyone; now go ahead and try proving that I do.)

Also, it seems to me that in the real world, building the prestige of a person who never lies is more tricky than just never lying and cleverly glomarizing. For example, the prestige you keep building for years can be ruined overnight by a third party lying about you having lied to them. (And conversely, you could actually have a strategy of never lying... except to a designated set of "victims", in situations when there is no record of what you said, and who are sufficiently lower-status than you, so that if they choose to accuse you publicly, they will be perceived as liars.)

agai's Shortform

I... right now I'm kind of in disbelief that I am so far ahead of everyone else that I could *literally buy the entire planet for under $20 USD,* and no one stopped me.

It's called progress. In my youth, we only had a bridge to sell you.

We run the Center for Applied Rationality, AMA

Do you think that Elon doesn't get his employees to do what's best for his companies?

Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think

Maybe I’m unusually honest—or possibly unusually bad at remembering when I’ve lied!?—but I’m not sure I even remember the last time I told an outright unambiguous lie. The kind of situation where I would need to do that just doesn’t come up that often.

I would say that you should consider yourself fortunate then, that you are living in a situation where most of the people surrounding you have your best interests in mind (or, at worst, are neutral towards your interests). For others in more adversarial situations, telling lies (or at least shading the truth to the extent that would be considered lying by the standards of this post) is a necessary survival skill.

Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think

First, some quick comments:

  1. Good post; I mostly agree with all specific points therein.

  2. I appreciate that this post has introduced me (via appropriate use of ‘Yudkowskian’ hyperlinking) to several interesting Arbital articles I’d never seen.

  3. Relevant old post by Paul Christiano: “If we can’t lie to others, we will lie to ourselves”.

All that having been said, I’d like to note that this entire project of “literal truth”, “wizard’s code”, “not technically lying”, etc., etc., seems to me to be quite wrongheaded. This is because I don’t think that any such approach is ethical in the first place. To the contrary: I think that there are some important categories of situations where lying is entirely permissible (i.e., ethically neutral at worst), and others where lying is, in fact, ethically mandatory (and where it is wrong not to lie). In my view, the virtue of honesty (which I take to be quite important indeed), and any commitment to any supposed “literal truth” or similar policy, are incompatible.

Clearly, this view is neither obvious nor likely to be uncontroversial. However, in lieu of (though also in the service of) further elaboration, let me present this ethical question or, if you like, puzzle:

Is it ethically mandatory always to behave as if you know all information which you do, in fact, know?

We run the Center for Applied Rationality, AMA

Ben just to check, before I respond—would a fair summary of your position here be, "CFAR should write more in public, e.g. on LessWrong, so that A) it can have better feedback loops, and B) more people can benefit from its ideas?"

Inadequate Equilibria vs. Governance of the Commons

This essay provides some fascinating case studies and insights about coordination problems and their solutions, from a book by Elinor Ostrom. Coordination problems are a major theme in LessWrongian thinking (for good reasons) and the essay is a valuable addition to the discussion. I especially liked the 8 features of sustainable governance systems (although I wish we got a little more explanation for "nested enterprises").

However, I think that the dichotomy the essay creates between "absolutism (bad)" and "organically grown institutions (good)" needs more nuance or more explanation. What is the difference between "organic" and "inorganic" institutions? All institutions "grew" somehow. The relevant questions are, e.g., how democratic the institution is, whether its scope is the right scope for the problem, whether the stakeholders have skin in the game (feature 3), et cetera. The 8 features address some of that, but I wish it were more explicit.

Also, it's notable that all the examples focus on relatively small-scale problems. While it makes perfect sense to start by studying small problems before trying to understand the big ones, it does make me wonder whether going to larger scales brings in qualitatively new issues and difficulties. Paying officials with parcels at the tail end works for water conflicts, but what is the analogous approach to global warming or multinational arms races?

What spiritual experiences have you had?

I believe I've had kensho experiences too. This easily meets the criteria of "spiritual experience" and "mystical perception", though it has no hallucinatory component.

Defining "Antimeme"

I hadn't noticed utilitarianism and ethical vegetarianism check these boxes. I wrote this series hoping for exactly this kind of insight. Thanks!

Your comment on the cross-cultural application of utilitarianism makes this extra insightful. I have edited the original post to acknowledge that antimemes are not always culture-specific.

12020: a fine future for these holidays

Do you happen to be making a reference to the Holocene calendar? (Which was popularized by this Kurzgesagt video.) It advocates resetting the zero-year to be 10,000 years earlier, placing it before most of human civilization.

Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think

I think people quite frequently tell unambiguous lies of the form "I have read these terms and conditions".

Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think

'IIRC' because I remember being asked this question multiple times and lying once as an answer, but don't remember exactly who was around or who asked the time I remember lying, and am not certain that I actually lied as opposed to being very evasive or murmuring nonsensical syllables or something.

Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think

When's the last time you needed to consciously tell a bald-faced, unambiguous lie?—something that could realistically be outright proven false in front of your peers, rather than dismissed with a "reasonable" amount of language-lawyering.

I don't know about the last time I needed to do so, but the last time I did so was two days ago (Christmas Eve), when (IIRC) one of my grandparents asked me if I had brought board games to my aunt and uncle's house while in the presence of my aunt, uncle, and/or cousins. In fact I had, but didn't want to say that, because I had brought them as Christmas gifts for my aunt and uncle's family, and didn't want to reveal that fact, and didn't think I could get away with being evasive, so (again, IIRC) I lied about bringing them.

I have a pretty strong preference against literal/unambiguous lying, and usually I can get away with evasion when I want to conceal things, and I don't remember unambiguously lying previously, but I'm bad at remembering things and wouldn't be all that surprised if somebody showed me a recording of me telling a bald-faced lie at some other point during December.

What spiritual experiences have you had?

I once laid down on the floor of an empty bedroom, went through thinking of every thing and/or person and/or group of people I could think of, and thought about how excellent/beautiful/fitting they were, for something like an hour (not on purpose, it just sort of happened).

Values Assimilation Premortem

Thanks for the welcome!

This is super helpful. It sounds like you've lived the thing that I'm only hypothesizing about here. Hopefully "Can't wait for round three" isn't sarcastic. This first round for me was extremely painful, but it sounds like round 2 was possibly more pleasant for you.

I like the framework you're using now, and I'm gonna try to condense it into my own words to make sure I understand what you mean. Basically, you're trying to optimize around keeping the various and conflicting hopes, needs, fears, etc. within you at least relatively cool with your choices. It also seems like there might be an emphasis on choosing to pursue the things that you find most meaningful. Is that correct? I would actually love to hear more on this. Are there good posts / sequences on it?

Regarding examples: I'll need to spend some time brainstorming and collating, but I'll post some here when I get to it. I tend to do the lazy thing of using examples to derive a general principle and then discarding the examples. This is probably not good practice wrt: Rationality.

Values Assimilation Premortem

Thanks for the tips!

Learning how to critique arguments is a skill you can study.

I suppose that large portions of The Sequences are devoted to precisely the task of critiquing arguments without requiring a contrary position. It's kind of an extension of a logical syntax check, but the question isn't just whether an argument is complete and deductively sound, but also whether it's empirically sound and Bayesianly sound.

It's gonna take me a while to master those techniques, but it's a worthy goal. Not 100% sure I can do it on the timeline I need, but I can at least practice and start developing the habits.

Reading about those who have taken Rationalist-style approaches to get to obviously crazy conclusions is also useful, for seeing where people are prone to going off the rails, so you can avoid the same mistakes, or recognize the signs when others do.

I love reading about failure modes! Not sure why I find it so fascinating. Maybe it's connected to the perfectionism? Speaking of...

if you aren't failing, you aren't taking big enough risks to find something new.

I consider my greatest failure in life to be that I haven't failed enough. I have too few experiences of what works and what doesn't, I failed to make critical course-corrections because they lay outside my info bubble, and I missed out on many positive life experiences along with the negative ones.

What spiritual experiences have you had?

Thread for mentioning past LessWrong posts that describe or mention what might qualify as spiritual experiences. One comes immediately to my mind: Val's "Kensho".

What are you reading?

Edited above comment with fuller details :)

(Feedback Request) Quadratic voting for the 2018 Review

The second paragraph in the linked post says:

Many people find the Hugo voting system (called “Instant Runoff Voting”) very complicated.

(Feedback Request) Quadratic voting for the 2018 Review

As I understand it, this just means that you sum the squares of the SV and QV votes, then linearly scale all the votes of one such that these two numbers are equal to one another.

... such that the average for each of these numbers is equal, yes. I think that the way you said it, you'd be upscaling whichever group had fewer voters, but I'm pretty sure you didn't mean that.
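A minimal sketch of that normalization as described here (a reconstruction, not the actual review code): rescale one group's votes so the two groups' average per-voter sums of squares match. The square root appears because votes enter the sum squared.

```python
# Hypothetical normalization sketch: rescale group_b's votes so that the
# average per-voter sum of squared votes matches group_a's average.
def normalize(group_a, group_b):
    """Each group is a list of voters; each voter is a list of votes."""
    def avg_sum_sq(group):
        return sum(sum(v * v for v in voter) for voter in group) / len(group)
    scale = (avg_sum_sq(group_a) / avg_sum_sq(group_b)) ** 0.5
    return [[v * scale for v in voter] for voter in group_b]
```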

Instant Runoff seems to be optimising for outcomes about which the majority have consensus, which isn't something I care as much about in this situation. That said I don't fully understand how it would change the results.

E Pluribus Hugo, and more generally, proportional representation, have nothing to do with Instant Runoff, so I'm not sure what you're saying here.

TurnTrout's shortform feed

For what it's worth, I tried something like the "I won't let the world be destroyed"->"I want to make sure the world keeps doing awesome stuff" reframing back in the day and it broadly didn't work. This had less to do with cautious/uncautious behavior and more to do with status quo bias. Saying "I won't let the world be destroyed" treats "the world being destroyed" as an event that deviates from the status quo of the world existing. In contrast, saying "There's so much fun we could have" treats "having more fun" as the event that deviates from the status quo of us not continuing to have fun.

When I saw the world being destroyed as status quo, I cared a lot less about the world getting destroyed.

Defining "Antimeme"

Antimemes are a culture-specific phenomenon. Different cultures have different antimemes.

Because cultures are nested within one another, it's interesting to posit that anti-memes can have their own anti-memes. For instance, ethically-motivated vegetarianism is an anti-meme for (most) meat-eaters, but wild animal suffering is an anti-meme for (most) ethically-motivated vegetarians.

Also note that the anti-meme of an anti-meme tends not to be a meme. This is a matter of dynamics. Since the meme-culture is the default, a culture bonded to an anti-meme may only exist when the meme-culture has not developed a way to dissolve the anti-meme. Thus, anti-memes for cultures bonded to anti-memes must be useless from the perspective of the meme-culture; otherwise, the meme-culture would just use the anti-anti-meme to dissolve the anti-meme.

Wild animal suffering is a good example of this. Even though people periodically bring up wild animal suffering caused by plant farming as a talking point against ethical vegetarianism, actually taking wild animal suffering seriously would be far more corrosive to the meme-culture than ethical vegetarianism (the anti-meme culture) would be.


I also think some anti-memes might also be culture-generic. For instance, utilitarianism ideology looks a lot like the anti-meme for pro-social behavior. Even if utilitarianism is discussed relatively frequently (and periodically does get attacked as wrong), it checks all the boxes in practice:

Learning it threatens the egos and identities of adherents to the mainstream of a culture[1].

Utilitarianism, roughly speaking, equates saving the life of someone next door with saving the life of someone far away (which can often be done relatively cheaply). This radically re-orients how moral virtue (i.e. egos and identities) would be assigned.

Learning the meme renders mainstream knowledge in the field unimportant by broadening the problem space of a knowledge domain, usually by increasing the dimensionality.

Utilitarianism dramatically reduces the moral importance of being involved in your local community by broadening the problem of morality to people far away who need way more help. Moral circle expansion (in the sense of considering animals more seriously as moral patients) also does this and even renders local communities unimportant depending on their complicity in factory farming and how much you care.

Mainstream wisdom considers detailed knowledge of the antimeme irrelevant, unimportant or low priority. Mainstream culture may just ignore the antimeme altogether instead.

Definitely true of factory farming. Pretty true of global poverty.

Humans Are Embedded Agents Too

Ooh, that is very insightful. The word-boundary problem around "values" feels fuzzy and ill-defined, but that doesn't mean that the thing we care about is actually fuzzy and ill-defined.

New paper: (When) is Truth-telling Favored in AI debate?

This looks really interesting to me. I remember when the Safety via Debate paper originally came out; I was quite curious to see more work around modeling debate environments and getting a better sense of how well we should expect it to perform in what kinds of situations. From what I can tell, this makes a rigorous attempt at one or two such models.

I noticed that this is more intense mathematically than most other papers I'm used to in this area. I started going through it but was a bit intimidated. I was wondering if you might suggest tips for reading through it and understanding it. Do readers need to know some measure theory, or other specific areas of math that may be a bit intense for what we're used to on LessWrong? Are there any other things we should read first, or make sure we know, to help prepare accordingly?

Defining "Antimeme"
The typical response to encountering a regular meme is to assign a truth value to it via rationality.

This seems...iffy.

We run the Center for Applied Rationality, AMA

Leadership (as for instance leadership retreats are trying to teach it) is the intersection between management and strategy.

Another way to put it: it's the discipline of getting people to do what's best for your organization.

ESC Process Notes: Claim Evaluation vs. Syntheses

Just realized the "it" in "I'm curious what it looks like." probably referred to "my DB", not "the feedback". I'd love to either user test my DB on you (you play with it while I watch) or have you beta test the description I'm writing, if you're interested.

How’s that Epistemic Spot Check Project Coming?

I don't immediately see how they're related. Are you thinking people participating in the markets are answering based on proxies rather than truly relevant information?

Link: Does the following seem like a reasonable brief summary of the key disagreements regarding AI risk?

There are disagreements over approach (e.g. provably friendly vs. boxed "tool" AI), which I don't see on your list.

How’s that Epistemic Spot Check Project Coming?

What's the difference between John's suggestion and amplifying ESCs with prediction markets? (not rhetorical)

How’s that Epistemic Spot Check Project Coming?

"No Gods, No Proxies, Just Digging For Truth" is a good tagline for your blog.

(Feedback Request) Quadratic voting for the 2018 Review

This all makes a lot of sense; I'm glad to hear you say it. I think that the option for 'score voting style' is quite good; we were in fact seriously considering doing something like that.

I really like the idea of producing a visualisation as the user builds up their votes. That sounds delightful.

I'd suggest scaling the SV votes so that their average euclidean norm is the same as that of the QV votes.

Yeah. As I understand it, this just means that you sum the squares of the SV and QV votes, then linearly scale all the votes of one group such that these two numbers are equal to one another. And then you've got them on the same playing field. And this is a trivial bit of computation, so we can make it so that if you're voting in SV but then want to move to QV to change the weights a little, when you switch we can automatically show you what the score looks like in QV (er, rounded, there'll be tons of fractions by default).
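For concreteness, here's a minimal sketch in Python of the norm-matching rescaling being discussed (the data layout and names are my own illustration, not anything from the actual Review codebase):

```python
import math

def average_norm(ballots):
    """Mean euclidean norm of a group's ballots (each ballot is a list of scores)."""
    return sum(math.sqrt(sum(v * v for v in b)) for b in ballots) / len(ballots)

def scale_sv_to_qv(sv_ballots, qv_ballots):
    """Multiply every SV vote by one common factor so that the average
    euclidean norm of SV ballots matches that of QV ballots."""
    factor = average_norm(qv_ballots) / average_norm(sv_ballots)
    return [[v * factor for v in ballot] for ballot in sv_ballots]

# Toy example: one score-voting ballot, one quadratic-voting ballot.
sv = [[10, 10, 0]]   # bounded ratings
qv = [[3, 1, 1]]     # sum of squares bounded by a vote budget
print(scale_sv_to_qv(sv, qv))  # each SV vote scaled down to QV's scale
```

(Matching average summed squares instead, as in the phrasing above, would make the factor the square root of the ratio of the two averages; the two rules differ slightly unless every ballot within a group has the same norm.)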

If you did want a proportional method, I'd probably suggest something like E Pluribus Hugo with quadratically-scaled ballots behind the continuous part.

Instant Runoff seems to be optimising for outcomes about which the majority have consensus, which isn't something I care as much about in this situation. That said, I don't fully understand how it would change the results.

Identity Isn't In Specific Atoms

This "explanation" leaves lingering doubt. It doesn't dissolve all the questions that I have about personal identity. Ok, I'm a factor in a subspace of an amplitude distribution: I get that and I'm okay with that. But there are still unresolved issues of anticipation.

Let's say I record in sufficient fidelity the amplitude distribution factor which represents "me" at this point in time. Then, after I am dead, some machine is used to recreate this amplitude distribution with sufficient fidelity to re-create me as I exist now. That person will come into being with all my memories and with a subjective feeling of actually being me. Furthermore, there is nothing about this "new instance of me" which experimentally differentiates it from the "original me" which is typing these words. (This is the quantum replicator/teleport thought experiment.)

So far, I'm onboard.

Now the quantum realist typified by Eliezer would argue that there is no difference between "new instance of me" and "original me," and I'm stupid for thinking that there is. Furthermore, since personal identity is thus shown to be a phantom of our mind's inner workings, the "new instance of me" objectively is me. I've thus defeated death and come back to life!

That's a pill I can't swallow. And the nagging doubt which keeps me from going along with that line of argument is: what experience do I anticipate in this scenario? If I'm being scanned now... I anticipate continuing as "original me" at the end of the scanning process, not suddenly finding myself soul-swapped into the future "new instance of me."

I've heard people argue that maybe both "original me" and "new instance of me" are entangled and I should expect a 50/50 probability of "ending up" in either manifestation. But that's defeated by further thought experiments: just imagine creating an endless number of replicants in the future. Is the probability evenly split among them all? That would require non-local effects on the probabilities in the present depending on future state, which is highly unlikely.

I've also heard that each time I'm copied or made manifest, I should treat that as a 50/50 branch condition: 50% probability of continuing as original me, 25% as the first copy, 12.5% as the second, etc. We've recovered locality at least, but how does my consciousness persist over the intervening years across a storage medium that doesn't represent a computation? This just substitutes one hard problem for another.
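To spell out the arithmetic of that branching rule (my own formalization of the 50/25/12.5 pattern, not anything from the original):

$$P(\text{original}) = \tfrac{1}{2}, \qquad P(k\text{-th copy}) = 2^{-(k+1)},$$

which sums to $\tfrac12 + \tfrac14 + \tfrac18 + \dots = 1$ only if infinitely many copies are eventually made; if only $N$ copies are ever instantiated, a residual $2^{-(N+1)}$ of the probability mass is left stranded in never-manifested copies.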

Furthermore, what should I expect to experience once I die of old age? Do I expect to "wake up" some years later in a younger version of myself, the copy of the earlier scan that was made? Unlikely; I'm not even remotely the same amplitude distribution at that point. This seems even more the product of sloppy thinking than the earlier options considered.

So if I have myself scanned, then die, knowing that someone will "restore" me from the backup copy, my expectation on my death bed is still that I will experience permanent death, oblivion. There will be a new entity constructed which has all my memories up to the point of being scanned, and perhaps that will help lessen the loss felt by those who loved me. But the fact remains: I expect to permanently die when this instance of me dies. The truth that we are all just factors in a subspace of an amplitude distribution doesn't resolve this problem at all.

And there are practical ramifications of this thought experiment: should I sign up for cryonics (preserve this instantiation of me) or brain preservation (destructive copy)? Should I volunteer for destructive mind uploading, once it is available? Etc.

Humans Are Embedded Agents Too

Yes and no.

I do think you're pointing to the right problems - basically the same problems Shminux was pointing at in his comment, and the same problems which I think are the most promising entry point to progress on embedded agency in general.

That said, I think "word boundaries" is a very misleading label for this class of problems. It suggests that the problem is something like "draw a boundary around points in thing-space which correspond to the word 'tree'", except for concepts like "values" or "person" rather than "tree". Drawing a boundary in thing-space isn't really the objective here; the problem is that we don't know what the right parameterization of thing-space is or whether that's even the right framework for grounding these concepts at all.

Here's how I'd pose it. Over the course of history, humans have figured out how to translate various human intuitions into formal (i.e. mathematical) models. For instance:

  • Game theory gave a framework for translating intuitions about "strategic behavior" into math
  • Information theory gave a framework for translating intuitions about information into math
  • More recently, work on causality gave a framework for translating intuitions about counterfactuals into math
  • In the early days, people like Galileo showed how to translate physical intuitions into math

A good heuristic: if a class of intuitive reasoning is useful and effective in practice, then there's probably some framework which would let us translate those intuitions into math. In the case of embedded-agency-related problems, we don't yet have the framework - just the intuitions.

With that in mind, I'd pose the problem as: build a framework for translating intuitions about "values", "people", etc into math. That's what we mean by the question "what is X?".



G Gordon Worley III's Shortform

Off-topic riff on "Humans are Embedded Agents Too"

One class of insights that come with Buddhist practice might be summarized as "determinism", as in, the universe does what it is going to do no matter what the illusory self predicts. Related to this is the larger Buddhist notion of "dependent origination", that everything (in the Hubble volume you find yourself in) is causally linked. This deep deterministic interdependence of the world is hard to appreciate from our subjective experience, because the creation of ontology creates a gulf that cuts us off from direct interaction, causing us to confuse map and territory. Much of the path of practice is learning to unlearn this useful confusion, which allows us to do much by focusing on the map so we can make better predictions about the territory.

In AI alignment, many difficulties and confusions arise from failing to understand what is there termed embeddedness: the insight that everything happens in the world, not alongside it on the other side of a Cartesian veil. The trouble is that dualism is pernicious and intuitive to humans, even as we deny it, and unlearning it is not as simple as reasoning that the problem exists. Our thinking is so polluted with dualistic notions that we struggle to see the world any other way. I suspect that if we are to succeed at building safe AI, we'll have to get a lot better at understanding and integrating the insight of embeddedness.

How’s that Epistemic Spot Check Project Coming?

I had a pretty visceral negative response to this, and it took me a bit to figure out why.

What I'm moving towards with ESCs is no gods, no proxies. It's about digging in deeply to get to the truth. Throwing a million variables at a wall to see what sticks seems... dissociated? It's a search for things to do, instead of digging for information you evaluate yourself.

Humans Are Embedded Agents Too

I agree, and think this is an unappreciated idea, which is why I liberally link the embedded agency post in things I write. I'm not sure I'm doing a perfect job of not forgetting we are all embedded, but I consider it important and essential to not getting confused about, for example, human values. I think many of the confusions we have (especially the ones we fail to notice) are a result of incorrectly thinking, to put it another way, that the map does not also reside in the territory.

ESC Process Notes: Claim Evaluation vs. Syntheses

See here and here for responses. One of those was in response to a book that did better on the "having a thesis" axis than "having evidence", so I don't think that's the problem.

It seems plausible that having a guide will help people, and that's on my list, but I'm aiming for a high level of polish, so it's unfinished.

(Feedback Request) Quadratic voting for the 2018 Review

Include options to vote "score voting style" (bounded ratings) or "quadratic style" (ratings with bounded euclidean norm). I'd suggest scaling the SV votes so that their average euclidean norm is the same as that of the QV votes. (The strategy in this case is relatively obvious, but the strategic leverage isn't too high, and the stakes are relatively low, so I wouldn't worry too much.)

This is similar to what I was personally imagining, and what I think I'd personally want.

When I went through the 75 posts myself, imagining voting for them, what I found was that I basically wanted to put each post into one of a few buckets, something like:

  1. "no" – not a contender for book
  2. "decent" – a pretty neat idea, or a 'quite good' idea that wasn't well argued for
  3. "quite good" – some combination of "the idea is quite important; or, the conversation moved forward significantly; or, a neat idea was extraordinarily well argued for with excellent epistemics"
  4. "crucial" – this is a foundational piece that I hope one day becomes 'canon'

(I could imagine wanting to downvote posts, but in this case there weren't any I wanted to rank lower than 'no')

One additional thing I kinda wanted out of this was the ability to flag (and aggregate data about) which posts had better or worse epistemic virtue. At first I thought of having two different voting scales, one for "value" and the other for "is this literally true, and/or did the author demonstrate thoughtfulness in how they considered the idea?"

I was worried about the obvious failure mode, where e.g. OkCupid creates a "personality" and "attractiveness" scale, but it turns out the halo effect swamps any additional information you might have gleaned, and the two scales mapped perfectly.

When I attempted to rate each post myself, what I found was I almost always ranked epistemics and importance the same (or at least it wasn't obvious that they were more than "1 point" away from each other on a 1-10 scale), but that there were a few specific posts I wanted to flag as "punching above or below their weight epistemically."

I'm not quite sure if this is worth any additional complexity. A simple option is to leave a "comments" box for each post where people can explain their vote in plain English. I'm a little sad that doesn't give us the ability to aggregate information, though. (A simple boolean, er, three-option radio button, with optional 'punches above its weight epistemically' or 'punches below its weight epistemically', might work.)
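A minimal sketch of what that ballot record might look like (hypothetical names and types, not the actual Review implementation):

```python
from dataclasses import dataclass
from typing import Literal, Optional

# The "three-option radio button": no flag by default, or one of two flags.
EpistemicFlag = Literal["punches above its weight", "punches below its weight"]

@dataclass
class PostVote:
    post_id: str
    score: int                                      # e.g. the bucket, as a number
    epistemic_flag: Optional[EpistemicFlag] = None  # None = no flag selected
    comment: str = ""                               # optional plain-English explanation
```

Keeping the flag separate from the score is what would let you aggregate "punching above/below weight" across voters without entangling it with the value scale.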

T-Shaped Organizations

Indeed. A modern version of this is the "lean organization", which is a particular methodology for doing the sort of thing you are pointing at here. Alas, business terminology is rarely generalized away from implementation methods, so I don't know of a general term for what you're pointing at that isn't tied up in implementation details, i.e. one that is purely descriptive of all orgs having a shared property regardless of how it is achieved.

What are the open problems in Human Rationality?

I feel daunted by the question, "what are the big open questions the field of Human Rationality needs to answer, in order to help people have more accurate beliefs and/or make better decisions?", but I also think that it's the question at the heart of my research interests. So rather than trying to answer the original question directly, I'm going to share a sampling of my current research interests.

Over in the AMA, I wrote, "My way of investigating always pushes into what I can’t yet see or grasp or articulate. Thus, it has the unfortunate property of being quite difficult to communicate about directly until the research program is mostly complete. So I can say a lot about my earlier work on noticing, but talking coherently about what exactly CFAR’s been paying me for lately is much harder." This will not be a clean bulleted list that doubles as a map of rationality, sorry. It'll be more like a sampling of snapshots from the parts of my mind that are trying to build rationality. Here's the collage, in no particular order:

There are things you’re subject to, and things you can take as object. For example, I used to do things like cry when an ambulance went by with its siren on, or say “ouch!” when I put a plate away and it went “clink”, yet I wasn’t aware that I was sensitive to sounds. If asked, “Are you sensitive to sounds?” I’d have said “No.” I did avoid certain sounds in local hill-climby ways, like making music playlists with lots of low strings but no trumpets, or not hanging out with people who speak loudly. But I didn’t “know” I was doing these things; I was *subject* to my sound sensitivity. I could not take it as *object*, so I couldn’t deliberately design my daily life to account for it. Now that I can take my sound sensitivity (and many related things) as object, I’m in a much more powerful position. And it *terrifies* me that I went a quarter of a century without recognizing these basic facts of my experience. It terrifies me even more when I imagine an AI researcher being subject to some similarly crucial thing about how agents work. I would very much like to know what other basic facts of my experience I remain unaware of. I would like to know how to find out what I am currently unable to take as object.

On a related note, you know how an awful lot of people in our community are autistic? It seems to me that our community is subject to this fact. (It also seems to me that many individual people in our community remain subject to most of their autistic patterns, and that this is more like the rule than the exception.) I would like to know what’s going on here, and whether some other state of affairs would be preferable, and how to instantiate that state of affairs.

Why do so many people seem to wait around for other people to teach them things, even when they seem to be trying very hard to learn? Do they think they need permission? Do they think they need authority? What are they protecting? Am I inadvertently destroying it when I try to figure things out for myself? What stops people from interrogating the world on their own terms?

I get an awful lot of use out of asking myself questions. I think I’m unusually good at doing this, and that I know a few other people with this property. I suspect that the really useful thing isn’t so much the questions, as whatever I’m doing with my mind most of the time that allows me to ask good questions. I’d like to know what other people are doing with their minds that prevents this, and whether there’s a different thing to do that’s better.

What is “quality”?

Suppose religion is symbiotic, and not just parasitic. What exactly is it doing for people? How is it doing those things? Are there specific problems it’s solving? What are the problems? How can we solve those problems without tolerating the damage religion causes?

[Some spoilers for bits of the premise of A Fire Upon The Deep and other stories in that sequence.] There’s this alien race in Vernor Vinge books called the Tines. A “person” of the Tines species looks at first like a pack of several animals. The singleton members that make up a pack use high-frequency sound, rather than chemical neurotransmitters, to think as one mind. The singleton members of a pack age, so when one of your singletons dies, you adopt a new singleton. Since singletons are all slightly different and sort of have their own personalities, part of personal health and hygiene for Tines involves managing these transitions wisely. If you do a good job — never letting several members die in quick succession, never adopting a singleton that can’t harmonize with the rest of you, taking on new singletons before the oldest ones lose the ability to communicate — then you’re effectively immortal. You just keep amassing new skills and perspectives and thought styles, without drifting too far from your original intentions. If you manage the transitions poorly, though — choosing recklessly, not understanding the patterns an old member has been contributing, participating in a war where several of your singletons may die at once — then your mind could easily become suddenly very different, or disorganized and chaotic, or outright insane, in a way you’ve lost the ability to recover from. I think about the Tines a lot when I experiment with new ways of thinking and feeling. I think much of rationality poses a similar danger to the one faced by the Tines. So I’d like to know what practices constitute personal health and hygiene for cognitive growth and development in humans.

What is original seeing? How does it work? When is it most important? When is it the wrong move? How can I become better at it? How can people who are worse at it than I am become better at it?

In another thread, Adam made a comment that I thought was fantastic. I typed to him, “That comment is fantastic!” As I did so, I noticed that I had an option about how to relate to the comment, and to Adam, when I felt a bid from somewhere in my mind to re-phrase as, “I really like that comment,” or, “I enjoyed reading your comment,” or “I’m excited and impressed by your comment.” That bid came from a place that shares a lot of values with LessWrong-style rationalists, and 20th century science, and really with liberalism in general. It values objectivity, respect, independence, autonomy, and consent, among other things. It holds map-territory distinctions and keeps its distance from the world, in an attempt to see all things clearly. But I decided to stand behind my claim that “the comment is fantastic”. I did not “own my experience”, in this case, or highlight that my values are part of me rather than part of the world. I have a feeling that something really important is lost in the careful distance we keep all the time from the world and from each other. Something about the power to act, to affect each other in ways that create small-to-mid-sized superorganisms like teams and communities, something about tending our relationship to the world so that we don’t float off in bubbles of abstraction. Whatever that important thing is, I want to understand it. And I want to protect it, and to incorporate it into my patterns of thought, without losing all I gain from cold clarity and distance.

I would like to think more clearly, especially when it seems important to do so. There are a lot of things that might affect how clearly you think, some of which are discussed in the Sequences. For example, one common pattern of muddy thought is rationalization, so one way to increase your cognitive clarity is to stop completely ignoring the existence of rationalization. I’ve lately been interested in a category of clarity-increasing thingies that might be sensibly described as “the relationship between a cognitive process and its environment”. By “environment”, I meant to include several things:

  • The internal mental environment: the cognitive and emotional situation in which a thought pattern finds itself. Example: When part of my mind is trying to tally up how much money I spent in the past month, and local mental processes desperately want the answer to be “very little” for some reason, my clarity of thought while tallying might not be so great. I expect that well maintained internal mental environments — ones that promote clear thinking — tend to have properties like abundance, spaciousness, and groundedness.
  • The internal physical environment: the physiological state of a body. For example, hydration seems to play a shockingly important role in how well I maintain my internal mental environment while I think. If I’m trying to solve a math problem and have had nothing to drink for two hours, it’s likely I’m trying to work in a state of frustration and impatience. Similar things are true of sleep and exercise.
  • The external physical environment: the sensory info coming in from the outside world, and the feedback patterns created by external objects and perceptual processes. When I’ve been having a conversation in one room, and then I move to another room, it often feels as though I’ve left half my thoughts behind. I think this is because I’m making extensive use of the walls and couches and such in my computations. I claim that one’s relationship to the external environment can make more or less use of the environment’s supportive potential, and that environments can be arranged in ways that promote clarity of thought.
  • The social environment: people, especially frequently encountered ones. The social environment is basically just part of the external physical environment, but it’s such an unusual part that I think it ought to be singled out. First of all, it has powerful effects on the internal mental environment. The phrase “politics is the mind killer” means something like “if you want to design the social environment to maximize muddiness of thought, have I got a deal for you”. Secondly, other minds have the remarkable property of containing complex cognitive processes, which are themselves situated in every level of environment. If you’ve ever confided in a close, reasonable friend who had some distance from your own internal turmoil, you know what I’m getting at here. I’ve thought a lot lately about how to build a “healthy community” in which to situate my thoughts. A good way to think about what I’m trying to do is that I want to cultivate the properties of interpersonal interaction that lead to the highest quality, best maintained internal mental environments for all involved.

What is "groundedness"?

I built a loft bed recently. Not from scratch, just Ikea-style. When I was about halfway through the process, I realized that I’d put one of the panels on backward. I’d made the mistake toward the beginning, so there were already many pieces screwed into that panel, and no way to flip it around without taking the whole bed apart again. At that point, I had a few thoughts in quick succession:

  • I really don’t want to take the whole bed apart and put it back together again.
  • Maybe I could unscrew the pieces connected to that panel, then carefully balance all of them while I flip the panel around? (Something would probably break if I did that.)
  • You know what, maybe I don’t want a dumb loft bed anyway.

It so happens that in this particular case, I sighed, took the bed apart, carefully noted where each bit was supposed to go, flipped the panel around, and put it all back together again perfectly. But I’ve certainly been in similar situations where for some reason, I let one mistake lead to more mistakes. I rushed, broke things, lost pieces, hurt other people, or gave up. I’d like to know what circumstances obtain when I get this right, and what circumstances obtain when I don’t. Where can I get patience, groundedness, clarity, gumption, and care?

I’ve developed a taste for reading books that I hate. I like to try on the perspective of one author after another, authors with whom I think I have really fundamental disagreements about how the world works, how one ought to think, and whether yellow is really such a bad color after all. There’s a generalized version of “reading books you hate” that I might call “perceptual dexterity”, or I might call “the ground of creativity”, which is something like having a thousand prehensile eye-stalks in your mind, and I think prehensile eye-stalks are pretty cool. But I also think it’s generally a good idea to avoid reading books you hate, because your hatred of them is often trying to protect you from “your self and worldview falling apart”, or something. I’d like to know whether my self and worldview are falling apart, or whatever. And if not, I’d like to know whether I’m doing something to prevent it that other people could learn to do, and whether they’d thereby gain access to a whole lot more perspectives from which they could triangulate reality.

What are you reading?

Goodreads reveals many books with the title "borrowed time." Who's the author?

TurnTrout's shortform feed

I’m realizing how much more risk-neutral I should be:

Paul Samuelson... offered a colleague a coin-toss gamble. If the colleague won the coin toss, he would receive $200, but if he lost, he would lose $100. Samuelson was offering his colleague a positive expected value with risk. The colleague, being risk-averse, refused the single bet, but said that he would be happy to toss the coin 100 times! The colleague understood that the bet had a positive expected value and that across lots of bets, the odds virtually guaranteed a profit. Yet with only one trial, he had a 50% chance of regretting taking the bet.

Notably, Samuelson's colleague doubtless faced many gambles in life… He would have fared better in the long run by maximizing his expected value on each decision... all of us encounter such “small gambles” in life, and we should try to follow the same strategy. Risk aversion is likely to tempt us to turn down each individual opportunity for gain. Yet the aggregated risk of all of the positive expected value gambles that we come across would eventually become infinitesimal, and potential profit quite large.
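For concreteness, a quick check of the arithmetic behind that anecdote (my own illustration, computed directly from the bet as stated):

```python
from math import comb

# Samuelson's bet: win $200 with probability 1/2, lose $100 otherwise.
ev_single = 0.5 * 200 - 0.5 * 100
print(f"EV of a single bet: ${ev_single:.0f}")  # $50

# Over n independent bets you come out behind iff
# 200*wins - 100*(n - wins) < 0, i.e. wins < n/3.
# For n = 100 that means 33 or fewer wins.
n = 100
p_net_loss = sum(comb(n, w) for w in range(34)) / 2**n
print(f"P(net loss over {n} bets): {p_net_loss:.6f}")  # roughly 0.0004
```

So the colleague's "happy to toss the coin 100 times" position trades a 50% chance of regret on a single bet for a well-under-0.1% chance of ending up behind across the hundred.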

We run the Center for Applied Rationality, AMA

This isn’t a direct answer to, “What are the LessWrong posts that you wish you had the time to write?” It is a response to a near-by question, though, which is probably something along the lines of, “What problems are you particularly interested in right now?” which is the question that always drives my blogging. Here’s a sampling, in no particular order.

[edit: cross-posted to Ray's Open Problems post.]

There are things you’re subject to, and things you can take as object. For example, I used to do things like cry when an ambulance went by with its siren on, or say “ouch!” when I put a plate away and it went “clink”, yet I wasn’t aware that I was sensitive to sounds. If asked, “Are you sensitive to sounds?” I’d have said “No.” I did avoid certain sounds in local hill-climby ways, like making music playlists with lots of low strings but no trumpets, or not hanging out with people who speak loudly. But I didn’t “know” I was doing these things; I was *subject* to my sound sensitivity. I could not take it as *object*, so I couldn’t deliberately design my daily life to account for it. Now that I can take my sound sensitivity (and many related things) as object, I’m in a much more powerful position. And it *terrifies* me that I went a quarter of a century without recognizing these basic facts of my experience. It terrifies me even more when I imagine an AI researcher being subject to some similarly crucial thing about how agents work. I would very much like to know what other basic facts of my experience I remain unaware of. I would like to know how to find out what I am currently unable to take as object.

On a related note, you know how an awful lot of people in our community are autistic? It seems to me that our community is subject to this fact. (It also seems to me that many individual people in our community remain subject to most of their autistic patterns, and that this is more like the rule than the exception.) I would like to know what’s going on here, and whether some other state of affairs would be preferable, and how to instantiate that state of affairs.

Why do so many people seem to wait around for other people to teach them things, even when they seem to be trying very hard to learn? Do they think they need permission? Do they think they need authority? What are they protecting? Am I inadvertently destroying it when I try to figure things out for myself? What stops people from interrogating the world on their own terms?

I get an awful lot of use out of asking myself questions. I think I’m unusually good at doing this, and that I know a few other people with this property. I suspect that the really useful thing isn’t so much the questions, as whatever I’m doing with my mind most of the time that allows me to ask good questions. I’d like to know what other people are doing with their minds that prevents this, and whether there’s a different thing to do that’s better.

What is “quality”?

Suppose religion is symbiotic, and not just parasitic. What exactly is it doing for people? How is it doing those things? Are there specific problems it’s solving? What are the problems? How can we solve those problems without tolerating the damage religion causes?

[Some spoilers for bits of the premise of A Fire Upon The Deep and other stories in that sequence.] There’s this alien race in Vernor Vinge books called the Tines. A “person” of the Tines species looks at first like a pack of several animals. The singleton members that make up a pack use high-frequency sound, rather than chemical neurotransmitters, to think as one mind. The singleton members of a pack age, so when one of your singletons dies, you adopt a new singleton. Since singletons are all slightly different and sort of have their own personalities, part of personal health and hygiene for Tines involves managing these transitions wisely. If you do a good job — never letting several members die in quick succession, never adopting a singleton that can’t harmonize with the rest of you, taking on new singletons before the oldest ones lose the ability to communicate — then you’re effectively immortal. You just keep amassing new skills and perspectives and thought styles, without drifting too far from your original intentions. If you manage the transitions poorly, though — choosing recklessly, not understanding the patterns an old member has been contributing, participating in a war where several of your singletons may die at once — then your mind could easily become suddenly very different, or disorganized and chaotic, or outright insane, in a way you’ve lost the ability to recover from. I think about the Tines a lot when I experiment with new ways of thinking and feeling. I think much of rationality poses a similar danger to the one faced by the Tines. So I’d like to know what practices constitute personal health and hygiene for cognitive growth and development in humans.

What is original seeing? How does it work? When is it most important? When is it the wrong move? How can I become better at it? How can people who are worse at it than I am become better at it?

In another thread, Adam made a comment that I thought was fantastic. I typed to him, “That comment is fantastic!” As I did so, I noticed that I had an option about how to relate to the comment, and to Adam, when I felt a bid from somewhere in my mind to re-phrase as, “I really like that comment,” or, “I enjoyed reading your comment,” or “I’m excited and impressed by your comment.” That bid came from a place that shares a lot of values with LessWrong-style rationalists, and 20th century science, and really with liberalism in general. It values objectivity, respect, independence, autonomy, and consent, among other things. It holds map-territory distinctions and keeps its distance from the world, in an attempt to see all things clearly. But I decided to stand behind my claim that “the comment is fantastic”. I did not “own my experience”, in this case, or highlight that my values are part of me rather than part of the world. I have a feeling that something really important is lost in the careful distance we keep all the time from the world and from each other. Something about the power to act, to affect each other in ways that create small-to-mid-sized superorganisms like teams and communities, something about tending our relationship to the world so that we don’t float off in bubbles of abstraction. Whatever that important thing is, I want to understand it. And I want to protect it, and to incorporate it into my patterns of thought, without losing all I gain from cold clarity and distance.

I would like to think more clearly, especially when it seems important to do so. There are a lot of things that might affect how clearly you think, some of which are discussed in the Sequences. For example, one common pattern of muddy thought is rationalization, so one way to increase your cognitive clarity is to stop completely ignoring the existence of rationalization. I’ve lately been interested in a category of clarity-increasing thingies that might be sensibly described as “the relationship between a cognitive process and its environment”. By “environment”, I meant to include several things:

  • The internal mental environment: the cognitive and emotional situation in which a thought pattern finds itself. Example: When part of my mind is trying to tally up how much money I spent in the past month, and local mental processes desperately want the answer to be “very little” for some reason, my clarity of thought while tallying might not be so great. I expect that well maintained internal mental environments — ones that promote clear thinking — tend to have properties like abundance, spaciousness, and groundedness.
  • The internal physical environment: the physiological state of a body. For example, hydration seems to play a shockingly important role in how well I maintain my internal mental environment while I think. If I’m trying to solve a math problem and have had nothing to drink for two hours, it’s likely I’m trying to work in a state of frustration and impatience. Similar things are true of sleep and exercise.
  • The external physical environment: the sensory info coming in from the outside world, and the feedback patterns created by external objects and perceptual processes. When I’ve been having a conversation in one room, and then I move to another room, it often feels as though I’ve left half my thoughts behind. I think this is because I’m making extensive use of the walls and couches and such in my computations. I claim that one’s relationship to the external environment can make more or less use of the environment’s supportive potential, and that environments can be arranged in ways that promote clarity of thought (see Adam’s notes on the design of the CFAR venue, for instance).
  • The social environment: people, especially frequently encountered ones. The social environment is basically just part of the external physical environment, but it’s such an unusual part that I think it ought to be singled out. First of all, it has powerful effects on the internal mental environment. The phrase “politics is the mind killer” means something like “if you want to design the social environment to maximize muddiness of thought, have I got a deal for you”. Secondly, other minds have the remarkable property of containing complex cognitive processes, which are themselves situated in every level of environment. If you’ve ever confided in a close, reasonable friend who had some distance from your own internal turmoil, you know what I’m getting at here. I’ve thought a lot lately about how to build a “healthy community” in which to situate my thoughts. A good way to think about what I’m trying to do is that I want to cultivate the properties of interpersonal interaction that lead to the highest quality, best maintained internal mental environments for all involved.

I built a loft bed recently. Not from scratch, just Ikea-style. When I was about halfway through the process, I realized that I’d put one of the panels on backward. I’d made the mistake toward the beginning, so there were already many pieces screwed into that panel, and no way to flip it around without taking the whole bed apart again. At that point, I had a few thoughts in quick succession:

  • I really don’t want to take the whole bed apart and put it back together again.
  • Maybe I could unscrew the pieces connected to that panel, then carefully balance all of them while I flip the panel around? (Something would probably break if I did that.)
  • You know what, maybe I don’t want a dumb loft bed anyway.

It so happens that in this particular case, I sighed, took the bed apart, carefully noted where each bit was supposed to go, flipped the panel around, and put it all back together again perfectly. But I’ve certainly been in similar situations where for some reason, I let one mistake lead to more mistakes. I rushed, broke things, lost pieces, hurt other people, or gave up. I’d like to know what circumstances obtain when I get this right, and what circumstances obtain when I don’t. Where can I get patience, groundedness, clarity, gumption, and care?

What is "groundedness"?

I’ve developed a taste for reading books that I hate. I like to try on the perspective of one author after another, authors with whom I think I have really fundamental disagreements about how the world works, how one ought to think, and whether yellow is really such a bad color after all. There’s a generalized version of “reading books you hate” that I might call “perceptual dexterity”, or I might call “the ground of creativity”, which is something like having a thousand prehensile eye-stalks in your mind, and I think prehensile eye-stalks are pretty cool. But I also think it’s generally a good idea to avoid reading books you hate, because your hatred of them is often trying to protect you from “your self and worldview falling apart”, or something. I’d like to know whether my self and worldview are falling apart, or whatever. And if not, I’d like to know whether I’m doing something to prevent it that other people could learn to do, and whether they’d thereby gain access to a whole lot more perspective from which they could triangulate reality.

What are you reading?

Borrowed Time: The Science of How and Why We Age, by Sue Armstrong.

On aging. Very readable. Pretty thorough.

I would have loved for Einstein to have written it, etc. But it's very much good enough.
