Comments

Thanks for the info, PJ!

PCT looks very interesting and your EPIC goal framework strikes me as intuitively plausible. The current list of IGs that we reference is not so much part of CT as an empirical finding from our limited experience building CT charts. Neither Geoff nor I believe that all of them are actually intrinsic. It is entirely possible that we and our subjects are simply insufficiently experienced to penetrate below them. It looks like I've got a lot of reading to do :-)

Hey Peter,

Thanks for writing this.

I’m the primary researcher working on Connection Theory at Leverage. I don’t have time at the moment to give an in-depth argument for why I consider CT to be worth investigating, but I will briefly respond to your post:

Objections I & II:

I think that your skeptical position is reasonable given your current state of knowledge. I agree that the existing CT documents do not make a persuasive case.

The CT research program has not yet begun. The evidence presented in the CT documents is from preliminary investigations carried out shortly after the theory’s creation, when Geoff was working on his own.

My current plan is as follows: Come to understand CT and how to apply it well enough to design (and be able to carry out) a testing methodology that will provide high-quality evidence. Perform some preliminary experiments. If the results are promising, create training materials and programs that produce researchers who reliably generate the same charts, predictions and recommendations from the same data. Recruit many aspiring researchers. Train many researchers. Begin large-scale testing.

Objection III:

I agree that a casual reading of CT suggests that it conflicts with existing science. I thought so as well and initially dismissed the theory for just that reason. Several extended conversations with Geoff and the experience of having my CT chart created convinced me otherwise. Very briefly:

The brain is complicated and the relationship between brain processes and our everyday experience of acting and updating is poorly understood. Since CT is trying to be a maximally elegant theory of just these things, it does not attempt to say anything one way or the other about the brain and so, strictly speaking, does not predict that beliefs can be changed by modifying the brain. That said, it is easy to specify a theory, which we might call CT’, that is identical in every respect except that it allows beliefs to be modified directly by altering the brain.

“Elegant updating” is imprecisely defined in the current version of CT. This is definitely a problem with the theory. That said, I don’t think the concept is hopelessly imprecise. For one, elegant updating as defined by CT does not mean ideal Bayesian updating. One of the criteria of elegance is that the update involve the fewest changes from the previous set of beliefs. This means that a less globally-elegant theory may be favored over a more globally-elegant theory due to path-dependence. This introduces another source of less-than-optimally-rational beliefs. If we imagine a newly formed CT-compliant mind with a very minimal belief system updating in accordance with this conception of elegance and the constraints from its intrinsic goods (IGs), I think we should actually expect its beliefs to be totally insane, even more so than the H&B literature would suggest. Of course, we will need to do research in developmental psychology to confirm this suspicion.
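
To make the path-dependence point a little more concrete, here is a toy sketch of my own; it is not part of CT, and the assumption names (x, y, z), the evidence labels (E1, E2) and the use of assumption-counting as a stand-in for elegance are all simplifications I am inventing for illustration. A "theory" is a set of assumptions, and updating picks, from the theories compatible with all the evidence so far, the one requiring the fewest changes from the current theory:

```python
from itertools import combinations

# Toy model only: a "theory" is a set of assumptions, elegance is crudely
# proxied by assumption count, and a theory explains a piece of evidence if
# it contains at least one of the assumptions listed for that evidence.
ASSUMPTIONS = ("x", "y", "z")
EXPLAINS = {
    "E1": {"x", "z"},  # E1 is explained by x, or by the unifying assumption z
    "E2": {"y", "z"},  # E2 is explained by y, or by z
}

def all_theories():
    """Every subset of the assumption pool."""
    return [frozenset(c) for r in range(len(ASSUMPTIONS) + 1)
            for c in combinations(ASSUMPTIONS, r)]

def explains_all(theory, evidence):
    return all(theory & EXPLAINS[e] for e in evidence)

def update(current, evidence):
    """Among theories explaining all the evidence so far, prefer the fewest
    changes from the current theory, breaking ties by fewer assumptions."""
    candidates = [t for t in all_theories() if explains_all(t, evidence)]
    return min(candidates, key=lambda t: (len(t ^ current), len(t)))

# Agent A happened to see E1 first and adopted the local explanation {x};
# that early commitment is the path-dependent accident.
agent_a = update(frozenset({"x"}), ["E1", "E2"])
print(sorted(agent_a))  # ['x', 'y'] -- still two assumptions after seeing E2

# Agent B starts from no beliefs and sees E1 and E2 together.
agent_b = update(frozenset(), ["E1", "E2"])
print(sorted(agent_b))  # ['z'] -- the globally more elegant single assumption
```

With these toy rules, the agent that committed to {x} early never switches to the single unifying assumption z, while the agent that sees the same evidence all at once lands on it directly. (The sequential agent could equally have ended at {x, z}, which ties with {x, y} under this rule; either way it is stuck with two assumptions.) Nothing here is meant to capture CT’s actual notion of elegance; it only illustrates how a fewest-changes constraint can block convergence to the globally most elegant option.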

It is surprisingly easy to explain many common biases within the CT framework. The first bias you mentioned, scope insensitivity, is an excellent example. Studies have shown that the amount people are willing to donate to save 2,000 birds is about the same as what they are willing to donate to save 200,000 birds. Why might that be?

According to CT, people only care about something if it is part of a path to one of their IGs. The IGs we’ve observed so far are mostly about particular relationships with other people, group membership, social acceptance, pleasure, and sometimes ideal states of the world (world-scale IGs or WSIGs) such as world peace, universal harmony or universal human flourishing. Whether or not many birds (or even humans!) die in the short term is likely to be totally irrelevant to whether or not a person’s IGs are eventually fulfilled. Even WSIGs are unlikely to compel donation unless the person believes their donation to be a necessary part of a strategy in which a very large number of people donate (and thus produce the desired state). It just isn’t very plausible that your individual attempt to save a small number of lives through donation will be critical to the eventual achievement of universal flourishing (for example). That leaves social acceptance as the next most likely explanation for donation. Since the number of social points people get from donating tends not to scale very well, there is no reason to expect the amount that they donate to scale. This is not the only possible CT-compliant explanation for scope insensitivity, but my guess is that it is the most commonly applicable.

I’ll close by saying that, like Geoff, I do not believe that CT is literally true. My current belief is that it is worthy of serious investigation, and that the approach to psychology it has inspired (that of mapping out individual beliefs and actions in a detailed and systematic manner) will be of great value even if the theory itself turns out not to be.

I'll chime in to agree both with lukeprog in pointing out that the interview is very outdated and with Holden in correcting Louie's account of the circumstances surrounding it.

Awesome, I'm very interested in sharing notes, particularly since you've been practicing meditation a lot longer than I have.

I'd love to chat with you on Skype if you have the time. Feel free to send me an email at jasen@intelligence.org if you'd like to schedule a time.

First of all, thank you so much for posting this. I've been contemplating composing a similar post for a while now but haven't because I did not feel like my experience was sufficiently extensive or my understanding was sufficiently deep. I eagerly anticipate future posts.

That said, I'm a bit puzzled by your framing of this domain as "arational." Rationality, at least as LW has been using the word, refers to the art of obtaining true beliefs and making good decisions, not to following any particular method. Your attitude and behavior with regard to your "mystical" experiences seem far more rational than both the hasty enthusiasm and the reflexive dismissal that are more common. Most of what my brain does might as well be magic to me. The suggestion that ideas spoken to you by glowing spirit animals should be evaluated in much the same way as ideas that arise in less fantastic (though often no less mysterious) ways seems quite plausible and worthy of investigation. You seem to have done a good job of keeping your eye on the ball by focusing on the usefulness of these experiences without accepting poorly thought out explanations of their origins.

It may be the case that we have the normative, mathematical description of what rationality looks like down really well, but that doesn't mean we have a good handle on how best to approximate this using a human brain. My guess is that we've only scratched the surface. Peak or "mystical" experiences, much like AI and meta-ethics, seem to be a domain in which human reasoning fails more reliably than average. Applying the techniques of X-Rationality to this domain with the assumption that all of reality can be understood and integrated into a coherent model seems like a fun and potentially lucrative endeavor.

So now, in traditional LW style, I shall begin my own contribution with a quibble and then share some related thoughts:

"Many of them come from spiritual, religious or occult sources, and it can be a little tricky to tease apart the techniques from the metaphysical beliefs (the best case, perhaps, is the Buddhist system, which holds (roughly) that the unenlightened mind can't truly understand reality anyway, so you'd best just shut up and meditate)."

As far as I understand it, the Buddhist claim is that the unenlightened mind fails to understand the nature of one particular aspect of reality: its own experience of the world and relationship to it. One important goal of what is typically called "insight meditation" seems to be to cause people to really grok that the map is not the territory when it comes to the category of "self." What follows is my own, very tentative, model of "enlightenment":

As you strive to dissect your momentary experience in greater and greater detail, the process by which certain experiences are labeled "self" and others "not-self" becomes apparent. It also becomes apparent that the creation of this sense of a separate self is at least partially responsible for the rejection of, or "flinching away" from, certain aspects of your sensory experience, and that this is one of the primary causes of suffering (which seems to be something like "mental conflict"). I understand "enlightenment" as the final elimination (rather than just suppression) of this tendency to "shoot the messenger." This possibility is extremely intriguing to me because it seems like it should eliminate not only suffering but also what might be the single most important source of "wireheading" behaviors in humans. People who claim to have achieved it say it's about as difficult as getting an M.D. Seems worthy of investigation.

Rather than go on and on here, I think it's about time I organized my experience and research into a top-level post.

Attention: Anyone still interested in attending the course must get their application in by midnight on Friday the 8th of April. I would like to make the final decision about who to accept by mid-April and need to finish interviewing applicants before then.

But "produc[ing] formidable rationalists" sounds like it's meant to make the world better in a generalized way, by producing people who can shine the light of rationality into every dark corner, et cetera.

Precisely. The Singularity Institute was founded due to Eliezer's belief that trying to build FAI was the best strategy for making the world a better place. That is the goal. FAI is just a sub-goal. There is still consensus that FAI is the most promising route, but it does not seem wise to put all of our eggs in one basket. We can't do all of the work that needs to be done within one organization and we don't plan to try.

Through programs like Rationality Boot Camp, we expect to identify people who really care about improving the world and radically increase their chances of coming to correct conclusions about what needs to be done and then actually doing so. Not only will more highly motivated, rational people improve the world at a much faster rate, but they will also serve as checks on our sanity. I don't expect that we are sufficiently sane at the moment to reliably solve the world's problems, and we're really going to need to step up our game if we hope to solve FAI. This program is just the beginning. The initial investment is relatively small and, if we can actually do what we think we can, the program should pay for itself in the future. We'd have to be crazy not to try this. It may well be too confusing from a PR perspective to run future versions of the program within SingInst, but if so we can just turn it into its own organization.

If you have concrete proposals for valuable projects that you think we're neglecting and would like to help out with, I would be happy to have a Skype chat and then put you in contact with Michael Vassar.

Good question. I haven't quite figured this out yet, but one solution is to present everyone we are seriously considering with as much concrete information about the activities as we can and then give each of them a fixed number of "outs," each of which can be used to get out of one activity.

Definitely all-consuming.

Definitely apply, but please note your availability in your answer to the "why are you interested in the program?" question.
