I’m an eccentric existentialist philosopher and education mindset user, specializing in applied metacognition. I’ve devoted most of my waking time to studying and addressing problems of the mind, influenced in large part by the aspiring rationality movement.

Right now I am determined to prove that Earth can do better than the status quo, that there is a way to unlock the vast collective human potential that is currently stuck in ignorance and pointless conflict.

Over the past decade or so, I've compiled a toolbox of foundational concepts to help people express in the simplest possible terms what matters most, so they can understand each other's values and frame situations constructively. Thus empowered with a starting point for effective collaboration, we can build a world we can all be proud of.

Best regards,

Extradimensional Cephalopod a.k.a. ExCeph, a.k.a. XF, a.k.a. “a handsome, brooding Cthulhu” (website: https://wordpress.com/view/ginnungagapfoundation.wordpress.com)



(Made a few cosmetic tweaks to make some sentences less awkward.)  

This seems like a good analysis of how a person can use what I call the mindsets of reputation and clarification.  

Reputation mindset combines the mindsets of strategy and empathy, and it deals with fortifying impressions.  It can help one become aware of emotional associations that others may have with things one plans to say or do.  That way one can avoid unintended associations where possible, or preemptively build up positive emotional associations to counteract negative ones that can't be avoided, such as by demonstrating that one understands and supports someone's values before criticizing the methods they use to pursue those values.  

Clarification mindset combines strategy mindset and semantics mindset, and it deals with explicit information.  It helps people provide context and key details to circumvent unintended interpretations of labels and rules, or at least the most likely misinterpretations in a particular context.  

(Reputation and clarification make up the strategy side of presentation mindset.  Presentation deals with ambiguity in general, and the strategy side handles robust communication.)  

These are powerful tools, and it's helpful to have characterizations of them and examples of use cases.  Nicely done!  

1. Ah, now I see. Yes, removing assumptions is one good way to direct one's use of synthesis mindset. It helps with exploring the possibilities.

2. Organization can gather information efficiently, but integrating it all and catching contradictions is normally a job more suited for analysis. It's still possible to combine the two. That can end up forming strategy or something similar, or it could be viewed as using the mindsets separately to support each other.

Does that make sense?

Thanks for the input!

1. You mean we can fiddle with the explicit assumptions we use with synthesis mindset? That can help, but to get the full benefit of synthesis I find it's often better to let go of explicit assumptions, and then apply other mindsets with those explicit assumptions to the results yielded by synthesis.

Otherwise our explicit assumptions may cause synthesis to miss hypotheses that ultimately point us in a helpful direction, even though the hypothesis itself violates the explicit assumptions. Sometimes the issue is that we make too many assumptions and need to remove some of them, and practicing synthesis is a good way to do that. Does that address your point?

2. I'm not sure what you mean by replacing the goal of 'utility' with information. Can you please elaborate on that?

3. Fixed, thanks. Not sure how the goats got in there, but I'll check the latch on the gate.

4. That's encouraging. I'll stand by for more feedback. Glad you liked it!

I confess, your comment surprised me by calling for a different epistemic standard than I figured this article required. I had to unpack and address several issues, listed below.

  1. I can make a bibliography from the links I’ve already included, if it would help.
  2. Are there any specific assertions in this article that you think call for more evidence to support them over the alternatives?
  3. This article is meant to build the foundation for explaining the concepts that we'll be working with in the next article. After that article, we'll mostly be using those concepts instead. Those will be supported by your own observations of how people learn different skills with varying degrees of difficulty.
  4. I didn't know how much of the theory I was building on would be taken as a given in this community, so I decided to just post and see what wasn't already part of the general LW paradigm. I’d like to hear from more people before I make any judgment calls.
  5. These ideas at this point in the sequence are not intended to make new predictions that would require the introduction of new evidence. They are intended to help the reader more clearly and efficiently conceptualize the information they already have. This article asserts that some ideas are conceptually distinct from each other and others aren’t, which is not an empirical issue. The technical terms I introduce in the article are a condensation and consolidation of existing ideas, so that people can more easily process and apply new information. I predict that as I continue to explain the paradigms I’ve developed, they will be consistent with each other and with empirical evidence, and that the reader will develop a more elegant perspective which will allow them to apply their knowledge more effectively. It may be that I need to make that more clear in future articles.
  6. In order to think effectively, there are many concepts we can and must learn and apply without relying on the scientific establishment to do experiments for us.

Does that all make sense? I'll work on framing future articles so that it's clear when they are making empirical predictions from evidence and when they are presenting a concept as being better than other concepts at carving reality at its joints.

Practice with different example problems is indeed important for helping people internalize the principles behind the skills they're learning. However, just being exposed to these problems doesn't always mean a person figures out what those principles are. Lack of understanding of the principles usually means a person finds it difficult to learn the skill and even more difficult to branch out to similar skills.

However, if we can explicitly articulate those principles in a way people can understand, such as illustrating them with analogies or stories, then people have the foundation to actually get the benefits from the practice problems.

For example, let's say you see numbers being sorted into Category A or Category B. Even with a large set of data, if you have no mathematical education, you could spend a great deal of effort without figuring out what rule is being used to determine which category a number belongs in. You wouldn't be able to predict the category of a given number. To succeed, you would have to derive concepts like square numbers or prime numbers from scratch, which would take most people longer than they're willing or able to spend. However, if you're already educated on such concepts, you have tools to help you form hypotheses and mental models much more easily.
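To make the example concrete, here's a minimal sketch of that idea in Python. The hidden rule, the candidate hypotheses (primes vs. squares), and the observed data are all assumptions invented for illustration; the point is just that someone who already knows concepts like "prime" and "square" can test ready-made hypotheses against the data instead of deriving those concepts from scratch.

```python
import math

# Candidate concepts an educated observer can draw on.
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def is_square(n):
    return n >= 0 and math.isqrt(n) ** 2 == n

# Hypothetical observations: numbers already sorted into Category A or B.
observed = {2: "A", 3: "A", 4: "B", 5: "A", 9: "B", 11: "A", 16: "B"}

# Each hypothesis predicts "A" when its concept applies, "B" otherwise.
candidates = {"prime -> A": is_prime, "square -> A": is_square}

# A hypothesis survives only if it reproduces every observation.
for name, rule in candidates.items():
    if all(("A" if rule(n) else "B") == cat for n, cat in observed.items()):
        print(f"consistent hypothesis: {name}")
```

Without the prepackaged concepts, the observer would have to invent `is_prime` or `is_square` from raw data before any hypothesis could even be stated, which is the gap the article's framework is meant to close.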

The objective here is to provide a basic conceptual framework for at least being aware of all aspects of all types of problems, not just easily quantifiable ones like math problems. If you can put bounds on them, you are better equipped to develop more advanced and specific skills to investigate and address them.

And yes, experiments on the method's effectiveness may be very difficult to design and run. I tend to measure effectiveness by whether people can grasp concepts they couldn't before, and whether they can apply those concepts with practice to solve problems they couldn't before. That's proof of concept enough for me to work on scaling it up.

Does that answer your question?

With finesse, it's possible to combine the techniques of truth-seeking with friendliness and empathy so that the techniques work even when the person you're talking to doesn't know them. That's a good way to demonstrate the effectiveness of truth-seeking techniques.

It's easiest to use such finesse on the individual level, but if you can identify general concepts which help you understand and create emotional safety for larger groups of people, you can scale it up. Values conversations require at least one of the parties involved to have an understanding of value-space, so they can recognize and show respect for how other people prioritize different values even as they introduce alternative priority ordering. Building a vocabulary for understanding value-space to enable productive values conversations on the global scale is one of my latest projects.

Yes, that's exactly what I meant, and that's a great clarification. I do prefer looking at the long-term expected utility of a decision, as a sort of Epicurean ideal. (I'm still working on being able to resist the motivation of relaxation, though.)

The specific attributes I was referring to in that sentence are three out of what I call the four primary attributes:

  • Initiative (describes how much one relies on environmental conditions to prompt one to start pursuing a goal)
  • Resilience (describes how much one relies on environmental conditions to allow one to continue pursuing a goal)
  • Mobility (describes how rapidly one can effectively change the parameters of one's efforts)
  • Intensity (describes how far one can continue pushing the effects of one's efforts)

I had only been using intensity, since I didn't know about the others and hadn't developed them naturally. Since the four combined are stronger than the sum of their separate effects, I was stuck at less than 25% of my theoretical maximum effectiveness.

The deep differences in worldview that you refer to are something that I've noticed as well. The different mindsets people use inform what aspects of the world they are aware of, but when those awarenesses don't overlap enough, conflict seems almost inevitable.

I agree that knowing our utility functions is also important. For one thing, it helps with planning. For another, it lets us resist being controlled by our motivations, which can happen if we get too attached to them, or if we are only responsive to one or two of them. (That may have been what you meant by "exercising agency"?) "Eschatology" is an interesting way of phrasing that. It puts me in mind of the fundamental liabilities that threaten all goals. I wish we taught people growing up how to both accept and manage those liabilities.

I'll be writing a sequence elaborating on all of these concepts, which I've been applying in order to become more capable.
