Remmelt Ellen

Comments

Some blindspots in rationality and effective altruism

I also noticed I was confused. Feels like we're at least disentangling cases and making better distinctions here.
BTW, just realised that a problem with my triangular prism example is that theoretically no rectangular side can face up parallel to the floor, just two at 60º angles.

But on the other hand, x is not sufficient to spot when we have a new type of die (see previous point), and if we knew more about the dice we could make better estimates, which makes me think that this is epistemic uncertainty.

This is interesting. It seems to ask the question 'Is a change in a quality of x, like colour, actually causal to outcomes y?' The difficulty here is that you can never be fully certain empirically; you can only get closer to [change in roll probability] = 0 in the limit as [number of rolls] -> infinity.
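
(A minimal sketch of that limit, with hypothetical numbers: simulate two dice that differ only in a causally irrelevant quality like colour, and watch the difference in empirical face frequencies shrink toward 0 as the number of rolls grows. The observed gap never proves the true difference is exactly 0; it just bounds it ever more tightly.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Two dice that differ only in colour; their physics (and so their roll
# probabilities) are assumed identical. The observed frequency gap
# shrinks with more rolls, but never certifies a gap of exactly 0.
for n_rolls in [100, 10_000, 1_000_000]:
    red = rng.integers(1, 7, size=n_rolls)
    blue = rng.integers(1, 7, size=n_rolls)
    gap = abs((red == 6).mean() - (blue == 6).mean())
    print(f"{n_rolls:>9} rolls: |P_red(6) - P_blue(6)| ~ {gap:.4f}")
```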

Some blindspots in rationality and effective altruism

Thank you! That was clarifying, especially the explanation of epistemic uncertainty for y.

1. I've been thinking about epistemic uncertainty more in terms of 'possible alternative qualities present', where 

  • you don't know the probability of a certain quality being present for x (e.g. what's the chance of the die having an extended three-sided base?).
  • or might not even be aware of some of the possible qualities that x might have (e.g. you don't know a triangular prism die can exist).

2. Your take on epistemic uncertainty for that figure seems to be

  • you know of x's possible quality dimensions (e.g. relative lengths and angles of sides at corners).
  • but given a set configuration of x (e.g. triangular prism with equilateral triangle sides = 1, rectangular lengths = 2), you don't yet know the probabilities of outcomes for y (what's the probability of landing face up for base1, base2, rect1, rect2, rect3?).
     

Both seem to fit the definition of epistemic uncertainty. Do correct me here!
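
(To make take 2 concrete, here's a minimal sketch with made-up numbers: the die's configuration is known, the five face probabilities are not, and observed rolls shrink that epistemic uncertainty via a simple Bayesian update.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Take 2 as a sketch: the shape (triangular prism with faces base1,
# base2, rect1..rect3) is known, but the landing probabilities are not.
# A uniform Dirichlet prior over the five faces is updated with rolls;
# the posterior mean homes in on the (assumed, illustrative) true values.
faces = ["base1", "base2", "rect1", "rect2", "rect3"]
true_p = np.array([0.05, 0.05, 0.30, 0.30, 0.30])  # hypothetical values
counts = np.ones(5)                                 # Dirichlet(1,...,1) prior

for face in rng.choice(5, size=1000, p=true_p):
    counts[face] += 1

posterior_mean = counts / counts.sum()
for name, est, p in zip(faces, posterior_mean, true_p):
    print(f"{name}: estimated {est:.3f} (true {p:.2f})")
```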
 


Edit: Rough difference in focus:
  1. Recognition and Representation
     vs.
  2. Sampling and Prediction

Some blindspots in rationality and effective altruism

Well-written! Most of this definitely resonates with me.

Quick thoughts:

  • Some of the jargon I've heard sounded plain silly from a making-intellectual-progress perspective (not just implicitly aggrandising). It makes it harder to share our reasoning, even with each other, in a comprehensible, high-fidelity way. I like Rob Wiblin's guide on jargon.
  • Perhaps we put too much emphasis on making explicit communication comprehensible. Might be more fruitful to find ways to recognise how particular communities are set up to be good at understanding or making progress in particular problem niches, even if we struggle to comprehend what they're specifically saying or doing.

(I was skeptical about the claim 'majority of people are explicit utilitarians' – i.e. utilitarian, not just consequentialist or some pluralistic mix of moral views – but EA Survey responses seem to back it up: ~70% utilitarian)

Some blindspots in rationality and effective altruism

This is a good question, hmm. Now that I'm trying to come up with specific concrete cases, I actually feel less confident in this claim.

Examples that did come to mind:

  1. I recall reading somewhere about early LessWrong authors reinventing concepts that had already been worked out in philosophical disciplines (particularly in decision theory?). I can't find any post on this though.
     
  2. More subtly, we use a lot of jargon. Some terms were basically imported from academic research (say, on cognitive biases) and given a shiny new nerdy name that appeals to our in-crowd. In the case of CFAR, I think they were very deliberate about renaming some concepts, also to make them more intuitive for workshop participants (e.g. implementation intentions -> trigger action plans/patterns, pre-mortem -> murphyjitsu).

    (After thinking about this, I had a call with someone who is doing academic research on Buddhist religion. They independently mentioned LW posts on 'noticing', which is basically a new name for a meditation technique that has been practiced for millennia.)

    Renaming is not reinventing, of course, but the new terms do make it harder to refer back to sources in the established research literature. Further, some smart amateur blog authors like to synthesise and intellectually innovate upon existing research (e.g. see Scott Alexander's speculative posts, or my post above ^^).

    The lack of referencing while building up innovations can cause us to misinterpret and write stuff that poorly reflects previous specialist research. We're building up our own separate literature database.

    A particular example is Robin Hanson's 'near-far mode', which he brought to the community from a concise and well-articulated review paper on psychological distance; it spawned a lot of subsequent posts in the community about implications for our thinking (but with little referencing of other academic studies or analyses).
    E.g. take Hanson's idea that people are hypocritical when they signal high-construal values but are more honest when they think concretely – a psychology researcher who seems rigorously minded told me that he dug into Hanson's claim, but that conclusions from other studies don't support it.
     
  3. My impression from local/regional/national EA community building is that many organisers (including me) either tried to work out how to run their group from first principles, or consulted with other more experienced organisers. We could also have checked for good practices from, and consulted with, other established youth movements. I have seen plenty of write-ups that go through the former, but little or none of the latter.
     

Definitely give me counter-examples!

Some blindspots in rationality and effective altruism

Looks cool, thanks! Checking if I understood it correctly:
- is x like the input data?
- could y correspond to something like the supervised (continuous) labels of a neural network, which inputs are matched to?
- does epistemic uncertainty here refer to the possibility that inputs x could be quite different from the current training dataset if sampled again (where new samples could turn out to be outside the current distribution)?
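
(To check my reading of that last question, here's a minimal sketch with made-up data: an ensemble of simple models fit on bootstrap resamples of x and y. Where the ensemble disagrees, the query input is far from the training distribution; that disagreement is one common proxy for epistemic uncertainty.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up training data: inputs x in [-1, 1] with noisy continuous labels y.
x_train = rng.uniform(-1, 1, size=200)
y_train = np.sin(3 * x_train) + rng.normal(0, 0.1, size=200)

# Fit an ensemble of polynomial models on bootstrap resamples.
models = []
for _ in range(20):
    idx = rng.integers(0, len(x_train), size=len(x_train))
    models.append(np.polyfit(x_train[idx], y_train[idx], deg=7))

# In-distribution queries agree; the out-of-distribution query (x = 3.0)
# produces large ensemble spread, i.e. high epistemic uncertainty.
for x_query in [0.0, 0.9, 3.0]:
    preds = [np.polyval(m, x_query) for m in models]
    print(f"x = {x_query:4.1f}: ensemble std = {np.std(preds):.3f}")
```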
 

Some blindspots in rationality and effective altruism

How about 'disputed'?

Seems good. Let me adjust!

My impression is that gradual takeoff has gone from a minority to a majority position on LessWrong, primarily due to Paul Christiano, but not an overwhelming majority

This roughly corresponds with my impression actually. 
I know a group that has surveyed researchers who have permission to post on the AI Alignment Forum, but they haven't posted an analysis of the survey's answers yet.

Some blindspots in rationality and effective altruism

Yeah, it seems awesome for us to figure out where we fit within that global portfolio! Especially in policy efforts, this could enable us to build a more accurate and broadly reflective consensus to help centralised institutions improve the larger-scale decisions they make (see a general case for not channelling our current efforts towards making EA the dominant approach to decision-making).

To clarify, I hope this post helps readers become more aware of the brightspots (vs. blindspots) they might hold in common with like-minded collaborators – i.e. areas they notice (vs. miss) that map to relevant aspects of the underlying territory.

I'm trying to encourage myself and the friends I collaborate with to build up an understanding of alternative approaches that outside groups take up (ie. to map and navigate their surrounding environment), and where those approaches might complement ours. Not necessarily for us to take up more simultaneous mental styles or to widen our mental focus or areas of specialisation. But to be able to hold outside groups' views so we get roughly where they are coming from, can communicate from their perspective, and form mutually beneficial partnerships.

More fundamentally, as human apes, our senses are exposed to an environment that is much more complex than we are. So we don't have the capacity to process our surroundings fully, nor to perceive all the relevant underlying aspects at once. To map the environment we are embedded in, we need robust constraints for encoding moment-to-moment observations, through layers of inductive biases, into stable representations.

Different-minded groups end up with different maps. But in order to learn from outside critics of EA, we need to be able to line up our map better with theirs. 

 



Let me throw in an excerpt from an intro draft of the tool I'm developing. Curious for your thoughts!

Take two principles for a collaborative conversation in LessWrong and Effective Altruism:

  1. Your map is not the territory: 
    Your interlocutor may have surveyed a part of the bigger environment that you haven’t seen yet. Selfishly ask for their map, line up the pieces of their map with your map, and combine them to more accurately reflect the underlying territory.
  2. Seek alignment:
    Rewards can be hacked. Find a collaborator whose values align with your values so you can rely on them to make progress on the problems you care about.

When your interlocutor happens to have a compatible map and aligned values, such principles will guide you to learn missing information and collaborate smoothly. 

On the flipside, you will hit a dead end in your new conversation when:

  1. you can’t line up their map with yours to form a shared understanding of the territory. 
    Eg. you find their arguments inscrutable.
  2. you don’t converge on shared overarching aims for navigating the territory.
    Eg. double cruxes tend to bottom out at value disagreements.
     

You can resolve that tension with a mental shortcut: 
When you get confused about what they mean and fundamentally disagree on what they find important, just get out of their way. Why sink more of your time into a conversation that doesn’t reveal any new insights to you? Why risk fuelling a conflict?

This makes sense, but it also omits a deeper question: why can't you grasp their perspective?
Maybe they don’t think the same things through as rigorously as you, and you pick up on that. Maybe they dishonestly express their beliefs or preferences, and you pick up on that. Maybe they honestly shared insights that you failed to pick up on.

Underlying each word you exchange is your perception of the surrounding territory ...
A word's common definition masks our perceptual divide. Say you and I both look at the same thing and agree which defined term describes it. Then we can mention this term as a pointer to what we both saw. Yet the environment I perceive and point the term to may be very different from the environment you perceive.

Different-minded people can illuminate our blindspots.  Across the areas they chart and the paths they navigate lie nuggets – aspects of reality we don’t even know yet that we will come to care about.

Some blindspots in rationality and effective altruism

To disentangle what I had in mind when I wrote ‘later overturned by some applied ML researchers’:

Some applied ML researchers in the AI x-safety research community like Paul Christiano, Andrew Critch, David Krueger, and Ben Garfinkel have made solid arguments towards the conclusion that Eliezer’s past portrayal of a single self-recursively improving AGI had serious flaws.

In the post though, I was sloppy in writing about this particular example, in a way that served to support the broader claims I was making.

Some blindspots in rationality and effective altruism

This resonates, based on my very limited grasp of statistics. 

My impression is that sensitivity analysis aims more at reliably uncovering epistemic uncertainty (whereas Guesstimate as a tool seems to be designed more for working out aleatory uncertainty). 

Quote from an interesting data science article on the Silver-Taleb debate:

Predictions have two types of uncertainty; aleatory and epistemic.
Aleatory uncertainty is concerned with the fundamental system (probability of rolling a six on a standard die). Epistemic uncertainty is concerned with the uncertainty of the system (how many sides does a die have? And what is the probability of rolling a six?). With the latter, you have to guess the game and the outcome; like an election!
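
(A minimal sketch of that distinction, with hypothetical numbers: even if you knew the die has 6 sides, each roll would stay random, which is the aleatory part. Not knowing how many sides it has is epistemic, and observed rolls can shrink it via a Bayesian update over candidate dice, which no amount of extra knowledge ever does for the aleatory part.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Epistemic: which die is it? Update beliefs over candidate side-counts.
candidates = [4, 6, 8, 20]            # hypotheses for the number of sides
log_post = np.zeros(len(candidates))  # uniform prior, in log space

true_sides = 6                        # hidden from the "forecaster"
rolls = rng.integers(1, true_sides + 1, size=30)

for roll in rolls:
    for i, sides in enumerate(candidates):
        # Likelihood of this roll under a fair die with `sides` sides;
        # a roll above `sides` rules that hypothesis out entirely.
        log_post[i] += np.log(1.0 / sides) if roll <= sides else -np.inf

post = np.exp(log_post - log_post.max())
post /= post.sum()
for sides, p in zip(candidates, post):
    print(f"P(die has {sides:>2} sides | 30 rolls) = {p:.3f}")
```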
