All of Remmelt Ellen's Comments + Replies

Some blindspots in rationality and effective altruism

This interview with Jacqueline Novogratz from Acumen Fund covers some practical approaches to attain skin in the game.

Some blindspots in rationality and effective altruism

Two people asked me to clarify this claim:

Going by projects I've coordinated, EAs often push for removing paper conflicts of interest over attaining actual skin in the game.

Copying over my responses:

re: Conflicts of interest:

My impression has been that a few people appraising my project work looked for ways to e.g. reduce Goodharting, or the risk that I might pay myself too much from the project budget. Also, EA initiators sometimes post a fundraiser write-up for an official project with an official plan, which somewhat hides that they're actually seeking fu... (read more)

1 Remmelt Ellen · 2d · This interview [https://tim.blog/2021/05/06/jacqueline-novogratz-transcript/] with Jacqueline Novogratz from Acumen Fund covers some practical approaches to attain skin in the game.
Some blindspots in rationality and effective altruism

Some further clarification and speculation:

Edits interlude: People asked for examples of focussing on (interpreting & forecasting) processes vs. structures. See here.

This links to more speculative brightspot-blindspot distinctions:
7. Trading off sensory groundedness vs. representational stability of believed aspects
   - in learning structure: a direct observation's recurrence vs. sorted identity
   - in learning process: a transition of observed presence vs. analogised relation

8. Trading off updating your interpretations vs. for

... (read more)
Some blindspots in rationality and effective altruism

I also noticed I was confused. Feels like we're at least disentangling cases and making better distinctions here.
BTW, just realised that a problem with my triangular prism example is that theoretically no rectangular side can face up parallel to the floor (just two at 60º angles).

But on the other hand, x is not sufficient to spot when we have a new type of die (see previous point), and if we knew more about the dice we could make better estimates, which makes me think that it is epistemic uncertainty.

This is interesting. This seems to ask ... (read more)

2 viktor.rehnberg · 1mo · To disentangle the confusion I took [https://en.wikipedia.org/wiki/Uncertainty_quantification#Aleatoric_and_epistemic_uncertainty] a [https://link.springer.com/article/10.1007/s10994-021-05946-3#Sec1] look [https://www.sciencedirect.com/science/article/abs/pii/S0951832096000774] around [https://www.osti.gov/servlets/purl/1146749] about [https://towardsdatascience.com/my-deep-learning-model-says-sorry-i-dont-know-the-answer-that-s-absolutely-ok-50ffa562cb0b] a [https://towardsdatascience.com/why-you-should-care-about-the-nate-silver-vs-nassim-taleb-twitter-war-a581dce1f5fc] few [http://www.ce.memphis.edu/7137/PDFs/Abrahamson/C05.pdf] different [https://www.sciencedirect.com/science/article/abs/pii/S0167473008000556?via%3Dihub] definitions [https://www.lesswrong.com/posts/SwPQmJD2Davq3hQgn/the-role-of-epistemic-vs-aleatory-uncertainty-in-quantifying] of the concepts. The definitions were mostly the same kind of vague statement of the type:

* Aleatoric uncertainty is from inherent stochasticity and does not reduce with more data.
* Epistemic uncertainty is from lack of knowledge and/or data and can be further reduced by improving the model with more knowledge and/or data.

However, I found some useful tidbits. With this, my updated view is that our confusion is probably because there is a free parameter in where to draw the line between aleatoric and epistemic uncertainty. This seems reasonable: more information can always lead to better estimates (at least down to considering wavefunctions, I suppose), but in most cases having and using this kind of information is infeasible, so having the distinction between aleatoric and epistemic depend on the problem at hand seems reasonable.
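To make the "reduces with more data" part of those definitions concrete, here is a minimal sketch (my own hypothetical numbers, written in Python with NumPy; nothing here comes from the linked sources): estimate the chance of rolling a six from repeated rolls, and track how the two uncertainties behave as rolls accumulate.

```python
# Hypothetical illustration: aleatoric vs epistemic uncertainty when
# estimating p(six) for a die from observed rolls.
import numpy as np

rng = np.random.default_rng(0)
true_p_six = 1 / 6          # unknown to the observer

for n_rolls in [100, 1_000, 100_000]:
    rolls = rng.random(n_rolls) < true_p_six        # True = rolled a six
    p_hat = rolls.mean()
    # Aleatoric: inherent spread of a single roll's outcome (Bernoulli sd).
    aleatoric_sd = np.sqrt(p_hat * (1 - p_hat))
    # Epistemic: uncertainty about p itself; shrinks ~ 1/sqrt(n) with more data.
    epistemic_sd = aleatoric_sd / np.sqrt(n_rolls)
    print(f"{n_rolls:>7} rolls: aleatoric sd ~ {aleatoric_sd:.3f}, "
          f"epistemic sd ~ {epistemic_sd:.4f}")
```

The aleatoric term hovers around sqrt(1/6 · 5/6) ≈ 0.37 however many rolls we take, while the epistemic term keeps shrinking roughly as 1/sqrt(n), which is the vague statement above restated in numbers.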
1 viktor.rehnberg · 1mo · Good catch. Yes, I think you are right. Usually when modelling you can learn correlations that are useful for predictions, but if the correlations are spurious they might disappear when the distribution changes. As such, to know whether p(y|x) changes from only observing x, we would probably need all causal relationships to y to be captured in x?
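A hypothetical sketch of that failure mode (my own construction, with made-up bags and colours rather than anything from the thread): a model that predicts y from a colour feature that merely correlates with the die's bias looks calibrated on the bag it was fit on and breaks when the dice come from another bag.

```python
# Hypothetical sketch: a spurious correlation that disappears under
# distribution shift. Colour correlates with the die's bias only because
# of how each bag happens to be assembled, not causally.
import numpy as np

rng = np.random.default_rng(1)

def draw_bag(n, red_is_loaded):
    # Colour is 0 (white) or 1 (red); one colour in each bag is the loaded
    # kind of die (p_six = 0.5), the other is fair (p_six = 1/6).
    color = rng.integers(0, 2, n)
    loaded_color = 1 if red_is_loaded else 0
    p_six = np.where(color == loaded_color, 0.5, 1 / 6)
    y = (rng.random(n) < p_six).astype(float)   # 1.0 if the roll was a six
    return color, y

# "Training" bag A: red dice happen to be the loaded ones.
color_a, y_a = draw_bag(50_000, red_is_loaded=True)
pred = {1: y_a[color_a == 1].mean(), 0: y_a[color_a == 0].mean()}  # p(six | colour)

# New bag B: the association is reversed, so the colour-based model is miscalibrated.
color_b, y_b = draw_bag(50_000, red_is_loaded=False)
for name, (c, y) in [("bag A", (color_a, y_a)), ("bag B", (color_b, y_b))]:
    for col, label in [(1, "red"), (0, "white")]:
        print(f"{name} {label:>5}: predicted {pred[col]:.2f}, actual {y[c == col].mean():.2f}")
```

On bag A the colour-conditional estimates match the observed rates; on bag B they are far off, even though nothing about any individual die changed, only which correlations hold in the population being sampled.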
Some blindspots in rationality and effective altruism

Thank you! That was clarifying, especially the explanation of epistemic uncertainty for y.

1. I've been thinking about epistemic uncertainty more in terms of 'possible alternative qualities present', where 

  • you don't know the probability of a certain quality being present for x (e.g. what's the chance of the die having an extended three-sided base?).
  • or might not even be aware of some of the possible qualities that x might have (e.g. you don't know a triangular prism die can exist).

2. Your take on epistemic uncertainty for that figure seems to be

  • you
... (read more)
2 viktor.rehnberg · 1mo · Good point, my example with the figure is lacking with regard to 1, simply because we are assuming that x is known completely and that the observed y are true instances of what we want to measure. And from this I realize that I am confused about when some uncertainties should be called aleatoric or epistemic.

When I think I can correctly point out epistemic uncertainty:

* If the y that are observed are not the ones that we actually want, then I'd call this uncertainty epistemic. This could be if we are using tired undergrads to count the number of pips on each rolled die and they miscount for some fraction of the dice.
* If you haven't seen similar x before, then you have epistemic uncertainty, because you have uncertainty about which model or model parameters to use when estimating y. (This is the one I wrote about previously and the one shown in the figure.)

My confusion from 1:

* If the conditions of the experiment change. Our undergrads start to pull dice from another bag with an entirely different distribution p(y|x); then we have insufficient knowledge to estimate y, and I would call this epistemic uncertainty.
* If x is lacking some information needed to do good estimates of y. Say x is the color of the die: when we have thrown enough dice from our experimental distribution we get a good estimate of p(y|x), and our uncertainty doesn't decrease with more rolls, which makes me think that it is aleatoric uncertainty. But on the other hand, x is not sufficient to spot when we have a new type of die (see previous point), and if we knew more about the dice we could make better estimates, which makes me think that it is epistemic uncertainty.

You bring up a good point in 1, and I agree that this feels like it should be epistemic uncertainty, but at some point the boundary between inherent uncertainty in the process and uncertainty from knowing too little about the process becomes vague to me and I can't really tell when a pr
Some blindspots in rationality and effective altruism

Well-written! Most of this definitely resonates for me.

Quick thoughts:

  • Some of the jargon I've heard sounded plain silly from a making-intellectual-progress perspective (not just implicit aggrandising). Makes it harder to share our reasoning, even with each other, in a comprehensible, high-fidelity way. I like Rob Wiblin's guide on jargon.
  • Perhaps we put too much emphasis on making explicit communication comprehensible. Might be more fruitful to find ways to recognise how particular communities are set up to be good at understanding or making progress in parti
... (read more)
Some blindspots in rationality and effective altruism

This is a good question hmm. Now I’m trying to come up with specific concrete cases, I actually feel less confident of this claim.

Examples that did come to mind:

  1. I recall reading somewhere about early LessWrong authors reinventing concepts that were already worked out before in philosophical disciplines (particularly in decision theory?). Can't find any post on this though.
     
  2. More subtly, we use a lot of jargon. Some terms were basically imported from academic research (say into cognitive biases) and given a shiny new nerdy name that appeals to our in-crowd
... (read more)
Some blindspots in rationality and effective altruism

Looks cool, thanks! Checking if I understood it correctly:
- is x like the input data?
- could y correspond to something like the supervised (continuous) labels of a neural network, which the inputs are matched to?
- does epistemic uncertainty here refer to the possibility that inputs x could be very different from the current training dataset if sampled again (where new samples could turn out to be outside of the current distribution)?
 

4 viktor.rehnberg · 1mo · Thanks, I realised that I provided zero context for the figure. I added some. Yes. The example is about estimating y given x, where x is assumed to be known. Not quite, we are still thinking of uncertainty only as applied to y. Epistemic uncertainty here refers to regions where the knowledge and data are insufficient to give a good estimate of y given x from those regions. To compare it with your dice example, consider x to be some quality of the die such that you think dice with similar x will give similar rolls y. Then aleatoric uncertainty is high for dice where you are uncertain about the values of new rolls even after having rolled several similar dice, and rolling more similar dice will not help. Epistemic uncertainty, on the other hand, is high for dice with qualities you haven't seen enough of.
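Mapping that onto a toy calculation (a hypothetical sketch, not anything viktor.rehnberg posted): bucket the observed rolls by the die quality x, use the within-bucket spread as a stand-in for aleatoric uncertainty, and use the standard error of the bucket mean, which depends on how many similar dice have been seen, as a stand-in for epistemic uncertainty.

```python
# Hypothetical sketch: per-quality (x) uncertainty estimates from observed rolls.
import numpy as np

rng = np.random.default_rng(2)

# Simulated rolling history, keyed by the die quality x:
# plenty of six-sided dice observed, only a handful of twelve-sided ones.
history = {
    "d6 shape": rng.integers(1, 7, size=500),
    "d12 shape": rng.integers(1, 13, size=5),
}

for quality, rolls in history.items():
    aleatoric = rolls.std(ddof=1)                # spread of outcomes for dice with this x
    epistemic = aleatoric / np.sqrt(len(rolls))  # uncertainty in our estimate of the mean roll
    print(f"x = {quality:<9}  n = {len(rolls):>3}  "
          f"aleatoric ~ {aleatoric:.2f}  epistemic ~ {epistemic:.2f}")
```

Rolling more d12-shaped dice shrinks the second number for that bucket but leaves the first roughly where it is.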
Some blindspots in rationality and effective altruism

How about 'disputed'?

Seems good. Let me adjust!

My impression is that gradual takeoff has gone from a minority to a majority position on LessWrong, primarily due to Paul Christiano, but not an overwhelming majority

This roughly corresponds with my impression actually. 
I know of a group that has surveyed researchers who have permission to post on the AI Alignment Forum, but they haven't posted an analysis of the survey answers yet.

Some blindspots in rationality and effective altruism

Yeah, seems awesome for us to figure out where we fit within that global portfolio! Especially in policy efforts, that could enable us to build a more accurate and broadly reflective consensus to help centralised institutions improve the larger-scale decisions they make (see a general case for not channeling our current efforts towards making EA the dominant approach to decision-making).

To clarify, I hope this post helps readers become more aware of their brightspots (vs. blindspots) that they might hold in common with like-minded collaborators – ie. ... (read more)

Yeah, I really like this idea -- at least in principle. The idea of looking for value agreement and for where our maps (which are likely verbally extremely different) match is something that I think we don't do nearly enough.

To get at what worries me about some of the 'EA needs to consider other viewpoints' discourse (and not at all about what you just wrote), let me describe two positions:

  1. EA needs to get better at communicating with non EA people, and seeing the ways that they have important information, and often know things we do not, even if they speak in
... (read more)
Some blindspots in rationality and effective altruism

To disentangle what I had in mind when I wrote ‘later overturned by some applied ML researchers’:

Some applied ML researchers in the AI x-safety research community like Paul Christiano, Andrew Critch, David Krueger, and Ben Garfinkel have made solid arguments towards the conclusion that Eliezer’s past portrayal of a single self-recursively improving AGI had serious flaws.

In the post though, I was sloppy in writing about this particular example, in a way that served to support the broader claims I was making.

Some blindspots in rationality and effective altruism

This resonates, based on my very limited grasp of statistics. 

My impression is that sensitivity analysis aims more at reliably uncovering epistemic uncertainty (whereas Guesstimate as a tool seems to be designed more for working out aleatory uncertainty). 
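A minimal sketch of the contrast I have in mind (a hypothetical toy cost-effectiveness model with made-up numbers, not anything Guesstimate itself provides): a Guesstimate-style Monte Carlo run propagates the stated input distributions, while a sensitivity analysis sweeps the assumption we are least sure about and checks how much the headline number moves.

```python
# Hypothetical toy model: value per dollar = people reached * effect per person / cost.
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

def value_per_dollar(effect_mean):
    reach = rng.lognormal(mean=np.log(10_000), sigma=0.3, size=N)
    effect = rng.normal(loc=effect_mean, scale=0.05, size=N)
    cost = rng.lognormal(mean=np.log(50_000), sigma=0.2, size=N)
    return reach * effect / cost

# Guesstimate-style run: propagate the stated input distributions once.
base = value_per_dollar(effect_mean=0.2)
print(f"median {np.median(base):.3f}, 90% interval "
      f"[{np.percentile(base, 5):.3f}, {np.percentile(base, 95):.3f}]")

# Sensitivity analysis: sweep the assumption we are least sure about
# and watch how much the headline number moves.
for effect_mean in [0.05, 0.1, 0.2, 0.4]:
    print(f"effect_mean = {effect_mean:.2f} -> "
          f"median {np.median(value_per_dollar(effect_mean)):.3f}")
```

The first printout summarises the spread given the assumptions; the sweep shows how much the conclusion hinges on an assumption being wrong, which is closer to probing the epistemic side.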

Quote from interesting data science article on Silver-Taleb debate:

Predictions have two types of uncertainty; aleatory and epistemic.
Aleatory uncertainty is concerned with the fundamental system (probability of rolling a six on a standard die). Epistemic uncertainty is concerned with the uncerta

... (read more)
2 viktor.rehnberg · 2mo · This is my go-to figure when thinking about aleatoric vs epistemic uncertainty. Source: Michel Kana [https://towardsdatascience.com/my-deep-learning-model-says-sorry-i-dont-know-the-answer-that-s-absolutely-ok-50ffa562cb0b] through Olof Mogren [http://mogren.one/talks/slides/mogren-2020-11-05-uncertainty-in-deep-learning.pdf]

Edit: Explaining two types of uncertainty through a simple example. x are inputs and y are labels for a model to be trained.

Edit: In the context of the figure, the aleatoric uncertainty is high in the left cluster because the uncertainty of where a new data point will be is high and is not reduced by the number of training examples. The epistemic uncertainty is high in regions where there is insufficient data or knowledge to produce an accurate estimate of the output; this would go down with more training data in those regions.
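To restate the figure's point in runnable form (a hypothetical example with made-up data, using scikit-learn's Gaussian process regressor rather than whatever produced the original plot): fit a model to data that exists only in two clusters, and compare the predictive standard deviation inside a cluster with the standard deviation in the empty region between them.

```python
# Hypothetical sketch (made-up data): where a GP's predictive uncertainty comes from.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)

# Noisy observations only in [0, 2] and [6, 8]; nothing in between.
x_train = np.concatenate([rng.uniform(0, 2, 40), rng.uniform(6, 8, 40)])
y_train = np.sin(x_train) + rng.normal(0.0, 0.3, size=80)

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel).fit(x_train[:, None], y_train)

# Inside the clusters the predictive sd bottoms out near the fitted noise level
# (an aleatoric floor that more data will not remove); in the gap it balloons
# because the model has seen nothing there (epistemic, reducible with data).
for x_query in [1.0, 4.0, 7.0]:
    _, sd = gp.predict(np.array([[x_query]]), return_std=True)
    print(f"x = {x_query}: predictive sd {sd[0]:.2f}")
```

Adding more points inside the clusters barely changes the sd there, since it is dominated by the fitted noise term, while adding even a few points near x = 4 collapses the sd in the gap; that is the "reducible with more data" criterion from the definitions upthread.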
Some blindspots in rationality and effective altruism

Interesting, I didn't know GiveDirectly ran unstructured focus groups, nor that JPAL does qualitative interviews at various stages of testing interventions.  Adds a bit more nuance to my thoughts, thanks! 

4 Dan Weinand · 2mo · One of GiveDirectly's blog posts on survey and focus group results, by the way: https://www.givedirectly.org/what-its-like-to-receive-a-basic-income/
Some blindspots in rationality and effective altruism

Sorry, I get how the bullet point example gave that impression. I'm keeping the summary brief, so let me see what I can do. 

I think the culprit is 'overturned'. That makes it sound like their counterarguments were a done deal or something. I'll reword that to 'rebutted and reframed in finer detail'. 

Note though that 'some applied ML researchers' hardly sounds like consensus. I did not mean to convey that, but I'm glad you picked it up.

As far as I can tell, it's a reasonable summary of the fast takeoff position that many people still hold today.

Pe... (read more)

2 Rafael Harth · 2mo · Yeah, I think overturned is the word I took issue with. How about 'disputed'? That seems to be the term that remains agnostic about whether there is something wrong with the original argument or not.

My impression is that gradual takeoff has gone from a minority to a majority position on LessWrong, primarily due to Paul Christiano, but not an overwhelming majority. (I don't know how it differs among Alignment Researchers.) I believe the only data I've seen on this was in a thread where people were asked to make predictions about AI stuff, including takeoff speed and timelines, using the new interactive prediction feature. (I can't find this post -- maybe someone else remembers what it was called?) I believe that was roughly compatible with the sizeable-minority summary, but I could be wrong.
Some blindspots in rationality and effective altruism

I appreciate your thoughtful comment too,  Dan.

You're right, I think, that I overstated EA's tendency to assume generalisability, particularly when it comes to testing interventions in global health and poverty (though much less so when it comes to research in other cause areas). Eva Vivalt's interview with 80K and more recent EA Global sessions discussing the limitations of the randomista approach are examples. Some incubated charity interventions by GiveWell also seemed to take a targeted regional approach (e.g. No Lean Season). Also, Ben Kuhn's 'loc... (read more)

3 Dan Weinand · 2mo · Fair points! I don't know if I'd consider JPAL directly EA, but they at least claim to conduct regular qualitative fieldwork before/after/during their formal interventions (source: Poor Economics [https://www.amazon.com/dp/B06XCCVDNR/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1]; I've sadly forgotten the exact point, but they mention it several times). Similarly, GiveDirectly regularly meets with program participants for both structured polls and unstructured focus groups, if I recall correctly. Regardless, I agree with the concrete point that this is an important thing to do and that EA/rationality folks are less inclined to collect unstructured qualitative feedback than its importance deserves.
Takeaways from the Intelligence Rising RPG

Do you mean the Game Master’s rules for world development? The basic gameplay rules for participants are outlined in the slides Ross posted above: https://docs.google.com/presentation/d/1ZKcMJZTLRp0tWixWqSW26ncDly9bJM7PMrfVMzdXihE/edit?usp=sharing

3 Said Achmiz · 2mo · I mean the complete rules that define the game.
Delegated agents in practice: How companies might end up selling AI services that act on behalf of consumers and coalitions, and what this implies for safety research

I'm brainstorming ways this post may be off the mark. Curious if you have any :)

  • You can personalise an AI service across some dimensions that won't make it resemble an agent acting on a person's behalf any more closely (or won't meet all criteria of 'agentiness')
    • not acting *over time* - more like a bespoke tool customised once to a customer’s preferred parameters, e.g. a website-builder like wix.com
    • an AI service personalising content according to a user’s likes/reads/don't show clicks isn't agent-like
    • efficient personalised services will be built on swappable modules a
... (read more)
The Values-to-Actions Decision Chain

Ah, I have the first diagram in your article as one of my desktop backgrounds. :-) It was a fascinating demonstration of how experiences can be built up into more complex frameworks (even though I feel I only half-understand it). It was one of several articles that inspired and moulded my thinking in this post.

I'd value having a half-an-hour Skype chat with you some time. If you're up for it, feel free to schedule one here.

The Values-to-Actions Decision Chain

So, I do find it fascinating to analyse how multi-layered networks of agents interact and how those interactions can be improved to better reach goals together. My impression is also that it's hard to make progress in this area (otherwise several simple coordination problems would already have been solved), and I lack expertise in network science, complexity science, multi-agent systems or microeconomics. I haven't set out a clear direction, but I do find your idea of making this into a larger project inspiring.

I’ll probably work on gathering more empirical data ove... (read more)

3 G Gordon Worley III · 3y · Awesome! Part of what makes me ask is that this reminds me a lot of how I got moving on work I care about myself: I came up with a complex model to explain complex phenomena [https://mapandterritory.org/phenomenological-complexity-classes-8b41836437b9] and then from there explored ideas that eventually led me to having a unique perspective to bring to the AI alignment discussion. I didn't know that was going to happen at the time, and I like your thoughts on future work, since they were much like mine. Looking forward to seeing where your thinking leads you!
The Values-to-Actions Decision Chain

Thanks for mentioning this!

Let me think about your question for a while. Will come back on it later.

AI Safety Research Camp - Project Proposal

Thanks for mentioning it.

If later you happen to see a blind spot or a failure mode we should work on covering, we'd like to learn about it!

AI Safety Research Camp - Project Proposal

Do you mean for the Gran Canaria camp?

We're also working towards a camp 2.0 in late July in the UK. I assume that's during summer break for you.

1 TurnTrout · 3y · I’d probably be able to make that! Depends how long I could get off work and whether I’m able to make a CFAR workshop like I planned.
"Taking AI Risk Seriously" (thoughts by Critch)

Great, let me throw together a reply to your questions in reverse order. I've had a long day and lack the energy to do the rigorous, concise write-up that I'd want to do. But please comment with specific questions/criticisms that I can look into later.

What is the thought process behind their approach?

RAISE (copy-paste from slightly-promotional-looking wiki):

AI safety is a small field. It has only about 50 researchers. The field is mostly talent-constrained. Given the dangers of an uncontrolled intelligence explosion, increasing the amount of AIS ... (read more)

3 David_Kristoffersson · 3y · Here's the Less Wrong post for the AI Safety Camp [https://www.lesserwrong.com/posts/KgFrtaajjfSnBSZoH/ai-safety-research-camp-project-proposal]!
"Taking AI Risk Seriously" (thoughts by Critch)

If you're committed to studying AI safety but have little money, here are two projects you can join (do feel free to add other suggestions):

1) If you want to join a beginners or advanced study group on reinforcement learning, post here in the RAISE group.

2) If you want to write research in a group, apply for the AI Safety Camp in Gran Canaria on 12-22 April.

8 Raemon · 3y · Curious to know more about these (from the outside it's hard to tell if this is a good place to endorse people checking out. At risk of being Pat Modesto-like, who's running them and what's their background? And hopefully being not-Pat-Modesto-like, what material do these groups cover, and what is the thought process behind their approach?)