This interview with Jacqueline Novogratz from Acumen Fund covers some practical approaches to attaining skin in the game.
Two people asked me to clarify this claim:
Going by projects I've coordinated, EAs often push for removing on-paper conflicts of interest rather than for attaining actual skin in the game.
Copying over my responses:

Re: Conflicts of interest:
My impression has been that a few people appraising my project work looked for ways to e.g. reduce Goodharting, or the risk that I might pay myself too much from the project budget. Also, EA initiators sometimes post a fundraiser write-up for an official project with an official plan, which somewhat hides that they're actually seeking fu…
Some further clarification and speculation:
Edits interlude: People asked for examples of focussing on (interpreting & forecasting) processes vs. structures. See here.

This links to more speculative brightspot-blindspot distinctions:

7. Trading off sensory groundedness vs. representational stability of believed aspects
- in learning structure: a direct observation's recurrence vs. sorted identity
- in learning process: a transition of observed presence vs. analogised relation

8. Trading off updating your interpretations vs. for…
This is clarifying, thank you!
I also noticed I was confused. Feels like we're at least disentangling cases and making better distinctions here. BTW, just realised that a problem with my triangular prism example is that theoretically no rectangular side can face up parallel to the floor at any one time – just two at 60º angles.
But on the other hand, x is not sufficient to spot when we have a new type of die (see the previous point), and if we knew more about the dice we could make better estimates, which makes me think that it is epistemic uncertainty.
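To make that concrete, here's a minimal sketch (my own toy construction – the die types and priors are made up for illustration) of why knowing more about the dice reduces this kind of uncertainty: a Bayesian observer choosing between two hypothetical die types sees their posterior sharpen as rolls accumulate, while the roll-to-roll randomness of a known die stays irreducible.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical die types: a fair d6 and a loaded d6 that favours sixes.
fair = np.full(6, 1 / 6)
loaded = np.array([0.1, 0.1, 0.1, 0.1, 0.1, 0.5])

true_die = loaded              # unknown to the observer
prior = np.array([0.5, 0.5])   # P(fair), P(loaded)

for n_rolls in (0, 5, 50):
    rolls = rng.choice(6, size=n_rolls, p=true_die)  # fresh sample each time
    posterior = prior.copy()
    for r in rolls:
        posterior *= np.array([fair[r], loaded[r]])  # Bayes update per roll
        posterior /= posterior.sum()
    print(f"after {n_rolls:2d} rolls: P(loaded) = {posterior[1]:.2f}")

# Epistemic uncertainty (which die is it?) shrinks as rolls accumulate;
# aleatory uncertainty (what the next roll of a known die shows) does not.
```

With 50 rolls the posterior typically lands near 1.0 on the loaded die – that's the 'knowing more lets us make better estimates' part; no number of rolls makes the next roll of a known fair die predictable.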
This is interesting. This seems to ask…
Thank you! That was clarifying, especially the explanation of epistemic uncertainty for y.
1. I've been thinking about epistemic uncertainty more in terms of 'possible alternative qualities present', where…
2. Your take on epistemic uncertainty for that figure seems to be…
Well-written! Most of this definitely resonates with me. Quick thoughts:
This is a good question, hmm. Now that I’m trying to come up with specific concrete cases, I actually feel less confident in this claim.
Examples that did come to mind:
Looks cool, thanks! Checking if I understood it correctly:
- Is x like the input data?
- Could y correspond to something like the supervised (continuous) labels of a neural network, to which inputs are matched?
- Does epistemic uncertainty here refer to the possibility that inputs for x could be very different from the current training dataset if sampled again (where new samples could turn out to be outside of the current distribution)?
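To check my own understanding of that last bullet, here's a rough sketch (all names and numbers are mine, purely illustrative – not from your write-up): an ensemble of models fit on bootstrap resamples, where member disagreement acts as a proxy for epistemic uncertainty and blows up on inputs far from the training distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: x in [0, 1], y = sin(2*pi*x) plus aleatory noise.
x_train = rng.uniform(0.0, 1.0, size=200)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.1, size=200)

def fit_ensemble(x, y, n_members=20, degree=7):
    """Fit a small ensemble of polynomial regressors on bootstrap resamples."""
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(x), size=len(x))  # bootstrap resample
        members.append(np.polyfit(x[idx], y[idx], degree))
    return members

def predict(members, x):
    preds = np.array([np.polyval(coeffs, x) for coeffs in members])
    return preds.mean(axis=0), preds.std(axis=0)  # spread ~ epistemic proxy

members = fit_ensemble(x_train, y_train)
for x in (np.array([0.5]), np.array([1.8])):  # in- vs. out-of-distribution
    mean, spread = predict(members, x)
    print(f"x={x[0]:.1f}: mean={mean[0]:+.3f}, ensemble spread={spread[0]:.3f}")
```

At x=0.5 the members roughly agree (residual spread close to the noise level), while at x=1.8 they diverge wildly – the disagreement itself is the signal that we're outside the training distribution.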
How about 'disputed'?
Seems good. Let me adjust!
My impression is that gradual takeoff has gone from a minority to a majority position on LessWrong, primarily due to Paul Christiano, but not an overwhelming majority.
This roughly corresponds with my impression, actually. I know a group that has surveyed researchers who have permission to post on the AI Alignment Forum, but they haven't posted an analysis of the survey's answers yet.
Yeah, it seems awesome for us to figure out where we fit within that global portfolio! Especially in policy efforts, that could enable us to build a more accurate and broadly reflective consensus to help centralised institutions improve the larger-scale decisions they make (see a general case for not channelling our current efforts towards making EA the dominant approach to decision-making). To clarify, I hope this post helps readers become more aware of the brightspots (vs. blindspots) that they might hold in common with like-minded collaborators – i.e. …
Yeah, I really like this idea -- at least in principle. Looking for value agreement, and for where our maps (which are likely verbally extremely different) match, is something I think we don't do nearly enough.
To get at what worries me about some of the 'EA needs to consider other viewpoints' discourse (and not at all about what you just wrote), let me describe two positions:
To disentangle what I had in mind when I wrote ‘later overturned by some applied ML researchers’:
Some applied ML researchers in the AI x-safety research community like Paul Christiano, Andrew Critch, David Krueger, and Ben Garfinkel have made solid arguments towards the conclusion that Eliezer’s past portrayal of a single self-recursively improving AGI had serious flaws.
In the post though, I was sloppy in writing about this particular example, in a way that served to support the broader claims I was making.
This resonates, based on my very limited grasp of statistics.
My impression is that sensitivity analysis aims more at reliably uncovering epistemic uncertainty (whereas Guesstimate as a tool seems to be designed more for working out aleatory uncertainty). Quote from an interesting data science article on the Silver-Taleb debate:
Predictions have two types of uncertainty: aleatory and epistemic. Aleatory uncertainty is concerned with the fundamental system (probability of rolling a six on a standard die). Epistemic uncertainty is concerned with the uncertainty…
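A toy sketch of the distinction as I read it (my own construction; not how Guesstimate or any particular sensitivity-analysis package actually works): holding parameters fixed in a Monte Carlo run only surfaces aleatory spread, while sweeping the uncertain parameter – the basic move of sensitivity analysis – surfaces how much the answer hinges on what we don't know.

```python
import numpy as np

rng = np.random.default_rng(2)

def count_sixes(p_six, n_rolls=100, n_trials=10_000):
    """Monte Carlo distribution of the number of sixes in n_rolls."""
    sixes = rng.binomial(n_rolls, p_six, size=n_trials)
    return sixes.mean(), sixes.std()

# Aleatory only: the die is fixed; spread comes from the rolls themselves.
mean, spread = count_sixes(p_six=1 / 6)
print(f"fixed fair die: mean sixes = {mean:.1f}, spread = {spread:.1f}")

# Sensitivity analysis: sweep the uncertain parameter to expose how much
# the answer depends on which die we actually have (epistemic uncertainty).
for p in (0.10, 1 / 6, 0.25):
    mean, _ = count_sixes(p_six=p)
    print(f"p_six = {p:.2f}: mean sixes = {mean:.1f}")
```

The first print shows irreducible spread around ~16.7 sixes; the sweep shows the estimate sliding from ~10 to ~25 purely from our uncertainty about p_six, which no amount of re-running the fixed simulation would reveal.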
Interesting, I didn't know GiveDirectly ran unstructured focus groups, nor that JPAL does qualitative interviews at various stages of testing interventions. Adds a bit more nuance to my thoughts, thanks!
Sorry, I get how the bullet point example gave that impression. I'm keeping the summary brief, so let me see what I can do. I think the culprit is 'overturned'. That makes it sound like their counterarguments were a done deal or something. I'll reword that to 'rebutted and reframed in finer detail'. Note though that 'some applied ML researchers' hardly sounds like consensus. I did not mean to convey that, but I'm glad you picked it up.
As far as I can tell, it's a reasonable summary of the fast takeoff position that many people still hold today.
Pe…
I appreciate your thoughtful comment too, Dan. You're right, I think, that I overstated EA's tendency to assume generalisability, particularly when it comes to testing interventions in global health and poverty (though much less so when it comes to research in other cause areas). Eva Vivalt's interview with 80K and more recent EA Global sessions discussing the limitations of the randomista approach are examples. Some incubated charity interventions by GiveWell also seemed to take a targeted regional approach (e.g. No Lean Season). Also, Ben Kuhn's 'loc…
Do you mean the Game Master’s rules for world development? The basic gameplay rules for participants are outlined in the slides Ross posted above: https://docs.google.com/presentation/d/1ZKcMJZTLRp0tWixWqSW26ncDly9bJM7PMrfVMzdXihE/edit?usp=sharing
Let me ask Ross.
Sure! I'm curious to hear any purposes you thought of that delegated agents could assist with.
I'm brainstorming ways this post may be off the mark. Curious if you have any :)
Ah, I have the first diagram in your article as one of my desktop backgrounds. :-) It was a fascinating demonstration of how experiences can be built up into more complex frameworks (even though I feel I only half-understand it), and it was one of several articles that inspired and moulded my thinking in this post.
I'd value having a half-an-hour Skype chat with you some time. If you're up for it, feel free to schedule one here.
So, I do find it fascinating to analyse how multi-layered networks of agents interact and how those interactions can be improved to better reach goals together. My impression is also that it’s hard to make progress in this area (otherwise several simple coordination problems would already have been solved), and I lack expertise in network science, complexity science, multi-agent systems, and microeconomics. I haven’t set out a clear direction, but I do find your idea of making this into a larger project inspiring.
I’ll probably work on gathering more empirical data ove…
Thanks for mentioning this!
Let me think about your question for a while. Will come back to it later.
Thanks for mentioning it.
If you later happen to see a blind spot or a failure mode we should work on covering, we'd like to learn about it!
Do you mean for the Gran Canaria camp?
We're also working towards a camp 2.0 in late July in the UK. I assume that's during summer break for you.
Great, let me throw together a reply to your questions in reverse order. I've had a long day and lack the energy to do the rigorous, concise write-up that I'd want to do. But please comment with specific questions/criticisms that I can look into later.
What is the thought process behind their approach?
RAISE (copy-paste from slightly-promotional-looking wiki):
AI safety is a small field. It has only about 50 researchers. The field is mostly talent-constrained. Given the dangers of an uncontrolled intelligence explosion, increasing the amount of AIS…
If you're committed to studying AI safety but have little money, here are two projects you can join (do feel free to add other suggestions):
1) If you want to join a beginners' or advanced study group on reinforcement learning, post here in the RAISE group.
2) If you want to do research in a group, apply for the AI Safety Camp in Gran Canaria on 12-22 April.