To be a bit more explicit: I have some ideas of what it would look like to try to develop this meta-field, or at least sub-elements of it, separate from general rationality, and am trying to get a feel for whether they are worth pursuing personally. Or better yet, handing over to someone who doesn't feel they have any currently tractable ideas, but is better at getting things done.
This seems wrong but at least resembles a testable prediction.
That we have to get a bunch of key stuff right on the first try is where most of the lethality really and ultimately comes from; likewise the fact that no authority is here to tell us a list of what exactly is 'key' and will kill us if we get it wrong. (One remarks that most people are so absolutely and flatly unprepared by their 'scientific' educations to challenge pre-paradigmatic puzzles with no scholarly authoritative supervision, that they do not even realize how much harder that is, or how incredibly lethal it is to demand getting that right on the first critical try.)
Is anyone making a concerted effort to derive generalisable principles of how to get things right on the first try and/or work in the pre-paradigmatic mode? It seems like if we knew how to do that in general it would be a great boost to AI Safety research.
A bit tangential to the main point, but Wooldridge is a misleading tertiary source, and while some Sphex species display looping behavior under some circumstances, it is not as universal or irrational as Hofstadter makes it sound.
As far as I know this is the origin of the term, but it is worth noting that Wooldridge is a misleading tertiary source and actual Sphex are not that sphexish.
As a heads up it looks like you need to be logged in to use the contact-newsroom page.
One benefit is that it is a good chance for training. We have a complicated real-world question where the answer, and even the best way of approaching the answer, are currently unknown, but we will know the answer soon. Making predictions and recording the reasoning will allow for retrospectives.
it should be hard or impossible for participants to look up the answers during the study
I am unclear how you are going to enforce this in practice, given that the study will be online and that you're expecting people to spend at least 30 minutes on conversation, which implies a large enough reward that it is worth hunting down answers that can't be found on the front page of Google. The only workaround that comes to my mind is asking them to make predictions about the future. Relatedly, what will your policy be on participants looking up and/or sharing references relevant to steps in their reasoning? E.g. if one of the questions is about Trump being reelected, will participants be allowed to visit 538 during step 1 and/or link their partner to it in step 4?
I didn't have enough time to properly evaluate the statistics portion, but at first glance it looks ok. Nothing seems wrong with the significance tests beyond them being significance tests. IIRC, BEST addresses my main issues with them, particularly being able to indicate the absence of an effect in a way that mere non-significance cannot, but I haven't used it in forever and don't have time to brush up on it at the moment.
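To illustrate the distinction (this is not the study's actual analysis): the part of BEST that lets you assert a negligible effect is checking how much posterior mass falls inside a region of practical equivalence (ROPE). A minimal sketch, using a large-sample normal approximation of the posterior rather than Kruschke's full Bayesian model, with an arbitrary illustrative equivalence bound of 0.1:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two groups with a genuinely tiny true difference in means.
a = rng.normal(0.0, 1.0, 200)
b = rng.normal(0.02, 1.0, 200)

# A classical test: non-significance alone never lets you
# conclude "no meaningful effect", only "failed to detect one".
t, p = stats.ttest_ind(a, b)

# ROPE-style check: approximate the posterior of the mean
# difference with its large-sample normal approximation, then
# compute the posterior mass inside |diff| < 0.1 (an arbitrary
# bound chosen for illustration). High mass here supports an
# affirmative claim that the effect is practically negligible.
diff = b.mean() - a.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
posterior = stats.norm(diff, se)
mass_in_rope = posterior.cdf(0.1) - posterior.cdf(-0.1)

print(f"p = {p:.3f}, P(|diff| < 0.1) = {mass_in_rope:.2f}")
```

The full BEST model additionally uses a t-distributed likelihood and puts priors on the group standard deviations, so this sketch only captures the decision rule, not the robustness properties.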
It's one of the things Val taught. I honestly don't remember much of the details, but "deliberate practice, but where you think hard about not Goodhart-ing and practicing the wrong thing" sounds about right.
It was taught at CfAR during the period I think James attended.