A Bird's Eye View of the ML Field [Pragmatic AI Safety #2]
This is the second post in a sequence of posts that describe our models for Pragmatic AI Safety. The internal dynamics of the ML field are not immediately obvious to the casual observer. This post presents some high-level points that are critical to beginning to understand the field, and is meant as background for our later posts.

Driving dynamics of the ML field

How is progress made in ML? While the exact dynamics of progress are not always predictable, we will present three basic properties of ML research that are important to understand.

The importance of defining the problem

A problem well-defined is a problem half solved. —John Dewey (apocryphal)

The mere formulation of a problem is often more essential than its solution, which [...] requires creative imagination and marks real advances in science. —Albert Einstein

I have been struck by how important measurement is... This may seem basic, but it is amazing how often it is not done and how hard it is to get right. —Bill Gates

If you cannot measure it, you cannot improve it. —Lord Kelvin (paraphrase)

For better or worse, benchmarks shape a field. —David Patterson, Turing award winner

Progress in AI arises from objective evaluation metrics. —David McAllester

Science requires that we clarify the question and then refine the answer: it is impossible to solve a problem until we know what it is. Empirical ML research, which makes up the majority of the field, progresses through well-defined metrics that measure progress towards well-defined goals. Once a goal is defined empirically, is tractable, and is properly incentivized, the ML field is well-equipped to make progress towards it.

A variation on this model is that artists (writers, directors, etc.) come first: they supply ideas, philosophers add logical constraints to those ideas to arrive at goals or questions, and finally scientists make iterative progress towards those goals.

To give an example: golems, animate being