As a warm-up exercise, try the tree story with something actually contentious:
Alice: rainbows exist, I can see one over there.
Bob: rainbows don't exist, there is no arch in the sky, they are an illusion.
Charlie: A physical phenomenon produces rainbows, but they are not as they appear.
Etc, etc.
Alice: Qualia exist, I'm having several right now.
Bob: there is no empirical evidence for qualia, because no instrument can detect them.
Keith: Qualia can't exist, because they are non-physical by definition. They must be illusions.
Alice: illusions of qualia are qualia.
Etc, etc.
Alice: I have free will, because it seems to me that I can raise my arm or not as I please.
Bob: Thou Art physics, so thou hast no free will.
Charlie: You are still a cause.
Bob: I am a caused cause, so I am not free.
Etc, etc.
Alice: Ethics is objective, because it seems to me that torturing kittens is wrong, and that's a basic fact about the universe.
Bob: Everything is physics, so there are no non-natural facts, so moral realism is false. Your intuitions are irrelevant.
Mike: Science is based on intuition just as much as moral realism, it's intuitions all the way down, baby.
Etc, etc.
Alice: Quarks exist, we have scientific and therefore empirical evidence.
Bob: Actually, we have never made a direct observation of an isolated quark ... Quarks are kind of inferred from the nature and behaviour of nucleons ... they are in the nature of explanatory posits.
David: well, of course, science is all conjecture, there is no evidence of anything and empiricism and induction don't work!
Bob: that's a bit extreme, but we can agree that science uses explanatory conjectures as well as empiricism and induction.
Note the lack of agreement about what empirical evidence even is, and about how science works. That's the very beginnings of the problems with "Lol, just use science".
Except… how good are you, really, if you are stuck asking the same questions as a twelve-year-old?
Probably not very, but philosophy isn't stuck there, since it has plenty of highly technical debates.
But the most ironic thing is that if one actually goes on a long philosophical journey in search of the answer to the question “How can we know things about the external world at all?”, then, at the end of this heroic quest, after the three-headed dragon is slain, the kingdom of reason is saved, and skeptics are befriended along the way, the answer will be found written in golden letters on a diamond mural.
And this answer will be: “Pretty much by looking”.
Here's why not:
The evolutionary argument guarantees that you can know just enough for your survival, and only that: things like which berries to eat and animals to avoid being eaten by. (There's no evidence at all that it's "optimal".) Typical philosophical problems, about ontology and epistemology, the real nature of things, have no relevance to survival, so the evolutionary argument doesn't tell you they are soluble. Not everything is on the same footing. Some things are visible, and subject to direct feedback; other things aren't.
To show that empiricism works, you need to show that empiricism *alone* works, and that it works for everything, including the tricky edge cases. And you can't infer all that from the fact that it works in one simple case.
The limitation doesn't just apply to philosophy, it also applies to science, because science does try to answer questions about the nature of things ... scientific realism ... as well as making reliable predictions ... instrumentalism. Realism depends on the interpretation of observation, which in turn depends on conjecturing explanatory models: models don't just drop out of the data. Conjecture is a conscious, creative, cognitive process, not something that happens automatically as part of perception. Scientific epistemology is complex, combining conjecture, deduction of predicted observations from models, confirmation/disconfirmation of models, incremental changes to models, and wholesale abandonment of models.
There are direct quantifiable tests for predictive accuracy, but no way of directly comparing a map to the territory. Scientific realists hope and assume that empirical adequacy adds up to ontological correctness ... but it isn't necessarily so.
One way of making this point is that ontologically wrong theories can be very accurate.
For instance, the Ptolemaic system can be made as accurate as you want for generating predictions, by adding extra epicycles ... although it is false, in the sense of lacking ontological accuracy, of failing to correspond to reality, since epicycles don't exist.
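The "as accurate as you want" claim can be made concrete: a deferent-plus-epicycles model is mathematically a sum of uniformly rotating circles, i.e. a truncated complex Fourier series, and such series can approximate any periodic apparent path arbitrarily well. A minimal sketch (the toy orbit and numbers are illustrative assumptions, not astronomy):

```python
# Sketch: an "epicycle model" is a sum of uniformly rotating circles,
#   z(t) ~ sum_k c_k * exp(2j*pi*k*t/T),
# i.e. a complex Fourier series. Adding epicycles fits any periodic
# apparent path to arbitrary accuracy, even though the circles
# themselves correspond to nothing real.
import numpy as np

T, n = 1.0, 256
t = np.linspace(0, T, n, endpoint=False)

# A toy "true" geocentric track of a planet: an off-centre ellipse.
z_true = 3.0 * np.cos(2 * np.pi * t) + 1.0j * 2.0 * np.sin(2 * np.pi * t) + 0.5

def epicycle_fit(z, n_epicycles):
    """Keep only the n_epicycles largest Fourier circles and reconstruct."""
    coeffs = np.fft.fft(z) / len(z)
    order = np.argsort(np.abs(coeffs))[::-1]   # biggest circles first
    keep = np.zeros_like(coeffs)
    keep[order[:n_epicycles]] = coeffs[order[:n_epicycles]]
    return np.fft.ifft(keep * len(z))

for k in (1, 2, 4, 8):
    err = np.max(np.abs(z_true - epicycle_fit(z_true, k)))
    print(k, err)  # error shrinks as epicycles are added
```

The model's "epicycles" get arbitrarily close to the track without any of them existing, which is exactly the gap between empirical adequacy and ontological correctness.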
A further way to see this point is to notice that ontological revolutions can make merely modest changes to predictive abilities. Relativity inverted the absolute space and time of Newtonian physics, but its predictions were so close that subtle experiments were required to distinguish the two, and so close that Newtonian physics is acceptable for many purposes. Moreover, we can't rule out a further revolution, replacing current scientific ontology.
Our inability to make direct comparisons between map and territory extends to an inability to tell how close we are to the ultimately accurate ontology. Even probabilistic reasoning can't tell us how likely our theories are in absolute terms. We only know that better theories are more probably correct than worse ones; we don't really know whether current theories are 90% correct or 10% correct, from a God's-eye point of view.
Moreover, we can't safely assume we are making steady, incremental progress towards the ultimately accurate ontological picture, for just the reason already given -- slight changes in predictive accuracy going from one theory to another can be accompanied by major changes in ontology.
It is far from the case that philosophers are all Hegel-style rationalists: philosophers generally grant that it is your brain that interprets and aggregates sensory signals, forming a mental model of the world, and that it was created by evolution through natural selection, optimizing for inclusive genetic fitness in this world.
And it's also not the case that you have to make positive claims about a priori reasoning to point out the limitations of empiricism. And it's also not the case that noticing the limitations of empiricism is the same as refusing to use it at all.
Most of philosophy is going in the opposite direction, doing anything but adding up to normality.
Examples, please.
Did we really expect that solving philosophy would invalidate the applied answers of the sciences?
Have you solved philosophy? Has anybody?
There's not much evidence that the looking-based approach is solving problems in practice. Rationalists don't have clear answers to consciousness or ethics -- and how would "looking" help in those areas, anyway? Some things are visible, and subject to direct feedback; other things aren't.
On the Ptolemaic system's accuracy when you add epicycles: due to geocentrism turning out to be incorrect, the apparently high accuracy vanishes when you look from a vantage point not on Earth. Looking is still the ultimate arbiter between the different models. Philosophical and scientific progress tend to follow choices to look from (literal or figurative) different vantage points. The fact that in many cases the vantage point we'd like to look from is unreachable is unfortunate for us, and makes some questions unresolvable by us, either for now or forever.
On the Ptolemaic system’s accuracy when you add epicycles: due to geocentrism turning out to be incorrect, the apparently high accuracy vanishes when you look from a vantage point not on Earth
You can build a different, arbitrarily complex system for a vantage point other than the Earth. Since geocentrism is wrong, it can't be any worse.
Looking is still the ultimate arbiter between the different models.
Is it? Don't we use simplicity as a criterion as well? How about consilience? Does "ultimate arbiter" mean "only arbiter"? Is empirical correctness necessary or sufficient?
What you and @Ape in the coat are saying is mostly just vague.
First: to be clear, I think we're both in agreement that the trilemma is real, and empirical observation can't resolve the fundamental problem of wanting to be absolutely certain of anything, yes? That empirical correctness is importantly not sufficient for that?
Second: I think you misunderstood what I meant by a vantage point other than Earth. Of course you can make such a model that works from the moon, and one that works from Saturn, and so on. I'm not sure you can build a model that does that from all vantage points (as opposed to being custom-designed for one in particular) without also being, in some deep sense, isomorphic to the relevant parts of reality in the same way that our current model of the solar system is. If you can, that seems like a pretty important epistemological result, and I'd like to learn more! I suppose you could create an arbitrarily large lookup table that specifies what you'll see from everywhere at every time, but to do that you still need to know how to generate that table, and the generator (I would argue) captures the relevant parts of the structure of our own model.
But within the class of relevantly isomorphic models, yes, you have a choice of which to use for what purpose, and when and how to switch among them. When you talk about simplicity being a criterion, usefulness is one of the reasons we do that. Another is that simplicity generally means fewer assumptions and implications liable to be proven wrong by future observation. It's great, though, to know which less simple models exist that capture current observations, in order to be able to revisit them when new data needs explaining.
As a warm-up exercise, try the tree story with something actually contentious
It's a fine exercise for beginners, but I hope we are long past it, at this point.
All the "contentiousness" evaporates as soon as we've fixed the definitions and got rid of the semantic confusion.
Note the lack of agreement about what empirical evidence even is, and about how science works.
Granted. This disagreement is resolved the same way still. You gather evidence about interpreting evidence. You reflect on it with your mind. And so on.
Probably not very, but philosophy isn't stuck there, since it has plenty of highly technical debates.
The "technicality" of the debates is really not the issue here.
The evolutionary argument guarantees that you can know just enough for your survival, and only that: things like which berries to eat and animals to avoid being eaten by. (There's no evidence at all that it's "optimal".)
Are you not familiar with the notion of an optimization process? When you are a result of successful replication of imperfect replicators in a competitive environment with limited resources, there is quite a lot of evidence for some kind of "optimality".
Typical philosophical problems, about ontology and epistemology, the real nature of things, have no relevance to survival, so the evolutionary argument doesn't tell you they are soluble.
If they existed in some kind of separate magisterium where our common knowledge wasn't applicable, then yes. Is that your stance?
Some things are visible, and subject to direct feedback; other things aren't.
So you use indirect feedback, building on top of knowledge of things that are directly visible. I'm rather sure you understand how this works.
To show that empiricism works, you need to show that empiricism *alone* works, and that it works for everything, including the tricky edge cases. And you can't infer all that from the fact that it works in one simple case.
I can keep applying it to the "tricky cases" and see how things work from there. And this way I can aggregate more and more evidence and so on and so forth. Never reaching absolute certainty but always refining my tools.
models don't just drop out of the data.
They kind of do, in a manner of speaking. There are several ways to aggregate data into a model. We can try several of them and see how the resulting models predict new data, thereby collecting evidence about what kinds of models are good at data prediction in general, and so refining our tools of model construction.
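For what it's worth, the "try several, see how they predict new data" step is ordinary held-out validation. A minimal sketch (the toy data and quadratic ground truth are assumptions for illustration):

```python
# Minimal sketch of "try several models, see how they predict new data":
# fit polynomials of increasing degree to noisy samples of a quadratic,
# then score each on held-out points the fit never saw.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.1, x.size)

train, test = slice(0, 30), slice(30, 40)  # hold out the last 10 points

def heldout_error(degree):
    coeffs = np.polyfit(x[train], y[train], degree)  # aggregate data into a model
    pred = np.polyval(coeffs, x[test])               # predict unseen data
    return float(np.mean((pred - y[test]) ** 2))

for d in (1, 2, 9):
    print(d, heldout_error(d))
# degree 2 should beat both the underfit (1) and the overfit (9) model
```

The held-out score is itself evidence about which model-construction choices work, which is the recursive refinement described above.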
Being already selected for intuitions related to surviving in the world and not starting from scratch helps quite a lot.
Conjecture is a conscious, creative and cognitive process, not something that happens automatically as part of perception.
Okay I think I understand what is going on here. Are you under impression that I'm trying to bring back the old empiricism vs rationalism debate, arguing on the side of empiricism?
If so, I want to stop you here, as this couldn't be further from the truth. I believe this whole debate was quite embarrassing and the whole distinction between pure observation and pure cognition doesn't make sense in the first place. Observation is cognition, cognition is observation. You can skip all the obvious points about how one actually needs a brain to make observations - I've explicitly mentioned it myself in the post.
There are direct quantifiable tests for predictive accuracy, but no way of directly comparing a map to the territory. Scientific realists hope and assume that empirical adequacy adds up to ontological correctness ... but it isn't necessarily so.
I'll talk about it in a future post.
One way of making this point is that ontologically wrong theories can be very accurate.
Yes, that's totally fine. I don't think we have any disagreement here.
Our inability to make direct comparisons between map and territory extends to an inability to tell how close we are to the ultimately accurate ontology. Even probabilistic reasoning can't tell us how likely our theories are in absolute terms. We only know that better theories are more probably correct than worse ones; we don't really know whether current theories are 90% correct or 10% correct, from a God's-eye point of view.
[Half joking]
Thankfully, there doesn't seem to be any God, so we might as well not care about his point of view too much.
[/half joking]
Yes, it's all probabilities all the way down, without perfect certainty. This is fine. We can come up with adversarial examples where it means that we were completely duped, and our views are completely disentangled from "true reality" and were simply describing an "illusion", but renaming "reality" to "illusion" doesn't actually change anything of substance.
Examples, please.
There will be plenty in the future posts. But generally, consider the fact that philosophy reasons in all directions, and normality is only a relatively small space of all possible destinations.
Have you solved philosophy? Has anybody?
Well, I've gone further than many. But it's not relevant to the point I'm making here.
All the “contentiousness” evaporates as soon as we’ve fixed the definitions and got rid of the semantic confusion.
Of course not. Having clear semantics is a necessary condition for understanding the world, not a sufficient one. You have to look. Among other things.
You gather evidence about interpreting evidence
You can only gather theories about interpreting evidence. You can't see how well such theories work by direct inspection. It isn't looking.
This would work much better if you thought about it concretely. Alice says evidence includes introspections, subjective seemings; Bob says it is only ever objective. What do you do next?
When you are a result of successful replication of imperfect replicators in a competitive environment with limited resources there is quite a lot of evidence for some kind of “optimality”.
I don't see why a slug or wallaby is optimising anything, so why should I be? What makes humans the pinnacle of creation?
If they existed in some kind of separate magisterium where our common knowledge wasn't applicable, then yes
They exist in a separate magisterium where direct, sensory evidence isn't applicable, because they are about the causes and meaning of whatever sensory evidence you happen to have. The interpretation of evidence is a separate magisterium from the gathering of evidence, and not in a spooky way.
So you use indirect feedback
Which is what? Give concrete examples.
I can keep applying it to the “tricky cases”
Applying what? You can't keep applying evidence-gathering to solve the problem of interpreting evidence. It's unclear whether you are talking about pure empiricism, or some kind of vaguely defined solve-everything approach.
And this way I can aggregate more and more evidence
Which is not self-interpreting, so you are just creating a bigger and bigger problem.
We can try multiple of them and see how these models predict new data
But they don't, in the trickiest cases. I've already addressed that point: the Ptolemaic model can be adjusted to fit any data.
Being already selected for intuitions related to surviving in the world
I've already addressed that point too: you don't need ontological understanding to survive. You don't get direct feedback about ontological understanding. So it's a separate magisterium.
Okay I think I understand what is going on here. Are you under impression that I’m trying to bring back the old empiricism vs rationalism debate, arguing on the side of empiricism?
What's "looking" if not empiricism?
I'm not arguing for rationalism over empiricism, or for never using empiricism. I'm arguing against pure empiricism as being able to solve all problems. Which is not to say there is something else that does. It's a mixture of pluralism -- there's more than one kind of epistemic problem and solution -- and scepticism -- there's no guarantee of solving anything, even using more tools than "looking".
I already said that here:
And it's also not the case that you have to make positive claims about a priori reasoning to point out the limitations of empiricism. And it's also not the case that noticing the limitations of empiricism is the same as refusing to use it at all.
Yes, it’s all probabilities all the way down, without perfect certainty
No, it's worse than that. Probabilities require quantification of how true or likely something is. But there is no way of objectively quantifying that for ontological interpretation. And subjective probability leads to perennial disagreement, not convergence.
We can come up with adversarial examples where it means that we were completely duped, and our views are completely disentangled from “true reality” and were simply describing an “illusion”, but
But that only allows us to reject N false theories, not home in on a single true one. Convergence is a problem as well as certainty.
Renaming “reality” to “illusion” doesn’t actually change anything of substance
If your beliefs are illusory, they are false. That might not make any difference instrumentally, to what you can predict, but you are not assuming instrumentalism, and neither is Yudkowsky.
But generally, consider the fact that philosophy reasons in all directions, and normality is only a relatively small space of all possible destinations
What's normality? If you just mean "saving appearances", rather than predicting something that is empirically disprovable, then most philosophy does that. What doesn't? Illusionism? But that's quite popular around here!
I also thought the robot's answer missed the point quite badly ... because it reduced the ought all the way down to an is -- or rather a bunch of isses.
If you dismiss any reduction of ought to is,
I don't. As I said:
Reducing ethical normativity isn’t bad
Not to what one would. Your ethical module may not be directly connected to the behavioral one and so your decisions are based on other considerations, like desires unrelated to ethics.
Are you saying that's the only problem? That the action you would have taken absent those issues is the right action, in an ultimate sense?
This doesn’t change the fact that what you ought to do is the output (or a certain generalization of multiple outputs) of the ethical module,
It's not a fact. There are any number of ethical theories where what you should do is not necessarily what you would do, e.g. utilitarianism, which is quite popular round here. When you think about maths, that's neural activity, but it doesn't follow that it defines mathematical correctness. Errors are neural activity as well. The normative question is quite separate. Even if you want to reduce it, it doesn't follow that the only way to do so is to have eight billion correct answers.
which is a computation taking place in the real world, which can be observed.
That's quite irrelevant. The fact that it takes neural activity to output an action tells you nothing about the ethics of the action. "Ought" and "ethical" aren't just vacuous labels for anything you do or want to do.
there are potentially eight billion answers to what one ought to do.
Potentially but not actually.
Nothing hinges on having exactly eight billion right answers. More than one right answer is enough of a problem.
Once again, when you look, turns out individual ethical views of people are not *that* different
Yes they are. Political divisions reflect profound ethical divisions.
There’s a consistent theme in rationalist writing on ethics, where the idea that everyone has basically the same values, or “brain algorithms”, is just assumed … but it needs to be based on evidence as much as anything else.
Not basically the same, but somewhat similar. And it’s not just assumed, it’s quite observable.
The differences are observable. Fraught debates are people disagreeing about the value of freedom versus equality, etc.
In any case, the problem with subjectivism is that there are potentially multiple right answers.
Human ethical disagreements are mostly about edge cases. Like what is your objective claim here, that human values are not correlated at all?
No. I don't accept that ethics "is" whatever values you happen to have, or whatever decision you happen to make.
It’s social constructivism of morality. Which is rooted in our other knowledge about game theory and evolution.
If morality is socially constructed, the robot is wrong about metaethics. What the robot should do is follow the social rules, and if its programming says something different, then its actions are object-level wrong.
Yes, this is exactly my point. A lot of things, which are treated as “applied missing the point answers” are in fact legitimately philosophically potent. At the very least, we should be paying much more attention to them.
Is the robot missing the point or not?
Therefore it’s not just “by looking” but “pretty much by looking”. I completely agree about the necessity to abandon the notion of certainty
That's just the start. The tricky question is how much else we need to abandon. In particular, it's not clear whether convergence on a single most likely theory of everything is possible, even if you have abandoned certainty.
You talk about the philosophers not having much to add in the third comic, and the scientist getting it right. Seems to me like the engineer's/robot's answers in the first two comics are importantly misguided/non-helpful though.
The more sophisticated version of the first question would be something about whether you ought to care about copies of yourself, how you'd feel about stepping into a destroy-then-reassemble teleporter, etc. I think the engineer's answer suggests that he'd care about physical continuity when answering these questions, which I think is the wrong answer. (And philosophers have put in work here — see Parfit.)
In the second comic, the robot's answer is fine as far as predictive accuracy goes. But I'd interpret the human's question as a call for help in figuring out what they ought to do (or what their society ought to reward/punish, or something similar). I think there's totally helpful things you can say to someone in that situation beyond the robot's tautologies (even granting that there's no objective truth about ethics).
You talk about the philosophers not having much to add in the third comic, and the scientist getting it right.
That's not exactly the case. I'm trying to make a more nuanced point than "boo philosophy and hooray science".
I'm saying that the applied answer is much deeper than it initially appears. But to discover this deepness you still need to go on the philosophical journey. Most people who dismiss philosophical questions don't in fact understand the deepness of the "scientific answer" because they didn't go on this journey.
Seems to me like the engineer's/robot's answers in the first two comics are importantly misguided/non-helpful though.
I think they are similarly helpful and deep, highlighting some blindspots in conventional discussions on the matter.
The more sophisticated version of the first question would be something about whether you ought to care about copies of yourself
That would be a different question, though. The question of value, not ontology.
The ontological question is about the way to solve some probability-theoretic questions like the anthropic trilemma. And the insight is that causality, the physical nature of the particular experiment, matters quite a lot, not whether people have "similar atoms". I'll be writing about it in detail at some point in my PTF102 sequence.
In the second comic, the robot's answer is fine as far as predictive accuracy goes.
The robot's answer highlights the way to reduce the concept of "shouldness": how it's a property of the mind's decision-making algorithm, and where you need to look to find an answer, which is one of the core insights of metaethics.
I also thought the robot's answer missed the point quite badly ... because it reduced the ought all the way down to an is -- or rather a bunch of isses.
If what one ought to do reduces to what one would do, there are potentially eight billion answers to what one ought to do. That works for a decision theory, because decision theory only needs to guide individual actions -- but not for an ethical theory, because ethical theories need to identify bad people who need punishment, and no one is bad by their own values. Predictive accuracy is irrelevant to both, it's a different ball game.
There's a consistent theme in rationalist writing on ethics, where the idea that everyone has basically the same values, or "brain algorithms", is just assumed ... but it needs to be based on evidence as much as anything else. The Rationalist approach isn't some deep insight; it is quite flawed.
Reducing ethical normativity isn't bad, but doing it in a way that leads to sweeping subjectivism is bad. If you accept subjectivism, you miss better answers.
In the first comic, the engineer's answer is also a plausible philosophical answer. The "philosopher" is assuming that identity has to be based on intrinsic properties; the "engineer" isn't. But there is actually no philosophical fact that identity has to be intrinsic. In Bensinger's "When is a Mind Me?" he takes the "philosopher's" perspective.
In the third comic, the philosopher is technically correct. You can't achieve certainty from a finite chain of observations, and you can only make a finite chain. Modern empiricists have admitted this, giving up on certainty.
I also thought the robot's answer missed the point quite badly ... because it reduced the ought all the way down to an is -- or rather a bunch of isses.
If you dismiss any reduction of ought to is, you are essentially dogmatically certain that Hume's guillotine is true. Is it your stance?
If what one ought to do reduces to what one would do
Not to what one would. Your ethical module may not be directly connected to the behavioral one and so your decisions are based on other considerations, like desires unrelated to ethics. This doesn't change the fact that what you ought to do is the output (or a certain generalization of multiple outputs) of the ethical module, which is a computation taking place in the real world, which can be observed.
there are potentially eight billion answers to what one ought to do.
Potentially but not actually. Once again, when you look, turns out individual ethical views of people are not *that* different. That said, there is still room for disagreement and how exactly we aggregate individual ethical preferences into morality is still up to debate. But this is the next question, with the somewhat similar direction for a solution.
There's a consistent theme in rationalist writing on ethics, where the idea that everyone has basically the same values , or "brain algorithms", is just assumed ... but it needs to be based on evidence as much as anything else.
Not basically the same, but somewhat similar. And it's not just assumed, it's quite observable. Human ethical disagreements are mostly about edge cases. Like what is your objective claim here, that human values are not correlated at all?
Reducing ethical normativity isn't bad, but doing it in a way that leads to sweeping subjectivism is bad. If you accept subjectivism, you miss better answers.
I think calling it subjectivism is very misleading. The whole subjective/objective duality is quite horrible - I'll be dedicating a post to it at some point. It's social constructivism of morality. Which is rooted in our other knowledge about game theory and evolution.
In the first comic, the engineers answer is a also a plausible philosophical answer.
Yes, this is exactly my point. A lot of things, which are treated as "applied missing the point answers" are in fact legitimately philosophically potent. At the very least, we should be paying much more attention to them.
In the third comic, the philosopher is technically correct. You can't achieve certainty from a finite chain of observations, and you can only make a finite chain. Modern empiricists have admitted this, giving up on certainty.
Therefore it's not just "by looking" but "pretty much by looking". I completely agree about the necessity to abandon the notion of certainty. If you want to give philosophers some credit for this - I agree. The irony of the joke stays the same. When the question is refined so that we removed the problematic notion of "certainty", the naive answer turned out to be basically true.
I think you're missing something here. Science, and more specifically physics, is built on first theorizing or philosophizing, coming up with a lot of potential worlds a priori, and only looking to see which one you probably fall in after the philosophizing is done. How do you know a tree exists? Well, I bet a good philosopher from a different universe could come up with the concept of trees only knowing we live in 3+1 dimensions:
Looking isn't needed to know trees really exist, just to know that tree over there really exists. That involves a bit of cyclical reasoning, which is basically because you cannot prove a system consistent within the system. The best you can do is check that the current statement isn't inconsistent, such as if someone says, "how do you know that tree over there really exists," while pointing at empty air, and you say, "it doesn't."
I think my issue with empiricism is that it does not generalize, at all. A better way to compress sense data is to first come up with several theories, then use a few bits to point out which theory is correct and how it is being applied.
Science, and more specifically physics, is built on first theorizing or philosophizing, coming up with a lot of potential worlds a priori, and only looking to see which one you probably fall in after the philosophizing is done.
This is a mainstream position, which is sort of, kind of true in broad strokes, but misses very important context, therefore leaving people confused about all the interesting nuance, preventing further progress. So I push back against it.
The important context/interesting nuances are:
I bet a good philosopher from a different universe could come up with the concept of trees
Crucially, the other universe has to be similar enough to our universe, so that reasoning that evolved in the other universe applies to ours as well.
I think my issue with empiricism is that it does not generalize, at all.
Have you heard about unsupervised learning?
I think your main counterpoint to what I said is that people are doing an optimization process where they look at the data while simultaneously doing a search for a better theory. In fact, you cannot even disentangle their brain from the reality that created and runs it, so even a best attempt at theory first, observation second is doomed to fail.
I think the second, stronger sentence is mostly wrong. You do not need a universe similar enough to our universe to produce reasoning similar to ours, just one that can produce similar reasoning and has an incentive to. That incentive can be as little as, "I wonder what physics looks like in 3+1 dimensions?" just like our physicists wonder what it looks like in more or less dimensions, with different fundamental constants, with different laws of motion, with positive spacetime curvature, and so on. Or, we can just shove a bunch of data from our universe into theirs, and reward them for figuring it out (i.e. training LLMs).
As for the first, weaker sentence, yes this is true. Pretty much everyone has tight feedback loops, probably because the search space is too large to first categorize its entirety and then match the single branch you end up observing. I think the role of observation here is closer to moving attention to certain areas of the search space, rather than moving the search tree forward (see Richard Ngo's shortform on chess). The thing is, this process is unnecessary for simple things. You probably learned to solve TicTacToe by playing a bunch of games, but you could have just solved it. I think the concept of trees is relatively simple, though of course if you want a refined concept, like a tree's protein composition or DNA sequence, that space is too big and you probably have to just go out and observe it.
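The TicTacToe claim is easy to make concrete. A brute-force minimax search (a sketch of my own, not anything from the thread) solves the game from the empty board without "observing" a single played game, and confirms the known result that perfect play is a draw:

```python
# Solve TicTacToe by exhaustive search (minimax) - no games played,
# no data observed, just enumeration of the possibility space.

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def solve(board, player):
    """Value of the position for X: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, s in enumerate(board) if s == ' ']
    if not moves:
        return 0  # board full, no winner: draw
    values = [solve(board[:m] + player + board[m + 1:],
                    'O' if player == 'X' else 'X')
              for m in moves]
    return max(values) if player == 'X' else min(values)

# Perfect play from the empty board is a draw:
print(solve(' ' * 9, 'X'))  # 0
```

Of course this only works because the game tree is tiny; the comment's point is exactly that "tree" (the plant) is also a simple enough concept, while protein composition is not.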
I don't really understand your point about unsupervised learning. With unsupervised learning, you can just run a bunch of data through your model until it learns something. That's the observation -> theory pipeline and it's astoundingly inefficient and bad at generalization. Humans could do the same with 100x fewer examples, which is the gap models need to clear to solve ARC-AGI. Humans are probably doing something closer to theory -> observation.
Presenting it as if they first came up with general principles which were then validated by the experience for the sake of readers comprehension, even though the casual process that led to her discovery of the theory is different.
I'm not sure what this was supposed to be. It's a noun phrase, not a sentence.
Interesting. I have thought about these questions a lot, and came to a possibly different conclusion about verifying the reality of a tree.
<spoiler?>
All you can be certain of is the existence and content of your own experience, in the present moment. You currently experience looking at a tree. Now you experience remembering looking at that tree as you ponder the question "How do I know if the tree is real?"
Your present experience comes with access to this thing you seem to call "memory". In particular, some memory is "episodic". The type signature of "episodic memory" feels vaguely similar to that of an experience. This, and your intuitive sense of mostly-linear time, are enough to reasonably guess that past memories actually correspond to experiences, and that these experiences seem to form a linear order called "time". By observing memories of having observed memories, you conclude that an experience only has access to those before it on its time-curve, and that these memories are somewhat unreliable and imprecise.
You notice that the "visual sensory input" part of your experience contains data on multiple levels of abstraction, spanning from the texture of a red smudge on a ball to the fact that there is a ball in the first place. You call the latter type of data "object detection". You notice that nearby time-slices of experience mostly share detected objects (object permanence).
Based on memories of experiences containing observations of "humans" and on experiences of interacting with these humans and of observing your own physical body in various ways (e.g. via mirrors, generalizing their apparent effect on your viewing of other objects to yourself), you notice that your physical presence is of a similar type to these "humans", and conclude that the experience that is you is simply that of a human, and so other humans have similar experience. Thus, experiences interact with one another in at least two ways: first is through episodic memory, and second is through physical interaction between the carriers of said experience.
You notice that the medium through which both kinds of these interactions, along with most sensory observations, seem to occur ("physical reality") very stably obeys certain rules, including a more general form of object permanence, which you refer to as the "existence" of a particular object.
You note that the way you conceptualize and use language was pretty much entirely learned from interaction with other nearby humans, and conclude that, at least for the most basic things like object detection, humans mostly use the same words to describe the same things. Thus, the exact meaning of "existence" as it relates to the tree is pretty much agreed upon.
According to your memory you saw the tree. You also touched and felt the tree. You noticed that the tree grants you the ability to remain suspended above the ground for a lot longer than you otherwise could, via you climbing it. You talk to others and confirm that they also observed the tree through multiple channels. You conclude that the tree exists.
</probably-not-much-of-a-spoiler>
From inside it may even look similar to cyclical reasoning, after all you can only learn about evolution that justifies your learning ability, by using the learning ability created by evolution. But this is just a map-territory confusion. The actual causal process in the reality that makes our cognition engines work is straightforward and non-paradoxical. The cognition engine works even if it’s not certain about it.
Can you explain what you mean here? If you assume that out in the territory you have a brain that has these properties, you can explain how you come to know things. But that's an assumption. Doesn't lead you to know things. "from the inside" is what we care about here; your knowledge is in the map too, at least to you.
I'll be talking more about the Münchhausen trilemma in detail in a separate post. But here are some highlights:
"from the inside" is what we care about here
There are two things that we care about:
1. Our maps actually correspond to the territory.
2. We are certain that our maps correspond to the territory.
If our brains are created by natural selection and then they discover natural selection we have a straightforward non-paradoxical causal history for how our maps correlate to the territory:
Natural Selection -> Brain -> Natural Selection*
Where A* means map of A.
But that's an assumption. Doesn't lead you to know things.
In the map it is an assumption. But in the territory it's either true or false and it doesn't matter whether we've assumed it or not.
A map can correspond to the territory even if no one knows about it. We can have 1. without 2.
your knowledge is in the map too
You can map the map-territory correspondence as well. As soon as we consider it, we get
(Natural Selection -> Brain -> Natural Selection*)*
This allows us to get a map of map-territory correspondence and then a map of map-territory correspondence of a map of map-territory correspondence and so on.
So, even though we do not have 2. and, in fact, cannot have 2. in principle, as any reason for certainty will have to go through our mind, we can have
2'. We are somewhat justifiably somewhat confident in 1
and
3'. We are somewhat justifiably somewhat confident in 2'
and so on as long as we have enough data to construct a meta-model of the next level.
No offense. This sounds rather trivial, and like you're sidestepping rather than answering the question.
Of course (1) might be true. The problem is you can't know that. Like you said
So, even though we do not have 2. and, in fact we can not have 2. in principle, as any reason for certainty will have to go through our mind.
And yeah, you can bump it up meta levels. Which also doesn't solve the issue.
The point is to refine the wrong question about certainty, 2., into a better question about probabilistic knowledge, 2'. If you just want to get an answer to 2. - then this answer is 'no'. We can't be certain that our knowledge is true. Then again, neither do we need to be.
If your question is how it's different from cyclical reasoning then consider the difference:
Evolution produced my brain that discovered evolution. After I've considered this thought, I'm now certain of both of them, and so the case is closed. No more arguments or evidence can ever persuade me otherwise.
I can never be certain of anything, but to the best of my knowledge the situation looks exactly the way it would look if my brain was produced by evolution, which allowed my brain to discover evolution. I'll be on the lookout for counter-evidence, because maybe everything I know is a lie, but for now, on full reflection of my knowledge, including techniques of rationality and notions of simplicity and computational complexity, this seems to be the most plausible hypothesis.
From inside it may even look similar to cyclical reasoning, after all you can only learn about evolution that justifies your learning ability, by using the learning ability created by evolution. But this is just a map-territory confusion. The actual causal process in the reality that makes our cognition engines work is straightforward and non-paradoxical. The cognition engine works even if it’s not certain about it.
I don't think it is just a confusion between the map and the territory to claim that this is circular reasoning.
It is circular reasoning to think "My map tells me that the territory is likely to have given me a map which describes it accurately, which justifies me using it to conclude that my map describes the territory". The reason why this circular reasoning process is somewhat justified is a combination of Occam's razor and the fact that the conclusion is a 'fixed point', while assuming that your map is inaccurate does not produce a similarly self-affirming reasoning loop. So there is an asymmetry which seems to suggest that your map is likely to be somewhat accurate. But this is not an obviously true conclusion. I don't actually know how well I could defend it.
I think it's not so much a circle as a spiral (not in the LLM sense, I don't think). Each level of justification pushes just a little deeper than the last, even though in words you keep asking some of the same fundamental questions.
The argument I referred to is indeed circular. A spiral is topologically just a line which terminates. It doesn't have the structure you're referring to. Maybe if you want to think of your progress as measured on a vertical axis and the steps used to get there as represented on a plane, then the curve plotted above the circle representing the circular argument on that plane would be helical, which has a spiral like shape. However, this is not the same as saying that the argument is a spiral.
There is a joke format which I find quite fascinating. Let’s call it Philosopher vs Engineer.
It goes like this: the Philosopher raises some complicated philosophical question, while the Engineer gives a very straightforward applied answer. Some back and forth between the two ensues, but they fail to cross the inferential gap and solve the misunderstanding.
It doesn’t have to be a literal philosopher and engineer, though. Other versions may include philosopher vs scientist, philosopher vs economist, human vs AI, human vs alien and so on. For instance:
One thing that I love about it is that the joke is funny regardless of whose side you are on. You can laugh at how much the engineers miss the point of the question. Or at how much the philosophers are unable to see the answer that is right in front of their noses. Or you can contemplate the nature of the inability of two intelligent agents to understand each other. This is a really interesting property of a joke, quite rare in our age of polarization.
But what fascinates me the most is that this joke captures my own intellectual journey. You see, I started from a position of deep empathy for the philosopher. And now I’m much more in league with the engineer.
Let’s look at one more example. This time of philosophers being bullied by scientists.
When I first considered the question - I think I was about twelve back then - it was obvious that scientists are missing the point. Sure, if we assume that our organs of perception give us reliable information, then “looking” would be a valid justification. But how can we justify this assumption? Surely not by more looking - that would be circular reasoning. How can we justify anything in principle? What justifies the justification? And the justification of justification? And so on? Is it an infinite recursion? Or, if we stop at some point, therefore leaving a certain step unjustified, how is that different from stopping at the first step and not justifying anything at all?
It seemed to me, that the “scientific answer” is just the first obvious step. The beginning of philosophical exploration. And if someone refuses to follow through, that must be a sign of some deep lack of curiosity.
And so one may think, as I did, that science is good at answering the first obvious question. But deeper questions lie beyond its abilities - in the realm of philosophy.
Except... philosophy isn’t good at answering these questions either. It can conceptualize the Problem of Induction or the Münchhausen trilemma. But those are not answers - they are roadblocks on the path to one.
One may say that philosophy is good at asking questions. Except... how good are you, really, if you are stuck asking the same questions as a 12-year-old child? Oh sure, I might have been a bright one, but nevertheless, when the aggregated knowledge of humankind in some discipline is on the level of some kid, it’s not a sign in favor of that discipline.
The really good questions can be asked on top of the existing answers. Thus we push forward the horizon of the unknown. So if your discipline isn’t good at answering its own questions, it’s not really good at asking them either. And vice versa. After all, asking the right question is a crucial step towards getting an answer.
But the most ironic thing is that if one actually goes on a long philosophical journey in search of the answer to the question “How can we know things about the external world at all?”, then, at the end of this heroic quest, after the three-headed dragon is slain, the kingdom of reason is saved, and the skeptics are befriended along the way, the answer will be found on the diamond mural, written in golden letters.
And this answer will be: “Pretty much by looking”.
Well, the expanded version, spoiler alert, is:
[spoiler]
[/spoiler]
But I think “Pretty much by looking” captures it about as well as any four-word combination can.
Turns out that the “missing the point” applied answer isn’t just the beginning of the exploration. It encompasses the whole struggle, containing a much deeper wisdom. It simultaneously tells us what we should be doing to collect all the puzzle pieces, and also what kind of entities can collect them in the first place.
On every step of the journey it’s crucial. To learn about all the necessary things, you need to go into the world and look. Even if someone could’ve come up with the specific physical formulas of thermodynamics without ever interacting with the outside world, why would they be paid more attention than literally any other formulas, any other ideas?
The answer is not achieved by first coming up with “a priori” justifications for why we could be certain in our observation and cognition before going to observe and cognize. We were observing and reflecting on these observations all the way, and from this we’ve arrived at the answer. In hindsight, the whole notion of “pure reasoning”, free from any constraints of reality, is incoherent. Your mind is already entangled with the reality - it evolved within it.
“Looking” is the starting point of the journey, the description of the journey as a whole, and also its finish line. The normality and common sense to which everything has to add up.
And how else could it have been? Did we really expect that solving philosophy would invalidate the applied answers of the sciences, instead of proving them right? That it would turn out that we don’t need to look at the world to know things about it? Philosophy is the precursor of Science. Of course its results add up to it.
Somehow, this is still a controversial stance. Most of philosophy is going in the opposite direction, doing anything but adding up to normality. It’s constantly misled by semantic arguments, speculates about metaphysics, takes modal reasoning to absurdity and then congratulates itself, staying as confused as ever.
And while this is tragic in a lot of ways, I can’t help but notice that this makes the joke only funnier.