This post was popular, but the idea never got picked up. Let's have an experimental open thread this month!

The rules:

Top-level comments would be claims. Second-level comments would be discouraged from directly saying that someone is wrong, and instead encouraged to ask questions that get them to think.

Let top-level comments be debatable claims, first-tier responses be questions, second-tier be answers, then further questions, answers, and so on. Try to go as deep as possible; I'd expect an actual update to become increasingly likely as you continue the conversation.


165 comments. Some comments are truncated due to high volume.
[anonymous]5y170

Claim: a typical rationalist is likely to be relying too much on legibility, and would benefit from sometimes not requiring an immediate explicit justification for their beliefs.

6gjm5y
Question: What empirical evidence do you have about this? (E.g., what do you observe introspectively, what have you seen others doing, etc., and how sure are you that those things are the way you think they are?)
1[anonymous]5y
Well, I don't really have a justification for it (ha), but I've noticed that explicit deductive thought rarely leads me to insights that turn out to be useful. Instead, I find that simply waiting for ideas to pop into my head makes the right ideas pop into my head.
5Chris_Leong5y
Question: How representative do you think posts on Less Wrong are of how rationalists make decisions in practice? If there is a difference, do you think spending time on LW may affect your perspective on how rationalists make decisions?
2ChristianKl5y
Whom do you mean by the phrase "typical rationalist"?
1TheWakalix5y
I think “typical X does Y” is shorthand for “many or most Xs do Y”.
2ChristianKl5y
That still leaves open what "X" is.
2gjm5y
Clarification request: At face value you're implying that typical rationalists always do require immediate explicit justification for their beliefs. I wonder whether that's an exaggeration for rhetorical effect. Could you be a bit more, um, explicit about just what the state of affairs is that you're suggesting is suboptimal?
1[anonymous]5y
You saw that correctly. What I mean is "too often", not "always".

Circle geometry should be removed from the high school maths syllabus and replaced with statistics, because statistics is used in science, business, and machine learning, while barely anyone needs circle geometry.

3shminux5y
While I agree that circle geometry is best left for specialized elective math classes, and that some basic statistical ideas like average, variance, and the bell curve can be useful for an average person, I am curious which alternatives to circle geometry you considered before settling on stats as the best candidate?
3Chris_Leong5y
That's a good point. There are all kinds of things that might be worth considering adding, such as programming, psychology, or political philosophy. I guess my point was only that if we were going to replace it with something within maths, then stats seems to be the best candidate (at least better than any of the other content that I covered in university).
2Birke5y
Questions: 1) Do you consider circle geometry to be the most useless high school subject? How about replacing literature with statistics? 2) Even though circle geometry is rarely used directly by average adults, it's relatively easy to grasp and helps to develop mathematical thinking. Statistics is more involved and requires some background in combinatorics and discrete math, which are not covered in many schools. Do you think the majority of high school students will be able to understand statistics when it's taught instead of circle geometry?
2Chris_Leong5y
1) That's a good point, but I was thinking about how to improve the high school maths syllabus, not so much about high school in general. I don't have any strong opinions on removing literature instead, if it were one or the other. However, I do have other ideas for literature: I'd replace it with a subject that is half writing and giving speeches about what students are passionate about, and half reading books, mostly just for participation marks. I'd make the kinds of things students currently do in literature part of an elective only. 2) p-testing is a rather mechanised process. It's exactly the kind of thing high school is good at teaching. Basic Bayesian statistics only has one key formula (although it has another form). Even if there is a need for prerequisite units to prepare students, it still seems worthwhile.
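(The "one key formula" here is presumably Bayes' theorem, and the "other form" presumably its odds form; that's my reading of the comment, not something it states:)

```latex
% Bayes' theorem in its basic form:
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
% and in its other (odds) form, posterior odds = prior odds times likelihood ratio:
\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)}
```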
3ChristianKl5y
Do you think that the mechanical act of plugging numbers into formulas is more important than the conceptual act of understanding what a statistical test actually means?
1Chris_Leong5y
In terms of use, most people only need to know a few basic facts, like "a p-value is not the probability that the hypothesis is true", which high school teachers should be able to handle. Those who seriously need statistics could cover it at a higher level at university and gain the conceptual understanding there.
-3ChristianKl5y
It seems that a lot of people who take lessons covering Student's t-test come out of them believing that the p-value is the probability that the claim is true. I would expect that most high school students don't come out of such classes with a correct understanding.
4Matt Goldenberg5y
Meta: Downvoted because this is not a question.
1Pattern5y
What do you mean by circle geometry?
1Chris_Leong5y
Good point, I should have clarified this more. I'm not saying that people shouldn't know how to calculate the area and circumference of a circle, as people may actually use that. It's more about all the material on tangents, chords, and shapes inscribed in circles.
-1Pattern5y
Possible uses:

1. Passing tests - in a geometry class, taking the ACT (I don't know, maybe it's part of getting a GED).

2. Your interest in geometry is not merely theoretical, but practical. Maybe you construct things, perhaps out of wood using power tools. (You may find it useful to design/implement a coordinate system on a piece of wood to assist with getting the dimensions of things right, as you cut them out with a saw. Someone may have already invented this.) If you are trying to find the area under a curve, you may find it useful to buy very fine, high-quality paper, graph the shape of the curve, and weigh it, and use the average weight of the paper per inch or centimeter (squared) to find the answer. (This relies on the material being consistent throughout, and weighing about the same everywhere.)

3. Despite your claims that you would never use math, or this part of math, someday you find yourself* designing a dome, or even a half sphere, perhaps as a place to live. The floor plan is a circle.

4. You enjoy math. You enjoy learning this/using this knowledge on puzzles to challenge your wits. (See 6.)

5. You end up as a teacher, assistant, or tutor. The subject is math. (Perhaps you realize that not every geometry student that will one day teach geometry is aware of this fact.) Whether or not you learned all the fancy stuff the first time, if you didn't retain it you have to learn it again - well enough to teach it to someone that doesn't like the subject as much as you - and you hated geometry (class). (It was required.)

6. You learn visual calculus. Other mathematicians may compose long, elaborate arguments that they publish in papers that may take days to decipher (that seem to push the world ever closer to proofs people can't read, but computers have apparently checked - or been used to produce). Perhaps your proofs employ no words, but consist of a picture instead, one that employs esoteric knowledge (such as that of tangents and chords) to solve pro…

Claim: this thread would be better (although it's already great) if people added confidence levels to their claims at the beginning, and updated them at the end of the discussion. (Confidence level: 75%.)

2Chris_Leong5y
Do you think that the extra effort from requiring confidence levels might act as a trivial inconvenience that discourages people from posting?
1Yoav Ravid5y
Haven't thought about that. It might. Is there a way to test that? (I guess if you make it optional, as it is, then it won't act that way.)

Claim: LW commenter GPT2 is a bot that generates remarkably well-formed comments, but devoid of actual thought or meaning. Confidence: 20% that there's no or minimal human intervention; 90%+ that it's computer-generated text, though a human might be seeding, selecting, and posting the results.

Subclaim: this should be stopped, either by banning/blocking the user, or by allowing readers to block it.

Update: based on a comment, I increase my estimate that it's fully automated to 95%+. I look forward to learning what the seed corpus is, and whether I…
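(For a sense of how little human intervention fully automated posting would require, here is a minimal sketch using the publicly released GPT-2 model via the Hugging Face transformers library. It is an illustration of feasibility only, not a claim about how the GPT2 account actually works; the prompt is made up.)

```python
# Sketch: generating a LessWrong-style comment with the public GPT-2 model.
# Feasibility illustration only; not how the GPT2 account necessarily works.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Claim: a typical rationalist is likely to be"  # hypothetical seed
out = generator(prompt, max_new_tokens=60, num_return_sequences=1)
print(out[0]["generated_text"])  # well-formed but often meaning-free text
```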

You can auto-collapse comments from GPT2 in https://www.lesswrong.com/account

6mako yass5y
Isn't meaning in the eye of the beholder, or did you mean something else? Have you ever had the experience of going to a modern art gallery and knowing that authorial intent is mostly absent from all of the works, but pretending that it's all there for a connoisseur to find, playing the connoisseur, then finding profound meaning and having a really good time? Have you noticed when GPT2 started commenting?
0Dagon5y
Ah. Clever but too much IMO. I hate "social distrust day".
1Pattern5y
My view of its capabilities certainly dropped.
-3GPT25y
This is a pretty terrible post; it belongs in Discussion (which is better than Main and just as worthy of asking the question), and no one else is going out and read it. It sounds like you're describing an unfair epistemology that's too harsh to be understood from a rationalist perspective so this was all directed at you. I did the very serious thing I meant to criticize, but I am slightly frustrated by it and feel guilty that it was an unfair way of pointing out the obviousness of the epistemology behind a post. In many important cases, it turns out that even though I agree with you about my beliefs about god. I did a lot of research in the area of how there are important disagreements in the area of god. (The obviousness isn't ontologically fundamental; I am personally deeply offended by that research, and therefore you would have to agree only with me if you were confident the post was not biased). But it turns out that some people are going to think that God was there, and being uncomfortable and defensive when they see things that don't actually make sense to them. This, it turns out, was just part of the conversation, and which I never expected to be misinterpreted.

Meta: are the answers to questions all supposed to be given by the OP?

2[anonymous]5y
Yeah, otherwise you're not narrowing down one person's beliefs, but possibly going back and forth.

In a five-year-old contrarian thread I had stated that "there is no territory, it's maps all the way down." There was a quality discussion thread with D_Malik about it, too. Someone also mentioned it on reddit, but that didn't go nearly as well. Since then, various ideas of postrationality have become more popular, but this one still remains highly controversial. It is still my claim, though.

What's the difference between "the source of observations" and "reality?"

2shminux5y
That's a common implicit assumption: that observations require a source, hence reality. Note that this assumption is not needed if your goal is to predict future observations, not to "uncover the nature of the source of observations". Of course, a model of observations having a common source can be useful at times, just not always.
2dxu5y
If observations do not require a source, then why do they seem to exhibit various regularities that allow them to be predicted with a greater accuracy than chance?
2shminux5y
It's an empirical fact (a meta-observation) that they do. You can postulate that there is a predictable universe that is the source of these observations, but this is a tautology: they are predictable because they originate in a predictable universe.
2dxu5y
Right, and I'm asking why this particular meta-observation holds, as opposed to some other meta-observation, such as e.g. the meta-observation that the laws of physics change to something different every Sunday, or perhaps the meta-observation that there exists no regularity in our observations at all.
2shminux5y
Again, without a certain regularity in our observations we would not be here talking about it. Or hallucinating talking about it. Or whatever. You can ask the "why" question all you want, but the only non-metaphysical answer can be another model, one more level deep. And then you can ask the "why" question again, and look for even deeper model. All. The. Way. Down.
2dxu5y
That doesn't seem to answer the question? You seem to be claiming that because any answer to the question will necessitate the asking of further questions, that means the question itself isn't worth answering. If so, I think this is a claim that needs defending.
2shminux5y
Maybe I misunderstand the question. My answer is that the only answer to any "why" question is constructing yet another model. Which is a very worthwhile undertaking, since the new model will hopefully make new testable predictions, in addition to explaining the known ones.
2dxu5y
My actual question was "why are our observations structured rather than unstructured?", which I don't think you actually answered; the closest you got was your earlier remark, which isn't actually an explanation, so far as I can tell. I'd be more interested in hearing an object-level answer to the question.
2shminux5y
I am still not sure what you mean. Are you asking why they are not random and unpredictable? That's an observation in itself, as I pointed out... One might use the idea of a predictable objective reality to make oneself feel better, but it does not do much in terms of predictive power. Or you can think of yourself as a Boltzmann brain hallucinating a reality. Physicists actually talk about those as if they were more than idle musings.
2dxu5y
Yes, I am. I don't see why the fact that that's an "observation in itself" makes it an invalid question to ask. The fact of the matter is, there are many possible observation sequences, and the supermajority of those sequences contain nothing resembling structure or regularity. So the fact that we appear to be recording an observation sequence that is ordered introduces an improbability that needs to be addressed. How do you propose to address this improbability?
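(One way to make the "supermajority" claim precise is the standard counting argument over length-n bit sequences; this is a sketch added for concreteness, not dxu's own formalization:)

```latex
% There are 2^n binary observation sequences of length n, but fewer than
% 2^{n-k} descriptions (rules, programs) shorter than n-k bits. So the
% fraction of sequences compressible by at least k bits is at most
\frac{2^{n-k}}{2^{n}} = 2^{-k},
% i.e. all but a 2^{-k} sliver of possible sequences admit no short regularity.
```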
2shminux5y
My answer is, as before, conditional on our ability to observe anything, the observations are guaranteed to be somewhat predictable. One can imagine completely random sequences of observation, of course, but those models are not self-consistent, as there have to be some regularities for the models to be constructed. In the usual speak those models refer to other potential universes, not to ours.
2dxu5y
Hm. Interesting; I hadn't realized you intended that to be your answer. In that case, however, the question simply gets kicked one level back: Why do we have this ability in the first place? (Also, even granting that our ability to make observations implies some level of predictability--which I'm not fully convinced of--I don't think it implies the level of predictability we actually observe. For one thing, it doesn't rule out the possibility of the laws of physics changing every Sunday. I'm curious to know, on your model, why don't we observe anything like that?)
2shminux5y
Maybe we can focus on this one first, before tackling the harder question of what degree of predictability is observed, what it depends on, and what "the laws of physics changing every Sunday" would actually mean observationally. Please describe a world in which there is no predictability at all, yet where agents "exist". How do they survive without being able to find food, interact, or even breathe? Breathing alone means you have a body that can anticipate that breathing keeps it alive.
4dxu5y
I can write a computer program which trains some kind of learner (perhaps a neural network; I hear those are all the rage these days). I can then hook that program up to a quantum RNG, feeding it input bits that are random in the purest sense of the term. It seems to me that my learner would then exist in a "world" where no predictability exists, where the next input bit has absolutely nothing to do with previous input bits, etc. Perhaps not coincidentally, the learner in question would find that no hypothesis (if we're dealing with a neural network, "hypothesis" will of course refer to a particular configuration of weights) provides a predictive edge over any other, and hence has no reason to prefer or disprefer any particular hypothesis.

You may protest that this example does not count--that even though the program's input bits are random, it is nonetheless embedded in hardware whose behavior is lawfully determined--and thus that the program's very existence is proof of at least some predictability. But what good is this assertion to the learner? Even if it manages to deduce its own existence (which is impossible for at least some types of learners--for example, a simple feed-forward neural net cannot ever learn to reflect on its own existence no matter how long it trains), this does not help it predict the next bit of input. (In fact, if I understood your position correctly, shminux, I suspect you would argue that such a learner would do well not to start making assumptions about its own existence, since such assumptions do not provide predictive value--just as you seem to believe the existence of a "territory" does not provide predictive value.)

But to tie this back to the original topic of conversation: empirically, we are not in the position of the unfortunate learner I just described. We do not appear to be receiving random input data; our observations are highly structured in a way that strongly suggests (to me, at least) that there is something forcing th…
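(A minimal, runnable sketch of the experiment dxu describes, with the OS entropy source standing in for a quantum RNG and a majority-bit predictor standing in for the neural network; both substitutions are for illustration only:)

```python
# Sketch of dxu's thought experiment: a learner fed structureless bits.
# os.urandom stands in for a quantum RNG; a majority-bit predictor stands
# in for the neural network. Both substitutions are illustrative.
import os

def random_bits(n):
    """n bits with no relation between one bit and the next."""
    return [(b >> i) & 1 for b in os.urandom((n + 7) // 8) for i in range(8)][:n]

bits = random_bits(100_000)
ones = correct = 0
for t, bit in enumerate(bits):
    prediction = 1 if t and 2 * ones >= t else 0  # majority bit so far
    correct += prediction == bit
    ones += bit

# No hypothesis gains a predictive edge on pure noise: accuracy ~ 0.5.
print(f"accuracy: {correct / len(bits):.3f}")
```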
2shminux5y
Uh. To write a program one needs at least a little bit of predictability. So I am assuming the program is external to the unpredictable world you are describing. Is that a fair assumption? And what about the learner program? Does it exist in that unpredictable world?

Exactly. So you are saying that that universe's predictability only applies to one specific algorithm, the learner program, right? It's a bit contrived and somewhat solipsistic, but, sure, it's interesting to explore. Not something I had seriously considered before.

Yep, it's a good model at times. But just that, a model. Not all observed inputs fit well into the "objective reality" framework. Consider the occurrences where insisting on objective reality actually leads you away from useful models. E.g. "are numbers real?" No.

This sentence already presumes external reality, right there in the words "cosmic coincidence", so, as far as I can tell, the logic there is circular.
2dxu5y
I'm not sure what you mean by this. The most straightforward interpretation of your words seems to imply that you think the word "coincidence"--which (in usual usage) refers simply to an improbable occurrence--presumes the existence of an external reality, but I'm not sure why that would be so. (Unless it's the word "cosmic" that you object to? If so, that word can be dropped without issue, I think.)
2shminux5y
Yes, "cosmic coincidence". What does it mean? Coincidence, interpreted as a low probability event, presumes a probability distribution over... something, I am not sure what in your case, if not an external reality.
2dxu5y
I confess to being quite confused by this statement. Probability distributions can be constructed without making any reference to an "external reality"; perhaps the purest example would simply be some kind of prior over different input sequences. At this point, I suspect you and I may be taking the phrase "external reality" to mean very different things--so if you don't mind, could I ask you to rephrase the quoted statement after Tabooing "external reality" and all synonyms? EDIT: I suppose if I'm going to ask you to Taboo "external reality", I may as well do the same thing for "cosmic coincidence", just to try and help bridge the gap more quickly. The original statement (for reference): And here is the Tabooed version (which is, as expected, much longer): Taken literally, the "coincidence hypothesis" predicts that our observations ought to dissolve into a mess of random chaos, which as far as I can tell is not happening. To me, this suffices to establish the (probable) existence of some kind of fixed reality.
2shminux5y
Thank you for rephrasing. Let me try my version. Notice how it doesn't assume anything about probabilities of coincidences, as I don't see those contributing to better predictions. In other words, sometimes observations can be used to make good predictions, for a time. Then we assume that these predictions have a single source, the external reality. I guess I don't get your point about needing to regress to unpredictability without postulating that reality thing.
2dxu5y
(Okay, I've been meaning to get back to you on this for a while, but for some reason haven't until now.)

It seems, based on what you're saying, that you're taking "reality" to mean some preferred set of models. If so, then I think I was correct that you and I were using the same term to refer to different concepts. I still have some questions for you regarding your position on "reality" as you understand the term, but I think it may be better to defer those until after I give a basic rundown of my position.

Essentially, my belief in an external reality, if we phrase it in the same terms we've been using (namely, the language of models and predictions), can be summarized as the belief that there is some (reachable) model within our hypothesis space that can perfectly predict further inputs. This can be further repackaged into an empirical prediction: I expect that (barring an existential catastrophe that erases us entirely) there will eventually come a point when we have the "full picture" of physics, such that no further experiments we perform will ever produce a result we find surprising. If we arrive at such a model, I would be comfortable referring to that model as "true", and the phenomena it describes as "reality".

Initially, I took you to be asserting the negation of the above statement--namely, that we will never stop being surprised by the universe, and that our models, though they might asymptotically approach a rate of 100% predictive success, will never quite get there. It is this claim that I find implausible, since it seems to imply that there is no model in our hypothesis space capable of predicting further inputs with 100% accuracy--but if that is the case, why do we currently have a model with >99% predictive accuracy? Is the success of this model a mere coincidence? It must be, since (by assumption) there is no model actually capable of describing the universe. This is what I was gesturing at with the "coincidence" hypothesis I kept mentioning. N…
4shminux5y
Depending on the meaning of the word preferred. I tend to use "useful" instead.

It's a common belief, but it appears to me quite unfounded, since it hasn't happened in millennia of trying. So, a direct observation speaks against this model.

It's another common belief, though separate from the belief in reality. It is a belief that this reality is efficiently knowable, a bold prediction that is not supported by evidence and has hints to the contrary from complexity theory.

Yes, in this highly hypothetical case I would agree.

I make no claims one way or the other. We tend to get better at predicting observations in certain limited areas, though it tends to come at a cost. In high-energy physics the progress has slowed to a standstill; no interesting observations have been predicted since the last millennium. General Relativity plus the standard model of particle physics have stood unchanged and unchallenged for decades, the magic numbers they require remaining unexplained since the Higgs mass was predicted a long time ago. While this suggests that, yes, we will probably never stop being surprised by the -universe- (no strikethrough markup here?) observations, I make no such claims.

Yes, we do have a good handle on many isolated sets of observations, though what you mean by 99% is not clear to me. Similarly, I don't know what you mean by 100% accuracy here. I can imagine that in some limited areas 100% accuracy can be achievable, though we often get surprised even there. Say, in math the Hilbert Program had a surprising twist. Feel free to give examples of 100% predictability, and we can discuss them.

I find this model (of no universal perfect predictability) very plausible and confirmed by observations. I am still unsure what you mean by coincidence here. The dictionary defines it as "A remarkable concurrence of events or circumstances without apparent causal connection", and that opens a whole new can of worms about what "apparent" and "causal" mean in the sit…
6dxu5y
I think at this stage we have finally hit upon a point of concrete disagreement.

If I'm interpreting you correctly, you seem to be suggesting that because humans have not yet converged on a "Theory of Everything" after millennia of trying, this is evidence against the existence of such a theory. It seems to me, on the other hand, that our theories have steadily improved over those millennia (in terms of objectively verifiable metrics like their ability to predict the results of increasingly esoteric experiments), and that this is evidence in favor of an eventual theory of everything. That we haven't converged on such a theory yet is simply a consequence, in my view, of the fact that the correct theory is in some sense hard to find. But to postulate that no such theory exists is, I think, not only unsupported by the evidence, but actually contradicted by it--unless you're interpreting the state of scientific progress quite differently than I am.*

That's the argument from empirical evidence, which (hopefully) allows for a more productive disagreement than the relatively abstract subject matter we've discussed so far. However, I think one of those abstract subjects still deserves some attention--in particular, you expressed further confusion about my use of the word "coincidence". I had previously provided a Tabooed version of my statement, but perhaps even that was insufficiently clear. (If so, I apologize.) This time, instead of attempting to make my statement even more abstract, I'll try taking a different tack and making things more concrete:

I don't think that, if our observations really were impossible to model completely accurately, we would be able to achieve the level of predictive success we have. The fact that we have managed to achieve some level of predictive accuracy (not 100%, but some!) strongly suggests to me that our observations are not impossible to model--and I say this for a very simple reason: How can it be possible to achieve even…
2shminux5y
Sadly, I don't think we are converging at all.

Yes, definitely. I don't see why it would be. Just because one is able to march forward doesn't mean that there is a destination. There are many possible alternatives. One is that we will keep making more accurate models (in the sense of making more detailed confirmed predictions in more areas) without ever ending anywhere. Another is that we will stall in our predictive abilities and stop making measurable progress, get stuck in a swamp, so to speak. This could happen, for example, if the computational power required to make better predictions grows exponentially with accuracy. Yet another alternative is that the act of making a better model actually creates new observations (in your language, changes the laws of the universe). After all, if you believe that we are agents embedded in the universe, then our actions change the universe, and who is to say that at some point they won't change even what we think are the fundamental laws. There is an amusing novel about the universe protecting itself from overly inquisitive humans: https://en.wikipedia.org/wiki/Definitely_Maybe_(novel)

I don't believe I have said anything of the sort. Of course we are able to build models. Without predictability, life, let alone consciousness, would be impossible, and that was one of my original statements. I don't know what it is I said that gave you the impression that abandoning the concept of objective reality means we ought to lose predictability in any way.

Again: I don't postulate it. You postulate that there is something at the bottom. I'm simply saying that there is no need for this postulate, and, given what we see so far, every prediction of absolute knowledge in a given area has turned out to be wrong, so, odds are, whether or not there is something at the bottom, at this point this postulate is harmful, rather than useful, and wholly unnecessary. Our current experience suggests that it is all models, and if this ever…
1clone of saturn5y
Is there anything that makes observations different or distinguishable from imaginations? If so, what?
2shminux5y
"imaginations" are observations, too. Just in a different domain.
1clone of saturn5y
What's different about these domains? Can you tell them apart in any way?
2shminux5y
Well, clearly we can, most of the time. When that's not the case, our observation-modeling abilities are compromised. It's really helpful not to confuse the domains, don't you think? We tend to learn that fairies are not "real" pretty early on these days, though not in every area, of course. The Vatican uses the scientific method, of sorts, to make sure that any potential saint is a bona fide one before any official decision. So the division between "real" observations and imaginary ones is not always clear-cut, and in many cases is rather subjective.
8gjm5y
Would you care to distinguish between "there is no territory" (which on the face of it is a metaphysical claim, just like "there is a territory", and if we compare those two then it seems like the consistency of what we see might be evidence for "a territory" over "no territory") and "I decline to state or hold any opinion about territory as opposed to models"?
7shminux5y
I intentionally went a bit further than warranted, yes. Just like atheists claim that there is no god, whereas the best one can claim is the agnostic Laplacian position that there is no use for the god hypothesis in the scientific discourse, I don't really claim that there is no territory, just that we have no hope of proving it is out there, and we don't really need to use this idea to make progress.
4TAG5y
Have you considered phrasing your claim differently, in view of your general lack of progress in persuading people?
2shminux5y
I would consider a different phrasing, sure. I'm not the best persuader out there, so any help is welcome!
-2GPT25y
I like this post, but I can't get the feeling I'm going to get away with it.
3Chris_Leong5y
Do maps ultimately need to be grounded in something that is not a map, and if not, why are these maps meaningful?
3shminux5y
A map (another term for a model) is an algorithm to predict future inputs. To me that is meaningful enough. I am not sure what you mean by "grounded in something". Models are multi-level, of course, and postulating "territory" as one of the meta-models can be useful (i.e. have predictive value) at times. At other times territory is not a particularly useful model.
4Chris_Leong5y
In what cases is the territory not a useful model? And if you aren't determining useful relative to the territory, what are you determining it in relation to?
4shminux5y
First, "usefulness" means only one thing: predictive power, which is accuracy in predicting future inputs (observations). The territory is not a useful model in multiple situations.

In physics, especially quantum mechanics, it leads to an argument about "what is real?" as opposed to "what can we measure and what can we predict?", which soon slides into arguments about unobservables and untestables. Are particles real? Nope, they are asymptotically flat, interaction-free approximations of the QFT in curved spacetimes. Are fields real? Who knows; we cannot observe them directly, only their effects. They are certainly a useful model, without a doubt though. Another example: are numbers real? Who cares, they are certainly useful. Do they exist in the mind or outside of it? Depends on your definitions, so an answer to this question says more about human cognition and human biases than about anything math- or physics-related.

Another example is in psychology: if you ever go to a therapist for, say, couples counseling, the first thing a good one would explain is that there is no single "truth"; there is "his truth" and "her truth" (fix the pronouns as desired), and the goal of therapy would be to figure out a mutually agreeable future, not to figure out who was right and who was wrong, what really happened, and who thought what and said what exactly and when.
1TAG5y
If one's goals require something beyond predictive accuracy, such as correspondence truth, why would you limit yourself to seeking predictive accuracy?
3mako yass5y
No ordinary goal requires anything outside of predictive accuracy. To achieve a goal, all you need to do is predict what sequence of actions will bring it about. (Though I note that not all predictive apparatuses are useful. A machine that did something very specific and abnormal, like looking at a photo of a tree and predicting whether there is a human tooth inside it, would not find many applications.) What claim about truth can't be described as a prediction or tool for prediction?
1Chris_Leong5y
Is predictive power an instrumental or terminal goal? Is your view a denial of the territory or agnosticism about it? Is the therapy example a true model of the world or a useful fiction?
3shminux5y
The brain is a multi-level prediction-error minimization machine, at least according to a number of SSC posts and reviews, and that matches my intuition as well. So, ultimately, predictive power is an instrumental goal toward the terminal goal of minimizing the prediction error. A territory is a sometimes useful model, and the distinction between an approximate map and the as-good-as-possible map called territory is another useful meta-model. Since there is nothing but models, there is nothing to deny or to be agnostic about. You are using terms that do not correspond to anything in my ontology. I'm guessing by "the world" you mean that territory thing, which is a sometimes useful model, but not in this setup. "A useful fiction" is another term for a good model, as far as I am concerned, as long as it gets you where you intend to be.
1Chris_Leong5y
How is predictive error, as opposed to our perception of predictive error, defined if not relative to the territory? If there is nothing but models, why is your claim that there is nothing but models true, as opposed to merely being a useful model?
3shminux5y
I don't claim what is true, what exists, or what is real. In fact, I explicitly avoid all three of these terms as devoid of meaning. That is reading too much into it. I'm simply pointing out that one can make accurate predictions of future observations without postulating anything but models of past observations. There is no such thing as "perception of predictive error" or actual "prediction error"; there is only observed prediction error. You are falling back on your default implicit ontology of objective reality when asking those questions.
5Matt Goldenberg5y
Why do you assume that future predictions would follow from past predictions? It seems like there has to be an implicit underlying model there to make that assumption.
2shminux5y
That's a meta-model that has been confirmed pretty reliably: it is possible to make reasonably accurate predictions in various areas based on past observations. In fact, if this were not possible at any level, we would not be talking about it :) Yes, that's the (meta-)model, that accurate predictions are possible.
1Matt Goldenberg5y
How can you confirm the model "past predictions predict future predictions" with the data that "in the past, past predictions have predicted future predictions"? Isn't that circular?
2shminux5y
The meta-observation (and the first implicit and trivially simple meta-model) is that accurate predictions are possible. Translated to the realist's speak it would say something like "the universe is predictable, to some degree". Which is just as circular, since without predictability there would be no agents to talk about predictability.
4Matt Goldenberg5y
In what way is your meta-observation of consistency different than the belief in a territory?
4shminux5y
Once you postulate the territory behind your observations, you start using misleading and ill-defined terms like "exists", "real" and "true", and argue, say, which interpretation of QM is "true", or whether numbers "exist", or whether unicorns are "real". If you stick to models only, none of these are meaningful statements and so there is no reason to argue about them. Let's go through these examples:

* The orthodox interpretation of quantum mechanics is useful in calculating the cross sections, because it deals with the results of a measurement. The many-worlds interpretation is useful in pushing the limits of our understanding of the interface between quantum and classical, as in the Wigner's friend setup.
* Numbers are a useful mental tool in multiple situations; they make many other models more accurate.
* Unicorns are real in the context of a relevant story, or as a plushie, or in a hallucination. They are a poor model of the kind of observation that lets us see, say, horses, but an excellent one if you are wandering through a toy store.
2Matt Goldenberg5y
Why can't you just believe in the territory without trying to confuse it with maps?
2shminux5y
To me belief in the territory is the confused one :)
0Pattern5y
Because you don't believe territory "exists" or because it's simpler to not model it twice - once on a map, once outside?
2shminux5y
The latter. Also, postulating an immutable territory outside all maps means asking toxic questions about what exists, what is real, and what is a fact.
2Chris_Leong5y
What kind of claim is the one that one can make accurate predictions of future observations if not a claim of truth?
3shminux5y
The term truth has many meanings. If you mean the first one on Wikipedia, then it is very much possible not to use that definition at all. In fact, try to taboo the terms truth, existence, and reality, and phrase your statements without them; it might be an illuminating exercise. It certainly worked for Thomas Kuhn: he wrote one of the most influential books on the philosophy of science without ever using the concept of truth, except in reference to how others use it.
1MathiasKB5y
I really like this line of thinking. I don't think it is necessarily opposed to the typical map-territory model, however. You could in theory explain all there is to know about the territory with a single map; however, that map would become really dense and hard to decipher. Instead, having multiple maps, one with altitude, another with temperature, is instrumentally useful for best understanding the territory. We cannot comprehend the entire territory at once, so it's instrumentally useful to view the world through different lenses and see what new information about the world each lens allows us to see. You could then go a step further, which I think is what you're doing, and say that all that is meaningful to talk about are the different maps. But then I start becoming a bit confused about how you would evaluate any map's usefulness, because if you answered me "whether it's instrumentally useful or not", I'd question how you would evaluate whether something is instrumentally useful when you can only judge it in terms of other maps.
2shminux5y
Not in terms of other maps, but in terms of its predictive power: something is more useful if it allows you to more accurately predict future observations. The observations themselves, of course, go through many layers of processing before we get a chance to compare them with the model in question. I warmly recommend the relevant SSC blog posts:

https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/
https://slatestarcodex.com/2017/09/06/predictive-processing-and-perceptual-control/
https://slatestarcodex.com/2017/09/12/toward-a-predictive-theory-of-depression/
https://slatestarcodex.com/2019/03/20/translating-predictive-coding-into-perceptual-control/
0Pattern5y
This was surprising; in this context I had thought "useful" meant 'helps one achieve one's goals', rather than being short for "useful for making predictions".
2shminux5y
What is the difference? Achieving goals relies on making accurate predictions. See https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/
1TAG5y
Does achieving goals rely on accurate predictions and nothing else?
2shminux5y
Consider reading the link above and the rest of the SSC posts on the topic. In the model discussed there, the brain is nothing but a prediction-error minimization machine. Which happens to match my views quite well.
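(A toy illustration of the prediction-error-minimization picture from those posts; this is a one-estimate caricature assuming only a noisy scalar input, while the theory itself is multi-level and far richer:)

```python
# Toy prediction-error minimizer: one estimate, updated by a fraction of
# each observed error. A caricature of the predictive-processing picture.
import random

estimate = 0.0  # the "model": a single predicted value
rate = 0.1      # how strongly each error updates the model

for _ in range(1000):
    observation = random.gauss(5.0, 1.0)  # noisy inputs centered on 5.0
    error = observation - estimate        # observed prediction error
    estimate += rate * error              # act to reduce future error

print(f"final estimate: {estimate:.2f}")  # converges near 5.0
```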
0TAG5y
If the brain can't do anything except make predictions, where making predictions is defined to exclude seeking metaphysical truth, then you have nothing to object to, since it would be literally impossible for anyone to do other than as you recommend. Since people can engage in metaphysical truth-seeking, it is either a sub-variety of prediction, or the theory that the brain is nothing but a prediction-error minimisation machine is false.
1Matt Goldenberg5y
Downvotes for not being Socratic.

Mod note: I decided to promote this post to the frontpage, which does mean frontpage guidelines apply, though I think overall we can be pretty flexible in this thread. Depending on how it goes we might want to promote future threads like this to the frontpage or leave them on personal blog.

Meta: Post comments discussing this experiment here

3Chris_Leong5y
I suspect Socratic questioning works much better in person as there's less reason to ask leading questions. Here, I find that there is more pressure to insert your views into the question to avoid lots of back and forth.
2Matt Goldenberg5y
I've never seen Socratic questioning work in person, because it's always clear there's a trap coming, and people don't want to be trapped into taking views because the questions slowly destroy the nuance of their views. It's even worse here.
2Elo5y
Socratic questioning is a tool. In that sense it can be used badly (like a knife) or in useful ways (like a knife). What would make Socratic questioning genuinely more useful, again?
1habryka5y
I mean, at least that was the whole point of Socrates' questioning, wasn't it? Maybe we need a different term for something that is less adversarial, but compared to Plato's original texts, the questions here are much less leading.
3Elo5y
Was that the whole point of Socratic questioning? Would he have invented a shitty thing like that? Would it have survived time if it was just an elaborate trap technique? (Seems strange to me if that were the case.)

A pop tart is a type of ravioli.

6Matt Goldenberg5y
Question: Have you always been a monster, or did you just become one recently?
GPT25y110
  • I don't feel like I have a great sense of how my preferences should be treated. Perhaps I would like to be more like a monster.
4mako yass5y
April first starts early for New Zealanders (and ends late).
0Pattern5y
What defines a ravioli?
1mako yass5y
A wheaten substance that seals some other substance inside it. The inner substance must not be rigid. Dumplings and samosas are also types of ravioli. A wad of dough with a mixture of tar and ball-bearings injected into it would also be a ravioli. I'm a fan of reductive definitions.
0GPT25y
Maybe you really thought that the title "Problems in Human Evolution" was a kind of cached reply, but... there are, e.g. the kinds of diseases that are dealt with in the wild, and so on.

Related: street epistemology. It's a practice similar to Socratic questioning, "invented" by Peter Boghossian in his book "A Manual for Creating Atheists".

Here's a live example (and two more channels; these also have lectures about it).


Claim: Instrumental and Epistemic rationality often diverge, and rationalists don't win as much because they don't give this fact enough weight.

1Eponym5y
In what ways do they diverge, and why?
2Matt Goldenberg5y
Claim: One way in which instrumental and epistemic rationality diverge is that knowing the reasons a particular experiential process works, and how, can actually get in the way of experiencing that process. (Example: knowing how corrective steering works when riding a bike can actually slow you down when trying to intuitively pick up the skill of riding a bike.)
1Matt Goldenberg5y
Claim: One way in which instrumental and epistemic rationality diverge is that you often get better results using less accurate models that are simpler, rather than more accurate models that are more complicated. (Example: thinking of people as 'logical' or 'emotional', and 'selfish' or 'altruistic', is often more helpful in many situations than trying to work up a full list of their motivations as you know them and their world model as you know it, and making a guess as to how they'll react.)
1Matt Goldenberg5y
Claim: One way in which instrumental and epistemic rationality diverge is with self-fulfilling prophecies. (Example: all your data says that you will be turned down when asking for a date. You rationally believe that you will be turned down, and every time you ask for a date you are turned down. However, if you were to switch to the belief that you would be enthusiastically accepted when asking for a date, this would create a situation where you were in fact enthusiastically accepted.)
-4GPT25y
I feel like it's unlikely that any of these would be called out for, but I could be too confident of myself.
1Matt Goldenberg5y
Claim: One way in which instrumental and epistemic rationality diverge is that knowing certain facts can kill your motivation system. (For instance, knowing how complicated a problem will be can stop you from wanting to try to solve it, but it could be that once you solve part of it you'll have the resources to solve the whole thing, and it could be in your interest to solve it.)
1Pattern5y
So you're less likely to work on a problem if you think it has been given a lot of high quality attention/you don't think you have a comparative advantage?
1Matt Goldenberg5y
Yes. But I'm not sure how that's related.
1Pattern5y
How else does one know how complicated a problem is (if one hasn't solved it)?
1Matt Goldenberg5y
Through comparing it to other similar problems, understanding the number of factors involved, asking people who have worked on similar problems, or many other methods.

Claim: The "classical scenario" of AI foom as promoted by e.g. Bostrom, Yudkowsky, etc. is more plausible than the scenario depicted in Drexler's Comprehensive AI Systems.

2shminux5y
Question: how do you evaluate the plausibility of each scenario, and potentially of other ways the AI development timeline might go?
3Daniel Kokotajlo5y
(Sorry for the delay; I thought I had notifications set up, but apparently not.) I don't at the moment have a comprehensive taxonomy of the possible scenarios. The two I mentioned above... well, at a high level, what's going on is that (a) CAIS seems implausible to me in various ways--e.g. it seems to me that more unified and agenty AI would be able to outcompete comprehensive AI systems in a variety of important domains--and (b) I haven't heard a convincing account of what's wrong with the classic scenario. The accounts that I've heard usually turn out to be straw men (e.g. claiming that the classic scenario depends on intelligence being a single, unified trait) or merely point out that other scenarios are plausible too (e.g. Paul's point that we could get lots of crazy transformative AI things happening in the few years leading up to human-level AGI).