All of Alex Flint's Comments + Replies

(And obviously you get to defend yourself on the first question too. I’m not having that conversation in public …)

Yeah I am also very pessimistic about having the core argument about sexual assault on the public internet so I agree with not trying to resolve that part right here.

Critiquing your non-linking was simply not the point of that flag. The structure of the main thing I was going for was: "you provide explanation A for observation X. But B would also explain X." And the reason I was saying this was something like: it's easy to see an explanati

... (read more)

Very very good question IMO. Thank you for this.

Consider a person who comes to a very clear understanding of the world, such that they are extremely capable of designing things, building things, fundraising, collaborating, and so on. Consider a moment where this person is just about to embark on a project but has not yet acquired any resources, perhaps has not even made any connections with anyone at all, yet is highly likely to succeed in their project when they do embark. Would you say this person has “resources”? If not, there is a kind of continuous tr... (read more)

I'm strongly disinclined to delve into the matter of consent in the sexual encounter, as it primarily pertains to (alleged) misconduct by Alex/Koshin (who I don't really know), whereas the accusations of organizational malfeasance (e.g. a cover-up) pertain to all of MAPLE/OAK/CEDAR (where I do know several people, and which I'm just going to call MAPLE going forward).

Yeah thank you for this.

In particular, I'm noticing that Koshin described having been asked to write a letter with Shekinah, describing their relationship status and intentions, while She

... (read more)

Well no I definitely did not rape Shekinah. I don't think even she accuses me of that in her post.

It's been quite a difficult few weeks at this end, which is why I haven't replied more to your comment. I see the following points in your comment:

  1. The paragraph that goes "So firstly I want to flag that this observation is consistent with the world you assert... But it's also consistent with a different world, where those things are straightforwardly revealing of failures on the part of yourself and/or Monastic Academy" where you critique my non-linking to

... (read more)
4 · philh · 4mo
Recall that the description in the original letter was:

The thing Shekinah describes here is rape, legally and ethically, whether she uses the word or not.

There is more I'd like to say here. There are questions that I don't really know how to navigate, around respecting Shekinah's agency and privacy and right to self-definition. But having that conversation with Alex seems disrespectful. So anyone who isn't Alex is welcome to PM me for further thoughts.

You have not. In a previous comment I pointed out that you responded to an aside, in ways that made it easy for someone not paying attention to think you had responded to (1). Critiquing your non-linking was simply not the point of that flag. The structure of the main thing I was going for was: "you provide explanation A for observation X. But B would also explain X." And the reason I was saying this was something like: it's easy to see an explanation, check that it makes sense/is consistent with the available evidence, and then assume it's true. I think we more reliably arrive at true conclusions if we keep in mind that there are other possible explanations, and pointing out another possible explanation helps with that.

I do think you're a rapist, but "definitely" is coming out of nowhere here.

Probably not super interested. But, to be clear... this is only partly because I think you're a rapist? It's also because this is a frustrating conversation for me even completely ignoring that.

I said, early on, that I wasn't directly talking about the accusations. That was true, and for the most part it's still true. I have now directly spoken about the accusations. But none of the things I flagged were directly about them; and the things I flagged are not primarily why I believe them.

But like, I specifically said that you didn't address point (1). And then you said you thought you'd addressed it, without even acknowledging that I said you hadn't. So...

...combine that with the multiple other ways, in this threa

I understand that there are ways this can work really well for people but jesus christ the failure modes on that are numerous and devastating.

I really agree with this. The reason spiritual communities can go more quickly and more disastrously off the rails is because they are aiming to tinker with the rules by which we live at a really fundamental level, whereas most organizations generally opt to work on top of a kind of cultural base operating system.

I would generally find it unwise to tinker at all with one's operating system except that our cultural... (read more)

Yeah right. I actually spent quite a while considering this exact point (whether to link it) when writing the post. I was basically convinced that if I did link it, many people would jump straight to that link after reading the first ~paragraph of my post, then would return to read my post holding the huge number of triggering issues raised in Shekinah's post, and ultimately I'd fail to convey the basic thing I wanted to convey. Then I considered "yes but maybe it's still necessary to link it if my post won't make any sense without reading that other post"... (read more)

Thanks for taking the time to write this comment philh.

So firstly I want to flag that this observation is consistent with the world you assert, where Shekinah's writing and the associated commentary suggest things in a way that makes it hard to read them and maintain a grip on what is and isn't asserted, what is and isn't true, and similar things that it's important to keep a grip on.

Yup this is a good paraphrase of what I meant.

In that world, declining to link those things is... well, I don't love it; I prefer not to be protected from myself.

Yup. ... (read more)

4 · philh · 4mo
Hm. This feels like a different reason than you gave before though? That is, I think I understand the reason "I didn't link them because ... it’s very hard to read them and stay sane." And I think I understand the reason (paraphrased) "I didn't link them because they aren't prerequisites and I didn't want the reader to think they were". But I don't think they're the same reason, and it appears to me that you've switched from one to the other.

Yeah, as Ruby said, this is a community that I care about and publish in, and is where Shekinah linked and discussed her own post. I also want to stand for the truth! I've been in this org (Maple) for a while and I think it has a lot to offer the world, and I think it's been really badly mischaracterized, and I care about sharing what I know, and this community cares a lot about the truth in a way that is different to most of the rest of the world. IMO the comments on this post so far are impressively balanced, considerate, epistemically humble, and just generally intelligent. I can't think where else to have such a reasonable discussion about something like this!

(Good question btw!)

Should you have, say, stepped down and distanced yourself from the organization the moment the "monastic agreement" was broken...

Well just so you know, I actually did step down right after the incident. It was a bit of a mess because I stepped down informally the day after I told the community what had happened, then we decided that this action was hasty and hadn't given the board of directors time to make their own assessment, so we reversed it, then about a week later the board of directors agreed that I should step down and I did so. You can imagine ... (read more)

3 · philh · 4mo
I want to flag a few things here that I dislike about this comment. So let me say before I do that... like, I don't gel with what might be called "the meditation scene". I'm divided on whether that's more of a "y'all just don't communicate in the same way as me" thing or more of a "one of us is actually just wrong in a deep way" thing.

So like, I'm about to be super critical of something you wrote, where you're defending yourself against accusations of malfeasance. I want to be clear that I'm not directly talking about the accusations. Which is not to pretend that this isn't some kind of attack. The criticisms I'm about to make do, I think, have some bearing on how I think we should think about the accusations. But I don't want anyone to come away thinking like "philh's criticisms seem valid, so I guess Alex must be in the wrong here". And I want to leave open the possibility that "the thing that caused you to write in such a way that philh wanted to critique" is standard meditation-scene stuff or something; that if we understand that thing, we'd decide that actually my criticisms should have approximately zero bearing on how we think about the accusations. I don't think I believe that, but I think there's at least a chance of it that's worth noting.

--------------------------------------------------------------------------------

To disclaim my background knowledge. I'm pretty sure I read Shekinah's writing shortly after she first posted it here, as well as the comments here. I don't remember many details, and I don't remember reading a reddit thread about it. I haven't reread it since. I think it did lower my opinion of Monastic Academy, which I think was already not high. I'm trying not to let that flavor this comment, but I'd be surprised if I completely succeeded.

--------------------------------------------------------------------------------

So firstly I want to flag that this observation is consistent with the world you assert, where Shekinah's writ

Yeah thank you for the note.

Just so you know though it was actually a 7-day meditation retreat within a one-month stay at the monastic community (and for the non-retreat weeks of the program we spend time meeting with each other, using computers, going shopping for groceries and such, in addition to 1-2 hours sitting each morning and evening). It's true that the residents did a long yaza on one night of the retreat but it wasn't required, though yes still quite a lot for someone who hasn't sat a retreat before.

It was an intense retreat, and it's true that ... (read more)

5 · Razied · 4mo
Ah, I see, if it was just 7 days of actual retreat then this is much more reasonable, I'm glad you clarified.

Regarding the post-retreat crash into daily life, the thing that worked on me to help me deal with those crashes was to hear someone say "look, a retreat environment is a very special circumstance, you'll get to places in your practice that you couldn't get to with a daily 1 or 2 hours of practice, revelations that you are sure to be permanent will end, and once the retreat ends your practice will fall back down, but it will fall to a better level than pre-retreat. Over the years and the retreats, you'll eventually get to a place where daily life itself becomes the practice, and then you'll live your life from a place of grace."

I can definitely see the immense benefits of a live-in spiritual community, but I think it might also create an artificial divide between "normal life" and the spiritual life. It might make people believe that they require a community to achieve insight, instead of the community merely being very supportive. You can perfectly well do walking meditation while shopping at walmart, and you can do metta while looking at your crazy boss. I remember crying of joy when I realized that queues, traffic jams, being put on hold on the phone, etc. no longer had the power to bother me, all these were simply opportunities for practice. Shinzen Young in particular is really great with this framing of "Life as Practice", and I think it's doing marvels to minimize the post-retreat crash, because, in effect, the retreat never ends, it just gets a bit more challenging.

There's also the fact that people have much more free time than they believe, I've personally managed a 4h/day practice in normal daily life, it just required some sacrifices. So unless I'm misunderstanding your community, it might be that people are getting the impression that it's impossible to get awakened without renouncing their whole lives, yet impossible is very different from me
  1. That the predictor is perfectly accurate. [...]

Consider, for instance, if the predictor makes an incorrect decision once in 2^10 times. (I am well aware that the predictor is deterministic; I mean the predictor deterministically making an incorrect decision here.)

Yeah it is not very reasonable that the predictor/reporter pair would be absolutely accurate. We likely need to weaken the conservativeness requirement as per this comment.

  1. That the iteration completes before the heat death of the universe.

Consider the last example, but with 500 actua

... (read more)
1 · TLW · 7mo
Ah sorry, I was somewhat sloppy with notation. Let me attempt to explain in a somewhat cleaned up form.

For a given statespace S (that is, S is a set of all possible states in a particular problem), you're saying there exists a deterministic predictor P_S that fulfills certain properties.

First, some auxiliary definitions:

1. T_S ⊆ S is the subset of the statespace S where the 'true' answer is 'YES'.
2. F_S ⊂ S is the subset of the statespace S where the 'true' answer is 'NO'.
3. By definition from your question, T_S ∪ F_S = S and T_S ∩ F_S = ∅.

Then P_S is a deterministic function: P_S(K) = Done iff K = T_S, and otherwise P_S(K) = R, where K ⊆ R ⊆ T_S. (And both K and R are otherwise unconstrained.)

Hopefully you follow thus far. So:

1. I choose a statespace, S = {(0,0), (1,0), (0,1), (1,1)}.
2. I assume there exists some deterministic predictor P_S for this statespace.
3. I choose a particular problem: T_S^a = {(0,0), (1,0), (0,1)}.
   1. (That is, in this particular instance of the problem (1,1) is the only 'NO' state.)
4. I run P_S({(0,0), (1,0), (0,1)}) and get a result R_a.
   1. That is, with the parameter K = {(0,0), (1,0), (0,1)}.
      1. K ⊆ T_S^a, so this is correct of me to do.
5. There are three possibilities for R_a:
   1. R_a = Done
      1. I choose a different problem: T_S^b = {(0,0), (1,0), (0,1), (1,1)}.
         1. K ⊆ T_S^b, so this is correct of me to do.
      2. I run P_S({(0,0), (1,0), (0,1)}) and get a result R_b.
         1. That is, with the parameter K = {(0,0), (1,0), (0,1)}.
            1. K ⊆ T_S^b, so this is correct of me to do.
      3. There are three possibilities for R_b:
         1. R_b = Done
            1. K ≠ T_S^b, so P_S did not fulfill the contract.
            2. Hence contradiction.
         2. R_b = S = {(0,0), (1,0), (0,1), (1,1)}
            1. In this case, P_S(K) ≠ P_S(K), as R_a ≠ R_b.
            2. Hence, P_S is not deterministic.
            3. Hence P_S did not fulfill the contract.
            4. Hence contradiction.
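A minimal executable sketch of the case analysis above, assuming the contract exactly as TLW states it (helper names like `contract_holds` are mine, purely illustrative). It enumerates every possible deterministic output of P_S on the shared input K and checks it against both problems T_S^a and T_S^b; no single output satisfies the contract for both, which is the contradiction being argued.

```python
from itertools import combinations

S = frozenset({(0, 0), (1, 0), (0, 1), (1, 1)})
K = frozenset({(0, 0), (1, 0), (0, 1)})
T_a = frozenset({(0, 0), (1, 0), (0, 1)})   # problem a: (1,1) is the only 'NO' state
T_b = S                                     # problem b: every state is a 'YES' state

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def contract_holds(output, K, T):
    """TLW's stated contract: output is Done iff K == T,
    otherwise output is some R with K ⊆ R ⊆ T."""
    if K == T:
        return output == "Done"
    return output != "Done" and K <= output <= T

# A deterministic P_S must give one fixed output for the input K.
candidates = ["Done"] + powerset(S)
ok_for_both = [c for c in candidates
               if contract_holds(c, K, T_a) and contract_holds(c, K, T_b)]
print(ok_for_both)   # [] -- no single deterministic output works for both problems
```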

Yeah we certainly can't do better than the optimal Bayes update, and you're right that any scheme violating that law can't work. Also, I share your intuition that "iteration can't work" -- that intuition is the main driver of this write-up.

As far as I'm concerned, the central issue is: what actually is the extent of the optimal Bayes update in concept extrapolation? Is it possible that a training set drawn from some limited regime might contain enough information to extrapolate the relevant concept to situations that humans don't yet understand? The conser... (read more)

Well just so you know, the point of the write-up is that iteration makes no sense. We are saying "hey suppose you have an automated ontology identifier with a safety guarantee and a generalization guarantee, then uh oh it looks like this really counter-intuitive iteration thing becomes possible".

However it's not quite so simple as ruling out iteration by appealing to conservation of expected evidence, because it's not clear exactly how much evidence is in the training data. Perhaps there is enough information in the training data to extrapolate all the wa... (read more)

1 · Simon Skade · 7mo
True, not sure what I was thinking when I wrote the last sentence of my comment.

For an automated ontology identifier with a possible safety guarantee (like 99.9% certainty), I don't agree with your intuition that iteration seems like it could work significantly better than just doing predictions with the original training set. Iteration simply doesn't seem promising to me, but maybe I'm overlooking something. If your intuition that iteration might work doesn't come from the sense that the new predicted training examples are basically certain (as I described in the main comment of that comment thread), then where does it come from? (I do still think that you are probably confused because of the reason I described, but maybe I'm wrong and there is another reason.)

Actually, in the case that the training data includes enough information to extrapolate all the way to C (which I think is rarely the case for most applications), it does seem plausible to me that the iteration approach finds the perfect decision boundary, but in this case, it seems also plausible to me that a normal classifier that only uses extrapolation from the training set also finds the perfect boundary. I don't see a reason why a normal classifier should perform a lot worse than an optimal Bayes update from the training set. Do you think it does perform a lot worse, and if so, why? (If we don't think that it performs much worse than optimal, then it quite trivially follows that the iteration approach cannot be much better, since it cannot be better than the optimal Bayes error.)

Ah so I think what you're saying is that for a given outcome, we can ask whether there is a goal we can give to the system such that it steers towards that outcome. Then, as a system becomes more powerful, the range of outcomes that it can steer towards expands. That seems very reasonable to me, though the question that strikes me as most interesting is: what can be said about the internal structure of physical objects that have power in this sense?

The space of cases to consider can be large in many dimensions. The countable limit of a sequence of extensions need not be a fixed point of the magical improvement oracle.

Indeed. We may need to put a measure on the set of cases and make a generalization guarantee that refers to solving X% of remaining cases. That would be a much stronger generalization guarantee.

The style of counter-example is to construct two settings ("models" in the lingo of logic) A and B with same labeled easy set (and context made available to the classifier), where the correc

... (read more)

Presumably, the finite narrow dataset did teach me something about your values? [...] "out-of-distribution detection."

Yeah right, I do actually think that "out of distribution detection" is what we want here. But it gets really subtle. Consider a model that learns that when answering "is the diamond in the vault?" it's okay for the physical diamond to be in different physical positions and orientations in the vault. So even though it has not seen the diamond in every possible position and orientation within the training set, it's still not "out of distr... (read more)

Well keep in mind that we are not proposing "iterated ontology identification" as a solution to the ELK problem, but rather as a reductio ad absurdum of the existence of any algorithm fulfilling the safety and generalization guarantees that we have given. Now here is why I don't think it's quite so easy to show a contradiction:

In the 99% safety guarantee, you can just train a bunch of separate predictor/reporter pairs on the same initial training data and take the intersection of their decision boundaries to get a 99.9% guarantee. Then you can sample more ... (read more)
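A quick arithmetic sketch of why intersecting decision boundaries could tighten the guarantee, under the strong assumption (challenged by TLW's counterexample below) that the separately trained pairs make independent errors; the numbers here are illustrative, not from the post.

```python
# If each of k independently-trained reporters wrongly approves a given unsafe
# input with probability 0.01, and those errors really were independent, then
# requiring ALL reporters to approve drives the joint false-approval rate down fast.
for k in (1, 2, 3):
    print(k, 0.01 ** k)   # 0.01, then 0.0001, then 1e-06
# The catch: if the reporters share a systematic blind spot, their errors are
# correlated and the joint rate stays near 0.01 (see the counterexample below).
```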

1 · TLW · 7mo
Counterexample: here is an infinite set of unique predictors that each have a 99% safety guarantee that when combined together have a... 99% safety guarantee.

Ground truth: 0 ≤ x ≤ 1, x ∈ ℝ

f(x) = YES if x ≤ 0.5, NO if x > 0.5.

Predictor n:

p_n(x) = YES, if (x ≤ 0.50) ∧ (Random oracle queried on (n, x) returns True)
p_n(x) = YES, if (0.50 < x ≤ 0.51)
p_n(x) = NO, otherwise

(If you want to make this more rigorous, replace the Random oracle query with e.g. digits of Normal numbers.)

(Analogous arguments apply in finite domains, so long as the number of possible predictors is relatively large compared to the number of actual predictors.)

No two sets of sensor data are truly 'completely different'. Among many other things, the laws of Physics remain the same.
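A small numerical sketch of this counterexample (my own illustration; the hash-based oracle is just a deterministic stand-in for the random oracle). Each predictor individually gives a false YES on about 1% of the domain, and because every predictor shares the same blind spot on (0.50, 0.51], intersecting any number of them leaves that ~1% failure rate untouched.

```python
import hashlib

def truth(x):                 # ground truth: YES iff x <= 0.5
    return x <= 0.5

def oracle(n, x):             # deterministic stand-in for the random oracle on (n, x)
    return hashlib.sha256(f"{n}:{x}".encode()).digest()[0] % 2 == 0

def predictor(n, x):
    if x <= 0.50:
        return oracle(n, x)   # predictors differ from one another here
    if x <= 0.51:
        return True           # shared blind spot: YES even though the truth is NO
    return False

xs = [i / 100_000 for i in range(100_001)]

def false_yes_rate(decide):
    return sum(decide(x) and not truth(x) for x in xs) / len(xs)

single = false_yes_rate(lambda x: predictor(0, x))
joint = false_yes_rate(lambda x: all(predictor(n, x) for n in range(10)))
print(f"one predictor:  {single:.3f}")   # ~0.010
print(f"10 intersected: {joint:.3f}")    # ~0.010 -- no improvement
```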
3 · P. · 7mo
I might just be repeating what he said, but he is right. Iteration doesn't work. Assuming that you have performed an optimal Bayesian update on the training set and therefore have a probability distribution over models, generating new data from those models can't improve your probability distribution over them, it's just the law of conservation of expected evidence, if you had that information you would have already updated on it. Any scheme that violates this law simply can't work.
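A tiny numerical illustration of the conservation point (my own toy example, not from the thread): whatever you expect your posterior to be after seeing data generated from your current beliefs is, on average, just your current posterior.

```python
# Two candidate models of a coin; assume we have already updated on the training
# data to reach this posterior.
models = {"fair": 0.5, "biased": 0.9}        # P(heads | model)
posterior = {"fair": 0.7, "biased": 0.3}

# Probability that the next self-generated observation is heads, under our own beliefs.
p_heads = sum(posterior[m] * models[m] for m in models)

def update(obs):
    """Posterior after hypothetically observing heads ('H') or tails ('T')."""
    like = {m: (models[m] if obs == "H" else 1 - models[m]) for m in models}
    z = sum(posterior[m] * like[m] for m in models)
    return {m: posterior[m] * like[m] / z for m in models}

# Expected posterior, averaged over what we ourselves predict we would see.
expected = {m: p_heads * update("H")[m] + (1 - p_heads) * update("T")[m]
            for m in models}
print(expected)   # matches the current posterior: {'fair': 0.7, 'biased': 0.3}
```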

I don't understand why a strong simplicity guarantee places most of the difficulty on the learning problem. In the diamond situation, a strong simplicity requirement on the reporter can mean that the direct translator gets ruled out, since it may have to translate from a very large and sophisticated AI predictor?

What we're actually doing here is defining "automated ontology identification" as an algorithm that only has to work if the predictor computes intermediate results that are sufficiently "close" to what is needed to implement a conservative he... (read more)

8 · TurnTrout · 7mo
Thanks for your reply! (Flagging that I didn't understand this part of the reply, but don't have time to reload context and clarify my confusion right now)

When you assume a true decision boundary, you're assuming a label-completion of our intuitions about e.g. diamonds. That's the whole ball game, no?

But I don't see why the platonic "true" function has to be total. The solution does not have to be able to answer ambiguous cases like "the diamond is molecularly disassembled and reassembled", we can leave those unresolved, and let the reporter say "ambiguous." I might not be able to test for ambiguity-membership, but as long as the ELK solution can:

1. Know when the instance is easy,
2. Solve some unambiguous hard instances,
3. Say "ambiguous" to the rest,

Then a planner—searching for a "Yes, the diamond is safe" plan—can reasonably still end up executing plans which keep the diamond safe. If we want to end up in realities where we're sure no one is burning in a volcano, that's fine, even if we can't label every possible configuration of molecules as a person or not. The planner can just steer into a reality where it unambiguously resolves the question, without worrying about undefined edge-cases.
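A minimal sketch of the three-way reporter plus planner pattern described above (the function names and the toy outcome representation are mine, purely illustrative): the reporter answers easy cases, answers some unambiguous hard cases, says "ambiguous" otherwise, and the planner only executes plans whose predicted outcome gets an unambiguous YES.

```python
from typing import Literal, Optional

Verdict = Literal["YES", "NO", "AMBIGUOUS"]

def report(outcome: dict) -> Verdict:
    """Toy reporter: easy cases, some unambiguous hard cases, 'ambiguous' for the rest."""
    if outcome.get("diamond_visibly_in_vault"):   # easy instance
        return "YES"
    if outcome.get("vault_destroyed"):            # unambiguous hard instance
        return "NO"
    return "AMBIGUOUS"                            # e.g. diamond disassembled and reassembled

def choose_plan(plans: list) -> Optional[dict]:
    """Planner: only execute plans whose predicted outcome is an unambiguous YES."""
    for plan in plans:
        if report(plan["predicted_outcome"]) == "YES":
            return plan
    return None                                   # rather than risk an ambiguous outcome

plans = [
    {"name": "clever nanotech trick",
     "predicted_outcome": {"diamond_visibly_in_vault": False}},
    {"name": "boring guard robot",
     "predicted_outcome": {"diamond_visibly_in_vault": True}},
]
print(choose_plan(plans)["name"])   # boring guard robot
```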

My understanding of the argument: if we can always come up with a conservative reporter (one that answers yes only when the true answer is yes), and this reporter can label at least one additional data point that we couldn't label before, we can use this newly expanded dataset to pick a new reporter, feed this process back into itself ad infinitum to label more and more data, and the fixed point of iterating this process is the perfect oracle. This would imply an ability to solve arbitrary model splintering problems, which seems like it would need to eith

... (read more)
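A schematic of the iteration being described, as I read it (all names here are placeholders, not anything from the post): each round, a conservative reporter trained on the current labels gets to label any additional points it is sure about, those labels are fed back in, and the open question is what the fixed point of this loop can actually be.

```python
def iterate_to_fixed_point(labeled, unlabeled, train_conservative_reporter):
    """Sketch of the iterated-labeling scheme: retrain a conservative reporter,
    let it label whatever it is confident about, repeat until nothing new is
    labeled (the fixed point)."""
    labeled = dict(labeled)                        # {input: "YES" or "NO"}
    unlabeled = set(unlabeled)
    while unlabeled:
        reporter = train_conservative_reporter(labeled)   # abstains (None) unless sure
        new = {x: reporter(x) for x in unlabeled if reporter(x) is not None}
        if not new:                                # no progress possible: fixed point
            break
        labeled.update(new)
        unlabeled -= new.keys()
    return labeled

# Toy stand-in for "train a conservative reporter" on a 1-D problem: only answer
# YES below the largest known-YES point and NO above the smallest known-NO point.
def toy_trainer(labeled):
    yes_max = max((x for x, v in labeled.items() if v == "YES"), default=float("-inf"))
    no_min = min((x for x, v in labeled.items() if v == "NO"), default=float("inf"))
    return lambda x: "YES" if x <= yes_max else "NO" if x >= no_min else None

print(iterate_to_fixed_point({1: "YES", 8: "NO"}, {2, 5, 9}, toy_trainer))
# {1: 'YES', 8: 'NO', 9: 'NO'} -- points 2 and 5 stay unlabeled; this toy
# reporter never extrapolates past what the training data already pins down.
```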
1 · leogao · 7mo
I agree that there will be cases where we have ontological crises where it's not clear what the answer is, i.e. whether the mirrored dog counts as "healthy". However, I feel like the thing I'm pointing at is that there is some sort of closure of any given set of training examples where, for some fairly weak assumptions, we can know that everything in this expanded set is "definitely not going too far". As a trivial example, anything that is a direct logical consequence of anything in the training set would be part of the completion. I expect any ELK solutions to look something like that. This corresponds directly to the case where the ontology identification process converges to some set smaller than the entire set of all cases.

Yes, I think what you're saying is that there is (1) the set of all possible outcomes, (2) within that, the set of outcomes where the company succeeds with respect to any goal, and (3) within that, the set of outcomes where the company succeeds with respect to the operator's goal. The capability-increasing interventions, then, are things that concentrate probability mass onto (2), whereas the alignment-increasing interventions are things that concentrate probability mass onto (3). This is a very interesting way to say it and I think it explains why there i... (read more)

1 · Edouard Harris · 7mo
Yep, I'd say I intuitively agree with all of that, though I'd add that if you want to specify the set of "outcomes" differently from the set of "goals", then that must mean you're implicitly defining a mapping from outcomes to goals. One analogy could be that an outcome is like a thermodynamic microstate (in the sense that it's a complete description of all the features of the universe) while a goal is like a thermodynamic macrostate (in the sense that it's a complete description of the features of the universe that the system can perceive). This mapping from outcomes to goals won't be injective for any real embedded system. But in the unrealistic limit where your system is so capable that it has a "perfect ontology" — i.e., its perception apparatus can resolve every outcome / microstate from any other — then this mapping converges to the identity function, and the system's set of possible goals converges to its set of possible outcomes. (This is the dualistic case, e.g., AIXI and such. But plausibly, we also should expect a self-improving systems to improve its own perception apparatus such that its effective goal-set becomes finer and finer with each improvement cycle. So even this partition over goals can't be treated as constant in the general case.)

Thank you.

I was thinking of the incentive structure of a company (to focus on one example) as an affordance for aligning a company with a particular goal because if you set the incentive structure up right then you don’t have to keep track of everything that everyone does within the company, you can just (if you do it well) trust that the net effect of all those actions will optimize something that you want it to optimize (much like steering via the goals of an AI or steering via the taxes and regulations of a market).

But I think actually you are pointing ... (read more)

1 · Edouard Harris · 7mo
Gotcha. I definitely agree with what you're saying about the effectiveness of incentive structures. And to be clear, I also agree that some of the affordances in the quote reasonably fall under "alignment": e.g., if you explicitly set a specific mission statement, that's a good tactic for aligning your organization around that specific mission statement.

But some of the other affordances aren't as clearly goal-dependent. For example, iterating quickly is an instrumentally effective strategy across a pretty broad set of goals a company might have. That (in my view) makes it closer to a capability technique than to an alignment technique. i.e., you could imagine a scenario where I succeeded in building a company that iterated quickly, but I failed to also align it around the mission statement I wanted it to have. In this scenario, my company was capable, but it wasn't aligned with the goal I wanted.

Of course, this is a spectrum. Even setting a specific mission statement is an instrumentally effective strategy across all the goals that are plausible interpretations of that mission statement. And most real mission statements don't admit a unique interpretation. So you could also argue that setting a mission statement increases the company's capability to accomplish goals that are consistent with any interpretation of it.

But as a heuristic, I tend to think of a capability as something that lowers the cost to the system of accomplishing any goal (averaged across the system's goal-space with a reasonable prior). Whereas I tend to think of alignment as something that increases the relative cost to the system of accomplishing classes of goals that the operator doesn't want.

I'd be interested to hear whether you have a different mental model of the difference, and if so, what it is. It's definitely possible I've missed something here, since I'm really just describing an intuition.

I’d ask the question whether things typically are aligned or not.

Just out of interest, how exactly would you ask that question?

There’s a good argument that many systems are not aligned.

Certainly. This is a big issue in our time. Something needs to be done or things may really go off the rails.

Ecosystems, society, companies, families, etc all often have very unaligned agents.

Indeed. Is there anything that can be done?

AI alignment, as you pointed out, is a higher stakes game.

It is a very high-stakes game. How might we proceed?

Summing up all that, this post made me realize Alignment Research should be its own discipline.

Yeah I agree! It seems that AI alignment is not really something that any existing discipline is well set up to study. The existing disciplines that study human values are generally very far away from engineering, and the existing disciplines that have an engineering mindset tend to be very far away from directly studying human values. If we merely created a new "subject area" that studies human values + engineering under the standard paradigm of academic STE... (read more)

Do you mean that as a way to understand what Stuart is talking about when he says that a UR-optimiser would answer questions in a certain way?

1 · Algon · 8mo
Yeah, instead of asking it a question, we can just see what happens when we put it in a world where it can influence another robot going left or right. Set it up the right way, and Stuart's argument should go through.

But we do know of simple utility functions which don't fear Goodharting.

Agreed.

The fact that we fear Goodharting is information about our values.

Yeah, well said.

I feel like discussing the EM scenario and how it may/may not differ from the general AI scenario would have been useful

Yeah would love to discuss this. I have the sense that intelligent systems vary along a dimension of "familiarity of building blocks" or something like that, in which systems built out of groups of humans are at one end, and systems built from first principles out of bas... (read more)

1 · Algon · 8mo
I guess you could rephrase it as "suppose a UR optimizer had a button which randomly caused an agent to be a UR optimizer" or something along those lines and have similar results.

Thank you for taking the time to publish this. It's kind of sad to see companies painting a picture of some kind of internal intellectual vibrancy or freedom or something when in fact it's more of a recruiting or morale gimmick, or is just dominated in practice by performance demands. I have the sense that utilization numbers are low because it's actually quite hard to formulate something compelling to work on for oneself, even absent any demands for justification or approval, and one of the reasons that people work at companies is to be given something co... (read more)

I thought this was brilliant, actually. My favorite line is:

Of course, B wasn't in analysis paralysis, that would be irrational

In seriousness though, I don't actually see the monastic academy's culture as naturally contrary to the rationalist culture. Both are fundamentally concerned with how to cultivate the kind of mind that can reduce existential risk. Compared to mainstream culture, these two cultures are really very similar. There are some methodological differences, of course, and these details are important, but they are not that deep.

First, an ontology is just an agent's way of organizing information about the world...

Second, a third-person perspective is a "view from nowhere" which has the capacity to be rooted at specific locations...

Yep I'm with you here

Well, what's a 3rd-person perspective good for? Why do we invent such things in the first place? It's good for communication.

Yeah I very much agree with justifying the use of 3rd person perspectives on practical grounds.

we should be able to consider the [first person] viewpoint of any physical object.

Well if we are choos... (read more)

Today this link does not seem to be working for me, I see:

Our apologies, your invite link has now expired (actually several hours ago, but we hate to rush people).

I also notice that the date is still 10/25 so perhaps the event is not happening today?

2 · adamShimi · 1y
Really sorry, I have to recreate a link every week, and I was at EAG this weekend so I completely forgot. It should work now.

Thank you so much for writing this up, Zvi!

It's hard to actually be correct about the nature of the bottleneck in such a scenario, and harder still to find a workable solution. I suspect that a good part of the success of this effort was just that Ryan was actually correct about the nature of the problem and the nature of the solution. Beyond that, Ryan being head of Flexport probably helped a lot in convincing the initial signal boosters to trust his diagnosis and prescription, and then for the government folks to take the whole thing seriously. It's not just that he had a general-purpose platform, but that he had credibility in that particular industry.

But how exactly do you do this without hammering down on the part that hammers down on parts? Because the part that hammers down on parts really has a lot to offer, too, especially when it notices that one part is way out of control and hogging the microphone, or when it sees that one part is operating outside of the domain in which its wisdom is applicable.

(Your last paragraph seems to read "and now, dear audience, please see that the REAL problem is such-and-such a part, namely the part that hammers down on parts, and you may now proceed to hammer down on this part at will!")

2 · null · 10mo
You cannot truly dissolve an urge by creating another one. Now there are 2 urges at odds with one another, using precious cognitive resources while not achieving anything. You can only dissolve it by becoming conscious of it and seeing clearly that it is not helping. Perhaps internal double crux would be a tool for this. I'd expect meditation to help, too.

You can apply the lesson to that conclusion as well, avoid hammering down on the part that hammers down on parts. The point is not to belittle it, but to reform it so that it's less brutishly violent and gullible, so that the parts of mind it gardens and lives among can grow healthy together, even as it judiciously prunes the weeds.

Thank you!

Well, I would just say that the significance of it for me comes from the connection between the conclusion "I am" and practical life. I like to remind myself that there is something that really matters, and that my actions really seem to affect it, and so I take "I am" to be a reminder of that.

It's just that you end up in circular reasoning in that case, because you have to start with the view that things that have worked in the past will continue to work in the future, then you see that this principle itself has worked in the past, then on the basis of the view you already started with as a premise you conclude that therefore this view that has worked in the past (that things that have worked in the past will continue to work in the future) will continue to work in the future.

It's like if I would claim to you that things that have never worked in t... (read more)

1 · DPiepgrass · 1y
The starting point here is not "things that have worked in the past will continue to work in the future". The starting point is induction: when we see a pattern, we expect it is likely to continue. For instance, if we take a random sampling of 10 balls from an urn and they are all blue, I predict the next one is blue with some probability around 95% (I'm not sure what theory says my confidence should be). That's induction. And the more reliable the pattern is, the more we expect it to continue. In this case the pattern holds over all the Ns we have information about, therefore we expect it to hold for larger N too, especially because we have no reason to think there is anything special about N > 13.8 billion as compared to N < 13.8 billion. "Empirical" results are inductive by definition, and while we can see that induction works by induction, I'm not arguing that induction proves itself to work, just that "things that have worked in the past will continue to work in the future" is an ordinary inductive result like any other.
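For what it's worth, one standard answer to the parenthetical question is Laplace's rule of succession: under a uniform prior over the urn's bias, after k successes in n draws the probability that the next draw is also a success is (k + 1) / (n + 2). A tiny worked check:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    # Laplace's rule: posterior predictive probability under a uniform (Beta(1,1)) prior
    return Fraction(successes + 1, trials + 2)

p = rule_of_succession(10, 10)
print(p, float(p))   # 11/12, about 0.917 -- a bit below the 95% guessed above
```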

Yeah thank you for sharing these thoughts.

I have not really resolved these questions to my own satisfaction, but the thing that seems clearest to me is to really notice when these doubts are becoming a drag on energy levels and confidence, and, if they are, to carve out a block of time to really turn towards them in earnest.

Yeah, these are definitely instances of the problem of the criterion. I actually had a link to your post in the original version of this post but somehow it got edited out as I was moving things around before publishing.

Thank you for sharing this.

In my own experience, there are moments where I see something that I haven't seen before, such as what is really going on in a certain relationship in my life, or how I have been unwittingly applying a single heuristic over and over, or how I have been holding tension in my body, and it feels like a big gong has just rung with truth. But I think what's really going on is that I was seeing things in one particular way for a long time, and then upon seeing things in just a slightly different way, I let go of some unconscious tightness a... (read more)

1 · Jarred Filmer · 1y
I've never seen that feeling described quite that way, I like it! Out of curiosity, how do you feel about the proclaimed self evidence of "the cogito", "I think therefore I am"?

Right. But it's notable that almost no-one in the world is stuck in an actual infinite why-regress, in that there don't seem to be many people sitting around asking themselves "why" until they die, or sitting with a partner asking "why" until one person dies. (I also don't think this is what is happening for monks or other contemplative folks.) I guess in practice people escape by shifting attention elsewhere. But sometimes that is a helpful thing to do, such as when stuck in a rut, and sometimes it is an unhelpful thing to do, such as when already overwhe... (read more)

Ah good point. OK yeah I believe that (2) doesn't require the graph to be finite, and I also agree that it's not tenable to believe all three of your statements.

If, hypothetically, we were to stop here, then you might look at our short dialog up to this point as, roughly, a path through a justification graph. But if we do stop, it seems that it will be because we reached some shared understanding, or ran out of energy, or moved on to other tasks. I guess that if we kept going, we would reach a node with no justifications, or a cycle, or an infinite chain a... (read more)

1 · justinpombrio · 1y
As you said, very often a justification-based conversation is looking to answer a question, and stops when it's answered using knowledge and reasoning methods shared by the participants. For example, Alice wonders why a character in a movie did something, and then has a conversation with Bob about it. Bob shares some facts and character-motivations that Alice didn't know, they figure out the character's motivation together, and the conversation ends. This relied on a lot of shared knowledge (about the movie universe plus the real universe), but there's no reason for them to question their shared knowledge. You get to shared ground, and then you stop.

If you insist on questioning everything, you are liable to get to nodes without justification:

* "The lawn's wet." / "Why?" / "It rained last night." / "Why'd that make it wet?" / "Because rain is when water falls from the sky." / "But why'd that make it wet?" / "Because water is wet." / "Why?" / "Water's just wet, sweetie.". A sequence of is-questions, bottoming out at a definition. (Well, close to a definition: the parent could talk about the chemical properties of liquid water, but that probably wouldn't be helpful for anyone involved. And they might not know why water is wet.)
* "Aren't you going to eat your ice cream? It's starting to melt." / "It sure is!" / "But melted ice cream is awful." / "No, it's the best." / "Gah!". This conversation comes to an end when the participants realize that they have fundamentally different preferences. There isn't really a justification for "I dislike melted ice cream". (There's an is-ought distinction here, though it's about preferences rather than morality.)

Ultimately, all ought-question-chains end at a node without justification. Suffering is just bad, period. And I think if you dig too deep, you'll get to unjustified-ish nodes in is-question-chains too. For example, direct experience, or the belief that the past informs the future, or t

But then are you saying that it's impossible to experience profound doubt? Or are you saying that it's possible to experience profound doubt, but noting perception as belief is a reliable way out of it? If the latter then how do you go from noting perception as belief to making decisions?

Thank you for the kind words. If you have time and inclination, I'd be interested to hear anything at all about what the raw justification in your own experience is like.

3 · Jarred Filmer · 1y
You're quite welcome 🙂

For existence it's "I think therefore I am", just seems like an unavoidable axiom of experience. It feels like wherever I look I'm staring at it.

For consciousness I listened to an 80k hours podcast with David Chalmers on The Hard Problem and ever since then it's been self evident there's something that it's like to be me. It felt like something that had to be factored out of my experience and pointed at for me to see. But it seems as self evident as existing.

For wellbeing and suffering it took some extreme moments for me to start thinking about the fact that some things feel good and bad and that might be like, quite important actually. Also with the realisation that I never decided to find wellbeing good and suffering bad, they just are.

For causality I admit it's not as clear cut, and I only really thought about it yesterday reading this article. But in this moment I'm running an operating system shaped by the past. In that past I experienced the phenomena of prediction and causality. This moment seems no different to that moment, so it feels natural to unambiguously act as though this moment will affect the next.

Hmm, that last explanation feels much more unwieldy than existence, consciousness, and valence. Perhaps it doesn't quite deserve the category of self evident, and is more like n+1 induction.

I disbelieve 2 because it assumes that there are a finite number of nodes in the graph. (We don't have to hold an infinite graph in our finite brains; we might instead have a finite algorithm for lazily expanding an infinite graph.)
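A tiny sketch of what "a finite algorithm for lazily expanding an infinite graph" could look like (my own toy example, nothing from the thread): the justification graph is never stored anywhere, only a finite rule that produces the in-edges of any node on demand.

```python
from itertools import islice

def justifications(claim: int):
    """Lazily yield the justifications (in-edges) of a claim.
    Toy rule: claim n is justified by claim n + 1, giving an infinite chain."""
    yield claim + 1

def why_chain(claim: int):
    """Follow 'why?' links forever without ever materializing the whole graph."""
    while True:
        yield claim
        claim = next(justifications(claim))

print(list(islice(why_chain(0), 6)))   # [0, 1, 2, 3, 4, 5]
```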

3 · justinpombrio · 1y
(2) doesn't require the graph to be finite. Infinite graphs also have the property that if you repeatedly follow in-edges, you must eventually reach (i) a node with no in-edges, or (ii) a cycle, or (iii) an infinite chain.

EDIT: Proof, since if we're talking about epistemology I shouldn't spout things without double checking them.

Let G be any directed graph with at most countably many nodes. Let P be the set of paths in G. At least one of the following must hold:

(i) Every path in P is finite and acyclic.
(ii) At least one path in P is cyclic.
(iii) At least one path in P is infinite.

Now we just have to show that (i) implies that there exists at least one node in G that has no in-edges. Since every path is finite and acyclic, every path has a (finite) length. Label the nodes of G with the length of the largest path that ends at that node. Pick any node N in G. Let n be its label. Strongly induct on n:

* If n = 0, we're done: the maximum path length ending at this node is 0, so it has no in-edges. (A.k.a. it lacks justification.)
* If n > 0, then there is a non-empty path ending at N. Follow it back one edge to a node N'. N' must be labeled at most n-1, because if its label was larger then N's label would be larger too. By the inductive hypothesis, there exists a node in G with no in-edges.

I think it's mostly incoherent as a principle

What is it that you are saying is incoherent as a principle?

2 · romeostevensit · 1y
universalizability of compressions in light of them being bound to intentionality on the part of the one doing the compression. The closest we get to universal compressions are when the intent is more upstream of other intents like survival and reproduction.

Plenty of people are perfectly well satisfied with various answers to this question within all sorts of systems

Yeah, I'm interested in this. If you have time, what are some of the answers that you see people being satisfied by?

While there's nothing fundamentally wrong with supposing that things may well change right now when they don't appear to have changed in at least the past few billion occasions of right now, it does seem to privilege the observer almost to the point of solipsism.

Right yeah it seems like empiricism follows from a certain kind o... (read more)

Do you not want to strongly believe in something without a strong and non-cyclic conceptual justification for it?

It's not that I don't want to strongly believe in something without a strong and non-cyclic conceptual justification for it. It's that I want my actions to help reduce existential risk, and in order to do that I use reasoning, and so it's important to me that I use the kind of reasoning that actually helps me to reduce existential risk, so I am interested in what aspects of my reasoning are trustworthy or not.

Now you have linked to many compe... (read more)

1 · justinpombrio · 1y
If you ask me whether my reasoning is trustworthy, I guess I'll look at how I'm thinking at a meta-level and see if there are logical justifications for that category of thinking, plus look at examples of my thinking in the past, and see how often I was right. So roughly your "empirical" and "logical" foundations.

And I sometimes use my reasoning to bootstrap myself to better reasoning. For example, I didn't used to be Bayesian; I did not intuitively view my beliefs as having probabilities associated with them. Then I read Rationality, and was convinced by both theoretical arguments and practical examples that being Bayesian was a better way of thinking, and now that's how I think. I had to evaluate the arguments in favor of Bayesianism in terms of my previous means of reasoning --- which was overall more haphazard, but fortunately good enough to recognize the upgrade.

From the phrasing you used, it sounded to me like you were searching for some Ultimate Justification that could by definition only be found in regions of the space that have been ruled out by impossibility arguments. But it sounds like you're well aware of those reasons, and must be looking elsewhere; sorry for misunderstanding.

But honestly I still don't know what you mean by "trustworthy". What is the concern, specifically? Is it:

* That there are flaws in the way we think, for example the Wikipedia list of biases?
* That there's an influential bias that we haven't recognized?
* That there's something fundamentally wrong with the way that we reason, such that most of our conclusions are wrong and we can't even recognize it?
* That our reasoning is fine, but we lack a good justification for it?
* Something else?

Yeah that resonates with me. I'd be interested in any more thoughts you have on this. Particularly anything about how we might recognize knowing in another entity or in a physical system.

3 · G Gordon Worley III · 1y
I don't really have a whole picture that I think says more than what others have. I think there's something to knowing as the act of operationalizing information, by which I mean a capacity to act based on information.

To make this more concrete, consider a simple control system like a thermostat or a steam engine governor. These systems contain information in the physical interactions we abstract away to call "signal" that's sent to the "controller". If we had only signal there'd be no knowledge because that's information that is not used to act. The controller creates knowledge by having some response it "knows" to perform when it gets the signal. This view then doesn't really distinguish knowledge from purpose in a cybernetic sense, and I think that seems reasonable at first blush. This lets us draw a hard line between "dead" information like words in a book and "live" information like words being read.

Of course this doesn't necessarily make all the distinctions we'd hope to make, since this makes no difference between a thermostat and a human when it comes to knowledge. Personally I think that's correct. There's perhaps some interesting extra thing to say about the dynamism of these two systems (the thermostat is an adaptation executor only, the human is that and something capable of changing itself intentionally), but I think that's separate from the knowledge question.

Obviously this all hinges on a particular sort of deflationary approach to these terms to have them make sense with the weakest possible assumptions and covering the broadest classes of systems. Whether or not this sort of "knowledge" I'm proposing here is useful for much is another question.
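A minimal sketch of the signal-versus-controller distinction in the thermostat example (the class and thresholds here are mine, purely illustrative): on this deflationary view, the temperature reading alone is inert signal, and the "knowledge" lives in the rule that turns the signal into a response.

```python
class Thermostat:
    """Toy controller: it 'knows' one thing -- what to do with a temperature signal."""
    def __init__(self, setpoint_c: float = 20.0, hysteresis_c: float = 0.5):
        self.setpoint_c = setpoint_c
        self.hysteresis_c = hysteresis_c

    def respond(self, temperature_c: float) -> str:
        # The signal only becomes operationalized (i.e. 'knowledge' in the sense
        # above) when this rule maps it to an action.
        if temperature_c < self.setpoint_c - self.hysteresis_c:
            return "heater_on"
        if temperature_c > self.setpoint_c + self.hysteresis_c:
            return "heater_off"
        return "no_change"

signal = 18.2                         # 'dead' information until something acts on it
print(Thermostat().respond(signal))   # heater_on
```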

Yes it's true, there are people who have spent time at the Monastic Academy and have experienced psychological challenges after leaving.

For me, I enjoyed the simplicity and the living in community and the meditation practice as you say. The training style at the Monastic Academy seemed to really really really work for me. There were tons of difficult moments, but underneath that I felt safe, actually, in a way that I don't think I ever had before. That safety was really critical for me to face some deep doubts that I'd been carrying for a really long time.... (read more)

[+] [comment deleted] · 1y · 12

Regarding the first enigma, the expectation that what has worked in the past will work in the future is not a feature of the world, it's a feature of our brains. That's just how neural networks work, they predict the future based on past data.

Yeah right, we are definitely hard-wired to predict the future based on the past, and in general the phenomenon of predicting the future based on the past is a phenomenon of the mind, not of the world. But it sure would be nice to know whether that aspect of our minds is helping us to see things clearly or not. F... (read more)
