TsviBT


Oh I see, the haploid cells are, like, independently viable and divide and stuff.

Cool! Is it known how to sequence the haploid cell? Can you get a haploid cell to divide so you can feed it into PCR or something? (I'm a layperson w.r.t. biology.) I just recently had an idea about sequencing sperm by looking at their meiotic cousins and would be interested in talking in general about this topic; email at gmail, address tsvibtcontact. https://tsvibt.blogspot.com/2022/06/non-destructively-sequencing-gametes-by.html

I haven't really looked; it seems worth someone doing. I think there's been a fair amount of experimentation, though maybe a lot of it is predictably worthless (e.g. by continuing to inflict the central harms of normal schooling); I don't know. (This post is mainly aimed at adding detail to what some of the harms are, so that experiments can try to pull the rope sideways on supposed tradeoffs like permissiveness vs. strictness or autonomy vs. guidance.) I looked a little. Aside from Montessori (where it would take work to distinguish things branded as Montessori from things actually implementing her spirit), there's the Summerhill School: https://en.wikipedia.org/wiki/Summerhill_School which seems to have ended up with creepy stuff going on, and Sudbury schools https://en.wikipedia.org/wiki/List_of_Sudbury_schools which I don't know about.

[To respond to not the literal content of your comment, in case it's relevant: I think some teachers are intrinsically bad, some are intrinsically great, and many are unfortunately compelled or think they're compelled to try to solve an impossible problem and do the best they can. Blame just really shouldn't be the point, and if you're worried someone will blame someone based on a description, then you may have a dispute with the blamer, not the describer.]

criticism of schools unrealistic

Well, it's worth distinguishing (1) whether/what harms are being done, and (2) under what circumstances the harms can be avoided. I don't know precisely what you mean by "criticism of schools". I don't think you mean that it's unrealistic--fantastical, unbelievable--that schools do these harms; I take you to mean that it's unrealistic not to do these harms to kids. I don't want to blur between "there's no way to avoid this" and "this isn't happening", largely because it's just not true that there's no way to avoid it. Nor do I want to blur between "this is happening" and "we must do certain things, and blame/punish certain people": the implication just doesn't hold, and it is sometimes used to couple the belief with the plan more than it has to be, and then to push against the belief because the supposedly implied plan would be bad; as in "if school were harmful, I'd have to take my family and go live in the woods and be cut off from society; that would be bad for my family; therefore school is not harmful".

To allow teachers to not harm their kids, parents might have to be willing to firmly disclaim, on their kids' behalf, anything like "expectations from society/government that they will be ready to pass this exam at the end of the year". It may be unrealistic that parents would do that; I'd like to know why, but more acutely I'd like to see parents who are willing to do that organize.


Afterthoughts:

-- An attitude against pure symbolism is reflected in the Jewish prohibition against making a bracha levatala (= idle, null, purposeless). That's why Jews hold their hands up to the havdalah candle: not to "feel the warmth of Shabbat" or "use all five senses", but so that the candle is being actually used for its concrete function.

-- An example from Solstice of a "symbolic" ritual is the spreading-candle-lighting thing. I quite like the symbolism, but also, there's a hollowness; it's transparently symbolic, and on some level what's communicated to me is less "the people around me will share their light with me, and I mine with them" and more "the people around me will participate in a showy performance of solidarity with an aesthetic, to trick me into trusting them, and I'll go along with it out of fear". It may almost as well be a high school pep rally.

-- Following this advice might be especially hard for Rationalists because the true Rationality is intimately involved with the possibility of surprise (confusion, curiosity, exploration, prediction, changing your mind), and it's paradoxical to enact that attitude in a stereotyped ritual. A possible method would be to ritualize things that Rationalists already habitually do, like betting, though I don't immediately see how to make a nice public ritual out of betting. (Maybe a cooperative game to resist information cascades, or something, could be made into a public ritual?)

Answer by TsviBT, Mar 31, 2021

I speculate (based on personal glimpses, not based on any stable thing I can point to) that there's many small sets of people (say of size 2-4) who could greatly increase their total output given some preconditions, unknown to me, that unlock a sort of hivemind. Some of the preconditions include various kinds of trust, of common knowledge of shared goals, and of person-specific interface skill (like speaking each other's languages, common knowledge of tactics for resolving ambiguity, etc.).
[ETA: which, if true, would be good to have already set up before crunch time.]

In modeling the behavior of the coolness-seekers, you put them in a less cool position.

It might be a good move in some contexts, but I feel resistant to taking on this picture, or recommending others take it on. It seems like making the same mistake. Focusing on the object level because you want to be [cool in that you focus on the object level] does have the positive effect of focusing on the object level, but I think it can also just as well have all the bad effects of trying to be in the Inner Ring. If there's something good about getting into the Inner Ring, it should be unpacked, IMO. On the face of it, it seems like mistakenly putting faith in there being an Inner Ring that has things under control / knows what's going on / is oriented to what matters. If there were such a group, it would make sense to apprentice yourself to them, not try to trick your way in.

I agree that the epistemic formulation is probably more broadly useful, e.g. for informed oversight. The decision theory problem is additionally compelling to me because of the apparent paradox of having a changing caring measure. I naively think of the caring measure as fixed, but this is apparently impossible because, well, you have to learn logical facts. (This leads to thoughts like "maybe EU maximization is just wrong; you don't maximize an approximation to your actual caring function".)

In case anyone shared my confusion:

The while loop where we ensure that eps is small enough so that

bound > bad1() + (next - this) * log((1 - p1) / (1 - p1 - eps))

is technically necessary to ensure that bad1() doesn't surpass bound, but it is immaterial in the limit. Solving

bound = bad1() + (next - this) * log((1 - p1) / (1 - p1 - eps))

gives

eps >= (1/3) (1 - e^{ -[bound - bad1()] / [next - this] })

which, using the approximation log(1+x) ≈ x, is about

(1/3) ([bound - bad1()] / [next - this] ).

Then Scott's comment gives the rest. I was worried that we seem to be taking the exponential of the error in our approximation, or something. But Scott points out that this is not an issue because we can make [next - this] as big as we want, if necessary, without increasing bad1() at all, by guessing p1 for a very long time until [bound - bad1()] / [next - this] is close enough to zero that the error is too small to matter.
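The algebra above can be sanity-checked numerically. The values of bound, bad1(), this, next, and p1 below are illustrative placeholders (not from the original proof; p1 = 2/3 is assumed so that 1 - p1 = 1/3, matching the coefficient above):

```python
import math

# Illustrative placeholder values (not from the original proof):
bound = 1.0
bad1 = 0.7
this, next_ = 100, 140   # "this" and "next" indices from the proof
p1 = 2.0 / 3.0           # assumed so that 1 - p1 = 1/3

# The exponent [bound - bad1()] / [next - this]:
B = (bound - bad1) / (next_ - this)

# Solving bound = bad1 + (next - this) * log((1 - p1) / (1 - p1 - eps)) for eps:
eps_exact = (1 - p1) * (1 - math.exp(-B))  # = (1/3)(1 - e^{-B}) when p1 = 2/3

# Check that eps_exact satisfies the defining equation:
lhs = bad1 + (next_ - this) * math.log((1 - p1) / (1 - p1 - eps_exact))
assert abs(lhs - bound) < 1e-9

# First-order approximation for small B (log(1+x) ~ x):
eps_approx = (1 - p1) * B  # about (1/3) [bound - bad1()] / [next - this]
print(eps_exact, eps_approx)
```

Making [next - this] large drives B toward zero, so the exact and approximate eps converge, which is the point of Scott's observation.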

Could you spell out the step

every iteration where mean(𝙴[𝚙𝚛𝚎𝚟:𝚝𝚑𝚒𝚜])≥2/5 will cause bound - bad1() to grow exponentially (by a factor of 11/10=1+(1/2)(−1+2/5𝚙𝟷))

a little more? I don't follow. (I think I follow the overall structure of the proof, and if I believed this step I would believe the proof.)

We have that eps is about (2/3)(1 - exp([bad1() - bound]/(next - this))), or at least half that, but I don't see how to get a lower bound on the decrease of bad1() (as a fraction of bound - bad1()).
