I've just finished reading it, and wanted to thank you very much for recommending this great experience :)
Thanks to whoever recently upvoted my comment, bringing it back to my attention via the notification system. Rereading my comment after 2 years, I feel really sorry for myself that despite writing the sentence

> And your post made me realize that the technique from the book you describe is somewhat like this, if you look at it through the "subagents model of the brain" perspective: there is a part of you which is having an emotional crisis, and it's terrified by some problem it needs to solve, but this part is not ready to listen to a solution/change as long as it's in a busy loop waiting for an ACK packet confirming that someone got the S.O.S. signal.

I did not really understand what it meant, how to implement it, or how huge an impact on my life it would have once finally executed. Only recently I took part in Lowen therapy, in which, by performing some body movements typical of aggression, I finally established a connection between the part which was angry and the part which could listen to it.
Who's the intended audience of this post?
If it's for "internal" consumption - a summary of things we already knew, in the form of a list of sazens, which perhaps needed a refresher - then it's great.
But if it's meant to actually educate anyone, or worse, to become some kind of manifesto cited by the New York Times to show what's going on in this community, then I predict this is not going to end well.
The problem, as I see it, is that given the way this website is currently set up, it's not up to the author to decide who the audience is.
> ML models, like all software, and like the NAH would predict, must consist of several specialized "modules".
After reading the source code of MySQL InnoDB for 5 years, I doubt it. I think it is perfectly possible - and actually what I would expect to happen by default - to have huge working software with no clear module boundaries.
Take a look at this case in point: the `row_search_mvcc()` function (https://github.com/mysql/mysql-server/blob/8.0/storage/innobase/row/row0sel.cc#L4377-L6019), which has 1500+ lines of code and references hundreds of variables. This function is called in the inner loop of almost every SELECT query you run, so on the one hand it probably works quite correctly; on the other, it was subject to "optimization pressure" for over 20 years, and this is what you get. I think this is because Evolution is not Intelligent Design: it simply uses the cheapest locally available hack to get things done, and that is usually to reuse the existing variable #135 - or, more realistically, a combination of variables #135 and #167 - to do the trick. See how many of the `if` statements have conditions which use more than a single atom, for example:

```cpp
if (set_also_gap_locks && !trx->skip_gap_locks() &&
    prebuilt->select_lock_type != LOCK_NONE &&
    !dict_index_is_spatial(index)) {
```
(Speculation: I suspect that unless you chisel your neural network architecture to explicitly disallow connecting the neuron in question directly to neurons #145 and #167, it will connect to them as soon as it discovers they provide useful bits of information. I suspect this is why figuring out what layers you need, and the connectivity between them, is difficult. I also suspect this is why simply ensuring the right high-level wiring between parts of the brain, and how to wire them to input/output channels, might be the most important part to encode in DNA - the inner connections and weights can be figured out relatively easily later.)
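The "chiseling" idea in the speculation above can be sketched as a toy example: a fixed 0/1 mask over a weight matrix, so that a given output neuron simply cannot read the forbidden inputs, no matter what training does to the weights. This is only an illustrative sketch in NumPy (the neuron indices and sizes below are made up, not taken from any real architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 3
W = rng.normal(size=(n_out, n_in))

# mask[i, j] = 0 forbids output neuron i from reading input neuron j.
# Here, no output neuron may read inputs 1 and 3 (hypothetical indices).
mask = np.ones((n_out, n_in))
mask[:, [1, 3]] = 0.0

def forward(x):
    # Effective weights are W * mask, so masked inputs contribute nothing.
    return (W * mask) @ x

# Changing a masked input leaves the output unchanged:
x = np.array([1.0, 2.0, 3.0, 4.0])
x2 = x.copy()
x2[1] = -100.0
print(np.allclose(forward(x), forward(x2)))  # True
```

During training one would also multiply the gradient of `W` by the same mask, so the forbidden connections can never be "discovered" as useful - which is the point of the speculation: absent such an explicit constraint, optimization will happily route through whatever carries information.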
I've made a visualization tool for that:
https://codepen.io/qbolec/pen/qBybXQe
It generates an elliptical cloud of white points where X is distributed normally and Y = normal + X*0.3, so the two are correlated. Then you can define a green range on the X and Y axes, and the tool computes the correlation in the sample (red points) restricted to that (green) range.
So the correlation in the general population (white points) should be positive (~0.29). But if I restrict attention to the upper right corner, it is much lower, and often negative.
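The same experiment can be reproduced in a few lines of plain Python. This is a sketch mirroring the description above (the 0.3 coefficient matches the CodePen; the `x > 1 and y > 1` cutoffs are one hypothetical choice of "upper right corner", and the restricted value varies with the seed):

```python
import random

def corr(pts):
    # Pearson correlation of a list of (x, y) pairs
    xs, ys = zip(*pts)
    n = len(pts)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pts)
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
# X ~ Normal(0, 1), Y = Normal(0, 1) + 0.3 * X,
# so corr(X, Y) = 0.3 / sqrt(1 + 0.3**2) ≈ 0.29 in the whole population
pts = [(x, random.gauss(0, 1) + 0.3 * x)
       for x in (random.gauss(0, 1) for _ in range(100_000))]
print(round(corr(pts), 2))

# Restrict to the "upper right corner": the correlation drops sharply
corner = [(x, y) for (x, y) in pts if x > 1 and y > 1]
print(round(corr(corner), 2))
```

The drop is the point: conditioning on both variables being large (range restriction / selection) destroys most of the correlation that exists in the full population.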
The extremely-minimalist description would be: “Stop believing in the orthodox model, stop worrying, feel and act as if you’re healthy, and then the pain goes away”.
IDK if this will be important to you, but I'd like to thank you for this comment, as it relieved my back pain after 8 years! Thank you @p.b. for asking for clarification and not giving up after the first response. Thank you @Steven Byrens for writing the article and taking the time to respond.
8 fucking years..
I read this article and the comments a month ago. Immediately after reading it, the pain was gone. (I have never had mystical experiences like enlightenment, so the closest thing I can personally compare it to was the "perspectival shift" I felt ten years ago when "the map is not the territory" finally clicked.)
I know - it could've been "just a placebo effect" - but, as the author argues, who cares; that's kind of the main point of the claim. Still, I was afraid of giving myself false hope - there were several few-day-long remissions of pain scattered along these 8 years, but the pain always returned - which is why I gave myself and this method a month before writing this comment. So far it works!
I know - "post hoc ergo propter hoc" is not the best heuristic - there could be other explanations for my pain relief. For example, a week or two before reading this article I started following an exercise routine daily. However, I had paused the routine for three days before reading your article, and the pain relief happened exactly when I finished reading your comment, so IMO the timing and rarity (8 years...) of the event really suggest this comment is what helped. I still do the exercise routine, and it surely contributes and helps, too. But I do the routine just once, in the morning, and I consciously feel how, whenever the pain starts to raise its head again throughout the day, I can make a mental move inspired by this article to restore calm and dissolve the pain.
Also, this is definitely how it felt from the inside! In the hope that it will help somebody else alleviate their pain, here are some specific patterns of thought induced by this article that I found helpful:
I am really excited about all this positive change in my mind, because, as one can imagine (and if you can't, recall the main character of House M.D.), constant pain corrupts other parts of your mind and life. It becomes a prior for interpreting every sentence from family members and every event in life. It's a crony belief, a self-sustaining "bitch eating crackers" syndrome. It took 8 years to build this thought-cancer, and it will probably take some time to disband it, but I see progress already.
Also, I am "counter-factually frightened" by how close I was to completely missing this solution to my problem. I was actively seeking one, you see - fruitlessly, though - for years! I had so much luck: to have started reading LW long ago; to have subscribed to Scott Alexander's blog (I even read his original review of "Unlearn Your Pain" from 2016, but it sounded negative and I concentrated too much on discrediting the underlying model of action - perhaps I could have fixed my pain 6 years earlier); to have developed a habit of reading LW, searching for interesting things, and reading the comments, not just the article... Thank you again for this article and this comment thread. When I imagine how sad the future would be if I hadn't read it that afternoon, I want to cry...
I had a similar experience with it today (before reading your article): https://www.lesswrong.com/editPost?postId=28XBkxauWQAMZeXiF&key=22b1b42041523ea8d1a1f6d33423ac
I agree that this over-confidence is disturbing :(
We already live in a world in which any kid can start a chain reaction that is difficult to stop and contain: fire. We responded by:
Honestly, I still don't understand very well what exactly stops evil/crazy people from starting forest fires whenever they want to. Norms that punish violators? A small gain-to-risk ratio?
Also, I wonder to what extent our own "thinking" is based on concepts we ourselves understand. I'd bet I don't really understand what concepts most of my own thinking processes use.
Like: what are the exact concepts I use when I throw a ball? Is there a term for velocity, the gravitational constant, or air friction, or is it just some completely "alien" computation, "inlined" and "tree-shaken" of any unneeded abstractions, which just sends motor outputs given the target position?
Or: what concepts do I use to know which word to place at this point in this sentence? Do I use concepts like "subject", "verb", or "sentiment", or do I rather just go with the flow subconsciously, having only a vague idea of the direction I am going with this argument?
Or: what concepts do I really use when deciding to rotate the steering wheel 2 degrees to the right while driving a car through a forest road's gentle turn? Do I think about "angles", "asphalt", "trees", "centrifugal force", or "tire friction", or do I rather just try to push the future in the direction where the road ahead looks straighter to me, somehow just knowing that this steering-wheel motion is "straightening" the image I see?
Or: how exactly do I solve (not: verify an already written proof of) a math problem? How does the solution pop into my mind? Is there some systematic search over all possible terms and derivations, or rather some giant hash-map-like interconnected store of "related tricks and transformations I've seen before" whose entries get proposed?
I think my point is that we should not conflate the way we actually solve problems (subconsciously?) with the way we talk (consciously) about solutions we've already found, whether verifying them ourselves (the inner monologue) or conveying them to another person. First of all, Release and Debug binaries can differ (riding a bike for the first time is a completely different experience from the 2000th attempt). Second, the on-the-wire format and the data structure before serialization can be very different (the way I explain how to solve an equation to my kid is not exactly how I solve it).
I think that training a separate AI to interpret the inner workings of another AI for us is risky, in the same way that a Public Relations department or a lawyer doesn't necessarily give you an honest picture of what the client is really up to.
Also, there's much talk about the distinction between System 1 and System 2, or subconsciousness and consciousness, etc.
But do we really take seriously the implication of all that: the concepts the conscious part of the mind uses to "explain" subconscious actions have almost nothing to do with how they actually happened. If we force an AI to use these concepts, it will either lie to us ("Your honor, as we shall soon see, the defendant wanted to...") or be crippled (have you tried to drive a car using just the concepts from a physics textbook?). But even in the latter case it looks like a lie to me, because even if the AI is really using the concepts it claims/seems/reports to be using, there's still a mismatch in myself: I think I now understand that the AI works just like me, while in reality I work completely differently than I thought. How bad that is depends on the problem domain, IMHO. This might be pretty good if the AI is trying to solve a problem like "how to throw a ball", where a program using physics equations is actually also a good way of doing it. But once we get to more complicated stuff, like operating an autonomous drone on a battlefield or governing a country's budget, I think there's a risk, because we don't really know how we ourselves make these kinds of decisions.
ChatGPT's answer:
(I am a bit worried by this, given that China seems to restrict AIs more than the US does...)
I like how ChatGPT can help in operationalizing fuzzy intuitions. I feel an eerie risk that it makes me think even less, and less carefully, and defer to the AI's wisdom more and more... it's very tempting... as if I'd found an adult I can cede control to.