Answer by qbolec, Mar 26, 2023

ChatGPT's answer:
 

Your two assumptions and intuitions are plausible, but they may not hold true in every case. It is important to consider the specific context and motivations of individual rulers when making predictions about their behavior.

Regarding your first intuition, it is possible that some rulers may support the development of powerful AGI if they see it as a means to achieve their goals more efficiently. However, they may also take precautions to ensure that the AGI is under their control and cannot threaten their power.

Regarding your second intuition, rulers may not necessarily get rid of humans as soon as AI can achieve goals more efficiently. Human labor can still be cheaper or more reliable in certain contexts, and rulers may also want to maintain the appearance of caring about human welfare for political reasons.

To make your intuitions more testable, you could formulate specific hypotheses and predictions that can be observed or measured. For example, you could predict that rulers of authoritarian regimes will be more likely to suppress the development of powerful AGI than rulers of democracies, or that companies with highly centralized decision-making structures will be more resistant to AI-driven automation than companies with more decentralized structures. You could then collect data to test these hypotheses, such as by analyzing government policies or conducting surveys of corporate leaders.

(I am a bit worried by this, given that China seems to restrict AIs more than the US...)
 

Me: Thanks, the idea of checking what authoritarian regimes are doing today was very helpful. Can you suggest more operationalisations of my theses?

GPT:
Sure, here are some possible operationalizations of your intuitions:

  1. Operationalizing your first intuition:
  • Hypothesis: Rulers will be more likely to suppress the development of powerful AGI if they perceive it as a threat to their power.
  • Prediction: Countries with more authoritarian governments will have fewer AI research projects and investments than countries with more democratic governments.
  • Operationalization: Use publicly available data on AI research funding and publications by country, as well as measures of political freedom and democracy, to test the relationship between government type and AI development.
  2. Operationalizing your second intuition:
  • Hypothesis: Rulers will be more likely to replace human workers with AI when it is cost-effective and does not threaten their power.
  • Prediction: Companies with highly centralized decision-making structures will be slower to adopt AI-driven automation than companies with more decentralized structures.
  • Operationalization: Survey companies to collect data on their decision-making structures and level of AI adoption, and test the relationship between these variables using statistical analysis.
  3. Operationalizing your conclusion:
  • Hypothesis: Rulers will prioritize their own interests over the well-being of their constituents.
  • Prediction: Countries with high levels of corruption will be less likely to invest in social welfare programs and more likely to invest in projects that benefit the ruling elite.
  • Operationalization: Use measures of corruption and social welfare spending, as well as data on government contracts and investments, to test the relationship between corruption and government priorities.

I like how ChatGPT can help in operationalizing fuzzy intuitions. I feel an eerie risk that it makes me think even less, and less carefully, and defer to the AI's wisdom more and more... it's very tempting... as if finding an adult you can cede control to.
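To make the first operationalization above concrete, here is a minimal sketch of how such a test could look in code; the CSV file and the column names (democracy_index, ai_papers_per_capita) are hypothetical placeholders, not a real dataset:

    import pandas as pd
    from scipy.stats import spearmanr

    # Hypothetical table: one row per country, with some public democracy index
    # and some measure of AI research output (papers, funding, etc.).
    df = pd.read_csv("countries.csv")  # placeholder file name

    # Rank correlation between government type and AI development.
    rho, p_value = spearmanr(df["democracy_index"], df["ai_papers_per_capita"])
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

A positive rho would be consistent with the prediction that more democratic countries do more AI research; of course, confounders like GDP per capita would need to be controlled for before reading much into it.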

I've just finished reading it, and wanted to thank you very much for recommending this great experience :)

Thanks to whoever upvoted my comment recently, bringing it to my attention again via the notification system. Rereading my comment after 2 years, I feel really sorry for myself that despite writing the sentence

And your post made me realize, that the technique from the book you describe is somewhat like this, if you look through "subagents model of the brain" perspective: there is a part of you which is having emotional crisis, and it's terrified by some problem it needs to solve, but this part is not ready to listen for solution/change, as long as it's in the busy loop waiting for an ACK packet confirming someone got the S.O.S. signal.

I did not really understand what it meant, how to implement it, or what a huge impact it would have on my life once finally executed. Only recently I took part in Lowen's therapy, in which, by performing some body movements typical of aggression, I finally established a connection between the part which was angry and the part which could listen to it.

Who's the intended audience of this post? 

If it's for "internal" consumption, a summary of things we already knew in the form of a list of sazens, which we perhaps need a refresher on, then it's great.

But if it's meant to actually educate anyone, or worse, become some kind of manifesto cited by the New York Times to show what's going on in this community, then I predict this is not going to end well.

The problem, as I see it, is that with the way this website is currently set up, it's not up to the author to decide who the audience is.

ML models, like all software, and like the NAH would predict, must consist of several specialized "modules".

After reading the source code of MySQL InnoDB for 5 years, I doubt it. I think it is perfectly possible, and actually what I would expect to happen by default, to have a huge piece of working software with no clear module boundaries.

Take a look at this case in point: the row_search_mvcc() function https://github.com/mysql/mysql-server/blob/8.0/storage/innobase/row/row0sel.cc#L4377-L6019 has 1500+ lines of code and references hundreds of variables. This function is called in the inner loop of almost every SELECT query you run, so on the one hand it probably works quite correctly, and on the other hand it has been subject to "optimization pressure" for over 20 years, and this is what you get. I think this is because Evolution is not Intelligent Design: it simply uses the cheapest locally available hack to get things done, and that is usually to reuse the existing variable #135, or more realistically a combination of variables #135 and #167, to do the trick. See how many of the if statements have conditions which use more than a single atom, for example:

      if (set_also_gap_locks && !trx->skip_gap_locks() &&
          prebuilt->select_lock_type != LOCK_NONE &&
          !dict_index_is_spatial(index)) {

(Speculation: I suspect that unless you chisel your neural network architecture to explicitly disallow connecting the neuron in question directly to neurons #145 and #167, it will do so as soon as it discovers they provide useful bits of information. I suspect this is why figuring out which layers you need, and the connectivity between them, is difficult. Also, I suspect this is why simply ensuring the right high-level wiring between parts of the brain, and how to wire them to input/output channels, might be the most important part to encode in DNA, as the inner connections and weights can be figured out later relatively easily.)
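To make the "explicitly disallow" part concrete, here is a minimal sketch of what such a constraint could look like, assuming a PyTorch-style setup; the layer, the mask, and the specific indices are purely illustrative, not taken from any real architecture:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MaskedLinear(nn.Linear):
        # A linear layer whose weights are multiplied elementwise by a fixed 0/1 mask,
        # so the "forbidden" input->output connections contribute nothing and get zero gradient.
        def __init__(self, in_features, out_features, mask):
            super().__init__(in_features, out_features)
            self.register_buffer("mask", mask)  # shape (out_features, in_features), like the weights

        def forward(self, x):
            return F.linear(x, self.weight * self.mask, self.bias)

    # Illustrative: forbid output neuron 0 from reading input neurons 3 and 7.
    mask = torch.ones(4, 10)
    mask[0, 3] = 0.0
    mask[0, 7] = 0.0
    layer = MaskedLinear(10, 4, mask)

Without such a mask, gradient descent is free to route information through any available weight, which is the default behaviour the speculation above points at.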

I've made a visualization tool for that:

https://codepen.io/qbolec/pen/qBybXQe

It generates an elliptical cloud of white points where X is distributed normally and Y = normal + X*0.3, so the two are correlated. Then you can define a green range on the X and Y axes, and the tool computes the correlation in a sample (red points) restricted to that (green) range.

So, the correlation in the general population (white points) should be positive (~0.29). But if I restrict attention to the upper right corner, it is much lower, and often negative.
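If you prefer to reproduce it offline, here is a minimal sketch of the same simulation in Python; the cutoff of 1.0 for the "upper right corner" is just an illustrative choice, not the exact CodePen parameters:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Elliptical cloud: X is standard normal, Y is an independent normal plus 0.3 * X,
    # so the population correlation is 0.3 / sqrt(1.09), about 0.29.
    x = rng.standard_normal(n)
    y = rng.standard_normal(n) + 0.3 * x
    print("full population:", np.corrcoef(x, y)[0, 1])

    # Restrict the sample to the "upper right corner": both coordinates above 1.0.
    corner = (x > 1.0) & (y > 1.0)
    print("restricted sample:", np.corrcoef(x[corner], y[corner])[0, 1])  # much lower than the full-population value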

The extremely-minimalist description would be: “Stop believing in the orthodox model, stop worrying, feel and act as if you’re healthy, and then the pain goes away”. 

IDK if this will be important to you, but I'd like to thank you for this comment, as it relieved my back pain after 8 years! Thank you @p.b. for asking for clarification and not giving up after the first response. Thank you @Steven Byrens for writing the article and taking the time to respond.

8 fucking years..

I read this article and the comments a month ago. Immediately after reading it, the pain was gone. (I've never had mystical experiences, like enlightenment, so the closest thing I can compare it to personally was the "perspectival shift" I felt ten years ago when "the map is not the territory" finally clicked.)

I know, it could've been "just a placebo effect", but as the author argues, who cares; that's kinda the main point of the claim. Still, I was afraid of giving myself false hope: there were several few-day-long remissions of pain scattered across these 8 years, but the pain always returned. This is why I gave myself and this method a month before writing this comment. So far it works!

I know - "Post hoc ergo propter hoc" is not the best heuristic - there could be other explanations of my pain relief. For example a week or two before reading this article I've started following this exercise routine daily. However, I've paused the routine for three days before reading your article, and the pain relief happened exactly when I've finished reading your comment, so IMO timing and rarity (8 years...) of the event really suggests this comment is what helped. I still do the exercise routine, and it surely contributes and helps, too. Yet, I do the routine just once in the morning, yet I consciously feel how whenever throughout the day the pain starts to raise its head again, I can do a mental move inspired by this article to restore calm and dissolve the pain.

Also, this is definitely how it felt from the inside! In the hope that it will help somebody else alleviate their pain, here are some specific patterns of thought induced by this article that I found helpful:

  • "oh, so my pain-center is simply confused about the signals, it is screaming like a child who can't express well what's wrong, and I was overreacting. I should show it love, not anger, I should calm it down, I must be the adult in the room and figure out what's the real problem here."
  • "I should ignore the pain by gently putting the pain to the side (like you do to the thoughts during meditation) as opposed to fighting through it. Like hitting snooze, vs clenching my jaw and fist to overcome it."
  • "yeah, I've heard you pain-center, but I think you are mistaken about the magnitude and source of the problem, and I am actively working on the solution to the real problem, so please do not distract me while I am helping you"
  • "the pain-center is presenting me a whole crayon-drawn image of a tiger, but it was just connecting-the-dots creatively, and there really was no tiger, just the dots". I think this one is most helpful metaphor for me. I can feel how I dissolve a full certainty of "the pain of the whole upper back" into individual, small, shaky dots of unsure signals from small patches of the back.
  • "looks like it was just one small place around this shoulder blade which started the alarm, maybe I should just change the position of right arm, oh, yes, this brought relief, good"
  • "ok, so this part near neck is so tense it started complaining, and this was probably because I was trying too hard to finish answering this email before visiting the restroom - let's just give myself a pause and treat the body more gently".
  • "ok, I need to be more precise: which patch of my back is in pain right now? If I can't tell, then perhaps it's something in the environment that is causing stress, or some thought, or some anticipation, or maybe some physiological need? Let's look around and find out what this alarm is about"
  • Bohr's horseshoe: "I was told that it works even if you don't believe in it"
  • I just imagine a volume knob on the pain and just turn it down

I am really excited about all this positive change in my mind, because as one can imagine (and if you can't, recall the main character of House M.D.), constant pain corrupts other parts of your mind and life. It's like a prior for interpreting every sentence from family members and every event in life. It's a crony belief, a self-sustaining "bitch eating crackers" syndrome. It took 8 years to build this thought-cancer, and it will probably take some time to dismantle it, but I see progress already.

Also, I am "counter-factually frightened" by how close I was to completely missing this solution to my problem. I was actively seeking, you see, fruitlessly, though, for years! I had so much luck: to start reading LW long ago; to subscribe Scott Alexander's blog (I even read his original review of "unlearn your pain" from 2016 yet it sounded negative and (I) concentrated too much on discrediting the underlying model of action, so perhaps I could fix my pain 6 years earlier); to develop a habit of reading LW and searching for interesting things and reading comments, not just the article.. Thank you again for this article and this comment thread. When I imagine how sad would be the future if on that afternoon I didn't read it I want to cry...

I had a similar experience with it today (before reading your article): https://www.lesswrong.com/editPost?postId=28XBkxauWQAMZeXiF&key=22b1b42041523ea8d1a1f6d33423ac

I agree that this over-confidence is disturbing :(

We already live in a world in which any kid can start a chain reaction that is difficult to stop and contain: fire. We responded by:

  • making a social norm of not allowing kids to buy or use fire-starting tools
  • separating houses by 1.5 times their height
  • adding sprinklers and requiring them by law
  • having a state-funded agency to stop fires

Honestly, I still don't understand very well what exactly stops evil/crazy people from starting fires in forests whenever they want to. Norms that punish violators? A small gain-to-risk ratio?

Also, I wonder to what extent our own "thinking" is based on concepts we ourselves understand. I'd bet I don't really understand what concepts most of my own thinking processes use.

Like: what are the exact concepts I use when I throw a ball? Is there a term for velocity, the gravitational constant, or air friction, or is it just some completely "alien" computation, "inlined" and "tree-shaken" of any unneeded abstractions, which just sends motor outputs given the target position?

Or: what concepts do I use to know what word to place at this point in this sentence? Do I use concepts like "subject", "verb", or "sentiment", or do I rather just go with the flow subconsciously, having just a vague idea of the direction I am going with this argument?

Or: what concepts do I really use when deciding to rotate the steering wheel 2 degrees to the right when driving a car through a forest road's gentle turn? Do I think about "angles", "asphalt", "trees", "centrifugal force", "tire friction", or do I rather just try to push the future in the direction where the road ahead looks straighter to me, somehow just knowing that the steering wheel is "straightening" the image I see?

Or: how exactly do I solve (not: verify an already written proof of) a math problem? How does the solution pop into my mind? Is there some systematic search over all possible terms and derivations, or rather some giant hash-map-like interconnected store of "related tricks and transformations I've seen before" from which candidates get proposed?

I think my point is that we should not conflate the way we actually solve problems (subconsciously?) with the way we talk (consciously) about solutions we've already found, when trying to verify them ourselves (the inner monologue) or convey them to another person. First of all, the Release and Debug binaries can differ (it's a completely different experience to ride a bike for the first time than on the 2000th attempt). Second, the on-the-wire format and the in-memory data structure before serialization can be very different (the way I explain how to solve an equation to my kid is not exactly how I solve it).

I think that training a separate AI to interpret the inner workings of another AI for us is risky, in the same way that a Public Relations department or a lawyer doesn't necessarily give you an honest picture of what the client is really up to.

Also, there's much talk about the distinction between System 1 and System 2, or subconsciousness and consciousness, etc.

But do we really take seriously the implication of all that: the concepts our conscious part of the mind uses to "explain" subconscious actions have almost nothing to do with how they actually happened. If we force the AI to use these concepts, it will either lie to us ("Your honor, as we shall soon see, the defendant wanted to..") or be crippled (have you tried to drive a car using just the concepts from a physics textbook?). But even in the latter case it looks like a lie to me, because even if the AI is really using the concepts it claims/seems/is reported to be using, there's still a mismatch in myself: I think I now understand that the AI works just like me, while in reality I work completely differently than I thought. How bad that is depends on the problem domain, IMHO. It might be pretty good if the AI is trying to solve a problem like "how to throw a ball", where a program using physics equations is actually also a good way of doing it. But once we get to more complicated stuff, like operating an autonomous drone on the battlefield or governing a country's budget, I think there's a risk, because we don't really know how we ourselves make these kinds of decisions.
