It seems to me that you are attempting to write a timeless, prescriptive reference piece. Then a paragraph sneaks in that is heavily time- and culture-dependent.

I'm honestly not certain about the intended meaning. I think you intend mask-wearing to be an example of a small and reasonable cost. As a non-American, I'm vaguely aware of what Costco is, but I don't know if there's some connotation or reference to current events that I'm missing. And if I'm confused now, imagine someone reading this in 2030...

Without getting into the object-level discussion, I think such references have no place in the kind of post this is supposed to be, and should be cut or made more neutral.

You didn't address the part of my comment that I'm actually more confident about. I regret adding that last sentence; consider it retracted for now (I currently don't think I'm wrong, but I'll have to think and observe some more, and perhaps find better words/framing to pinpoint what bothers me about rationalist discourse).

It's analogous to a customer complaining "if Costco is going to require masks, then I'm boycotting Costco."  All else being equal, it would be nice for customers to not have to wear masks, and all else being equal, it would be nice to lower the barrier to communication such that more thoughts could be more easily included.


Just a small piece of feedback. This paragraph is very unclear, and it touches on a political topic that tends to get heated and personal.

I think you intended to say that the norms you're proposing are just the basic cost of entry to a space with higher levels of cooperation and value generation. But I can just as easily read it as your norms being an arbitrary requirement that destroys value by forcing everyone to visibly incur pointless costs in the name of protecting against a bogeyman that is being way overblown.

This unintended double meaning seems apt to me: I mostly agree with the guidelines, but I also feel that rationalists overemphasize this kind of thing and discount the costs being imposed. In particular, the guidelines are very bad for productive babbling/brainstorming, for intuitive knowledge transfer, and for other less rigorous ways of communicating that I find really valuable in some situations.

One thing I've read somewhere is that people who sign but aren't deaf tend to use sign language in parallel with spoken language. That's an entire parallel communications channel!

Relatedly, rationalists lean quite heavily towards explicit ask/tell culture. This is sometimes great, but often clunky: "Are you asking for advice? I might have some helpful comments, but I'm not sure if you actually want people's opinions, or if you just wanted to vent."

Combining these two things, I could see norms evolving where spoken language is used for communicating complex thoughts, and signing is used for coordination, cohesion, and making group decisions (which is often done implicitly in other communities). I think there's a lot of potential upside here.

I think you're confusing arrogance concerning the topic itself with communicating my insights arrogantly. I'm absolutely doing the latter, partly as pushback against your overconfident claims, partly because better writing would require time and energy I don't currently have. But the former? I don't think so.

Re: the Turing test. My apologies, I was overly harsh as well. But none of these examples are remotely failing the Turing test. For starters, you can't fail the test if you're not aware you're taking it. Should we describe anyone who misreads some text or gets a physics question wrong as "having failed the Turing test" from now on, in all contexts?

Funnily enough, the pendulum problem admits a bunch of answers, because "swinging like a pendulum" has multiple valid interpretations. Furthermore, a discerning judge shouldn't just fail every entity that gets the physics wrong, nor pass every entity that gets the physics right. We're not learning anything here except that many people are apparently terrible at performing Turing tests, or don't even understand what the test is. That's why I originally read your post as an insult: it just doesn't make sense to me how you're using the term (so it's reduced to a "clever" zinger).

Fair enough, I can see that reading. But I didn't mean to say I actually believe that, or that it's a good thing. More like an instinctive reaction.

It's just that certain types of life experiences put a small but noticeable barrier between you and other people. It was a point about alienation, and about trying to drive home just how badly typical-minding can fail. When I barely recognize my younger self from my current perspective, that's a pretty strong example.

Hope that's clearer.

What you said, exactly, was:

Just hope you at least briefly consider that I was exactly at your stage one day

which is what I was responding to. I know you're not claiming that I'm 100% hackable, yet you insist on drawing strong parallels between our states of mind, e.g., that being dismissive must stem from arrogance. That's the typical-minding I'm objecting to. Also, being smart has nothing to do with it; perhaps you might go back and carefully re-read my original comment.

The Turing test doesn't have a "reading comprehension" section, and I don't particularly care if some commenters make up silly criteria for declaring someone as failing it. And humans aren't supposed to have a 100% pass rate, btw; that's just not in the nature of the test. It's more of a thought experiment than a benchmark, really.

Finally, it's pretty hard to not take this the wrong way, as it's clearly a contentless insult.

I read your original post and I understood your point perfectly well. But I have to insist that you're typical-minding here. How do you know that you were exactly at my stage at some point? You don't.

You're trying to project your experiences onto a one-dimensional scale that every human falls on. Just because I dismiss a scenario, same as you did, does not imply that I have anywhere near the same reasons / mental state for asserting this. In essence, you're presenting me with a fully general counterargument, and I'm not convinced.

So, are all rationalists 70% susceptible? All humans? Specifically people who scoff at the possibility of it happening to them? What's your prior here?

100 hours also seems to be a pretty large number. In the scenario in question, not only does a person need to be hacked at 100h, but they also need to decide to spend hour 2 after spending hour 1, and so on. If you put me in an isolated prison cell with nothing to do but to talk to this thing, I'm pretty sure I'd end up mindhacked. But that's a completely different claim.
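To make the compounding concrete (a toy calculation; the per-hour figure is invented purely for illustration): suppose someone independently chooses to continue for each additional hour with probability p. Then

P(reaching hour 100) = p^99, so even with p = 0.97, that's 0.97^99 ≈ 0.05.

A mild per-hour dropout rate makes the full 100-hour trajectory quite unlikely, which is the sense in which "hackable after 100 hours" and "would freely choose to spend 100 hours" are very different claims.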

Thanks for posting this, I recognize this is emotionally hard for you. Please don't interpret the rest of this post as being negative towards you specifically. I'm not trying to put you down, merely sharing the thoughts that came up as I read this.

I think you're being very naive with your ideas about how this "could easily happen to anyone". Several other commenters were focusing on how lonely people specifically are vulnerable to this. But I think it's actually emotionally immature people who are vulnerable, specifically people with a high-openness, "taking ideas seriously" kind of personality, coupled with a lack of groundedness (too few points of contact with the physical world).

This is hard to explain without digressing at least a bit, so I'm going to elaborate, as much for my own benefit as yours.

As I've aged (late 30s now), there have been some hard-to-pin-down changes in my personality. I feel more solidified than a decade ago. I now perceive past versions of myself almost as being a bit hollow: lots of stuff going on at the surface level, but my thoughts and experiences weren't yet weaving together into the deep structures (below what's immediately happening) that give a kind of "earthiness" or "groundedness" to all my thoughts and actions now. The world has been getting less confusing with each new thing I learn, so whatever I encounter, I tend to have related experiences already in my repertoire of ideas I've digested and integrated. Thus, acquisition of new information, modes of thinking, etc. becomes faster and faster, even as my personality shifts less and less with each encounter with something new. I feel freer, more agenty now. This way of putting it is very focused on the intellect, but something analogous is going on at the emotional level as well.

I've started getting this impression of hollowness from many people around me, especially from young people who have had a very narrow life path, even highly intelligent ones. Correlates: living in the same place/culture all their life; doing the same activity all their life, e.g., high school into undergrad into PhD without anything in between; never having faced death; never having questioned or been exposed to the failure modes of our social reality; and so on.

I know it's outrageously offensive to say, but at least some instinctive part of me has stopped perceiving these beings as actual people. They're just sort of fluttering around, letting every little thing knock them off balance, because they lack the heft to keep their own momentum going; no will of their own. Talking to these people, I more and more run into the problem of the inferential distances being too high to get any communication going beyond social niceties. You must think I'm super arrogant, but I'm just trying to communicate this important, hard-to-grasp idea.

Most people don't ever become solidified in this way (the default mode for humans seems to be to shut off the vulnerable surface layer entirely as they age), but that's yet another digression...

All of this is a prelude to saying that I'm confident I wouldn't fall for these AI tricks. That's not a boast, a put-down, or hubris, just my best estimate based on what I know about myself. I'd consider being vulnerable in this way a major character flaw. This applies not only to interacting with an AI, btw, but also to actual humans who follow similar exploitative patterns of behavior, from prospective lovers, to companies with internal cultures full of bullshit, all the way up to literal cults. (Don't get me wrong, I have plenty of other character flaws; I'm not claiming sainthood here.)

As other people have already pointed out, you've been shifting the goalposts a lot in this discussion, letting yourself get enchanted by what could be, as opposed to what actually is, and this painfully reminds me of several people I know who are so open-minded that their brains fall out occasionally, as the saying goes. And I don't think it's a coincidence that this happens a lot to rationalist types; it seems to be woven into the culture somehow that solidifying and grounding yourself in the way I'm gesturing at is not something that's valued.

Relatedly, in the last few years there have been several precipitating events that have made me distance myself a bit from the capital-R Rationalist movement: in particular, the drama around Leverage Research and other Rationalist/EA institutions, which seems to boil down to a lack of common sense and a failure to make use of the organizational wisdom that human institutions have developed over millennia. A general lack of concern for robustness, defense-in-depth, designing with the expectation of failure, etc. The recent FTX blow-up wrt EA also has a whiff of this same hubris. Again, I don't think it's a coincidence, just a result of the kind of people who are drawn to the rationalist idea-space doing their thing and sharing the same blind spots.

As long as I'm being offensively contrarian anyway, might as well throw in that I'm very skeptical of the median LW narrative about AGI being very near. The emotional temperature on LW wrt these topics has been rising steadily, in a way that's reminiscent of your own too-generous read of "Charlotte"'s behavior. You can even see a bunch of it in the discussion of this post, people who IMO are in the process of losing their grasp on reality. I guess time will tell if the joke's on me after all.
