All of Ben Pace's Comments + Replies

In case you're interested, I choose the latter, for there is at least the hope of learning from the mistakes.

On this Earth there are two factions, and you must pick one.

  1. The Knights Who Arrive at False Conclusions
  2. The Knights Who Arrive at True Conclusions, Too Late to Be Useful

(Hat tip: I got these names 2 years ago from Robert Miles who had been playing with GPT-3.)

Did people say why they deferred to these people?

I think another interesting question to correlate this would be "If you believe AI x-risk is a severely important issue, what year did you come to believe that?".

3Sam Clarke6h
No, only asked respondents to give names. Agree, that would have been interesting to ask.

For the closing party of the Lightcone Offices, I used Midjourney 5 to make a piece of art to represent a LessWrong essay by each member of the Lightcone team, and printed them out on canvases. I'm quite pleased about how it came out. Here they are.

How I buy things when Lightcone wants them fast

by jacobjacob

(context: Jacob has been taking flying lessons, and someday hopes to do cross-country material runs for the Rose Garden Inn at shockingly fast speeds by flying himself to pick them up)

My thoughts on direct work (and joining LessWrong)

by RobertM

A Quick G

... (read more)

I actually think your caveat helps a lot.

Gotcha, I was unclear about whether you were saying it prescriptively or descriptively.

2trevor1d
Ah, that's helpful, that was my bad. I was definitely being descriptive, about legal dynamics and stuff. With the internet being what it is, I'll be more cautious about vague language in the future.

Right. I suspect we still have some disagreement but happy to leave it here. 

(To briefly leave a pointer, Jeff, with no expectation that you respond to it: I think this sort of dynamic extends to lots of other criticism, where even if your criticism isn't about bad behavior, you're still pretty unsure how they'll respond to it and whether they'll respond well, and it can be very stressful to engage directly yet still pro-social to publish the criticism.)

2jefftk1d
Edited to add something covering this, though I suspect it doesn't go as far as you'd prefer? (Also curious what you think of Ray's argument [https://forum.effectivealtruism.org/posts/hM4atR2MawJK7jmwe/building-cooperative-epistemology-response-to-ea-has-a-lying?commentId=QYJRPpc2b9mzZMujp])

Yep, it seems good to me to respond to just one point that you disagreed with, definitely positive to do so relative to responding to none :)

I genuinely have uncertainty here, I know there were a bunch of folks at CSET who understood some of the args, I'm not sure whether/what roles they have in Government, I think of many of them as being in "policy think tanks" that are outside of government. Matheny was in the White House for a while but now he runs RAND; if he were still there I would be wrong and there would be at least one person who I believe gro... (read more)

Consider an average human, who understands goodness enough to do science without catastrophic consequences, but is not a benevolent sovereign.

If "science" includes "building and testing AGIs" or "building and testing nukes" or "building and testing nanotech", then I think the "average human" "doing science" is unaligned.

I have occasionally heard people debate whether "humans are aligned". I find it a bit odd to think of it as a yes/no answer. I think humans are good at modeling some environments and not others. High-pressure environments with superstimuli ... (read more)

It's not clear to me that OpenAI has a clear lead over Anthropic in terms of capabilities.

1dsj1d
I believe Anthropic is committed to not pushing at the state-of-the-art, so they may not be the most relevant player in discussions of race dynamics.

Just repeating from my other comments: my main issue is the broad proposal of "let's get governments involved", which appears not to be aware of all the horrible and corrupt things governments do by default when they get involved (cf. Covid), and doesn't propose any ways to avoid lots of dysfunction.

1Leo11h
Just compare how collaborative governments are at solving global warming versus the Gates Foundation at solving polio.

When we did Scott's petition, names were not automatically added to the list; each name was read by me or Jacob, and if we were uncertain about one we didn't add it without checking in with others or thinking it over. This meant that added names were staggered throughout the day, because we only checked every hour or two, but overall it prevented a number of fake names from getting on there.

(I write this to contrast it with automatically adding names then removing them as you notice issues.)

I've slept, and now looked it over again.

Such decisions must not be delegated to unelected tech leaders.

I don’t agree with the clear implication that the problem with tech leaders is that they weren’t elected. I commonly think their judgment is better than people who are elected and in government. I think competent and elected people are best, but given the choice between only competent or only elected (in the current shitty electoral systems of the UK and the US that I am familiar with), I think I prefer competent.

If such a pause cannot be enacted quickly

... (read more)
2Evan R. Murphy1d
Just want to flag that I'm pretty sure this isn't true anymore. At least a few important people in the US government (and possibly many) have now taken this course [https://www.gladstone.ai/course]. I am still in progress on my technical review of the course for AIGS Canada, but my take so far is that it provides a good education on relevant aspects of AI for a non-technical audience and also focuses quite a bit on AI existential risk issues. (I know this is only one point out of many you made, but I wanted to respond to it when I spotted it and had time.)

I wasn't referring specifically to the OP when I wrote that; I meant I ought to push back against a pretty wide swath of possible arguments and pressures against publishing criticism. Nonetheless I want to answer your question.

My answer is more "yes" than "no". If someone is to publish a critique of an EA org and hasn't shown it to the org, people can say "But you didn't check it with the org, which we agreed is a norm around here, so you don't seem to be willing to play by the rules, and I now suspect the rest of this post is in bad faith." Yet I think it'... (read more)

2TekhneMakre1d
Aha. Now it seems to me that my reading of the OP and my reaction, as well as others' reading of my comments, have both followed this pattern:

  1. P1 implicitly proposes to call on some social machinery in some way (in jefftk's case, the machinery of norm setting; in my case, the machinery of group epistemology)
  2. P2 objects to a wide swath of proposals to call on that machinery (you, me, and others pushing back on this norm setting; jefftk and others pushing back against trusting group epistemology)
  3. P1 is confused about the response to some other proposal, or the imputation of claims not made

In both cases I think that the most salient thing should be: this social machinery is broken. The norm-setting/enforcing machine is broken, and the group-epistemology machine is broken.

(Also Zoe Curzi and Leverage. Really there’s a lot of examples.)

Also examples on the other side, I would note. Without a healthy skepticism of anonymous or other kinds of reduced-accountability reports, one would've been led around by the nose by Ziz's attempts.

-4jefftk1d
In cases where you're worried about bad behavior by an org or have had a bad experience with them and don't want to interact with them (including the examples you described above), I agree it's fine to go ahead without sending it to them. On the other hand, I think this is only rarely the case for critical posts? The larger category, where this doesn't apply, is what I was trying to address here. I should edit the post to include this, though I need to think about the wording and don't have time to make the change right now.

Oh, I didn't read that correctly. Good point.

I am concerned about some other parts of it, that seem to imbue a feeling of "trust in government" that I don't share, and I am concerned that if this letter is successful then governments will get involved in a pretty indiscriminate and corrupt way and then everything will get worse; but my concern is somewhat vague and hard to pin down.

I think it'd be good for me to sleep on it, and see if it seems so bad to sign on to the next time I see it.

7Ben Pace1d
I've slept, and now looked it over again.

I don't agree with the clear implication that the problem with tech leaders is that they weren't elected. I commonly think their judgment is better than people who are elected and in government. I think competent and elected people are best, but given the choice between only competent or only elected (in the current shitty electoral systems of the UK and the US that I am familiar with), I think I prefer competent.

I don't think this threat is very good. Firstly, it seems a bit empty. This is not the government speaking, I don't know that FLI is in a position to make the government do this. Secondly, it doesn't feel like… it feels closer to just causing uncontrollable chaos than it does to doing anything sensible. Maybe it works, but I haven't seen any arguments that governments won't just become pretty corrupt and mess everything up if given pretty vague yet significant power.

I would much rather the current status quo where I think, in principle, you can have a conversation with the relevant decision-makers and have arguments with them, rather than surrender to the government bureaucracies where pretty commonly there is nobody with any power to do anything differently, and any free power is being competed over by the biggest possible factions in the country, and also a lot of the unaccountable decision-makers are pretty corrupt (cf. Patrick McKenzie and Zvi's writing about how Covid was dealt with), and where close to literally zero people understand how the systems work or the arguments for existential risks here.

I think people writing about realistic and concrete stories of how this could go well would change my mind here, but I don't want to put my name to this threat, seems overall like reducing civilization's agency in the matter, and I wouldn't currently take this action myself.

I think a bunch of the rest of the letter is pretty good, I like the AI summer and AI fall bit at the end (it is tasteful in relation to

Ah, I was under a misapprehension, I thought the data was much more recent, but the GPT-4 page says:

GPT-4 generally lacks knowledge of events that have occurred after the vast majority of its data cuts off (September 2021)

However that is after GPT-3 was released (June 2020), so it's a new dataset.

Extrapolating naively, 2 years from now we will see GPT-5 trained on data from today. 
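To spell out that naive extrapolation (a rough sketch, not a prediction: GPT-4's release date is public, its cutoff is quoted above, the roughly constant cutoff-to-release lag is the assumption doing all the work, and the 2025 date is purely hypothetical):

```python
from datetime import date

# Public reference points: GPT-4 shipped in March 2023 with the
# September 2021 training-data cutoff quoted above.
gpt4_release = date(2023, 3, 14)
gpt4_cutoff = date(2021, 9, 1)

lag = gpt4_release - gpt4_cutoff
print(f"cutoff-to-release lag: ~{lag.days // 30} months")  # ~18 months

# Naive assumption: a successor released ~2 years later keeps the same lag,
# so its training data would run up to roughly the time of writing or later.
hypothetical_next_release = date(2025, 3, 14)
print("implied cutoff:", hypothetical_next_release - lag)  # ~2023-09
```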

My model is that OpenAI and Anthropic researchers set up a web-scraper that reads through lots of popular internal reddit links (or possibly literally all of reddit) and then uses all of that as the training data for their language models.

...googling shows this as the official answer for GPT-3, which contains a lot of the popular and public internet. I am unclear whether that contains reddit, but if not then I believe I heard that they made a crawler specifically for reddit.
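For a rough sense of the mechanism, here is a minimal sketch of what such a crawler might look like (purely illustrative: the subreddit, the public JSON listing endpoint, the helper names, and the complete lack of filtering are choices made for the example; real training pipelines are far more involved, with deduplication, quality filters, and large archive dumps):

```python
# Toy "scrape the popular links from a subreddit" crawler, for illustration only.
import requests
from bs4 import BeautifulSoup

HEADERS = {"User-Agent": "toy-crawler/0.1"}  # Reddit rejects the default user agent

def popular_links(subreddit: str, limit: int = 25) -> list[str]:
    """Return the outbound URLs of a subreddit's top posts via the public JSON listing."""
    listing = requests.get(
        f"https://www.reddit.com/r/{subreddit}/top.json",
        params={"limit": limit, "t": "year"},
        headers=HEADERS,
        timeout=30,
    ).json()
    return [child["data"]["url"] for child in listing["data"]["children"]]

def page_text(url: str) -> str:
    """Fetch a linked page and strip it down to its visible text."""
    html = requests.get(url, headers=HEADERS, timeout=30).text
    return BeautifulSoup(html, "html.parser").get_text(separator="\n", strip=True)

if __name__ == "__main__":
    corpus = [page_text(u) for u in popular_links("slatestarcodex", limit=5)]
    print(f"collected {sum(len(doc) for doc in corpus)} characters of text")
```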

4the gears to ascension2d
But are they going to do that again? GPT4 used the same training set as GPT3 didn't it?

I don't agree with the recommendation, so I don't think I should sign my name to it.

To describe a concrete bad thing that may happen: suppose the letter is successful and then there is a pause. Suppose a bunch of AI companies agree to some protocols which they say "ensure that systems adhering to them are safe beyond a reasonable doubt". If I (or another signatory) were then to say "But I don't think that any such protocols exist", I think they'd be within their rights to say "Then why on Earth did you sign this letter saying that we could find them within 6 months?" and then not trust me again to mean the things I say publicly.

The letter says to pause for at least 6 months, not exactly 6 months.

So anyone who doesn't believe that protocols exist to ensure the safety of more capable AI systems shouldn't avoid signing the letter for that reason, because the letter can be interpreted as supporting an indefinite pause in that case.

I think scraping reddit is common. The SSC subreddit is pretty popular. I wonder if there could be a post on that subreddit that was just a space for people to publish books in the comments.

2the gears to ascension2d
I feel like we have very different models of how people get their datasets. I'm pretty sure you'd have to just hand someone a dataset and say "here I downloaded some books for your agi kid to read"

I concur, the typo in "Poeple" does call into question whether he has truly signed this letter.

I think this letter is not robust enough to people submitting false names. Back when Jacob and I put together DontDoxScottAlexander.com we included this section, and I would recommend doing something pretty similar:

I think someone checking some of these emails would slow down high-profile signatories by 6-48 hours, but sustain trust that the names are all real.

I'm willing to help out if those running it would like, feel free to PM me.

5Evan R. Murphy1d
Yea they made a mistake not verifying signatures from the beginning. But they have course-corrected, see this notice FLI has posted now above the signatories list: Screenshot [https://photos.app.goo.gl/jeRi2RJ1KNyPaH3F8]

I believe the high-profile names at the top are individually verified, at least, and it looks like there's someone behind the form deleting fake entries as they're noticed. (Eg Yann LeCun was on the list briefly, but has since been deleted from the list.)

Oh no. Apparently Yann LeCun also didn't really sign this.

https://twitter.com/ylecun/status/1640910484030255109

Indeed.  Among the alleged signatories:

Xi Jinping, Poeple's Republic of China, Chairman of the CCP, Order of the Golden Eagle, Order of Saint Andrew, Grand Cordon of the Order of Leopold

Which I heavily doubt.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.

That's nice, but I don't currently believe there are any audits or protocols that can prove future AIs safe "beyond a reasonable doubt".

In parallel, AI developers must work with policymakers to dramatically accelerate development of ro

... (read more)
4evhub1d
I think you can do this with a capabilities test (e.g. ARC's [https://www.alignmentforum.org/posts/4Gt42jX7RiaNaxCwP/more-information-about-the-dangerous-capability-evaluations]), just not with an alignment test (yet [https://www.alignmentforum.org/posts/uqAdqrvxqGqeBHjTP/towards-understanding-based-safety-evaluations]).
9Evan R. Murphy2d
The letter isn't perfect, but the main ask is worthwhile as you said. Coordination is hard, stakes are very high and time may be short, so I think it is good to support these efforts if they are in the ballpark of something you agree with.

I'm not saying it won't improve someone's post to get direct feedback from us, and I'm not saying it might not end up reducing some amount of effort from someone on the Lightcone team to respond to things people are wrong about, but my current model is that for people to have justified belief in their model of the work that an org does, they should believe they would have heard negative info about us if it exists, and so I ought to encourage people to be openly severely critical and push back against demands to not write their criticism for a pretty large ... (read more)

-2jefftk2d
Do you read my post as a demand that people not write their criticism?

For the record I don't think anyone needs to check with Lightcone before criticizing any of our work.

3RobertM2d
To the extent that this is saying that you expect Lightcone to pay fewer (unjustified) costs than most EA orgs from people publishing criticisms without pinging us first, I think that's probably true. Those costs are obviously not zero, and it's not obvious to me that the "correct" policy for any given individual who might want to publish such a criticism is to skip that step by default, if they're optimizing for the propagation of true information.

The reason most people should at least consider skipping that step is that, as Jeff points out, this expectation is a non-trivial cost that will sometimes cause the publication to not happen. However, if you're a person for whom the cost of that kind of check-in is low, there may often be an intersection above zero between the curves of your cost and the improved output you'd get from the check-in.

For the record I think "the correct number of people to die as a result of technological progress is not zero". My issue is that the correct number is not "all of the people".

2trevor1d
I'm a bit confused, I was mainly thinking about getting inside the heads of the devs and executives/lawyers who work on implementing the prudishness of public chatbots.

My current understanding is that Sam gained good standing as a result of having lots of money for EA causes, not as a result of being charismatic in EA spaces? My sense is that the person you mentioned would struggle to gain good standing in the Lightcone offices without any preexisting money or other power. 


No, he gained good standing from being around the EA community for so many years and having sophisticated ethical views (veganism, a form of utilitarianism, etc) and convincing well-respected EAs to work with him and fund him, as well as from havi... (read more)

Sorry if I wrote unclearly. For most of the time (even in the last 6 months) I thought it was worth continuing to support the ecosystem, and certainly to support the people in the office, even if I was planning later to move on. I wanted to move on primarily because of the opportunity cost — I thought we could do something greater. But I believe Habryka wanted to separate from the whole ecosystem and question whether the resources we were providing were actually improving the world at all, and at that point it's not simply a question of opportunity cost bu... (read more)

2016-17

Added: To give context, here's a list of number of LW posts by year:

  • 2009: 852
  • 2010: 1143
  • 2011: 3002
  • 2012: 2583
  • 2013: 1973
  • 2014: 1797
  • 2015: 2002 (<– This should be ~1880, as we added all ~120 HPMOR posts and backdated them to 2015)
  • 2016: 1303 (<– This is the most 'dead' year according to me, and the year with the fewest posts)
  • 2017: 1671 (<– LW 2.0 revived in the second half of this year)
  • 2018: 1709
  • 2019: 2121
  • 2020: 3099
  • 2021: 3230
  • 2022: 4538
  • First quarter of 2023: 1436, if you 4x that it is 5744 

(My, it's getting to be quite a lot of posts these days.)

This is an odd response from me, but, recently for my birthday, I posted a survey for my friends to fill out about me, anonymously rating me on lots of different attributes. 

I included some spicier/fun questions, one of which was whether they thought they were smarter than me or not. 

Here were the results for that question:

[Forms response chart: "(Optional) Is Ben smarter than you, or are you smarter than Ben?" (44 responses)]

It was roughly 50/50 throughout the entire time data came in over the two days.

The vast majority of people responding said that they'd read the sequences (either "some" or "yes"). I'd guess that basically everyone had except my... (read more)

Even though Eliezer claims that there was no fire alarm for AGI, perhaps this is the fire alarm?

I mean, obviously not, as most governments don't know that metaculus exists or what a prediction market is.

And if you block any one path to the insight that the earth is round, in a way that somehow fails to cripple it, then it will find another path later, because truths are interwoven. Tell one lie, and the truth is ever-after your enemy.

In case it's of any interest, I'll mention that when I "pump this intuition", I find myself thinking it essentially impossible that we could ever build a general agent that didn't notice that the world was round, and I'm unsure why (if I recall correctly) I sometimes read Nate or Eliezer write that they think it's quit... (read more)

1baturinsky9d
It's probably not a good idea to feed an AI inconsistent data. For example, if evidence shows that the Earth is round, but the AI is absolutely sure it isn't, it will doubt any evidence of that, which could lead to a very weird world view. But I think it's possible to make the AI know about the fact while avoiding thinking about it.
3Gerald Monroe9d
Couldn't you just prompt a different model to modify all the training data, both text and images, so that the data is consistent with the earth being flat, or state that it is impossible to do so? The model wouldn't be allowed to learn from user sessions (like gpt-n) or to generate answers and reflect on its own beliefs (used to fine-tune gpt-4).
2JNS9d
Doable in principle, but such measures would necessarily cut into the potential capabilities of such a system. So basically a trade-off, and IMO very worth it.

The problem is we are not doing it, and, more basically, people generally do not get why it is important. Maybe it's the framing, like when EY goes "superintelligence that firmly believes 222+222=555 without this leading to other consequences that would make it incoherent" [https://www.alignmentforum.org/posts/HByDKLLdaWEcA2QQD/?commentId=S2wch4gagHCkG6t6i]. I get exactly what he means, but I suspect that a lot of people are not able to decompress and unroll that into something they "grok" on a fundamental level.

Something like "superintelligence without knowledge about itself that never reasons about itself, without this leading to other consequences that would make it incoherent" would cut out a ton of lethality, and if you combine that with giving such a thing zero agency in the world, you might actually have something that could do "things we want, but don't know how to do" without it ending us on the first critical try.

  • the AGI was NOT exercising its intelligence & reason & planning etc. towards an explicit, reflectively-endorsed desire for “I am being helpful / I am being docile / I am acting with integrity / blah blah”.

I am naively more scared of such an AI. That AI sounds more like, if I say "you're not being helpful, please stop", it will respond "actually, I thought about it, I disagree, I'm going to continue doing what I think is helpful".

I think that, if an AGI has any explicit reflectively-endorsed desire whatsoever, then I can tell a similar scary story: The AGI’s desire isn’t quite what I wanted, so I try to correct it, and the AGI says no. (Unless the AGI’s explicit endorsed desires include / entail a desire to accept correction! Which most desires don’t!)

And yes, that is a scary story! It is the central scary story of AGI alignment, right? It would be nice to make an AGI with no explicit desires whatsoever, but I don’t think that’s possible.

So anyway, if we do Procedure X which will n... (read more)

The fact that some people in EA (a huge broad community) are probably wrong about some things didn't seem to be an argument that Lightcone Offices would be ineffective as (AFAIK) you could filter people at your discretion. 

I mean, no, we were specifically trying to support the EA community; we do not get to unilaterally decide who is part of the community. People I don't personally have much respect for, but who are members of the EA community putting in the work to be considered members in good standing, definitely get to pass through. I'm not goin... (read more)

And these are both real obstacles. But there are deeper obstacles, that seem to me more central, and that I haven't observed others to notice on their own.

I brainstormed some possible answers. This list is a bit long; I'm publishing it anyway because it's not worth the half hour to make it concise, it seemed worth trying the exercise before reading the post, and possibly others will find my quick attempt worth seeing.

I think the last two bullets are probably my best guesses. Nonetheless here is my list:

  • Just because an AI isn't consciously deceptive
... (read more)

For contrast, over the same time period, $185k/month could provide salary, lodging and office space for 50 people in Europe, all of whom counterfactually would not be doing that work otherwise, for which I claim 50 man-months per month of extra x-risk reduction work.

The default outcome of giving people money, is either nothing, noise, or the resources getting captured by existing incentive gradients. In my experience, if you give people free money, they will take it, and they will nominally try to please you with it, so it's not that surprising if you can fi... (read more)

the local incentives of those with high status agree with performance quantification just fine, so long as the metric in question is one by which they're already doing well.

To me this rhymes pretty closely with the message in Is Success the Enemy of Freedom?, in that in both cases you're very averse to competition on even pretty nearby metrics that you do worse on.

The vast majority of users see a little circle widget in the bottom right. For instance I can see it now as I look at your comment.

Sometimes users have added things to their browsers that remove it, and you can also remove it in your account settings.

1Diabloto9611d
ah yes I see, thank you, it was Brave's fault, time to finally do my switch to Firefox

People think the speed-up by rationalists is only ~5 years? I thought people were thinking 10-40. I do not think I would trade the entire history of LessWrong, including the articulation of the alignment problem, for 5 years of timelines. I mean, maybe it's the right call, but it hardly seems obvious. 

When LessWrong was ~dead (before we worked on the revival) I had this strong sense that being able to even consider that OpenAI could be bad for the world, or the notion that the alignment problem wasn't going to go okay by-default, was being edged out o... (read more)

1Chris van Merwijk5d
"When LessWrong was ~dead" Which year are you referring to here?

I mean, I don't see the argument for more than that. Unless you have some argument for hardware progress stopping, my sense is that things would get cheap enough that someone is going to try the AI stuff that is happening today within a decade.

A few replies:

That is ~$185k/month and ~$2.22m/year. I wonder if the cost has anything to do with the decision? There may be a tendency to say "an action is either extremely good or extremely bad because it either reduces x-risk or increases x-risk, so if I think it's net positive I should be willing to spend huge amounts of money."

I don't think cost had that much to do with the decision, I expect that Open Philanthropy thought it was worth the money and would have been willing to continue funding at this price point. 

In general I think the correct re... (read more)

-1Jakub Kraus2d
This is confusing to me. Why not do "something quite different" from the start?

I'm trying to point at opportunity costs more than "gee, that's a lot of money, the outcome had better be good!" There are many other uses for that money besides the Lightcone offices.

My current understanding is that Sam gained good standing as a result of having lots of money for EA causes, not as a result of being charismatic in EA spaces? My sense is that the person you mentioned would struggle to gain good standing in the Lightcone offices without any preexisting money or other power.
3Jonathan Claybrough11d
You're probably not the one to rant to about funding, but I guess while the conversation is open I could use additional feedback and some reasons why OpenPhil wouldn't be irresponsible in spending the money that way. (I only talk about OpenPhil and not particularly Lightcone; maybe you couldn't think of better ways to spend the money and didn't have other options.)

Cost-effectiveness calculations for reducing x-risk kinda always favor x-risk reduction, so looking at it in the absolute isn't relevant. Currently AI x-risk reduction work is limited because of severe funding restrictions (there are many useful things to do that no one is doing for lack of money), which should warrant carefully done triage (and in particular considering the counterfactual).

I assume the average Lightcone office resident would be doing the same work with slightly reduced productivity (let's say 1/3) if they didn't have that office space (notably because many are rich enough to get other shared office space from their own pocket). Assuming 30 full-time workers in the office, that's 10 man-months per month of extra x-risk reduction work.

For contrast, over the same time period, $185k/month could provide salary, lodging and office space for 50 people in Europe, all of whom counterfactually would not be doing that work otherwise, for which I claim 50 man-months per month of extra x-risk reduction work. The biggest difference I see is that incubation time would be longer than for the Lightcone offices, but if I started now with $20k/month I'd find 5 people and scale it up to 50 by the end of the year.
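To make the comparison explicit under Jonathan's own numbers (a back-of-the-envelope sketch; the 30 residents, the 1/3 productivity effect, and the 50 fully counterfactual hires are his stated assumptions, not measurements):

```python
# Rough cost-per-marginal-man-month comparison using only the numbers above.
monthly_cost = 185_000  # $/month for the Lightcone offices

# Lightcone offices: 30 residents, each assumed ~1/3 more productive with the space
residents = 30
marginal_man_months = residents * (1 / 3)  # ~10 per month
print(f"offices: ${monthly_cost / marginal_man_months:,.0f} per marginal man-month")  # ~$18,500

# Proposed alternative: 50 fully counterfactual people in Europe on the same budget
europe_headcount = 50
print(f"europe:  ${monthly_cost / europe_headcount:,.0f} per marginal man-month")  # ~$3,700
```

Whether that roughly 5x gap holds depends entirely on the counterfactual-productivity assumptions, which is the part Ben questions in his reply above.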

The hard question is "how much goodharting is too much goodharting".

You did see part of it before; I posted in Open Thread a month ago with the announcement, but today Ray poked me and Oli to also publish some of the reasoning we wrote in slack.

Yeah, but that doesn't sound like my strategy. I've many times talked to people who are leaving or have left and interviewed them about why, what they didn't like, and their reasons for leaving.

5ojorgensen22d
Didn't get that impression from your previous comment, but this seems like a good strategy!

Thanks for saying. Sounds like another piece I will skip! 

While I am generally interested in justice around these parts, I generally buy the maxim that if the news is important, I will hear the key info in it directly from friends (this was true both for Covid and for the Russia-nukes stuff), and that otherwise the news media spend enough effort on narrative control that I'd much rather not even read the media's account of things.

This seems like a bad rule of thumb. If your social circle is largely comprised of people who have chosen to remain within the community, ignoring information from "outsiders" seems like a bad strategy for understanding issues with the community.

I also went to hpmor.com yesterday, was disappointed that the old site was gone and now redirects to a relatively under-optimized LessWrong page, and complained to the LW team about this.

I think mostly somewhat confused? 

Though I've never met her, from her writing and things others have told me, I expect LaSota seems much more visibly out-of-it and threatening than e.g. Michael, whom I have met and who didn't seem socially alarming or unpredictable in the way where you might be scared of a sudden physical altercation.

I think Vassar is alarming and unpredictable in a way that causes people to be afraid of a sudden physical altercation. For example, I have felt scared of physical altercations with him. If I recall correctly, he raised his voice while telling a friend of mine that he thought they were worse than the Nazis during a conversation in a hotel lobby, which freaked out other people who were in the lobby (I don't remember how my friend felt).

7DanielFilan1mo
For what it's worth, when I've talked to Ziz, she didn't seem out of it, threatening, alarming, or unpredictable to me, and I wasn't scared of a sudden physical altercation.

Not sure if this answers your question, but recently I had an assistant who would ask me questions about how I was feeling. Often, when I was in the midst of focusing on some difficult piece of work, I would answer "I don't know", and get back to focusing on the work. 

My vague recollection is that she later showed me notes she'd written that said I was sighing deeply, holding my forehead, had my shoulders raised, was occasionally talking to myself, and I came to realize I was feeling quite anxious at those times, but this information wasn't accessible... (read more)

I think I've been implicitly coming to believe that (a) all people are feeling emotions all the time, but (b) people vary in how self-aware they are of these emotions.

Does anyone want to give me a counter-argument or counter-evidence to this claim?

2Dagon1mo
I think I need an operational definition of "feeling emotion", especially when not aware of it, in order to agree or disagree. I think for many reasonable definitions, like "illegible reactions below the level of self-modeling of causality", it's extremely common for this to affect almost everyone almost all the time. I'll still dispute "all", but it wouldn't surprise me if it were close. It is still highly variable (over time and across individuals) how much impact emotions have on behaviors and choices. And if you mean to imply "semi-legible abstract structures with understandable causes, impacts, and ways to communicate about them", then I pretty much fully disagree.

Note that as someone who is sometimes less aware of (and I believe less impacted by) their emotions than many seem to be, I strenuously object to being told what I'm feeling by someone who has no clue what (if anything) I'm feeling. And if you're rounding "low impact" to "not feeling", I object to being excluded from the set of "all people".

(Only because it's relevant) Note that my "strenuous objection" is mostly about the lack of precision or correctness of the statement - you're free to believe what you like. I'm not actually offended, as far as I can tell.
4Vladimir_Nesov1mo
People vary in how relevant their emotions are to anything in their life.

I heard that LaSota ('ziz') and Michael interacted, but I am sort of under the impression she was always kind of violent and bizarre before that, so I'm not putting much of this bizarreness down to Michael. Certainly interested in evidence about this (here or in DM).

It sure sounds like you think outsiders would typically have the "common sense" to avoid Ziz. What do you think such an outsider would make of this comment?
