All of swarriner's Comments + Replies

(this comment is copied to the other essay as well)

I respect the attempt, here, and I think a version of the thesis is true. Letting go of control and trying to appreciate the present moment is probably the best course of action given that one is confronted with impending doom. I also recognize that reaching this state is not just a switch one can immediately flip in one's mind; it can only be reached by way of practice.

With these things in mind, I am still not okay. More than anything I find myself craving ignorance. I envy my wife; she's not in ratspaces... (read more)

[comment deleted] (1mo)
Mostly agreed, these days I don't talk about AI safety with anyone if they don't already know about it, precisely because it wouldn't be kind at all. If you end up never coming back on here: I wish you a happy life, man, may we all do the best we can in the time we have left.
(I have a strong personal aversion to the balm of ignorance/feeling better because of not seeing what's true, so I strong disagree-voted but upvoted the comment in general.)

(this comment is copied to the other essay as well)

I respect the attempt, here, and I think a version of the thesis is true. Letting go of control and trying to appreciate the present moment is probably the best course of action given that one is confronted with impending doom. I also recognize that reaching this state is not just a switch one can immediately flip in one's mind; it can only be reached by way of practice.

With these things in mind, I am still not okay. More than anything I find myself craving ignorance. I envy my wife; she's not in ratspaces... (read more)

Gretta Duleba (1mo):
Do what you need to do to take care of yourself! It sounds like you don't choose to open up to your wife about your distress, for fear of causing her distress. I follow your logic there, but I also hope you do have someone you can talk to about it whom you don't fear harming, because they already know and are perhaps further along on the grief / acceptance path than you are. Good luck. I wish you well.

It's the "Boy Who Cried Wolf" fable in the format of an incident report, such as might be written in the wake of an industrial disaster. Whether the fictional report writer has learned the right lessons is, I suppose, an exercise left to the reader.

My advice would be this:

Trying to meet people for the sole purpose of dating them is a spiritually toxic endeavor, with online dating being particularly bad. I had a handful of girlfriends before meeting my wife, none of whom came to me through online dating or trying to get dates with people I didn't know.

I contend that the best path to a relationship is through community, broadly defined. What you want is to be around people with whom you can cultivate compatibility. The online dating/cold approach model relies on being able to quickly discern compatibil... (read more)

Brendan Long (3mo):
I disagree that intentionally going on dates is "spiritually toxic". You don't really need to be able to discern compatibility quickly (you can always go on another date). I approached dates as "doing something fun with a friend who thinks I'm hot", and even though I didn't end up seriously dating most of them, they were still fun experiences (having conversations at coffee shops and restaurants, hiking, paddleboarding, etc.). I do think dating people in your community is easier than online dating, but my experience is that finding a community is much harder than finding someone to date. Maybe it's a case of which one you're better at, though. For what it's worth, in my friend group, half of us are dating or married to people we met through online dating, and the other half are with people they met in college. I only know one person married to someone they met in other ways, and their method isn't helpful (be so hot that people will hit on you at the gym).

You're not wrong. Learning to crimp really does enable climbers to perform feats that others cannot, and plenty of them suffer injuries like the one I've linked to and decide to heal and keep going. My addendum isn't "never do something hard or risky," it's "pain is a warning; consider what price you are willing to pay before you go pushing through it."

Addendum: Crimp grips are a major cause of climbing injuries. It's sheer biomechanics; the crimp grip puts massive stress on connective tissues which aren't strong enough to reliably handle it.

The moral of the addendum: choose your impossible challenges wisely; even if you can overcome them, the stress and pain might have been a warning from the beginning. If nothing else, it should be a warning to get some good advice about prevention, or you may find yourself unable to pursue your goal for weeks at a time.

Oh, this is really interesting; I did not know this. Thank you for bringing it up. It definitely undercuts the metaphor, but do you think the main flow of the post still stands? Curious if you have any thoughts.

It's going to be tricky. You may already be too close to the situation to judge impartially, and a case study is going to be difficult to use as evidence against population-level surveys of well-being, especially for your implied time horizon. You could attempt to benchmark against previous work, e.g. see what the literature has to say about the effects of poverty on diet, educational attainment, etc. in first-world cities, but your one new data point still won't generalize and it wouldn't be doing the heavy lifting in your argument for localism at that point.

I have never met the family and don't plan on meeting them in person. A case worker will be in contact with them. I aim to maintain that distance, merely asking the case worker for data to plug into my spreadsheet. The problem is that I do not know what data I should be plugging in.

Unless I'm very much mistaken, "emergency mobilization systems" refers to autonomic responses like a pounding heartbeat, heightened subjective senses, and other types of physical arousal; i.e., the things your body does when you believe someone or something is coming to kill you with spear or claw. Literal fight-or-flight stuff.

In both examples you give there is true danger, but your felt bodily sense doesn't meaningfully correspond to it; you can't escape or find the bomb by being ready for an immediate physical threat. This is the error being referred to. In both cases the preferred state of mind is resolute problem-solving, and being unable to register anything but a felt sense of panic will likely reduce your ability to get to such a state.

This. I think a lot of the problems with emergency mobilization systems relate to that feeling of immediacy when the threat isn't actually immediate. A lot of emergencies are way too long-term for us, so we engage emergency mobilization systems even when they don't apply.

I think I see your point, but I'm not sure how to answer the question as you posed it, so let me make an analogy:

Imagine I come to you and say, "I have a new car design that will revolutionize the market and break the chokehold of big auto! Best of all, it's completely safe; the locks are unpickable and the windows are unbreakable, so no one will ever be able to mug you in your car!"

You would be wise to ask "Okay, but what about in a crash? Is it safe there?" and the truth would be no, not really. Actually people who get caught in crashes with t... (read more)

This is a really good way of putting it, and I've never thought about it this effectively before. Before reading this, my best thoughts along these lines on crypto were comparing cryptocoins to gold reserves or corn or oil, which are difficult to flood the market with. But, like, you might end up disappointed if a new cold war starts and world gold reserves get sold off. That risk is nothing, though, compared to crypto exchanges tanking the industry, or everything related to hacking. If AI becomes a fire hose of cognitive labor that makes everything topsy-turvy, then that's even worse for crypto.

I'd say that's a good point, but perhaps it doesn't exhaustively cover all the problems. The way I've come to think about crypto, which I think is roughly congruent, is that the things crypto is good at (decentralization, security against hacking) are not major vectors of attack by bad actors under the status quo, and the things it isn't robust against (social engineering, obfuscation of value) definitely are.

how broad did you intend “status quo” to read here (status quo of the non-crypto society, status quo of the crypto-only society, or status quo of all of society)? there’s a certain interpretation of this comment which is almost tautological: the things any social system is good at are (definitionally?) the things about it which can’t be exploited, the things it isn’t robust against are those where it can be exploited. but i can’t tell if that’s exactly what you’re getting at, or not.

This can be true, but I think it varies a decent amount with expectations. As my friends get older and more of us have kids to think about, it's becoming more normalized to have a mix of sobriety levels at what would once have been drunk parties.

Any industry with public exposure is going to run into problems. Take retail: having the store open at all possible profitable hours is much more important than having a full complement of staff at any given moment. My job is only adjacent to retail, but even so, having a whole team go on vacation would put the supply chain on pause. That move might technically be possible with advance planning, but it would have major impacts on throughput.

I think any sector that relies on moving physical matter (including people) through space is a bad candidate because yo... (read more)

The thing you've outlined sounds to me like news media, sort of, as well as implicitly leaning on existing news media. The amount of information entailed is comparable; having up-to-date info on over 3000 United States counties is a far from trivial endeavor.[1]

It's different of course in that existing news media isn't remotely incentivized to support this kind of work, instead being caught in the tar pit of getting eyeballs and ad dollars, as well as being an arena which monied interests know they need to optimize for. Of course if the tool you're describ... (read more)

Yes, that's my point. I'm not aware of a path to meaningful contribution to the field that doesn't involve either doing research or doing support work for a research group. Neither is accessible to me without risking the aforementioned effects.

Alex Flint (5mo):
Yeah, right. It does seem like work in alignment at the moment is largely about research, and so a lot of the options come down to doing or supporting research. I would just note that there is a relatively huge amount of funding in the space at the moment -- OpenPhil and FTX are both open to injecting huge amounts of funding and largely don't have enough places to put it. It's not that it's easy to get funded -- I wouldn't say it's easy at all -- but it really does seem like the basic conditions in the space are such that one would expect to find a lot of opportunities to be funded to do good work.
I'll say one thing: I too do not like the AI doomtide/doomerism, despite thinking it's a real problem. You can take breaks from LW, or hide AI posts from your frontpage if they upset you.

I feel like you mean this in kindness, but to me it reads as "You could risk your family's livelihood relocating and/or trying to get recruited to work remotely so that you can be anxious all the time! It might help on the margins ¯\_(ツ)_/¯ "

Alex Flint (5mo):
Why would you risk your family's livelihood? That doesn't seem like a good idea. And why would you go somewhere that you'd be anxious all the time?

AI discourse triggers severe anxiety in me, and as a non-technical person in a rural area I don't feel I have anything to offer the field. I personally went so far as to fully hide the AI tag from my front page, and frankly I've been on the threshold of blocking the site altogether for the amount of content that still gets through by passing reference and untagged posts. I like most of the non-AI content on the site, I've been checking regularly since the big LW2.0 launch, and I would consider it a loss of good reading material to stop browsing, but since DWD I'm takin... (read more)

Alex Flint (5mo):
Thank you for writing this comment. Just so you know, probably you can contribute to the field, if that is your desire. I would start by joining a community where you will be happy and where people are working seriously on the problem.
Yeah, this is a point that I failed to make in my own comment: it's not just that I'm not interested in AIS content or not technically up to speed; it's that seeing it is often actively, extremely upsetting.

Aside from double-counting, here's a problem: you should have just set your starting priors on the false and true statements as x and 1-x respectively, where x is the chance your whole ontology is screwed up, and you'd have been equally well calibrated and much more precise. You've correctly identified that the perfect calibration on 90% is meaningless, but that's because you explicitly introduced a gap between what you believe to be true and what you're representing as your beliefs. Maybe that's your point (that people are trying to earn a rationalist merit badge by obfuscating their true beliefs), but I think at least many people treat the exercise as a serious inquiry into how well-founded beliefs feel from the inside.
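To make the precision point concrete, here's a toy sketch (my own illustration, not from the original exchange; the numbers are assumptions): a forecaster who reports their actual credence 1-x, where x is their real chance of being wrong, scores better on a proper scoring rule like the Brier score than one who hedges everything down to 90%.

```python
import random

random.seed(0)

def brier(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical setup: 1000 statements the forecaster believes are true,
# with a small chance x that their whole ontology is wrong and a given
# belief fails anyway.
x = 0.02
outcomes = [1 if random.random() > x else 0 for _ in range(1000)]

# Strategy A: hedge everything to 90% (the "calibration merit badge" move).
hedged = [0.9] * len(outcomes)

# Strategy B: report the actual credence, 1 - x.
honest = [1 - x] * len(outcomes)

print(brier(hedged, outcomes))  # noticeably larger (worse)
print(brier(honest, outcomes))  # close to x * (1 - x), i.e. roughly 0.02
```

Both strategies are well calibrated in expectation here, but the honest one is strictly more precise, which is the commenter's point.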

[comment deleted] (6mo)

Is there a strong theoretical basis for guessing what capabilities superhuman intelligence may have, be it sooner or later? I'm aware of the speed & quality superintelligence frameworks, but I have issues with them.

Speed alone seems relatively weak as an axis of superiority; I can only speculate about what I might be able to accomplish if, for example, my cognition were sped up 1000x, but I find it hard to believe it would extend to achieving strategic dominance over all humanity, especially if there are still limits on my ability to act and perceive ... (read more)

In some ways this doesn't matter. During the time that there is no AGI disaster yet, AGI timelines are also timelines to commercial success and abundance, by which point AGIs are collectively in control. The problem is that despite being useful and apparently aligned in current behavior (if that somehow works out and there is no disaster before then), AGIs still by default remain misaligned in the long term, in the goals they settle toward after reflecting on what those should be. They are motivated to capture the option to do that, and being put in control of a lot of the infrastructure makes it easy; it doesn't even require coordination. There are some stories about that.

This could be countered by steering the long-term goals and managing current alignment security, but it's unclear how to do that at all, and by the time AGIs are a commercial success it's too late, unless the AGIs that are aligned in current behavior can be leveraged to solve such problems in time. Which is, unclear.

This sort of failure probably takes away the cosmic endowment, but might preserve human civilization in a tiny corner of the future if there is a tiny bit of sympathy/compassion in AGI goals, which is plausible for goals built out of training on human culture, or if it's part of generic values that most CEV processes starting from disparate initial volitions settle on. This can't work out for AGIs with reflectively stable goals that hold no sympathy, so that's a bit of apparent alignment that can backfire.

This feels important but after the ideal gas analogy it's a bit beyond my vocabulary. Can you (or another commenter) distill a bit for a dummy?

I think 3Blue1Brown's videos give a good first introduction to neural nets (the "atomic" description). Does this help?
The ideal gas law describes relations between macroscopic gas properties like temperature, volume, and pressure; e.g., "if you raise the temperature and keep volume the same, pressure will go up." The gas is actually made up of a huge number of individual particles, each with its own position and velocity at any one time, but trying to understand the gas's behavior by looking at a long list of particle positions/velocities is hopeless. Looking at a list of neural network weights is analogous to looking at particle positions/velocities. This post claims there are quantities analogous to pressure/volume/temperature for a neural network (AFAICT it does not offer an intuitive description of what they are).
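For concreteness, the macroscopic relation quoted above is just the ideal gas law, P = nRT/V (standard physics, not from the comment; the example quantities are arbitrary):

```python
R = 8.314  # gas constant, J/(mol*K)

def pressure(n_mol, temp_k, volume_m3):
    """Pressure (Pa) of an ideal gas from its macroscopic state variables."""
    return n_mol * R * temp_k / volume_m3

# Doubling temperature at fixed amount and volume doubles pressure,
# with no reference to any individual particle's position or velocity.
p_cold = pressure(1.0, 300.0, 0.025)
p_hot = pressure(1.0, 600.0, 0.025)
print(p_hot / p_cold)  # approximately 2.0
```

The whole point of the analogy is that this three-variable summary predicts behavior without tracking the ~10^23 particle positions/velocities underneath it.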
Answer by swarriner (Jun 27, 2022):

Epistemic status: intuition and some experience, no sources.

Long-form profiles are mostly a waste of time. The two key weaknesses are 1) the primacy of photos and 2) adversarial communication.

  1. Tinder made millions by realizing that many, many users make snap decisions based on photos, and the rest doesn't produce results anywhere near as strong. That's not to say no one reads the profiles and decides on them, but at most a minority do (and some will just stay anchored to their impression of the photos while reading anyway).

  2. Dating is a "lemon market," where every

... (read more)
Randomized, Controlled (9mo):
High-quality, interesting, funny writing has been a difficult-to-manufacture signal up till now. It's possible GPT-n will change that. But folks on LW are probably filtering for people who will filter for making real pretty with the letter forms.

I see at least two ways in which it isn't a lemon market for everyone/every circumstance:

1. If you value compatibility a lot and seek a long-term relationship, you're wasting your own time if you try to cover up things that some people might consider to be flaws or dealbreakers.
2. Some people are temperamentally quite sensitive to rejection, and rejection hurts more the closer someone gets to the real you. To protect against the pain of rejection at a later point, some people are deliberately very open about their flaws right out of the gate.

Doing a ... (read more)

While that's an admirable position to take and I'll try to take it in hand, I do feel EY's stature in the community puts us in differing positions of responsibility concerning tone-setting.

somebody not publishing the latter is, I'm worried, anticipating social pushback that isn't just from me.

Respectfully, no shit Sherlock, that's what happens when a community leader establishes a norm of condescending to inquirers.

I feel much the same way as Citizen in that I want to understand the state of alignment and participate in conversations as a layperson. I too, have spent time pondering your model of reality to the detriment of my mental health. I will never post these questions and criticisms to LW because even if you yourself don't show up to h... (read more)

You (and probably I) are doing the same thing that you're criticizing Eliezer for. You're right, but don't do that. Be the change you wish to see in the world.

Let me make it clear that I'm not against venting, being angry, even saying to some people "dude, we're going to die", all that. Eliezer has put his whole life into this field and I don't think it's fair to say he shouldn't be angry from time to time. It's also not a good idea to pretend things are better than they actually are, and that includes regulating your emotional state to the point that you can't accurately convey things. But if the linchpin of LessWrong says that the field is being drowned by idiots pushing low-quality ideas (in so many words), t... (read more)

Yup, I've been disappointed with how unkindly Eliezer treats people sometimes. Bad example to set. 

EDIT: Although I note your comment's first sentence is also hostile, which I think is also bad.

Even an unknown member of parliament has still been tested against a competitive market, has at least met many or all the key power brokers, etc. They're much closer to the president end of the spectrum than the random citizen end.

To add to the consensus— if a random person actually had a favorable matchup against a career politician of any stripe, that would be a massive low-hanging fruit for existing political actors to capitalize on. The RNC and DNC would be falling all over themselves to present "average joe" candidates if doing so provided a consistent advantage. They're not, so it follows that either those organizations are both highly un-optimized for winning elections (probably not; too much money at stake) or else that the evidence doesn't bear out that they should.

Eli Tyre (10mo):
I don't buy this. I think the RNC and DNC are not just trying to win votes; the individuals within those orgs are mainly trying to preserve and increase their own power, which entails winning votes, but also entails a bunch of internal coalition stuff. Grabbing an excellent non-politician, grooming him or her for power, and then attempting to put them in a position of power without "climbing the ranks" isn't obviously in the interest of any of the party officials of either party.

I'm coming back to this thread having just seen the movie and really enjoyed it. femtogrammar's remarks about the emotional core of the movie partially resonate with me, in that there's a strong thread of making the active choice to live one's life as opposed to being swept along in it. I would elaborate that I think this is a movie about gaining perspective in the face of struggle. I also think the sci-fi action elements are quite effective in communicating this theme as well as being very technically well executed.

Basically, each act of the movie shows E... (read more)

It seems to me like a big part of the picture here is legibility. Social and private boundaries are a highly illegible domain, and that state of affairs is in conflict with the desires of a society which is increasingly risk-averse. To stick with the language of this particular analogy, a successful benign violation for you is one that shows metis over the domain of "living with Duncan". On the flip side, the illegibility makes it harder for you to distinguish between malicious probing for weakness and innocent misjudgment, and for the other party to disti... (read more)

I find this comment offensive.

First, your description of the process of consent is not universal; it doesn't describe any relationship I've been in going all the way back to when I was a teenager. At the very least this should tell you that this series of events wasn't acceptable because it's "just the way humans interact." Many men, including myself, actually talk to the women we want to have sex with, and "having lower amounts of sex" is far from an adequate reason to resort to the boundary-pushing and manipulation you describe.

Second, "the fact that you... (read more)

Thank you for this response and for so clearly pointing out these issues. I also found this comment offensive and hostile - and do not think I could have better or more concisely articulated these points as you have or that they would have been as well-received coming from myself. I appreciate the clarity, awareness, and directness in your response. 
Ah, I think you're right, it was very unkind to post this here, and I regret it. Though I think from your reply and the downvotes that I didn't properly articulate my view, but I don't think that a debate would be especially useful here.

I don't like this defense, for two reasons. First, I don't see why the same argument doesn't apply to the role Eliezer has already adopted as an early and insistent voice of concern; being deliberately vague on some types of predictions doesn't change the fact that his name is synonymous with AI doomsaying. Second, we're talking about a person whose whole brand is built around intellectual transparency and reflection; if Eliezer's predictive model of AI development contains relevant deficiencies, I wish to believe that Eliezer's predictive model of AI developm... (read more)

Three pillars: body, mind, environment.

Body - A varied diet including lots of plants and a mix of proteins— with 5 or 6 figures to spend a month you never need to eat another low-quality convenience meal. An appropriate exercise routine for the subject's level, incorporating at least some light strength training and adding modules as the habits become ingrained and sustainable (candidate exercises include yoga, jogging, crossfit, and kickboxing in no particular order). Sleep hygiene— 6-8 hours on a consistent schedule according to the subject's needs. Quit... (read more)

I don't agree with this as a principle, although it may be a correct output. I think the notion of "a decent default" misses the mark compared to "think about your audience and the key elements of your message before deciding your form and tone."

To use a simple metaphor: if you need to anchor two pieces of wood together, a hammer and nails are usually going to be the quickest and cheapest way to do it, and a drill and screws are often overkill. However, I don't think that makes the hammer and nails the default; I think it makes them the correct tool in the majo... (read more)

I'm not the OP, but I bite that bullet all day long. My parents' last wishes are only relevant in two ways that I can see:

  1. Their values are congruent with my own. If my parents' last wishes are morally repugnant to me, I certainly feel no obligation to help execute them. Thankfully, in real life my parents' values and wishes are fairly congruent with my own, so their request is likely to be something I could evaluate as worthy on its own terms; no obligation needed.

  2. I wish to uphold a norm of last wishes being fulfilled. This has to meet a minimum

... (read more)
Jackson Wagner (1y):
Your #2 motivation goes pretty far, so this is actually a much bigger exception to your bullet-bite than you might think. The idea of "respecting the will of past generations to boost the chances that future generations will respect your will" goes far beyond sentimental deathbed wishes and touches big parts of how cultural & financial influence is maintained beyond death. See my comment here.

Facts and data are of limited use without a paradigm to conceptualize them. If you have some you think are particularly illuminative, though, by all means share them here.

My main point is that there is not enough evidence for a strong claim like doom-soon. In the absence of hard data, anybody is free to cook up arguments for or against doom-soon. You may not like my suggestion, but I would strongly advise getting deeper into the field and understanding it better yourself before making important decisions. In terms of paradigms, you may have a look at why AI software development is hard (easy to get to 80% accuracy, hellish to get to 99%), AI winters and hype cycles (the disconnect between claims/expectations and reality), and the development of dangerous technologies (nuclear, biotech) and how stability has been achieved.

As a layperson, the problem has been that my ability to figure out what's true relies on being able to evaluate subject-matter experts' respective reliability on the technical elements of alignment. I've lurked in this community a long time; I've read the Sequences and watched the Robert Miles videos. I can offer a passing explanation of what the corrigibility problem is, or why ELK might be important.

None of that seems to count for much. Yitz made what I thought was a very lucid post from a similar level of knowledge, trying to bridge that gap, and got mos... (read more)

Well, at least one of us thinks you're going to die and to maximize your chance to die with dignity you should quit your job, say bollocks to it all, and enjoy the sunshine while you still can!
This is probably pretty tangential to the overall point of your post, but you definitely don't need to take loans for this, since you could apply for funding from Open Philanthropy's early-career funding for individuals interested in improving the long-term future, or the Long-Term Future Fund. You don't have to have a degree in machine learning. Besides machine learning engineering or machine learning research, there are plenty of other ways to help reduce existential risk from AI, such as:

* software engineering at Redwood Research or Anthropic
* independent alignment research
* operations for Redwood Research, Encultured AI, Stanford Existential Risks Initiative, etc.
* community-building work for a local AI safety group (e.g., at MIT or Oxford)
* AI governance research
* or something part-time like participating in the EA Cambridge AGI Safety Fundamentals program and then facilitating for it

Personally, my estimate of the probability of doom is much lower than Eliezer's, but in any case, I think it's worthwhile to carefully consider how to maximize your positive impact on the world, whether that involves reducing existential risk from AI or not. I'd second the recommendation for applying for career advising from 80,000 Hours or scheduling a call with AI Safety Support if you're open to working on AI safety.
I can't help with the object-level determination, but I think you may be overrating both the balance and import of the second-order evidence. As far as I can tell, Yudkowsky is a (?dramatically) pessimistic outlier among the class of "rationalist/rationalist-adjacent" SMEs in AI safety, and probably even more so relative to aggregate opinion without an LW-y filter applied. My impression of the epistemic track record is that Yudkowsky has a tendency of staking out positions (both within and without AI) with striking levels of confidence but not commensurately striking levels of accuracy.

In essence, I doubt there's much epistemic reason to defer to Yudkowsky more (or much more) than folks like Carl Shulman or Paul Christiano, nor maybe much more than "a random AI alignment researcher" or "a superforecaster making a guess after watching a few Rob Miles videos" (although these have a few implied premises around difficulty curves / subject-matter expertise being relatively uncorrelated with judgemental accuracy).

I suggest ~all reasonable attempts at an idealised aggregate wouldn't take a hand-brake turn to extreme pessimism on finding Yudkowsky is. My impression is the plurality LW view has shifted more from "pretty worried" to "pessimistic" (e.g. p(screwed) > 0.4) rather than to agreement with Yudkowsky, but in any case I'd attribute large shifts in this aggregate mostly to Yudkowsky's cultural influence on the LW community plus some degree of internet cabin fever (and selection) distorting collective judgement.

None of this is cause for complacency: even if p(screwed) isn't ~1, > 0.1 (or 0.001) is ample cause for concern, and resolution on values between (say) [0.1, 0.9] is informative for many things (like personal career choice). I'm not sure whether you get more yield for marginal effort on object or second-order uncertainty (e.g
I'm sympathetic to the position you feel you're in; I'm sorry it's currently like that. I think you should be quite convinced by the point you're taking out loans to study, and the apparent plurality of the LessWrong commentariat is unlikely to be sufficient evidence to reach that level of conviction -- just my feeling. I'm hoping some more detailed arguments for doom will be posted in the near future, and that will help many people reach their own conclusions, not based on information cascades, etc. Lastly, I do think people should be more "creative" in finding ways to boost log-odds of survival. Direct research might make sense for some, but if you'd need to go back to school for it, there are maybe other things you should brainstorm and consider.
That part really shouldn't be necessary (even if it may be rational, conditional on some assumptions). In the event that you do decide to devote your time to helping, whether for dignity or whatever else, you should be able to get funding to cover most reasonable forms of upskilling and/or a seeing-if-you-can-help trial period. That said, I think step one would be to figure out where your comparative advantage lies (80,000 Hours folks may have thoughts, among others). Certainly some people should be upskilling in ML/CS/math -- though an advanced degree may not be most efficient -- but there are other ways to help.

I realize this doesn't address the deciding-what's-true aspect. I'd note there that I don't think much detailed ML knowledge is necessary to follow Eliezer's arguments on this. Most of the ML-dependent parts can be summarized as: [we don't know how to do X], [we don't have any clear plan that we expect will tell us how to do X], similarly for Y and Z, [either X, Y, or Z is necessary for safe AGI]. Beyond that, I think you only need a low prior on our bumping into a good solution while fumbling in the dark and a low prior on sufficient coordination, and things look quite gloomy. Probably you also need to throw in some pessimism on getting safe AI systems to fundamentally improve our alignment research.

I agree. I find myself in an epistemic state somewhat like: "I see some good arguments for X. I can't think of any particular counter-argument that makes me confident that X is false. If X is true, it implies there are high-value ways of spending my time that I am not currently doing. Plenty of smart people I know/read believe X; but plenty do not."

It sounds like that should maybe be enough to coax me into taking action about X. But the problem is that I don't think it's that hard to put me in this kind of epistemic state. E.g., if I were to read the right bl... (read more)

Don't look at opinions; look for data and facts. Speculations, opinions, or beliefs cannot be the basis on which you make decisions or update your knowledge. It's better to know few things, but with high confidence. Ask yourself: which hard data points are there in favour of doom-soon?

This indicates a range of .05 to .40. That's congruent with my experience in the ag industry; farmers tend to be risk-averse concerning price volatility and as such rarely scale up total production massively.

You can hedge against that volatility to some extent by signing purchase contracts in the spring during planting, but buyers obviously offer such contracts based on their own desire to not be stuck buying high at harvest time, so the hedging can't totally resolve the problem.
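The partial hedge described above can be sketched with a toy model. All the numbers below are hypothetical, chosen only to show the shape of the tradeoff, not to reflect real contract or spot prices:

```python
# Toy model of hedging harvest-price risk with a spring purchase contract.
# A farmer pre-sells part of the expected crop at a fixed price; the rest
# sells at whatever the spot price turns out to be at harvest.
# All figures are invented for illustration.

def revenue(production_bu, contracted_bu, contract_price, spot_price):
    """Revenue when part of the crop is pre-sold at a fixed contract price."""
    contracted = min(contracted_bu, production_bu)  # can't deliver more than was grown
    return contracted * contract_price + (production_bu - contracted) * spot_price

# Contract 5,000 of an expected 10,000 bushels at $6.00/bu in spring.
# If spot falls to $4.50/bu by harvest, the contract cushions the drop:
hedged = revenue(10_000, 5_000, 6.00, 4.50)    # 52,500
unhedged = revenue(10_000, 0, 6.00, 4.50)      # 45,000
```

The mirror-image case is why buyers limit what they offer: if spot instead rises above the contract price, the contracted bushels sell below market, so the hedge caps upside as well as downside.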

There's also the agr... (read more)

I don't understand how this isn't just making friends and encouraging the formation of friend groups based on common interests.

4Jarred Filmer1y
ha don't worry it basically is 😄, it's just that (for me at least) the notion I could put effort into making strong 1-1 connections with people and forming intimate small groups online wasn't really something that occurred to me to do before I started reading about microsolidarity. May also be worth noting that the microsolidarity framework is about a bunch of other stuff beyond just crews and case clinics, notably dynamics that come into play once you try to take a bunch of crews and form a larger group of ~150 or so people out of them.

I mostly agree with swarriner, and I want to add that writing out more explicit strategies for making and maintaining friends is a public good.

The "case clinic" idea seems good. This sometimes naturally emerges among my friends, and trying to do it more would probably be net positive in my social circles.

For the record, ISO 3103 is in no way optimized for a tasty cup of tea; it's explicitly a standardized method for comparing teas, not a recipe. Six minutes of brewing with boiling water can "scorch" certain teas by over-extracting tannins and other bitter compounds. If you dislike tea, there's a decent chance you would like it better with shorter brews or lower-temperature water (I use 90°C water for my black teas and 85°C for greens, for example).

I stand corrected. I'll never trust the ISO norms for my tea again.
More explicitly calibrated: if your green tea is reminiscent of grass clippings, you likely overdid it on brew temp and/or time.

I find myself concerned. Steven Pinker's past work has been infamously vulnerable to spot-checks of citations, leading me to heavily discount any given factual claims he makes. Is there reason to think he has made an effort here that will be any better constructed?

I don't necessarily agree with your impression of the McAfee thing. The man was by all accounts a very strange person; it doesn't seem overly credulous to think that he might have been both suicidal and paranoid about being murdered and made to look like a suicide.

Your notation is confusing, but I achieved a similar result.

1Stevie Lantalia Metke2y
It isn't really notation so much as a recording of the 3 states each of the pieces goes through (each piece is equilibrated n times when the other block is split into n pieces, so I record the state of the piece after each of its equilibrations), expressed as how much of the maximum temperature the piece has. (I suppose it would have been cleaner if I'd included the implicit initial states of 0 for the blue pieces and 1 for the red pieces.)

>It seems to me much safer to lay the burden of proof on the moral indulgence--at very least, the burden of proof shouldn't always rest on the demands of conscience. 

I think I disagree. It seems to me that moral claims don't exist in a vacuum, they require a combination of asserted values and contextualizing facts. If the contextualizing facts are not established, the asserted value is irrelevant. For instance, I might claim that we have a moral duty not to brush our hair because it produces static electricity, and static electricity is a painful e... (read more)

One human's moral arrogance is another human's Occam's razor. The evidence suggests to me, on grounds of both observation (very small organisms demonstrate very simple behaviour not consistent with a high level awareness) and theory (very small organisms have extremely minimal sensory/nervous architecture to contain qualia) that dust-mites are morally irrelevant, and the chance that I am mistaken in my opinion amounts to a Pascal's Mugging.

From Ozy:

"I recently read an essay by Peter Singer, Ethics Beyond Species and Beyond Instincts, in which he defined the moral as that which is universalizable, in this sense: “We can distinguish the moral from the nonmoral by appeal to the idea that when we think, judge, or act within the realm of the moral, we do so in a manner that we are prepared to apply to all others who are similarly placed.”

I read that, sat back, and said to myself: “I cannot do morality.”

I cannot do it in the same sense that an alcoholic cannot drink, and a person with an eating di... (read more)

I am not a true expert, but there is one major element of this narrative that most coverage leaves out: no matter what happens to the short-sellers, the price of GameStop and other short-squeezed stocks must eventually normalize to a "truer" valuation.

I have seen a truly alarming lack of recognition of this fact, with some people apparently believing the squeezed price is the new normal for GME. Here's why that probably isn't the case:

The value of a stock is tied to two factors. One is (broadly) the cash flows one can expect to receive in the form of divid... (read more)
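For what it's worth, the cash-flow factor mentioned above can be illustrated with a bare-bones discounted-cash-flow sketch. The numbers are invented purely for illustration; this is not a model of GME specifically:

```python
# Minimal discounted-cash-flow sketch: the "fundamental" value of a share
# is the sum of its expected future cash flows, discounted back to today.
# All figures here are made up for illustration, not estimates for any real stock.

def dcf_value(cash_flows, discount_rate):
    """Present value of a series of annual per-share cash flows."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# A hypothetical share expected to return $2/year for 10 years, discounted at 8%:
value = dcf_value([2.0] * 10, 0.08)
print(round(value, 2))  # ≈ 13.42
```

The point of the sketch is just that no plausible set of expected cash flows supports a price hundreds of dollars above what the business can ever pay out, which is why a squeezed price has to come back down eventually.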

In particular, my understanding is that most people who shorted in the early days are now out (including, for some, giving up on shorting entirely) and have realized billion-dollar losses, but short interest remains approximately the same, because new funds have taken their place. It was quite risky to think a stock at $4 would decline to $0, but it's not very risky to think a stock at $350 will decline to $40. It remains to be seen where the price will stabilize (and, perhaps more importantly, when), but I think the main story is going to be "early shorts lost money, late shorts gained money, retail investors mostly lost money."
There are a lot of paths where the "truer" value is actually quite high. Some buyers will forget about it and just hold the stock long-term. The company itself could find a way to capitalize (heh) on this before it drops by too much; buy complementary valuable companies with stock, for instance. Even the press is probably valuable, and gives GameStop a great starting buzz for its (future) online platform. I'd bet (at lower odds than are implied by the price of put options) it's going back down under $75, but I have no clue if that's over weeks or months, and that's STILL almost 5X where it started the year.
I agree with you, but that goes beyond the scope of my intention when writing this post. This post was meant to be as elementary as possible. 
I haven't seen almost any traders going off a "real value" analysis for GameStop. Almost everybody believes GameStop has a broken business model with no fundamentals, but they are all buying it and taking losses just to screw over hedge funds. This is coordinated, short-sighted financial shitposting out of spite. There are bound to be many losers, but man is it interesting to watch. Edit: I would also love to see an analysis at some point of the game theory involved in getting so many individual traders to coordinate.

Saskatchewanian checking in here. As with your Vancouver Island example, there's a lot of heterogeneity here too. The south of the province, where I grew up, has extremely low numbers of cases even relative to the sparse rural population, while anywhere north of Saskatoon where I currently live is doing fairly badly relative to their sparse rural population. I don't have a strong gears-level understanding of why this should be except some vague notion that the North sees more traffic entering and exiting in the course of resource extraction industries, and close living quarters associated with the same. Plus something something rampant spread in First Nations which I don't even want to get into. 

Appreciate you chiming in. That's a great point about how differently different rural communities are doing. I kind of had the impression some rural areas in the prairies were doing badly, but I didn't off-hand have a sense of where or why. Your rough sketch with vague notions is helpful on that front. I drove across the country on the way out to BC a couple of months ago, and it's indeed hard to imagine the farming areas in the south half of the prairies having much covid spread, whereas it makes sense that resource-extraction areas would for the two reasons you describe. That plus exponentials/nonlinearities seems sufficient to explain most of the discrepancy, maybe.

The notion of weirdness points has never spoken to me, personally, because it seems to collapse a lot of social nuance into a singular dichotomy of weird/not weird, and furthermore to assume that weirdness is in some sense measurable and fungible. Neither, I think, is true, and the framework ought to be dissolved. So what goes into a "weirdness point"?

  • How familiar is the idea? - Vegetarians/vegans are a little weird, but most people probably know a handful and most have a notion that those people care about animal welfare and maybe some even know about nutritional
... (read more)
I very much think I understand this perspective, but I also sometimes find a specific "gameplay" to be, e.g., restrictive, 'degenerate' (in a gameplay sense), or some degree of un-fun/bad. Just considering the 'gameplay mechanic' of 'smalltalk': I can and often do enjoy it, but it can also be a thankless chore (or worse). The phrase "correct gameplay" makes me think of consequentialism and 'shutting-up-and-multiplying'. But beyond understanding that there is a best 'move', I can't perfectly escape thoughts about the possibility of playing different games. There's also not just one 'game', as you and others have pointed out, but there's also not just one level of games, and an aspect of 'meta-gaming' is deciding whether or not to play specific games at all. In the expansive myriads-of-games-at-criss-crossing-levels-of-meta-gaming perspective, there isn't even any obvious "correct gameplay" at all, which is part of what I think this post was gesturing at.

Are there any resources that amount to "80,000 Hours for (hopefully reformed) underachievers"? I've been weighing the possibility of going back to school in the hopes of getting into a higher-impact field, but my academic resume from my bachelor's is pretty lackluster, leaving me unsure where to start reconstruction. My mental health and general level of conscientiousness are both considerably improved from my younger years so I'm optimistic I can exceed my past self.

80k has changed the general plan that they push (people took "earning to give" too seriously). This post here [] is probably the article you're looking for with regard to "what should I do now?"

Not necessarily. If I am an academic whose research is undermined by bias, I may be irrational but not stupid, and if I am in a social environment where certain signals of stupid beliefs are advantageous, I may be stupid but not irrational. It seems to be the latter is more what the author is getting at.

See my comments above for some discussion of this topic. Broadly speaking we do know how to keep farmland productive but there are uncaptured externalities and other inadequacies to be accounted for.

That's fair, and I'm grumbling less as an ag scientist or policy person than as a layperson born and raised in the ag industry. It is my opinion that the commercial ag industry in my country both contains inadequacies and is a system of no free energy, to borrow from Inadequate Equilibria.

To elaborate, I observe the following facts:

  • Conventional agriculture using fertilizer and pesticide creates negative externalities, notably by polluting runoff and consuming non-renewable resources (fertilizer is made from potash, a reasonably abundant but not infinite mi
... (read more)
That all makes sense - I'm less certain that there is a reachable global maximum that is a Pareto improvement in terms of inputs over the current system. That is, I expect any improvement to require more of some critical resource - human time, capital investment, or land.

Agricultural practice is my Gell-Mann pet peeve. While it's true that fertilizer and pest control are currently central to large swaths of the commercial ag industry, this is not necessarily a case of pure necessity so much as a local maximum: for many crops we could reduce dependence on synthetic fertilizers and pesticides by integrating livestock, multi-cropping land, etc. Some of these practices are also ecologically unsustainable as currently carried out and may eventually need to be replaced.

That said, this doesn't actually detract from the central point; I would very much lik... (read more)

My dad and uncle can farm 2,000 acres between them because of synthetic fertilizer and pesticides. I would like to see you do the same with integrated livestock and multi-cropping.

This isn't critiquing the claim, though. Yes, there are alternatives available, but those alternatives (multi-cropping, integrating livestock, etc.) are more labor-intensive and will produce less over the short term. And I'm very skeptical that the maximum is only local: have you found evidence that you can use land more efficiently, while keeping labor minimal, and produce more? Because if you have, there's a multi-billion-dollar market for doing that. Does that make the alternatives useless, or bad ideas? Not at all, and I agree that changes ar... (read more)
