
I just learned some important things about indoor air quality after watching Why Air Quality Matters, a presentation by David Heinemeier Hansson, the creator of Ruby on Rails. It seems like something that is both important and under the radar, so I'll brain dump + summarize my takeaways here, but I encourage you to watch the whole thing.

  • He said he spent three weeks researching and experimenting with it full time. I place a pretty good amount of trust in his credibility here, based on a) my prior experiences with his work and b) him seeming like he did pretty thorough research.
  • It's easy for CO2 levels to build up. We breathe it out and if you're not getting circulation from fresh air, it'll accumulate.
  • This has pretty big impacts on your cognitive function, similar in magnitude to not getting enough sleep. And perhaps more importantly, it's something that we are prone to underestimating: it feels like we're only a little bit off, when in reality we're a lot off.
  • There are things called volatile organic compounds, aka VOCs. Those are really bad for your health. They come from a variety of sources. Cleaning pro
... (read more)
4Viliam2y
It is my repeated experience in companies that well-ventilated rooms are selected by people as workplaces, and the unventilated ones then remain available for meetings. I seem to be more sensitive about this than most people, so I often notice that "this room makes me drowsy". (My colleagues usually insist that it is not so bad, and they have a good reason to do so... why would they risk that their current workplace will instead be selected as a new meeting room, and they get this unventilated place as a new workspace?)
2AllAmericanBreakfast2y
I just ordered the Awair on Amazon. It can be returned through Jan. 31, so my plan is to play with it for a few days and probably return it. I have a few specific questions I plan to answer with it:
  • How much CO2 builds up in my bedroom at night, both when I'm alone and when my partner is over?
  • How much CO2 builds up in my office during the day?
  • How much do I need to crack the window in my bedroom in order to keep CO2 levels low throughout the night?
  • When CO2 builds up, how quickly does opening a window restore a lower level of CO2?
With the answers to those questions, I hope I can return the detector and just keep my windows open enough to prevent CO2 buildup without making the house too cold.
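These are exactly the questions a crude single-zone box model can give a rough prior on before the detector even arrives. A minimal sketch in Python; the room volume, exhalation rate, and air-change rates are illustrative assumptions I picked, not measurements:

```python
# Minimal single-zone CO2 box model: dC/dt = generation/volume + ach*(outdoor - C).
# All parameter values below are rough assumptions for illustration.
OUTDOOR_PPM = 420      # outdoor CO2 concentration, ppm
ROOM_M3 = 30           # assumed bedroom volume, m^3
GEN_M3_PER_H = 0.015   # assumed CO2 exhaled by one sleeping adult, m^3/h

def co2_after(ach, people=1, hours=8.0, dt=0.01):
    """Euler-integrate the box model; ach = air changes per hour. Returns ppm."""
    c = OUTDOOR_PPM
    for _ in range(int(hours / dt)):
        dc = (people * GEN_M3_PER_H / ROOM_M3) * 1e6 + ach * (OUTDOOR_PPM - c)
        c += dc * dt
    return c

for ach, label in [(0.3, "door/window shut"), (1.0, "window cracked"), (3.0, "window open")]:
    print(f"{label:>18}: ~{co2_after(ach):.0f} ppm after 8 h")
```

With these made-up numbers, a shut room climbs toward ~2,000 ppm overnight while even a cracked window holds it near ~900 ppm, which matches the intuition that a small gap goes a long way.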
4adamzerner2y
That sounds reasonable and I considered doing something similar. What convinced me to get it anyway is that in the long run, even if the marginal gains in productivity and wellness from owning the Awair vs your approach are tiny, those tiny gains add up to the point where the $150 seems like a great ROI.
4AllAmericanBreakfast2y
Have you gotten yours yet? If so, what are the results? I found that the only issue in my house is that the bedroom can get to quite high levels of CO2 if the door and windows are shut. Opening a window solves the problem, but makes the room cold. However, it's more comfortable to sleep with extra blankets in a cold room than with fewer blankets in a stuffy room, and it improves sleep quality. It would be interesting to experiment in the office with having a window open, even during winter. However, I worry that being cold would create problems. My feeling is that "figure out how to crack a window if the room feels stuffy" is the actionable advice here. Unless $150 is chump change to you, I'm not sure it's really worth keeping a device around to monitor the air quality.
4adamzerner2y
Yup, I got both the Awair and the Alen.
  • PM2.5 started off crazy high for me before I got the Alen. Using the Alen brings it to near zero.
  • VOCs and PM2.5 accumulate rather easily when I cook, although I do have a gas stove. Also random other things like the dishwasher cause it to go up. The Alen brings it back down in ~30 minutes maybe.
  • CO2 usually hovers around a 3/5 on the Awair if I don't have a window open. I'm finding it tricky to deal with this, because opening a window makes it cold. I'm pretty sure my apartment's HVAC system just recycles the current air rather than bringing in new air. I'm hoping to buy a house soon so I think ventilation is something I'm going to look for.
  • For me I don't actually notice the CO2 without the Awair telling me. I don't think I'd do a good job of remembering to crack a window or something without it.
  • I wonder if your house has better ventilation than mine if you're not getting issues with PM2.5. Could be if it's an older house or if your HVAC system does ventilation.
I see what you're saying about how the actual actions you should take seem pretty much the same regardless of whether you have the Awair or not. I agree that it's close, but I think that small differences do exist, and that those small differences will add up to a massively large ROI over time.
1) If it prompts you to crack a window before you would otherwise notice/remember to do so.
2) If something new is causing issues. For me I noticed that my humidifier was jacking up the PM2.5 levels and realized I need to get a new one. I also noticed that the dishwasher jacks it up so now I know to not be around while it's running. I would imagine that over time new things like this will pop up, eg. using a new cleaning product or candle.
3) Moving to a new home, remodeling or buying eg. new furniture could cause differences.
4) Unknown unknowns that could cause issues.
Suppose you value time spent in better air qualit
4AllAmericanBreakfast2y
I do live in an old house. I get the same effects of spiking VOCs and PM2.5 running the stove and microwave. In my case, the spikes seem to last only as long as the appliance is running. This makes sense, since the higher the concentration, the faster it will diffuse out of the house. A rule to turn on the stove vent or crack a window while cooking could help, but it's not obvious to me that a few minutes per day of high VOC is something to worry about over the long term.

I note in this paper [https://www.researchgate.net/profile/Krassi-Rumchev/publication/6323174_Volatile_Organic_Compounds_Do_they_present_a_risk_to_our_health/links/5815933908aedc7d89640f0c/Volatile-Organic-Compounds-Do-they-present-a-risk-to-our-health.pdf] that "The chemical diversity of the VOC group is reflected in the diversity of the health effects that individual VOCs can cause, ranging from no known health effects of relatively inert VOCs to highly toxic effects of reactive VOCs." How do I know that the Awair is testing for the more toxic end of the spectrum? There are no serious guidelines for VOCs in general. How do I know that the Awair's "guidelines" are meaningful?

My bedroom has poor ventilation. Cracking a window seems to improve my sleep quality, which seems like the most important effect of all in the long run. It sounds like the effect of CO2 itself on cognitive performance is questionable [https://www.nature.com/articles/s41526-019-0071-6]. However, bioeffluents - the carbonyls, alkyl alcohols, aromatic alcohols, ammonia, and mercaptans we breathe out - do seem to have an effect on cognition when the air's really poorly ventilated. But the levels in my house didn't even approach the levels at which researchers have found statistically significant cognitive effects. I'm wondering if the better sleep quality is due to the cooler air rather than the better ventilation.

I really doubt that the Awair will last 25 years. I'd guess more like 5. I can set a reminder on my phone to c
8adamzerner2y
Hm, let's see how those assumptions you're using affect the numbers. If it lasts 5 years instead of 25, the breakeven would become 30 hours/year instead of 6. And if we say that the value of better air quality is $0.20/hr instead of $1/hr due to the uncertainty in the research you mention, we multiply by 5 again and get 150 hours/year. With those assumptions, it seems like it's probably not worth it. And more generally, after talking it through, I no longer see it as an obvious +ROI. (Interesting how helpful it is to "put a number on it". I think I should do this a lot more than I currently do.)

However, for myself I still feel really good about the purchases. I put a higher value on the $/hr because I value health, mood and productivity more than others probably do, and because I'm fortunate enough to be doing well financially. I also really enjoy the peace of mind. Knowing what I know now, if I didn't have my Awair I would be worried about things screwing up my air quality without me knowing.
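For concreteness, the arithmetic being run in this exchange is just annualized cost divided by the value of an hour of improved air. A quick sketch using the figures from the thread (the $150 price, lifespans, and $/hr values are the assumptions discussed above, not measured quantities):

```python
# Breakeven hours per year of improved-air time needed to justify the device,
# using the assumed figures from the discussion above.
def breakeven_hours_per_year(cost, lifespan_years, value_per_hour):
    return (cost / lifespan_years) / value_per_hour

print(breakeven_hours_per_year(150, 25, 1.00))  # 6.0   (original assumptions)
print(breakeven_hours_per_year(150, 5, 1.00))   # 30.0  (5-year lifespan)
print(breakeven_hours_per_year(150, 5, 0.20))   # 150.0 (5-year lifespan, $0.20/hr)
```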
4adamzerner1y
I posted an update in the OP. When we initially talked about this I was pretty strongly on the side of pro-Awair+Alen. Now I lean moderately against Alen for most people and slightly against Awair, but slightly in favor of Awair for me personally.

Noticing confusion about the nucleus

In school, you learn about forces. You learn about gravity, and you learn about the electromagnetic force. For the electromagnetic force, you learn that like charges repel and opposite charges attract. So two positively charged particles close together will repel, whereas a positively and a negatively charged particle will attract.

Then you learn about the atom. It consists of protons and neutrons bunched up in the middle, with electrons orbiting around the outside. You learn that protons are p... (read more)

5JBlack6mo
Yes, this was a point of confusion for me. The point of confusion that followed very quickly afterward was why the strong nuclear force didn't mean that everything piles up into one enormous nucleus, and from there to a lot of other points of confusion - some of which still haven't been resolved because nobody really knows yet. The most interesting thing to me is that the strong nuclear force is just strong enough without being too strong. If it were somewhat weaker then we'd have nothing but hydrogen, and somewhat stronger would make diprotons, neutronium, or various forms of strange matter more stable than atomic elements.
4Dagon6mo
I remember this confusion from Jr. High, many decades ago. I was lucky enough to have an approachable teacher who pointed me to books with more complete explanations, including the strong nuclear force and some details about why it doesn't follow an inverse-square law, which lets it overcome EM at the very small distances where you'd think EM would be strongest.

The other day Improve your Vocabulary: Stop saying VERY! popped up in my YouTube video feed. I was annoyed.

This idea that you shouldn't use the word "very" has always seemed pretentious to me. What value does it add if you say "extremely" or "incredibly" instead? I guess those words have more emphasis and a different connotation, and can be better fits. I think they're probably a good idea sometimes. But other times people just want to use different words in order to sound smart.

I remember there was a time in elementary school when I was working on a paper... (read more)

4Dagon1y
Communication advice is always pretentious - someone's trying to say they know more about your ideas and audience than you do. And simultaneously, it's incorrect for at least some listeners, because they're wrong - they don't. Also, correct for many listeners, because many are SO BAD at communication that generalized simple advice can get them to think a little more about it. At least part of the problem is that there is a benefit to sounding smart. "very" is low-status, and will reduce the impact of your writing, for many audiences. That's independent of any connotation or meaning of the word or its replacement. Likewise with "I think". In many cases, it's redundant and unnecessary, but in many others it's an important acknowledgement, not that it's your thought or that you might be wrong, but that YOU KNOW you might be wrong. I think (heh) your planned follow-up is a good idea, to include context and reasoning for recommendations, so we can understand what situations it applies to.
4G Gordon Worley III1y
I've tried doing this in my writing in the past: just throw away "I think" altogether, because it's redundant - there's no one thinking up these words but me. Unfortunately this was a bad choice, because many people take bald statements without softening language like "I think" as bids to make claims about how they are or should be perceiving reality - which all statements are, but they'll jump to viewing them as claims of access to an external truth. (Note that this sounds like they are making an error by having a world model that supposes external facts that can be learned, rather than facts being always conditional on the way they are known. Which is not to say there is not perhaps some shared external reality, only that any facts/statements you try to claim about it must be conditional, because they live in your mind behind your perceptions. But this is a subtle enough point that people will miss it, and it's not the default, naive model of the world most people carry around anyway.) Example: I think you're doing X -> you're doing X. People react to the latter kind of thing as a stronger claim than I would say it's possible to make. This doesn't quite sound like what you want to do, though; instead you want to insert more nuanced words to make it clearer what work "think" is doing.
2adamzerner1y
Yeah. And also a big part of what I'm trying to propose is some sort of new standard. I just realized I didn't express this in my OP, but I'll express it now. I agree with the problems you're describing, and I think that if we all sort of agreed on this new standard, eg. when you say "I suspect" it means X, then these problems seem like they'd go away.
3FlorianH1y
Not answering your main point, but a small note on the "leaving out very" point: I've enjoyed McCloskey's writing on writing. She calls the phenomenon "elegant variation" [https://www.google.com/search?q=mccloskey+elegant+variation] (I don't know whether the term is hers alone) and also teaches that we have to get rid of this unhelpful practice that we get taught in school.
3Dagon1y
Thanks! I always upvote McCloskey references - one of the underappreciated writers/thinkers on topics of culture and history.

"It's not obvious" is a useful critique

I recall hearing "it's not obvious that X" a lot in the rationality community, particularly in Robin Hanson's writing.

Sometimes people make a claim without really explaining it. Actually, this happens a lot. Oftentimes the claim is made implicitly. This is fine if that claim is obvious.

But if the claim isn't obvious, then that link in the chain is broken and the whole argument falls apart. Not that it's been proven wrong or anything, just that it needs work. You need to spend the time establishing that clai... (read more)

2Dagon2y
Agreed, but in many contexts, one should strive to be clear about the extent to which "it's not obvious that X" implies "I don't think X is true in the relevant context or margin". Many arguments that involve this are about universality or distant extension of something that IS obvious in more normal circumstances. Robin Hanson generally does specify that he's saying X isn't obvious (and is quite likely false) in some extreme circumstances, and his commenters are ... not obviously understanding that.
2adamzerner2y
Hm, I'm having a little trouble thinking about the distinction between X in the current context vs X universally. Do you have any examples? Glad to hear you've noticed this from Hanson too and it's not just me.
2Raemon2y
I think you might have reversed your opening line?
2adamzerner2y
Hm, I might be having a brain fart but I'm not seeing it. My point is that people will make an argument "A is true based on X, Y and Z", someone will point out "it's not obvious that Y", and that comment is useful because it leads to a discussion about whether Y is true.
4Pattern2y
Suggested title: If it's not obvious, then how do we know it's true?
2adamzerner2y
Changed to "It's not obvious" is a useful critique.
2Raemon2y
Okay, I thought you intended to say "People claim 'it's obvious that X'" when X wasn't obvious. Your new title is more clear.
2adamzerner2y
Gotcha. I appreciate you pointing it out. I'm glad to get the feedback that it initially wasn't clear, both for self-improvement purposes and for the more immediate purpose of improving the title. (It's got me thinking about variable names in programming. There's something more elegant about being concise, but then again, humans are biased towards expecting short inferential distances, so I should probably err on the side of longer, more descriptive variable names. And post titles!)

Words as Bayesian Evidence

Alice: Hi, how are you?

Bob: Good. How are you?

Alice: Actually, I'm not doing so well.

Let me ask you a question. How confident are you that Bob is doing good? Not very confident, right? But why not? After all, Bob did say that he is doing good. And he's not particularly well known for being a liar.

I think the thing here is to view Bob's words as Bayesian evidence. They are evidence of Bob doing good. But how strong is this evidence? And how do we think about such a question?

Let's start with how we think about such a question. I... (read more)
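One way to make "how strong is this evidence?" concrete is to put numbers on it. A minimal sketch with made-up probabilities: if Bob says "good" almost no matter how he's doing, the likelihood ratio is close to 1 and the update is tiny:

```python
# Toy Bayes update on Bob's "Good." All probabilities are invented for illustration.
p_well = 0.5                # prior that Bob is doing well
p_good_if_well = 0.99       # Bob says "good" when he really is well
p_good_if_unwell = 0.90     # Bob says "good" anyway, out of politeness

posterior = (p_good_if_well * p_well) / (
    p_good_if_well * p_well + p_good_if_unwell * (1 - p_well)
)
print(f"P(well | says 'good') = {posterior:.3f}")  # ~0.524: barely above the prior
```

The likelihood ratio 0.99/0.90 ≈ 1.1 is what makes the words weak evidence, not any suspicion that Bob is a liar.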

2Dagon2mo
I notice I'm confused. I don't actually know what it would mean (what predictions I'd make or how I'd find out if I were correct about) for Bob to be "doing good". I don't think it generally means "instantaneous hedonic state relative to some un-tracked distribution", I think it generally means "there's nothing I want to draw your attention to". And I take as completely obvious that the vast majority of social interactions are more contextual and indirect than overt legible information-sharing. This combines to make me believe that it's just an epistemic mistake to take words literally most of the time, at least without a fair bit of prior agreement and contextual sharing about what those words mean in that instance. I agree that thinking of it as a Bayesian update is often a useful framing. However, the words are a small part of the evidence available to you, and since you're human, you'll almost always have to use heuristics and shortcuts rather than actually knowing your priors, the information, or the posterior beliefs.
4adamzerner2mo
It sounds like we mostly agree. Agreed. Agreed. I think the big thing I disagree on is that this is always obvious. Thought of in the abstract like this I guess I agree that it is obvious. However, I think that there are times when you are in the moment where it can be hard to not interpret words literally, and that is what inspired me to write this. Although now I am realizing that I failed to make that clear or provide any examples of that. I'd like to provide some good examples now, but it is weirdly difficult to do so. Agreed. I didn't mean to imply otherwise, even though I might have.

Closer to the truth vs further along

Consider a proposition P. It is either true or false. The green line represents us believing with 100% confidence that P is true. On the other hand, the red line represents us believing with 100% confidence that P is false.

We start off not knowing anything about P, so we start off at point 0, right at that black line in the middle. Then, we observe data point A. A points towards P being true, so we move upwards towards the green line a moderate amount, and end up at point 1. After that we observe data point B. B is weak... (read more)
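One way to make the diagram concrete, under the assumption that evidence strength is scored in log-odds (where independent observations simply add):

```python
import math

# Hypothetical belief trajectory in log-odds space. The data-point strengths
# are invented: positive pushes toward "P is true", negative toward "P is false".
observations = [("A", 1.0), ("B", 0.3)]

log_odds = 0.0  # point 0: total ignorance, P(P is true) = 0.5
for name, strength in observations:
    log_odds += strength
    p = 1 / (1 + math.exp(-log_odds))
    print(f"after {name}: log-odds {log_odds:+.1f}, P(P is true) = {p:.2f}")
```

Here A moves us a moderate amount toward the green line (0.50 to 0.73) and the weaker B nudges us a bit further (to 0.79).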

3tailcalled6mo
I believe that similar to conservation of expected evidence, there's a rule of rationality saying that you shouldn't expect your beliefs to change back and forth too much, because that means there's a lot of uncertainty about the factual matters, and the uncertainty should bring you closer to max entropy. Can't remember the specific formula, though.
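(The formula being reached for here may be conservation of expected evidence itself: for a hypothesis $H$ and an anticipated observation $E$,

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \lnot E)\,P(\lnot E),$$

i.e. the prior is the expectation of the posterior. Beliefs obeying this rule form a martingale: any expected swing toward "true" must be balanced, in expectation, by a possible swing toward "false", so predictably oscillating back and forth is a sign the evidence is being miscounted.)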
2adamzerner6mo
Good point. I was actually thinking about that and forgot to mention it. I'm not sure how to articulate this well, but my diagram and OP were mainly targeted at gears-level models. Using the atheism example, the world's smartest theist might have a gears-level model that is further along than mine. However, I expect that the world's smartest atheist has a gears-level model that is further along than the world's smartest theist's.

There's a concept I want to think more about: gravy.

Turkey without gravy is good. But adding the gravy... that's like the cherry on top. It takes it from good to great. It's good without the gravy, but the gravy makes it even better.

An example of gravy from my life is starting a successful startup. It's something I want to do, but it is gravy. Even if I never succeed at it, I still have a great life. Eg. by default my life is, say, a 7/10, but succeeding at a startup would be so awesome it'd make it a 10/10. But instead of this happening, my brain pulls a ... (read more)

Covid-era restaurant choice hack: Korean BBQ. Why? Think about it...

They have vents above the tables! Cool, huh? I'm not sure how much that does, but my intuition is that it cuts the risk in half at least.

Science as reversed stupidity

Epistemic status: Babbling. I don't have a good understanding of this, but it seems plausible.

Here is my understanding. Before science was a thing, people would derive ideas by theorizing (or worse, from the Bible). It wasn't very rigorous. They would kinda just believe things willy-nilly (I'm exaggerating).

Then science came along and was like, "No! Slow down! You can't do that! You need to have sufficient evidence before you can justifiably believe something like that!" But as Eliezer explains, science is too slow. It judges t... (read more)

I was just listening to the Why Buddhism Is True episode of the Rationally Speaking podcast. They were talking about what the goal of meditation is. The interviewee, Robert Wright, explains:

the Buddha said in the first famous sermon, he basically laid out the goal, "Let's try to end suffering."

What an ambitious goal! But let's suppose that it was achieved. What would be the implications?

Well, there are many. But one that stands out to me as particularly important as well as ignored, is that it might be a solution to existential risk. Maybe if people we... (read more)

Collaboration and the early stages of ideas

Imagine the lifecycle of an idea being some sort of spectrum. At the beginning of the spectrum is the birth of the idea. Further to the right, the idea gets refined some. Perhaps 1/4 the way through the person who has the idea texts some friends about it. Perhaps midway through it is refined enough where a rough draft is shared with some other friends. Perhaps 3/4 the way through a blog post is shared. Then further along, the idea receives more refinement, and maybe a follow up post is made. Perhaps towards the ve... (read more)

2ChristianKl7mo
It sounds to me like in a more normal case it begins not with texting friends but with talking to them in person about the idea. For that to happen you usually need a good in-person community. These days more is happening via Zoom, but reaching out to chat online still isn't as easy as going to a meetup.
2adamzerner7mo
Perhaps. I'm not sure.

Inconsistency as the lesser evil

It bothers me how inconsistent I am. For example, consider covid-risk. I've eaten indoors before. Yet I'll say I only want to get coffee outside, not inside. Is that inconsistent? Probably. Is it the right choice? Let's say it is, for argument's sake. Does the fact that it is inconsistent matter? Hell no!

Well, it matters to the extent that it is a red flag. It should prompt you to have some sort of alarms going off in your head that you are doing something wrong. But the proper response to those alarms is to use that as an op... (read more)

2Dagon7mo
Inconsistency is a pointer to incorrectness, but I don't think that example is inconsistent. There's a reference class problem involved - eating a meal and getting a coffee, at different times, with different considerations of convenience, social norms, and personal state of mind, are just not the same decision.
2adamzerner7mo
I hear ya. In my situation I think that when you incorporate all of that and look at the resulting payoffs and probabilities, it does end up being inconsistent. I agree that it depends on the situation though.

The other day I was walking to pick up some lunch instead of having it delivered. I also had the opportunity to freelance for $100/hr (not always available to me), but I still chose to walk and save myself the delivery fee.

I make similarly irrational decisions about money all the time. There are situations where I feel like other mundane tasks should be outsourced. Eg. I should trade my money for time, and then use that time to make even more money. But I can't bring myself to do it.

Perhaps food is a good example. It often takes me 1-2 hours to "do" dinner... (read more)

5Dagon7mo
I suspect there are multiple things going on.

First and foremost, the vast majority of uses of time have non-monetary costs and benefits, in terms of enjoyment, human interaction, skill-building, and even less-legible things than those. After some amount of satisficing, money is no longer a good common measurement for non-comparable things you could do to earn or spend it.

Secondly, most of our habits on the topic are developed in a situation where hourly work is not infinitely available at attractive rates. The marginal hour of work, for most of us, most of the time, is not the same as our average hour of work. In the case where you have freelance work available that you could get $1.67/minute for any amount of time you choose, and you can do equally-good (or at least equally-valuable) work regardless of state of mind, your instincts are probably wrong - you should work rather than do any non-personally-valuable chores that you can hire out for less than this.
1acylhalide7mo
Assume that the number of hours you can work per day without burning out is constant, and you are already at this limit. Then ordering food + doing work instead of cooking only reshuffles the hours you don't work from one activity to another. In practice yeah a lot of things impact both your personal satisfaction and the amount of work you can do without burning out - including the nature of your hobbies, nature of work, order in which you do them (rather than just number of hours), etc. Money isn't your goal, it's instrumental to get you life satisfaction. Making money is at some level trading satisfaction in present for satisfaction in future, assuming you don't love your job. There's a minimum amount of satisfaction needed in the present to ensure you see a future at all.
2adamzerner7mo
Agreed if we assume this premise is true, but I don't think it is often true.
1acylhalide7mo
Agreed. I'm not sure what the original question is - isn't the answer as simple as valuing present satisfaction over a multiple of future satisfaction?
2adamzerner7mo
The original question is based on the observation that a lot of people, including me, including rationalists, do things like spending an hour of time to save $5-10 when their time is presumably worth a lot more than that, and in contexts where burnout or dips in productivity wouldn't explain it. So my question is whether or not this is something that makes sense. I feel moderately strongly that it doesn't actually make sense, and that what Eliezer alludes to in Money: The Unit of Caring [https://www.lesswrong.com/posts/ZpDnRCeef2CLEFeKM/money-the-unit-of-caring] is what explains the phenomenon.
1acylhalide7mo
Got it. Maybe I just can't relate to this feeling you describe.
1JBlack7mo
One thing strikes me: you appear to be supposing that apart from how much money is involved, every possible activity per hour is equally valuable to you in itself. This is not required by rationality unless you have a utility function that depends only upon money and a productivity curve that is absolutely flat. Maybe money isn't everything to you? That's rationally allowed. Maybe you actually needed a break from work to clear your head for the rest of the afternoon or whatever? That's rationally allowed too. It's even allowed for you to not want to do that freelancing job instead of going for a walk at that time, though in that case you might consider the future utility of the net $90 in getting other things that you might want.

Regarding food, do you dislike cooking for yourself more than doing more work for somebody else? Do you actually dislike cooking at all? Do you value deciding what goes into your body and how it is prepared? How much of your hourly "worth" is compensation for having to give up control of what you do during that time? How much is based on the mental or physical "effort" you need to put into it, which may be limited? How much is not wanting to sell your time much more cheaply than they're willing to pay? Rationality does not forbid that any of these should be factors in your decisions.

On the startup example, my experience and those of everyone else I've talked to who have done it successfully is that leading a startup is hell, even if it's just a small scale local business. You can't do it part time or even ordinary full time, or it will very likely fail and make you less than nothing. If you're thinking "I could spend some of my extra hours per week on it", stop thinking it because that way lies a complete waste of time and money.
2adamzerner7mo
No, I am not supposing that. Let me clarify. Consider the example of me walking to pick up food instead of ordering it. Suppose it takes a half hour and I could have spent that half hour making $50 instead. The way I phrased it:
  • Option #1: Spend $5 to save myself the walk and spend that time freelancing to earn $50, netting me $45.
  • Option #2: Walk to pick up the food, not spending or earning anything.
The problem with that phrasing is that dollars aren't what matter, utility is, as you allude to. My point is that it still seems like people often make very bad decisions. In this example, the joy of walking versus freelancing + any productivity gains are not worth $45, I don't think. I do agree that this doesn't last forever though. At some point you get so exhausted from working that the walk has big productivity benefits, the work would be very unpleasant, and the walk would be a very pleasant change of pace.

Tangential, but Paul Graham wouldn't call [http://paulgraham.com/growth.html] that a startup.

I disagree here. 1) I know of real-life counterexamples. I'm thinking of people I met at an Indie Hackers meetup I used to organize. 2) It doesn't match my model of how things work.

Betting is something that I'd like to do more of. As the LessWrong tag explains, it's a useful tool to improve your epistemics.

But finding people to bet with is hard. If I'm willing to bet on X with Y odds and I find someone else eager to take the bet, it's probably because they know more than me and I am wrong. So I update my belief and then we can't bet.

But in some situations it works out with a friend, where there is mutual knowledge that we're not being unfair to one another, and just genuinely disagree, and we can make a bet. I wonder how I can do this more often. And I wonder if some sort of platform could be built to enable this to happen in a more widespread manner.
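The reason a genuine disagreement makes a bet possible can be made concrete: if two people assign different probabilities to X, any price strictly between those probabilities gives both sides positive expected value by their own lights. A minimal sketch with invented numbers:

```python
# Both honest disagreers expect to profit at any price between their probabilities.
# The probabilities and price below are invented for illustration.
p_alice, p_bob = 0.7, 0.4   # each person's P(X happens)
price = 0.55                # Alice pays `price` now, receives $1 if X happens

ev_alice = p_alice - price  # Alice's expected value per $1 stake: +0.15
ev_bob = price - p_bob      # Bob's expected value per $1 stake:   +0.15
print(f"Alice: {ev_alice:+.2f}, Bob: {ev_bob:+.2f}")
```

The catch noted above is adverse selection: a stranger's eagerness to take the other side is itself evidence that their probability is the better-calibrated one, which is why bets among friends with mutual knowledge are easier to arrange.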

Idea: Athletic jerseys, but for intellectual figures. Eg. "Francis Bacon" on the back, "Science" on the front.

I've always heard of the veil of ignorance being discussed in a... social(?) context: "How would you act if you didn't know what person you would be?". A farmer in China? Stock trader in New York? But I've never heard it discussed in a temporal context: "How would you act if you didn't know what era you would live in?" 2021? 2025? 2125? 3125?

This "temporal veil of ignorance" feels like a useful concept.

I just came across an analogy that seems applicable for AI safety.

AGI is like a super powerful sports car that only has an accelerator, no brake pedal. Such a car is cool. You'd think to yourself:

Nice! This is promising! Now we have to just find ourselves a brake pedal.

You wouldn't just hop in the car and go somewhere. Sure, it's possible that you make it to your destination, but it's pretty unlikely, and certainly isn't worth the risk.

In this analogy, the solution to the alignment problem is the brake pedal, and we really need to find it.

2adamzerner7mo
(I'm not as confident in the following, plus it seems to fit as a standalone comment rather than on the OP.) Why do we really need to find it? Because we live in a world where people are seduced by the power of the sports car. They are in a competition to get to their destinations as fast as possible and are willing to be reckless in order to get there. Well, that's the conflict theory [https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/] perspective. The mistake theory perspective is that people simply think they'll be fine driving the car without the brakes. That sounds crazy. And it is crazy! But think about it this way. (The analogy starts to break down a bit here.) These people are used to driving wayyyy less powerful cars. Sometimes these cars don't have brakes at all, other times they have mediocre brake systems. Regardless, it's not that dangerous. These people understand that the sports car is in a different category and is more dangerous, but they don't have a good handle on just how much more dangerous it is, and how it is totally insane to try to drive a car like that without brakes.

We can also extend the analogy in a different direction (although the analogy breaks down when pushed in this direction as well). Imagine that you develop brakes for this super powerful sports car. Awesome! What do you do next? You test them. In as many ways as you can. However, with AI, we can't actually do this. We only have one shot. We just have to install them, hit the road, and hope they work. (Hm, maybe the analogy does work. Iirc, super powerful racing cars are built to only be driven once/a few times. There's a trade-off between performance and how long the car lasts. And for races, they go all the way towards the performance side of the spectrum.)

Alice, Bob, and Steve Jobs

In my writing, I usually use the Alice and Bob naming scheme. Alice, Bob, Carol, Dave, Erin, etc. Why? The same reason Steve Jobs wore the same outfit everyday: decision fatigue. I could spend the time thinking of names other than Alice and Bob. It wouldn't be hard. But it's just nice to not have to think about it. It seems like it shouldn't matter, but I find it really convenient.

Epistemic status: Rambly. Perhaps incoherent. That's why this is a shortform post. I'm not really sure how to explain this well. I also sense that this is a topic that is studied by academics and might be a thing already.

I was just listening to Ben Taylor's recent podcast on the top 75 NBA players of all time, and a thought started to crystallize for me that I have always wanted to develop. For people who don't know him (everyone reading this?), his epistemics are quite good. If you want to see good epistemics applied to basketball, read his series of posts... (read more)

I wonder if it would be a good idea to groom people from an early age to do AI research. I suspect that it would. Ie. identify who the promising children are, and then invest a lot of resources towards grooming them. Tutors, therapists, personal trainers, chefs, nutritionists, etc.

Iirc, there was a story from Peak: Secrets from the New Science of Expertise about some parents that wanted to prove that women can succeed in chess, and raised three daughters doing something sorta similar but to a smaller extent. I think the larger point being made was that if you ... (read more)

I suspect that the term "cognitive" is often over/misused.

Let me explain what my understanding of the term is. I think of it as "a disagreement with behaviorism". If you think about how psychology progressed as a field, first there was Freudian stuff that wasn't very scientific. Then behaviorism emerged as a response to that, saying "Hey, you have to actually measure stuff and do things scientifically!" But behaviorists didn't think you could measure what goes on inside someone's head. All you could do is measure what the stimulus is and then how the human... (read more)

1JBlack9mo
The long-standing meaning of "cognitive" for hundreds of years before cognitive psychologists was having to do with knowledge, thinking, and perception. A cognitive bias is a bias that affects your knowledge, thinking, and/or perception. Epistemic bias is a fine term for those cognitive biases that are specifically biases of beliefs. Not all cognitive biases are of that form though, even when they might fairly consistently lead to certain types of biases in beliefs.
2adamzerner9mo
Hm, can you think of any examples of cognitive biases that aren't about beliefs? You mention that the term "cognitive" also has to do with perception. When I hear "perception" I think sight, sound, etc. But biases in things like sight and sound feel to me like they would be called illusions, not biases.
1JBlack9mo
The first one to come to mind was Recency Bias, but maybe I'm just paying that one more attention because it came up recently. Having noticed that bias in myself, I consulted an external source [https://en.wikipedia.org/wiki/List_of_cognitive_biases] and checked that rather a lot of them are about preferences, perceptions, reactions, attitudes, attention, and lots of other things that aren't beliefs. They do often misinform beliefs, but many of the biases themselves seem to be prior to belief formation or evaluation.
2adamzerner9mo
Ah, those examples have made the distinction between biases that misinform beliefs and biases of beliefs clear. Thanks! As someone who seems to understand the term better than I do, I'm curious whether you share my impression that the term "cognitive" is often misused. As you say, it refers to a pretty broad set of things, and I feel like people use the term "cognitive" when they're actually trying to point to a much narrower set of things.

Everyone hates spam calls. What if a politician campaigned to address little annoyances like this? Seems like it could be a low hanging fruit.

2habryka1y
Depends on what you mean by "low-hanging fruit". I think there are lots of problems like this where addressing them seems net-positive, but it doesn't seem anywhere close to the most important thing I would recommend politicians do.
2adamzerner1y
By low-hanging fruit I mean 1) non-trivial boost in electability and 2) good effort-to-reward ratio relative to other things a politician can focus on. I agree that there are other things that would be more impactful, but perhaps there is room to do those more impactful things along with smaller, less impactful things.
2Dagon1y
I don't think there IS much low-hanging fruit. Seemingly-easy things are almost always more complicated, and the credit for deceptively-hard things skews the wrong way: promising and failing hurts a lot (didn't even do this little thing), promising and succeeding only helps a little (thanks, but what important things have you done?). Much better, in politics, to fail at important topics and get credit for trying.