All of dynomight's Comments + Replies

Alcohol, health, and the ruthless logic of the Asian flush

In principle, I guess you could also think about low-tech solutions. For example, people who want to opt out of alcohol might have some slowly dissolving tattoo / dye placed somewhere on their hand or something. This would eliminate the need for any extra ID checks, but has the big disadvantage that it would be visible most of the time.

Pattern (+2, 10d): Combine it with getting entrance to a place. It doesn't have to last too long, just long enough.
Alcohol, health, and the ruthless logic of the Asian flush

Thanks. Are you able to determine what the typical daily dose is for implanted disulfiram in Eastern Europe? People who take oral disulfiram typically need something like 0.25 g/day to have a significant physiological effect. However, most of the evidence I've been able to find (e.g. this paper) suggests that the total amount of disulfiram in implants is around 1 g. If that's dispensed over a year, you're getting like 1% of the dosage that's active orally. On top of that, the evidence seems pretty strong that bioavailability from implants is lower than from... (read more)

alexlyzhov (+4, 15d): Yep, the first Google result http://xn--80akpciegnlg.xn--p1ai/preparaty-dlya-kodirovaniya/disulfiram-implant/ (in Russian) says that you use an implant with 1-2 g of the substance for 5-24 months and that "the minimum blood level of disulfiram is 20 ng/ml". This paper https://www.ncbi.nlm.nih.gov/books/NBK64036/ says "Mild effects may occur at blood alcohol concentrations of 5 to 10 mg/100 mL."
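The dosage gap described above can be checked with quick arithmetic (a sketch using the figures quoted in the thread: a 1 g implant assumed to dissolve evenly over one year, versus a 0.25 g/day oral dose):

```python
# Rough dosage comparison: disulfiram implant vs. oral dosing.
# Figures are the ones quoted in the thread; even dissolution over
# a year is an assumption for illustration.
implant_total_mg = 1000   # total disulfiram in a typical implant
days = 365                # assumed release period
oral_daily_mg = 250       # typical effective oral dose per day

implant_daily_mg = implant_total_mg / days
fraction_of_oral = implant_daily_mg / oral_daily_mg

print(f"{implant_daily_mg:.2f} mg/day from the implant")
print(f"{fraction_of_oral:.1%} of the oral dose")
```

This is where the "like 1% of the dosage" figure in the parent comment comes from.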
Alcohol, health, and the ruthless logic of the Asian flush

Very interesting! Do you know how much disulfiram the implant releases per day? There's a bunch of papers on implants, but there are usually concerns (a) that the dosage might be much smaller than the typical oral dosage and/or (b) that absorption is poor.

Alcohol, health, and the ruthless logic of the Asian flush

I specified (right before the first graph) that I was using the US standard of 14g. (I know the paper uses 10g. There's no conflict because I use their raw data which is in g, not drinks.)

MikkW (+3, 17d): Sorry, my oversight.
Alcohol, health, and the ruthless logic of the Asian flush

Ironically, there is no standard for what a "standard drink" is, with different countries defining it to be anything from 8g to 20g of ethanol.

ChristianKl (-3, 17d): Then it makes a lot of sense to specify what standard is used in the statistics you cite. Without a defined standard a claim like the one you made feels bullshitty to me.
Alcohol, health, and the ruthless logic of the Asian flush

I wasn't (intentionally?) being ironic. I guess that for underage drinking we have the advantage that you can sort of guess how old someone looks, but still... good point.

ZeitPolizei (+2, 15d): The main advantage for underage drinking is that a bartender only has to check the birth date on the ID, whereas for self-exclusion, they would have to check the ID against a database, or there would have to be some kind of icon on the ID.
The irrelevance of test scores is greatly exaggerated

I've politely contacted them several times via several different channels, just asking for clarifications and what the "missing coefficients" are in the last model. Total stonewall: they won't even acknowledge my contacts. Some people more connected to the education community apparently also reached out as a result of my post, with the same result.

How is rationalism different from utilitarianism?

You could model the two as being totally orthogonal:

  • Rationality is the art of figuring out how to get what you want.
  • Utilitarianism is a calculus for figuring out what you should want.

In practice, I think the dividing lines are more blurry. Also, the two tend to come up together because people who are attracted to the thinking in one of these tend to be attracted to the other as well.

Simpson's paradox and the tyranny of strata

You definitely need an amount of data at least exponential in the number of parameters, since the number of "bins" is exponential. (It's not so simple as to say that exponential is enough, because it depends on the distributional overlap: if there are cases where one group never hits a given bin, then even an infinite amount of data doesn't save you.)
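To illustrate why the data requirement blows up (a sketch, not from the original comment): with binary features, the number of strata doubles with each feature you stratify on.

```python
# Number of strata grows exponentially with the number of features.
# With d binary features there are 2**d bins; even at modest d,
# most bins will be empty unless the dataset is enormous.
def num_strata(levels_per_feature: int, num_features: int) -> int:
    return levels_per_feature ** num_features

for d in (5, 10, 20, 30):
    print(d, num_strata(2, d))
```

At 30 binary features you'd already need on the order of a billion observations just to put one data point in each bin.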

Simpson's paradox and the tyranny of strata

I see what you're saying, but I was thinking of a case where there is zero probability of overlap among all features. While that technically restores the property that you can multiply the dataset by arbitrarily large numbers, it feels a little like "cheating", and I agree with your larger point.

I guess Simpson's paradox does always have a right answer in "stratify along all features", it's just that the amount of data you need increases exponentially in the number of relevant features. So I think that in the real world you can multiply the amount of... (read more)
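For readers who want to see "stratify along all features" in action, here's a minimal example using the classic kidney-stone numbers (not data from the post): treatment A beats B within each stratum, yet loses in aggregate.

```python
# Classic Simpson's paradox: (successes, total) per stratum and arm.
data = {
    "small": {"A": (81, 87), "B": (234, 270)},
    "large": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, total):
    return successes / total

# Within each stratum, A has the higher success rate.
for stratum, arms in data.items():
    ra, rb = rate(*arms["A"]), rate(*arms["B"])
    print(stratum, f"A={ra:.2f}", f"B={rb:.2f}")

# Pooled over strata, B looks better: the paradox.
agg = {arm: rate(sum(data[s][arm][0] for s in data),
                 sum(data[s][arm][1] for s in data))
       for arm in ("A", "B")}
print("aggregate", {k: round(v, 2) for k, v in agg.items()})
```

Stratifying resolves the reversal here because there are only two strata; with many relevant features, the number of strata (and the data needed to fill them) explodes as described above.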

It's hard to use utility maximization to justify creating new sentient beings

I like your concept that the only "safe" way to use utilitarianism is if you don't include new entities (otherwise you run into trouble). But I feel like they have to be included in some cases. E.g., if I knew that getting a puppy would make me slightly happier, but the puppy would be completely miserable, surely that's the wrong thing to do?

(PS thank you for being willing to play along with the unrealistic setup!)

Message Length

This covers a really impressive range of material -- well done! I just wanted to point out that if someone followed all of this and wanted more, Shannon's 1948 paper is surprisingly readable even today and is probably a nice companion:

http://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf
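As a small companion to the material above (a sketch using standard formulas, not code from the post): Shannon's source-coding result says the average message length per symbol, for a memoryless source, can't beat the entropy H = -Σ p·log2(p).

```python
import math

# Shannon entropy: a lower bound (in bits per symbol) on the average
# code length achievable for a memoryless source.
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = entropy([0.25] * 4)                 # 4 equally likely symbols
skewed = entropy([0.5, 0.25, 0.125, 0.125])   # predictable source
print(uniform)  # 2.0 bits: no code can beat plain 2-bit labels
print(skewed)   # 1.75 bits: a Huffman code with lengths 1,2,3,3 achieves this
```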

It's hard to use utility maximization to justify creating new sentient beings

Well, it would be nice if we happened to live in a universe where we could all agree on an agent-neutral definition of the best actions to take in each situation. It seems that we don't live in such a universe, and that our ethical intuitions are indeed sort of arbitrarily created by evolution. So I agree we don't need to mathematically justify these things (and maybe it's impossible), but I wish we could!

It's hard to use utility maximization to justify creating new sentient beings

If I understand your second point, you're suggesting that part of why our intuition says larger populations are better is that larger populations tend to make the average utility higher. I like that! It would be interesting to try to estimate at what human population level average utility would be highest. (In hunter/gatherer or agricultural times, probably very low levels. Today, probably a lot higher?)

It's hard to use utility maximization to justify creating new sentient beings

Can you clarify which answer you believe is the correct one in the puppy example? Or, even better: the utility for the dog in the "yes puppy" example is currently 5; for what values do you believe it is correct to have or not have the puppy?

Dagon (+1, 8mo): Given the setup (which I don't think applies to real-world situations, but that's the scenario given) that they aggregate preferences, they should get a dog whether or not they value the dog's preferences: 10 + 10 < 14 + 8 if they think of the dog as an object, and 10 + 10 < 14 + 8 + 5 if they think the dog has intrinsic moral relevance.

It would be a more interesting example if the "get a dog" utilities were 11 and 8 for C and B. In that case, they should NOT get a dog if the dog doesn't count in itself, and they SHOULD get a dog if it counts.

But, of course, they're ignoring a whole lot of options (rows in the decision matrix). Perhaps they should rescue an existing dog rather than bringing another into the world.
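Dagon's comparisons can be spelled out directly (utilities as given in the comment; the variable names are mine):

```python
# Utility comparisons from Dagon's comment.
no_dog = 10 + 10  # couple's combined utility without a dog

# Original numbers: getting the dog wins either way.
assert no_dog < 14 + 8        # dog treated as an object
assert no_dog < 14 + 8 + 5    # dog's own utility (5) also counts

# Modified numbers (11 and 8): the dog's moral status decides it.
assert not (no_dog < 11 + 8)  # object view: don't get the dog
assert no_dog < 11 + 8 + 5    # intrinsic-value view: get the dog

print("all comparisons check out")
```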
Police violence: The veil of darkness

My guess is that the problem is I didn't make it clear that this is just the introduction from the link? Sorry, I edited to clarify.

Kenny (+1, 8mo): Yes, that was it – thanks! No worries tho! I'm not aware of any good and common convention here for handling link posts. I like to post the link and then my own separate commentary, but I've also seen a lot of people go to the opposite extreme and cross-post here. For this post, it would have been much less confusing had you quoted the entire last paragraph of the intro and also added something like "Read the rest here". I like putting "[Link] ..." in the title of my link posts here too, so that that info is available for people skimming titles. (I don't think that's always necessary or should be required; just a personal preference.)

What's the theory for why "state patrol agencies" are less racist/biased than "municipal police departments"? This is a hard topic to discuss rationally (or reasonably) because of politics. I also worry there's a large 'mistake theory vs conflict theory' dynamic too.

I like your idea of analyzing a bunch of dimensions, e.g. age, gender, income/wealth, education, and political identification, for things like police traffic stops and vehicle searches. That's something Andrew Gelman suggests a lot: https://statmodeling.stat.columbia.edu/2017/06/25/analyze-comparisons-thats-better-looking-max-difference-trying-multiple-comparisons-correction/

It'd be nice if the researchers for the studies you reference in your post had also published their data. (Did they? I expect they didn't – but I haven't checked.)
Doing discourse better: Stuff I wish I knew

Totally agree that the different failure modes are in reality interrelated and dependent. In fact, one ("necessary despot") is a consequence of trying to counter some of the others. I do feel that there's enough similarity between some of the failure modes at different sites that it's worth trying to name them. The temporal dimension is also an interesting point: I actually went back and looked at some of the comments on Marginal Revolution posts from years ago. They are pretty terrible today, but years ago they were quite good.

Comparative advantage and when to blow up your island

In principle, for work done for market, I guess you don't need to explicitly think about free trade. Rather, by everyone pursuing their own interests ("how much money can I make doing this?") they'll eventually end up specializing in their comparative advantage anyway. Though, with finite lifetime, you might want to think about it to short-circuit "eventually".

For stuff not done for market (like dividing up chores), I'd think there's more value in thinking about it explicitly. That's because there's no invisible hand naturally pushing people toward their comparative advantage so you're more likely to end up doing things inefficiently.
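To make the chores case concrete (hypothetical numbers, not from the post): comparative advantage is about opportunity cost, not absolute speed.

```python
# Comparative advantage with made-up numbers: Alice is faster at
# everything (absolute advantage), but specialization still helps.
# Times in minutes per unit of each chore.
times = {
    "Alice": {"cooking": 30, "cleaning": 20},
    "Bob":   {"cooking": 90, "cleaning": 40},
}

# Opportunity cost of cooking one meal, measured in cleanings forgone.
for person, t in times.items():
    cost = t["cooking"] / t["cleaning"]
    print(f"{person}: 1 meal costs {cost:.2f} cleanings")
```

Alice's meal costs 1.5 cleanings while Bob's costs 2.25, so Alice has the comparative advantage in cooking: she should cook and Bob should clean, even though she's faster at both chores. Without a price signal, nothing pushes a household toward this split unless they reason it out.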

Making the Monte Hall problem weirder but obvious

Thanks for pointing this out. I had trouble with the image formatting trying to post it here.

Making the Monte Hall problem weirder but obvious

That's definitely the central insight! However, experimentally, I found that explanation alone was only useful for people who already understood Monty Hall pretty well. The extra steps (the "10 doors" step and the "Monty promising") seem to lose fewer people.

That being said, my guess is that most lesswrong-ites probably fall into the "already understood Monty Hall" category, so...

frontier64 (+5, 9mo): A few months ago I tried a similar process to this with my dad, who's pretty smart but like most does not know the Monty Hall Problem. I put three cards down, showed him one ace which is the winner, shuffled the cards so that only I knew where the ace was, and told him to pick a card, after which I would flip over one of the other loser cards. We went through it and he said that it didn't matter whether he switched or not, 50-50. Luckily he did not pick the ace the first time, so there was a bit of a uh huh moment.

I repeated the process except using 10 total cards. As I was revealing the loser cards one by one, he started to understand that his chances were improving. But he still thought that at the end it's a 50-50 between the card he chose and the remaining card, although his resolve was wavering at that point. I hinted, "What was your chance of selecting the ace the first time?", he said, "1 out of 10", and then I gave him the last hint he needed, saying, "And if you selected a loser, what is that other card there?" A few seconds later it clicked for him, and he understood his odds were 9/10 to switch with the 10 cards and 2/3 to switch with the 3 cards.

He ended up giving me additional insight when he asked what would happen if I didn't know which card was the ace, I flipped cards at random, and we discarded all the worldlines where I flipped over an ace. We worked on that situation for a while and discovered that the choice to switch at the end really is a 50-50. I did not expect that.
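Both variants in this thread, the knowing host and frontier64's "ignorant host who flips at random, discarding worldlines where the ace appears", can be checked by simulation (a sketch I wrote for this archive, not code from the post):

```python
import random

def monty_trial(n_doors, host_knows, rng):
    """One round. Returns True if switching wins, None if the run is
    discarded (ignorant host accidentally revealed the prize)."""
    prize = rng.randrange(n_doors)
    pick = rng.randrange(n_doors)
    others = [d for d in range(n_doors) if d != pick]
    if host_knows:
        # Knowing host opens all other doors but one, never the prize.
        keep = prize if prize in others else rng.choice(others)
    else:
        # Ignorant host leaves a random door closed and opens the rest;
        # discard worldlines where the prize got flipped over.
        keep = rng.choice(others)
        if prize in (d for d in others if d != keep):
            return None
    return keep == prize  # switching wins iff the kept door hides the prize

def switch_win_rate(n_doors, host_knows, trials=200_000, seed=0):
    rng = random.Random(seed)
    kept = [r for r in (monty_trial(n_doors, host_knows, rng)
                        for _ in range(trials)) if r is not None]
    return sum(kept) / len(kept)

print(switch_win_rate(3, host_knows=True))    # about 2/3
print(switch_win_rate(10, host_knows=True))   # about 9/10
print(switch_win_rate(3, host_knows=False))   # about 1/2
```

The ignorant-host result matches frontier64's dad's discovery: once you condition on the ace never being revealed by accident, switching really is a coin flip, for any number of doors.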