mmm, beef tallow is pretty 'in' these days, no? I also think there's got to be some mileage in optimizing to find the bliss point.
I don't think the decision theory described here is correct. (I've read Planecrash.)
Specifically, there's an idea in glowfic that it should be possible for lawful deities to follow a policy wherein counterparties can give them arbitrary information, on the condition that information is not used to harm the information-provider. This could be as drastic as "I am enacting my plan to assassinate you now, and would like you to propose edits that we both would want to make to the plan"!
I think this requires agreement ahead of time, and is not the default mode of conversation. ("Can I tell you something, and you won't get mad?" is a request, not a magic spell to prevent people from getting mad at you.) I also think it's arguably something that people should rarely agree to. Many people don't agree to the weaker condition of secrecy, because the information they're about to receive is probably less valuable than the costs of partitioning their mind or keeping information secret. In situations where you can't use the information against your enemies (like two glowfic gods interacting), the value of the information is even lower, and situations where it makes sense to do such an exchange are even rarer. (Well, except for the part where glowfic gods can very cheaply partition their minds, so keeping secrets or doing pseudohypothetical reasoning is in fact much cheaper for them than it is for humans.)
That is, I think this is mostly a plot device that allows for neat narratives, not a norm that you should expect people to follow, or to get called out for violating.
[This is not a complete treatment of the issue; I think most treatments of it only handle one pathway, the "this lets you get information you can use for harm reduction" pathway, and in fact in order to determine whether or not an agent should do it, you must consider all relevant pathways. But I think the presumption should not be "the math pencils out here", and I definitely don't think the math pencils out in interacting with Oli. I think characterizing that as "Oli is a bad counterparty" instead of something like "Oli doesn't follow glowfic!lawful deity norms" or "I regret having Oli as a counterparty" is impolite.]
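To make the mind-partitioning point concrete, here's a toy sketch of what the glowfic policy asks a receiver to implement. This is entirely my own construction--the class and method names are made up--but the shape is: disclosures get quarantined per-discloser, and are only consulted when the action under consideration doesn't target that discloser.

```python
# Toy model of the glowfic "protected disclosure" policy: information received
# under the deal is quarantined per-discloser, and is never consulted when
# acting against that discloser. All names are made up for illustration.

class ProtectedReceiver:
    def __init__(self):
        self.ordinary_knowledge = []   # usable for anything
        self.quarantined = {}          # discloser -> facts with strings attached

    def receive_protected(self, discloser, fact):
        """Accept a disclosure under the no-harm condition."""
        self.quarantined.setdefault(discloser, []).append(fact)

    def knowledge_for_action(self, target):
        """Everything the agent may consult when acting against `target`."""
        usable = list(self.ordinary_knowledge)
        for discloser, facts in self.quarantined.items():
            if discloser != target:    # the one hard rule of the policy
                usable.extend(facts)
        return usable
```

The cost I'm pointing at above is visible here: every decision now requires bookkeeping about which facts came from whom, which is cheap for a god with partitionable mind-state and expensive for a human.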
Specifically, this is the privacy policy inherited from when LessWrong was a MIRI project; to the best of my knowledge, it hasn't been updated.
Would it be better for society if we had mediocrities in those positions and my friends working as welders, craftsmen, local business owners &c?
I think of this often when it comes to teaching--many women who are now doctors would have been teachers (or similar) a hundred years ago, and so now very smart children don't come into contact with many very smart adults until they themselves are adults (or at magnet programs or events or so on).
But whenever I try to actually put numbers to it, it's pretty clear that the sort is in fact helping. Yes, education is worse, but the other fields are better, and the prices are actually conveying information about relative desirability here.
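Here's the kind of back-of-the-envelope I mean, with entirely made-up numbers: treat wages as a rough proxy for marginal product, and compare total output when top talent sorts into the highest-paying field versus staying in teaching.

```python
# Back-of-the-envelope for "is the sort helping?", with made-up numbers.
# Values are marginal product (arbitrary units) of a top-tier vs. average
# worker in each field; wages are assumed to roughly track these.

fields = {
    # field: (top-tier worker's output, average worker's output)
    "medicine": (10.0, 4.0),
    "teaching": (6.0, 3.0),
}

def total_output(assignment):
    """assignment maps each field to 'top' or 'average' staffing."""
    return sum(fields[f][0] if who == "top" else fields[f][1]
               for f, who in assignment.items())

sorted_world = {"medicine": "top", "teaching": "average"}    # prices sort talent
swapped_world = {"medicine": "average", "teaching": "top"}   # talent stays in teaching

print(total_output(sorted_world))   # 13.0: worse teaching, but more total output
print(total_output(swapped_world))  # 10.0
```

Of course, the conclusion is only as good as the assumption that wages track marginal product--which is exactly the "prices convey information" claim.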
It’s also a bit jarring to read such a pessimistic book and then reach the kind of rosy optimism about international cooperation otherwise associated with such famous delusions as the Kellogg-Briand Pact (which banned war in 1928 and … did not work out).
The authors also repeatedly analogize AI to nuclear weapons and yet they never mention the fact that something very close to their AI proposal played out in real life in the form of the Baruch Plan for the control of atomic energy (in brief, this called for the creation of a UN Atomic Energy Commission to supervise all nuclear projects and ensure no one could build a bomb, followed by the destruction of the American nuclear arsenal). Suffice it to say that the Baruch Plan failed, and did so under circumstances much more favorable to its prospects than the current political environment with respect to AI. A serious inquiry into the topic would likely begin there.
I think the core point for optimism is that leaders in the contemporary era often don't pay the costs of war personally--but nuclear war changes that. It in fact was not in the interests of the elites of the US or the USSR to start a hot war, even if their countries might eventually be better off by being the last country standing. Similarly, the US or China (as countries) might be better off if they summon a demon that is painted their colors--but it will probably not be in the interests of either the elites or the populace to summon a demon.
So the core question is the technical one--is progress towards superintelligence summoning a demon, or probably going to be fine? It seems like we only know how to do the first one, at the moment, which suggests in fact people should stop until we have a better plan.
[I do think the failure of the Baruch Plan means that humanity is probably going to fail at this challenge also. But it still seems worth trying!]
Directly from the farm--if there's not one near you, you might be out of luck.
Eating the largest possible animal means less suffering per kg.
I think this is the right general trend but the details matter and make it probably not true. I think cow farming is probably more humane than elephant farming or whale farming would be.
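To illustrate both the general trend and why the details matter, here's a rough per-kg calculation. The yields and lifespans are ballpark figures, and the welfare_penalty weights are pure assumption on my part:

```python
# Rough sketch of "suffering per kg" across animals of different sizes.
# All numbers are ballpark or made up; welfare_penalty is an assumed
# per-day suffering weight (higher = worse farming conditions).

animals = {
    # animal: (meat_kg, days_lived, welfare_penalty)
    "broiler chicken": (1.5,     45,   1.0),
    "cow":             (250.0,   550,  0.4),
    "elephant":        (1800.0,  5500, 2.0),  # slow-growing, hard to farm humanely
}

for name, (meat_kg, days, penalty) in animals.items():
    print(f"{name}: {days * penalty / meat_kg:.2f} suffering-days per kg")

# broiler chicken: 30.00, cow: 0.88, elephant: 6.11 -- bigger usually wins,
# because one life amortizes over more kilograms, but the welfare_penalty
# term is where "the details matter" and can flip the ordering.
```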
If you have the ability, have your own hens. It’s a really rewarding experience and then you can know for sure that the hens are happy and treated well.
Unfortunately, I'm moderately uncertain about this. I think chickens have been put under pretty tremendous selection pressure and their internal experiences might be quite bad, even if their external situations seem fine to us. I'm less worried about this if you pick a heritage breed (which will almost definitely have worse egg production), which you might want to do anyway for decorative reasons.
Similarly, consider ducks (duck eggs are a bit harder to come by than chicken eggs, but Berkeley Bowl stocks them and many duck farms deliver eggs; they're generally eaten by people with allergies to chicken eggs) or ostriches (by similar logic to cows--though since they lay giant eggs instead of lots of eggs, it's a much less convenient form factor).
Knowing that a godlike superintelligence with misaligned goals will squish you might be an easy call, but knowing exactly what the state of alignment science will be when ASI is first built is not.
Hmm, I feel more on the Eliezer/Nate side of this one. I think it's a medium call that capabilities science advances faster than alignment science, and so we're not on track without drastic change. (Like, the main counterargument is negative alignment tax, which I do take seriously as a possibility, but I think probably doesn't close the gap.)
I think it's called a reverse sear because the 'sear' step happens second--after the low-and-slow cooking--whereas the more common technique is to start with the high heat to get the browning, and then lower the temperature.
Note that bacteria grow faster at warmer temperatures, up until the temperature at which they die. (125°F, one of the temperatures mentioned in the article, is not hot enough to kill bacteria, and is thus one of the worst parts of the Danger Zone.) For a large cut of meat like a steak, you're mostly worried about what's on the outside, so a quick sear at a high temperature kills the surface bacteria, and then you can comfortably cook at a lower temperature. My best guess is this is not a major problem at the times discussed here (30 minutes in the danger zone is within USDA guidelines), but it was probably a worse idea back when food safety was worse. Also note that once the steak is in the oven, the ambient oven temperature is above the danger zone, so you don't need to worry about the surface or about contamination from the oven.
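As a sanity check on the "30 minutes" figure, here's the cumulative-time arithmetic. The stage durations are my illustrative guesses, and the 2-hour figure is the commonly cited USDA cumulative limit for time in the danger zone:

```python
# Cumulative danger-zone (40-140 F) accounting for a reverse sear.
# Stage durations are illustrative guesses, not measured values.

USDA_LIMIT_MIN = 120  # commonly cited 2-hour cumulative limit

stages_min = {
    "resting at room temperature": 15,
    "oven time while meat is between 40-140 F": 30,  # the figure discussed above
}

total = sum(stages_min.values())
print(f"{total} min in the danger zone (limit {USDA_LIMIT_MIN}): "
      f"{'OK' if total <= USDA_LIMIT_MIN else 'over'}")
```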
[As mentioned in a linked article, the commonly stated justification was to "lock in the juices", which isn't true, but it wouldn't surprise me if food safety was the actual impetus behind that advice.]
[[edit: I should also note that lots of recipes, like stew, start off with something that you want to fry (cook at temperatures higher than water boils at) and then later add something that you want to boil or steam (cook at water's boiling temperature). It is way easier to fry the meat and then add it to the boiling water than it is to boil the stew for a while, separate out the meat, and then fry it at the end.]]