But this is really weird from a decision-theoretic perspective. An agent should be unsure of principles, not sure of principles but unsure about applying them.

I don't agree. Or at least, I think there's some level-crossing here of the axiology/morality/legality type (personally I've started to think of that as a 5 level distinction instead, axiology/metaethics/morality/cultural norms/legality). I see it as equivalent to saying you shouldn't design an airplane using only quantum field theory. Not because it would be wrong, but because it would be intractable. We, as embodied beings in the world, may have principles we're sure of - principles that would, if applied, accurately compare world states and trajectories. These principles may be computationally intractable given our limited minds, or may depend on information we can't reliably obtain. So we make approximations, and try to apply them while remembering that they're approximations and occasionally pausing when things look funny to see if the approximations are still working.

All of these arguments make what I think is a false assumption: that all cases will be tried in the courts, and the main thing is to make the courts more unbiased in deciding cases brought to them. If you make it harder for defense attorneys to defend the guilty, then the guilty will go to greater lengths to avoid being brought to court in the first place. That could mean a whole lot of things in practice, with effects pointing in many directions. I don't know what they all add up to. Maybe not much of anything, but I'd find that very surprising.

Edit to add: changing this norm could also have some... potentially interesting... effects when applied to civil disobedience and unjust laws. If lawyers can be held accountable for knowingly defending the guilty, what happens to the ACLU? What would have happened to the NAACP and the Civil Rights Movement?

I took the second point to mean, "You do not want to put your political reputation and standing on the line to take control of a difficult decision where there is not an obvious right choice and you are not the expert and even the right choice will make a lot of people unhappy."

Early on, you're far enough from your opponents that you can't really meaningfully compete with them. You're competing with the environment, and random events. It isn't until you expand enough to actually run into each other and need to capture resources and territory from each other that conflict becomes significant. 

Then again, maybe I'm wrong, and this is why I'm not very good at Civ.

UK hotels run weekly fire alarm tests that everyone treats as not real, and people look at you funny if you don't realize that. Never sound an alarm with the intention of people not responding, even or especially as a test.

All across America, counties test their tornado sirens weekly or monthly, at a standard time (to ensure equipment works), and everyone who lives there knows to ignore it. These tests can be important, even if they aren't always. Usually (but not always), the test uses lower volume or a totally different sound than a real warning.

On non-competes: The last time I left a job I was covered by a non-compete. My employer refused to provide any example companies or roles at companies that would or would not run afoul of the clause, including when asked about the specific company and role they knew I was moving to. As written, it was so broad as to include a huge majority of large corporations, VCs, think tanks, tech start-ups, and regulatory agencies on the planet. Obviously absurd, and no way they had the resources to bother trying to fight me on it even if they wanted to. I think a large fraction of non-competes are more like this than could ever make sense.

I haven't been paying much attention to Harvard since graduation, but I was class of '09. On Math 55: very disappointed to hear this. AFAICT they still have five different intro math sequences covering multivariable calculus and linear algebra, at different levels of rigor (Math 1a and 1b are the equivalents of AP Calculus AB and BC, and most students skip them), so I have to wonder what Math 19, 21, 23, and 25 are now like. I took 21 and wish I'd taken 23, because even in 2005 I underestimated how much watering down had already happened.

I agree with almost everything in this post, except that (ironically) I think it draws too narrow a boundary around the concept of "mathematics." I do very little formal mathematics, but use mathematical styles of reasoning very often, to good effect. To my understanding, math is the study of patterns, and to point other people at useful patterns, definitions can be a valuable starting point. This is especially true if you explicitly point out that the definition is approximate or fuzzy. If you're trying to inform or educate or advise people, then you need to do it (in part) with words, and you'll need to give enough definition (with examples, yes, but not only with examples) to get the process started. 

What you shouldn't do is use definitions to debate someone else when they have a good underlying point.

That said, there have also been a few times in my career where the most valuable thing I've been able to observe is, "this word shouldn't exist because it doesn't refer to a natural category," or "people stop using this word to describe a thing when the thing starts working properly, so they always think things in the category don't work." My main personal examples of these are smart materials, metamaterials, and nanotech. There is a useful underlying concept in each case, but real-world usage can be so inconsistent that it needs definition at the start of any conversation for the conversation to be useful.

An optimal and consistent application of the idea would presumably also apply a Pigouvian subsidy to actions with positive externalities, which would give you more of those actions in proportion to how good they are. If you did this for everything perfectly then you wouldn't need to explicitly track which taxes pay for which subsidies. Every cost/price would correctly reflect all externalities and the market would handle everything else. In principle, "everything else" could (as with some of Robin Hanson's proposals) even include things like law enforcement and courts, with a government that basically only automatically and formulaically cashes and writes checks. I don't expect any real-world regime could (or should try to) actually approach this limit in practice, though.
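As a toy sketch of that limit (every good, price, and function name here is invented for illustration, not a real policy model), fully internalizing externalities just means shifting each price by the marginal external harm or benefit:

```python
def internalized_price(private_cost: float, marginal_externality: float) -> float:
    """Price after a Pigouvian tax or subsidy equal to the marginal externality.

    marginal_externality < 0: harm to third parties (a tax raises the price).
    marginal_externality > 0: benefit to third parties (a subsidy lowers it).
    """
    return private_cost - marginal_externality

# A polluting fuel: $3.00 private cost, $1.20/unit external harm -> taxed up.
print(internalized_price(3.00, -1.20))   # 4.2

# A vaccine: $50 private cost, $30 external benefit -> subsidized down.
print(internalized_price(50.00, 30.00))  # 20.0
```

In the limit the comment describes, every transaction would carry an adjustment like this, so relative prices would already encode the externalities and no separate ledger linking particular taxes to particular subsidies would be needed.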

I would note that, in aggregate, the government's net revenue is not the thing that government, or tax policy, is optimized for. Surplus, deficit, and neutrality can all be optimal at different times. If the government wanted to maximize net revenue in the long run, I doubt the approach would look much like taxation. Maybe more like a sovereign wealth fund.

As an example: If a carbon tax had its desired effect, it would collect money now (though the money might be immediately spent on related subsidies or tax breaks), but in the long run, if it's successful, we'd hope to reach a point where it's never collected again, because non-GHG-emitting options became universally better. 
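A minimal illustration of that revenue path (the rate, starting emissions, and decline fraction are all invented numbers, not projections): if the tax succeeds, it shrinks the very base it is levied on, so revenue falls toward zero even though the rate never changes.

```python
# Hypothetical carbon-tax trajectory: revenue = rate * emissions, and a
# successful tax erodes its own base over time.
rate = 100.0       # $ per ton (invented)
emissions = 50.0   # megatons emitted in year 0 (invented)
decline = 0.15     # fraction of remaining emissions displaced each year (invented)

for year in (0, 10, 20, 30):
    revenue = rate * emissions * (1 - decline) ** year
    print(year, round(revenue, 1))  # revenue shrinks along with emissions
```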

I would point out that these concepts only exist in finite games. Yes, "survive the development of AGI" is very much a finite game we have to win, but "then continue to thrive afterwards" is an infinite game. Life, in general, is an infinite game. For infinite games, these boundaries blur or vanish. In some sense it's all midgame, however many transitions and phases the midgame includes.

I don't think this story about Backgammon reveals anything about how to play Chess, or StarCraft, or Civilization.  Most games have phase transitions, but most games don't have the particular phase transition from conflict-dominant to conflict-irrelevant.

I would say that Civilization, if anything, has the opposite transition, though still less sharp.
