AnthonyC


All of these arguments make what I think is a false assumption: that all cases will be tried in the courts, and that the main thing is to make the courts more unbiased in deciding the cases brought to them. If you make it harder for defense attorneys to defend the guilty, then the guilty will go to greater lengths to avoid being brought to court in the first place. That could mean a whole lot of things in practice, with effects pointing in many directions. I don't know what they all add up to. Maybe not much of anything, but I'd find that very surprising.

Edit to add: changing this norm could also have some... potentially interesting... effects when applied to civil disobedience and unjust laws. If lawyers can be held accountable for knowingly defending the guilty, what happens to the ACLU? What would have happened to the NAACP and the Civil Rights Movement?

I took the second point to mean, "You do not want to put your political reputation and standing on the line to take control of a difficult decision where there is no obvious right choice, you are not the expert, and even the right choice will make a lot of people unhappy."

Early on, you're far enough from your opponents that you can't meaningfully compete with them. You're competing with the environment and with random events. It isn't until you expand enough to actually run into each other, and need to capture resources and territory from each other, that conflict becomes significant.

Then again, maybe I'm wrong, and this is why I'm not very good at Civ.

UK hotels engage in weekly fire alarm tests that everyone treats as not real and they look at you funny if you don’t realize. Never sound an alarm with the intention of people not responding, even or especially as a test.

All across America, counties test their tornado sirens weekly or monthly, at a standard time (to ensure equipment works), and everyone who lives there knows to ignore it. These tests can be important, even if they aren't always. Usually (but not always), the test uses lower volume or a totally different sound than a real warning.

On non-competes: The last time I left a job I was covered by a non-compete. My employer refused to provide any example companies or roles at companies that would or would not run afoul of the clause, including when asked about the specific company and role they knew I was moving to. As written, it was so broad as to include a huge majority of large corporations, VCs, think tanks, tech start-ups, and regulatory agencies on the planet. Obviously absurd, and no way they had the resources to bother trying to fight me on it even if they wanted to. I think a large fraction of non-competes are more like this than could ever make sense.

I haven't been paying much attention to Harvard since graduation, but I was class of '09. On Math 55: very disappointed to hear this. AFAICT they still have 5 different intro sequences covering multivariable calculus and linear algebra, at different levels of rigor (Math 1a and 1b are the equivalent of AP Calculus AB and BC, and most students skip them), so I have to wonder what Math 19, 21, 23, and 25 are now like. I took 21 and wish I'd taken 23, because even in 2005 I underestimated how much watering down had already happened.

I agree with almost everything in this post, except that (ironically) I think it draws too narrow a boundary around the concept of "mathematics." I do very little formal mathematics, but use mathematical styles of reasoning very often, to good effect. To my understanding, math is the study of patterns, and to point other people at useful patterns, definitions can be a valuable starting point. This is especially true if you explicitly point out that the definition is approximate or fuzzy. If you're trying to inform or educate or advise people, then you need to do it (in part) with words, and you'll need to give enough definition (with examples, yes, but not only with examples) to get the process started. 

What you shouldn't do is use definitions to debate someone else when they have a good underlying point.

That said, there have also been a few times in my career where the most valuable thing I've been able to observe is, "this word shouldn't exist because it doesn't refer to a natural category," or "people stop using this word to describe a thing when the thing starts working properly, so they always think things in the category don't work." My main personal examples of these are smart materials, metamaterials, and nanotech. There is a useful underlying concept in each case, but real-world usage can be so inconsistent that it needs definition at the start of any conversation for the conversation to be useful.

An optimal and consistent application of the idea would presumably also apply a Pigouvian subsidy to actions with positive externalities, which would give you more of those actions in proportion to how good they are. If you did this for everything perfectly then you wouldn't need to explicitly track which taxes pay for which subsidies. Every cost/price would correctly reflect all externalities and the market would handle everything else. In principle, "everything else" could (as with some of Robin Hanson's proposals) even include things like law enforcement and courts, with a government that basically only automatically and formulaically cashes and writes checks. I don't expect any real-world regime could (or should try to) actually approach this limit, in practice, though.
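To make the mechanics concrete, here is a minimal sketch of a uniform Pigouvian correction, assuming a simple model where each good's price is shifted by its estimated marginal external cost (all goods and numbers below are hypothetical, not drawn from any real proposal):

```python
# Minimal sketch of a Pigouvian correction: shift each market price by the
# estimated marginal external cost. A positive externality value here is a
# harm (so it acts as a tax); a negative value is a benefit (a subsidy).
# All goods and numbers are hypothetical.

def corrected_price(private_cost: float, marginal_external_cost: float) -> float:
    """Social price = private cost + marginal external cost."""
    return private_cost + marginal_external_cost

goods = {
    # name: (private cost per unit, marginal external cost per unit)
    "coal_power_kwh":  (0.04, 0.03),   # emissions harm others -> taxed
    "vaccination":     (50.0, -30.0),  # protects others -> subsidized
    "solar_power_kwh": (0.05, 0.00),   # roughly no externality -> unchanged
}

for name, (cost, ext) in goods.items():
    print(f"{name}: {cost:.2f} -> {corrected_price(cost, ext):.2f}")
```

The point is that once every price is corrected this way, the "which tax funds which subsidy" bookkeeping drops out; each correction is local to its own transaction.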

I would note that, in aggregate, the government's net revenue is not the thing the government, or tax policy, is optimized for. Surplus, deficit, and neutrality can all be optimal at different times. If the government wanted to maximize net revenue in the long run, I doubt the approach would look much like taxation. Maybe more like a sovereign wealth fund.

As an example: If a carbon tax had its desired effect, it would collect money now (though the money might be immediately spent on related subsidies or tax breaks), but in the long run, if it's successful, we'd hope to reach a point where it's never collected again, because non-GHG-emitting options have become universally better.

I would point out that these concepts only exist in finite games. Yes, "survive the development of AGI" is very much a finite game we have to win, but "then continue to thrive afterwards" is an infinite game. Life, in general, is an infinite game. For infinite games, these boundaries blur or vanish. In some sense it's all midgame, however many transitions and phases the midgame includes.

I don't think this story about Backgammon reveals anything about how to play Chess, or StarCraft, or Civilization. Most games have phase transitions, but most games don't have the particular phase transition from conflict-dominant to conflict-irrelevant.

I would say that Civilization, if anything, has the opposite transition, though still less sharp.

This issue is further complicated by the fact that humans aren't fully general reasoners without tool support either.

I think the discussion, not specifically here but just in general, vastly underestimates the significance of this point. It isn't like we expect humans to solve meeting planning problems in our heads. I use Calendly, or Outlook's scheduling assistant and calendar. I plug all the destinations into our GPS and re-order them until the time looks low enough. One of the main reasons we want to use LLMs for these tasks at all is that, even with tool support, they are not trivial or instant for humans to solve.
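As an illustration of how much of this work the tools are doing, here is a minimal sketch of the kind of greedy stop re-ordering a GPS effectively performs for you (stop names and coordinates are hypothetical, and straight-line distance stands in for real drive times):

```python
import math

# Minimal sketch of greedy stop re-ordering, the kind of planning work a GPS
# or scheduling tool does so the human doesn't have to do it in their head.
# Stops and coordinates are hypothetical; real tools use drive times, not
# straight-line distance.

stops = {
    "home":     (0.0, 0.0),
    "pharmacy": (2.0, 1.0),
    "grocery":  (2.5, 0.5),
    "school":   (5.0, 4.0),
}

def distance(a: str, b: str) -> float:
    return math.dist(stops[a], stops[b])

def greedy_route(start: str) -> list[str]:
    """Repeatedly visit the nearest unvisited stop (a rough heuristic, not optimal)."""
    remaining = set(stops) - {start}
    route = [start]
    while remaining:
        nearest = min(remaining, key=lambda s: distance(route[-1], s))
        route.append(nearest)
        remaining.remove(nearest)
    return route

route = greedy_route("home")
total = sum(distance(a, b) for a, b in zip(route, route[1:]))
print(" -> ".join(route), f"(total distance {total:.2f})")
```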

There is also a reason why standardized tests for kids so often include essay questions on breaking down tasks step by step, like (to pick an example from my own past) "describe in as much detail as possible how you would make a peanut butter and jelly sandwich." Even aspiring professional chefs have to learn proper mise en place to keep on top of their (much more complex) cooking tasks. I won't bother listing more examples, but most humans are not naturally good at these tasks.

Yes, current LLMs are worse on many axes. IDK if that would be true if we built wrappers to let them use the planning tools humans rely on in practice, and if we put them through the kinds of practice humans use to learn these skills IRL. I suspect they still would be, but to a much lesser degree. But then I also can't help thinking about the constant stream of incredible-lack-of-foresight things I see other humans do on a regular basis, and wonder if I'm just overestimating us.

FWIW, after I wrote this comment, I asked Gemini what it thought. It came up with a very similar POV about what its limitations were, what tools would help it, and how much those tools would close the gap with humans. Also, it linked this blog post in its reply. https://gemini.google.com/app/a72701429c8d830a
