Vaniver

Comments

Sunzi's《Methods of War》- Introduction

天 refers to material things which affect you but which you yourself lack the power to significantly influence.

I really like this phrasing!

Are there terms that split apart the level of abstraction on which the material thing exists? Like, I am affected by whether or not it's raining right now, without being able to do much in return; I'm also affected by whether or not my currency is undergoing inflation, able to do even less in return; and I'm also affected by whether or not 13 is prime, able to do nothing in return. [My guess is the distinction between these didn't really become crisp until this last century or so, and so probably there aren't specific terms in classical Chinese.]

Practically, I'm interested in getting a sense of how much Sunzi's distinction between Heaven and Earth is metaphorical vs. literal. In the first case, 'heaven' is about paying attention to things on a higher level of abstraction than the things 'earth' is directing you to pay attention to; in the second case, both of them are about the physical environment, but different parts of it (you need to make plans based on whether it rains or shines, and also based on whether there's a hill or there isn't).

Should we postpone AGI until we reach safety?

I think it's obviously a bad idea to deploy AGI that has an unacceptably high chance of causing irreparable harm. I think the questions of "what chance is unacceptably high?" and "what is the chance of causing irreparable harm for this proposed AGI?" are both complicated technical questions that I am not optimistic will be answered well by policy-makers or government bodies. I currently expect it'll take serious effort to have answers at all when we need them, let alone answers that could persuade Congress. 

This makes me especially worried about attempts to shift policy that aren't in touch with the growing science of AI Alignment, but then there's something of a double bind: if the policy efforts are close to the safety research efforts, then you're giving the best available advice to the policymakers, but you pay the price of backlash from AI researchers if they think regulation-by-policy is a mistake. If the two are distant, then the safety researchers can say their hands are clean, but now the regulation is even more likely to be a mistake.

Sunzi's《Methods of War》- Introduction

Heaven (climate)

I'm curious about the parenthetical; are there multiple words for Heaven, and this is the one that's meant? Or there's a generic word for Heaven that means lots of things, and here you think Sunzi is specifically referring to the climate?

How Roodman's GWP model translates to TAI timelines

Presumably there are two different contour sources: the model fit on the historical data initialized at the beginning of the historical data, and the model fit on the historical data initialized at the end of the historical data. The 'background' lets you see how the actual history compared to what the model predicts, and the 'foreground' lets you see what the model predicts for the future.

And so the black line that zooms off to infinity somewhere around 1950 is the "singularity that got cancelled", or the left line on this simplistic graph.
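To make the "zooming off to infinity" concrete, here's a minimal sketch (my own construction with made-up parameters, not Roodman's actual stochastic model) of why a hyperbolic growth law fit to long-run history predicts a finite-time singularity:

```python
import numpy as np

# A deterministic hyperbolic growth law dY/dt = s * Y**(1 + b), with b > 0,
# reaches infinity in finite time; the parameters below are invented purely
# so the blowup lands mid-20th-century, echoing the black line.

def blowup_time(y0, s, b, t0=0.0):
    """Time at which the solution of dY/dt = s * Y**(1+b) diverges,
    starting from Y(t0) = y0."""
    # Separating variables gives Y(t) = y0 * (1 - s*b*y0**b*(t - t0))**(-1/b),
    # which blows up when the bracketed term hits zero:
    return t0 + 1.0 / (s * b * y0**b)

def trajectory(y0, s, b, t0, t):
    """Closed-form solution, valid for t < blowup_time(...)."""
    return y0 * (1.0 - s * b * y0**b * (t - t0)) ** (-1.0 / b)

y0, s, b, t0 = 1.0, 0.008, 0.5, 1700.0   # illustrative, not fitted, values
t_star = blowup_time(y0, s, b, t0)
print(f"finite-time singularity at t = {t_star:.0f}")   # -> 1950

ts = np.linspace(t0, t_star - 1.0, 5)
print(np.round(trajectory(y0, s, b, t0, ts), 2))  # accelerating toward the blowup
```

As I understand it, the stochastic version of this dynamic spreads the blowup date into a distribution, which is what the contours are showing.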

When Hindsight Isn't 20/20: Incentive Design With Imperfect Credit Allocation

Tho the deadweight loss of a "both pay" solution is probably optimized somewhere between "split evenly" and "both pay fully". For example, in the pirate case, I think there are schemes that you can do that result in honesty being the optimal policy and yet they only destroy some of the gold in the case of accidents (or dishonesty), tho this may be sensitive to how many pirates there are and what the wealth distribution is like.
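As a toy version of such a scheme (my own construction, not from the post): instead of burning the whole pot whenever the counts disagree, burn a penalty proportional to the discrepancy. This sketch assumes an audit that reveals the true amount with some probability, which simplifies away the post's harder setting where you can't always tell who was at fault:

```python
# A pirate reports a count of the loot; with probability p an audit reveals
# the true amount, and k times any discrepancy is burned out of the
# pirate's holdings.

def expected_payoff(true_amount, reported, share, p, k):
    """Expected payoff for the counting pirate: their share of the
    reported pot, plus anything skimmed, minus the expected burn."""
    skimmed = true_amount - reported
    return share * reported + skimmed - p * k * abs(skimmed)

true_amount, share, p, k = 100.0, 0.1, 0.5, 3.0
for reported in (100.0, 90.0, 50.0):
    print(reported, expected_payoff(true_amount, reported, share, p, k))
# 100.0 -> 10.0 (honest), 90.0 -> 4.0, 50.0 -> -20.0
```

Honesty is the optimal policy whenever p * k > 1 - share (here 1.5 > 0.9), and an honest miscount of size ε only destroys about k * ε gold in expectation rather than the entire pot; the cost is that you need a credible audit, which is exactly what gets harder with more pirates and more unequal holdings.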

Multiple Worlds, One Universal Wave Function

The ontology doesn't feel muddled to me, although it does feel... not very quantum? Like, a thing that seems to be happening with collapse postulates is that they take seriously the "everything should be quantized" approach, and so insist on ending up with one world (or a discrete number of worlds). MWI instead seems to think that wavefunctions, while having quantized bases, are themselves complex-valued objects, and so there doesn't need to be a discrete and transitive sense of whether two things are 'in the same branch'; instead it seems fine to have a continuous level of coherence between things (which, at the macro-scale, ends up looking like being in a 'definite branch').

[I don't think I've ever seen collapse described as "motivated by everything being quantum" instead of "motivated by thinking that only what you can see exists", and so quite plausibly this will fall apart or I'll end up thinking it's silly or it's already been dismissed for whatever reason. But somehow this does seem like a lens where collapse is doing the right sort of extrapolation of principles, while MWI is just blindly doing what made sense elsewhere. On net, I still think wavefunctions are continuous, and so it makes sense for worlds to be continuous too.]

Like, I think it makes more sense to think of MWI as "first many, then even more many," at which point questions of "when does the split happen?" feel less interesting, because the original state is no longer as special. When I think of the MWI story of radioactive decay, for example, at every timestep you get two worlds, one where the particle decayed at that moment and one where it held together; if time is quantized, then as far as we can tell the steps must be very short, and so this is very quickly a very large number of worlds. If time isn't quantized, then this has to be spread across a continuum, and so thinking of there being a countable number of worlds is right out.
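As a minimal sketch of that branch-counting story (my own discretization, just to illustrate the limit): track the branch weights for a single decaying particle when the "not yet decayed" branch splits at every timestep.

```python
import numpy as np

# After n timesteps there are n + 1 branches: "decayed at step 1", ...,
# "decayed at step n", and "still intact". Shrinking the timestep grows
# the branch count without bound while the weights converge to the
# continuous exponential decay law.

def branch_weights(decay_rate, total_time, n_steps):
    dt = total_time / n_steps
    q = 1.0 - np.exp(-decay_rate * dt)               # per-step decay weight (Born probability)
    intact_before = (1.0 - q) ** np.arange(n_steps)  # weight of surviving up to step k
    decayed_at = intact_before * q                   # branch: decayed exactly at step k
    survived = (1.0 - q) ** n_steps                  # the one still-intact branch
    return decayed_at, survived

for n in (10, 100, 10_000):
    decayed_at, survived = branch_weights(decay_rate=1.0, total_time=1.0, n_steps=n)
    print(f"{n + 1} branches, total weight = {decayed_at.sum() + survived:.12f}")
```

The total weight stays 1 no matter how finely you slice time, and in the continuum limit "which world am I in" becomes a density over decay times rather than an entry in a discrete list.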

Where do (did?) stable, cooperative institutions come from?

do you have a story for why the public sector remained okay for ~200 years (if it did)?

I less have this sense for the last 200 years than for the preceding 2000 years, but I think for most of human history 'white collar' work has been heavily affiliated with the public sector (which, for most of human history, I think should count the church). Quite possibly the thing we're seeing is a long-term realignment where more and more administrative and intellectual ability is being deployed by the private sector instead of the public sector, both because the private sector is more able to compete on compensation and because non-financial compensation has declined in relative value? [For example, ambitious people are less interested in the steady stability of a career track now than I think they were 100 years ago, and more and more public sector work is done in the 'steady career track' way. The ability to provide for a family mattered much more for finding a spouse before the default was a two-income family. Having a 'good enough' salary mattered more than having a shot at a stellar salary in a smaller world.]

Another thing I note is that there's variation in cultural push for various sorts of service; disproportionately many military recruits come from the South and rural areas, for example. Part of this is economic, but I think even more of it is cultural / social (in the sense of knowing and respecting more people who were in the military, coming from a culture that values martial virtues over pacifism, and so on). Hamming's book on doing scientific research, which was adapted from classes he taught at the Naval Postgraduate School, focuses on doing science for social good instead of private benefit, in a way that feels very different from modern Silicon Valley startup culture (and even from earlier Silicon Valley startup culture, which felt much more connected to the national defense system).

It wouldn't surprise me if there were simply more children who grew up wanting to be public servants in the past because it was viewed more favorably then. It also wouldn't surprise me if more bits of society are detaching from each other, where it's less and less likely that there are (say) police officers or members of the military in any particular social group, except for social groups that have very heavy representation of those groups. (Of the rationalists I know socially, I think they're at least ten times as likely to publicly state "ACAB" as to have ever considered being a police officer themselves, and I predict this will be even more skewed in the next generation of rationalists.) I know a lot of people who wanted to be teachers or professors because those were the primary adults that they spent time around; perhaps the non-academia public sector is also losing that recruitment battle (relative to the private sector, at least)?

My sense is that the detachment between public and private sector salaries is relatively recent, is concentrated in the higher ranks of the organization, and is driven in large part by greater economic integration and expansion; executive salary roughly tracks the logarithm of organization size, and private sector organizations have gotten much larger than they were 200 years ago. Public sector organizations have also gotten much larger, but haven't been able to increase compensation accordingly.

Where do (did?) stable, cooperative institutions come from?

In this case the interesting thing is tracking how many cultures we form, and what factors control this rate.

The "old web" vs. "new web" seems interesting along this dimension; quite possibly the thing that seemed different about the phpBB days compared to reddit/twitter/Facebook is that an independent forum felt like more its own culture than, say, a Facebook group or a subreddit. I have the vague impression that Discord servers are more "culture-like" than other modern options, but are considerably less durable and discoverable, which seems sad.

When Money Is Abundant, Knowledge Is The Real Wealth

When it comes to funding science, there’s quite a bit of scrutiny that goes into determining which projects to fund. Labs coordinate to solve problems. Researchers do set their sights on goals. Governments organize field-wide roadmaps to chart the next ten years of research.

Tho if you take analyses like Braden's seriously, quite possibly these filtering efforts have negative value, in that they are more likely to favor projects supported by insiders and senior people, who have historically been bad at predicting where the next good things will come from. "Science advances one funeral at a time," in a way that seems detectable from analyzing the literature.

This isn't to say that planning is worthless, and that no one can see the future. It's to say that you can't buy the ability to buy the right things; you have to develop that sort of judgment on your own, and all the hard evidence comes too late to be useful.

Where do (did?) stable, cooperative institutions come from?

Vinay Gupta, in Cutting Through Spiritual Colonialism, and Venkatesh Rao, in The Gervais Principle, paint a picture where the routine operation and maintenance of life and organizations generates some sort of pollution (focusing mostly on the intrapersonal and interpersonal varieties), and an important function of institutions is basically doing the 'plumbing work' of routing the pollution away from where it does noticeable damage to where it doesn't do noticeable damage. I don't think I fully endorse this lens, but it seems like it resonates moderately well, and combines with trends in a few unsettling ways.

In centuries past, it was common to have communities that cared very strongly about whether or not insiders were treated fairly, but perceived the rest of the world as "fair game" to be fleeced as much as you could get away with; now the 'expanding moral circle' seems more common (while obviously not universal), in a way that makes the 'plumbing work' harder to do. [If life requires aggression, and you have fewer legal targets, this increases the friction life has to work against.] 

It seems like our credit-allocation mechanisms have become weirdly unbalanced, where it's both easier to evade responsibility / delete your identity and start over / impact many people who will never know it was you who impacted them, and simultaneously it's easier to discover crimes, put things on permanent records, and rally the attention of thousands and millions to direct at wrongdoers. The new way that they operate seems to have empowered Social Desirability Bias; once we might have imagined the Very Serious People leading the crowd, and now it seems the crowds are leading the Very Serious People.

This is also one of the ways that I think about the 'crisis in confidence'; see Revolt of the Public for more details, but my basic take is that experts have always been uncertain and incorrect and yet portrayed themselves as certain and correct as part of their role's bargain with broader society. Overconfidence helps experts serve their function of reassuring and coordinating the public, and part of the 'plumbing work' is marginalizing dissent and keeping it constrained to private congregations of experts. But with expanded flow of information, we have more expertise as a society, more memory of expert mistakes, and more virulent memes spreading distrust in experts. This feels like the sort of thing where we get lots of short-term benefits from correcting expert opinion, but also long-term costs, in that we lose the ability to coordinate around expertise.

[Feynman, in an autobiography, describes his father, who made uniforms, pointing out that uniforms are manufactured and that Feynman shouldn't reflexively trust people because of their uniforms. That seems like great advice for Feynman, but not great advice for everyone in society; the social technology of respecting uniforms does actually do a lot of useful work!]
