Vaniver

Comments

Open & Welcome Thread - September 2020

For what it's worth, I think a decision to ban would stand on just his pursuit of conversational norms that reward stamina over correctness, in a way that I think makes LessWrong worse at intellectual progress. I didn't check out this page, and it didn't factor into my sense that curi shouldn't be on LW.

I also find it somewhat worrying that, as I understand it, the page was a combination of "quit", "evaded", and "lied", of which 'quit' is not worrying (I consider someone giving up on a conversation with curi understandable rather than shameful); having 'quit' wrapped up in the "&c." instead of treated as the central example seems like it's defining away my main crux.

Draft report on AI timelines

Part 1 page 15 talks about "spending on computation", and assumes spending saturates at 1% of the GDP of the largest country. This seems potentially odd to me; quite possibly the spending will be done by multinational corporations that view themselves as more "global" than "American" or "British" or whatever, and whose fortunes are more tied to the global economy than to the national economy. At most this gives you another 2-3 doublings of the spending cap, but that's still 4-6 years on a 2-year doubling time.
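
As a quick sketch of that arithmetic (the GDP figures are rough 2020 numbers I'm supplying, not anything from the report):

```python
import math

# Rough 2020 figures; my own assumptions, not numbers from the draft report.
us_gdp = 21e12      # largest single country, ~$21T
world_gdp = 85e12   # gross world product, ~$85T

# Moving the 1%-of-GDP saturation point from the largest country to the
# global economy multiplies the spending cap by this ratio.
ratio = world_gdp / us_gdp            # ~4x
extra_doublings = math.log2(ratio)    # ~2 extra doublings

# At a 2-year doubling time for spending, each extra doubling buys ~2 more
# years before spending saturates.
print(f"{ratio:.1f}x -> {extra_doublings:.1f} doublings -> ~{2 * extra_doublings:.0f} years")
```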

Overall I'm not sure how much to believe this hypothesis; my mainline prediction is that corporations grow in power and rootlessness compared to nation-states, but it also seems likely that bits of the global economy will fracture / there will be a push to decentralization over centralization, where (say) Alphabet is more like "global excluding China, where Baidu is supreme" than it is "global." In that world, I think you still see approximately a 4x increase.

I also don't have a great sense of how we should expect the 'ability to fund large projects' to compare between the governments of the past and the megacorps of the future. It seems quite plausible to me that Alphabet, without pressure to do welfare spending / fund the military / etc., could put a much larger fraction of its resources towards building TAI; but presumably this also means Alphabet has many fewer resources than the economy as a whole (because there will still be welfare spending and military funding and so on), and on net this probably works out to about 1% of total GDP available for megaprojects.

Draft report on AI timelines

Thanks for sharing this draft! I'm going to try to make lots of different comments as I go along, rather than one huge comment.

[edit: page 10 calls this the "most important thread of further research"; the downside of writing as I go! For posterity's sake, I'll leave the comment.]

Pages 8 and 9 of part 1 talk about "effective horizon length", and make the claim:

Prima facie, I would expect that if we modify an ML problem so that effective horizon length is doubled (i.e., it takes twice as much data on average to reach a certain level of confidence about whether a perturbation to the model improved performance), the total training data required to train a model would also double. That is, I would expect training data requirements to scale linearly with effective horizon length as I have defined it.

I'm curious where 'linearly' came from; my sense is that "effective horizon length" is the equivalent of "optimal batch size", which I would have expected to be a weirder function of training data size than 'linear'. I don't have a great handle on the ML theory here, tho, and it might be substantially different between classification (where I can make back-of-the-envelope estimates for this sort of thing) and RL (where it feels like it's a component of a much trickier system with harder-to-predict connections).

Quite possibly you talked with some ML experts and their sense was "linearly", and it makes sense to roll with that; it also seems quite possible that the thing to do here is have uncertainty over functional forms. That is, maybe training data requirements scale linearly in the effective horizon, or maybe exponentially, or logarithmically, or as an inverse square root, or whatever. This would help double-check that the assumption of linearity isn't doing significant work, and if it is, point to a potentially promising avenue of theoretical ML research.
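
To make "uncertainty over functional forms" concrete, here's a minimal sketch (my own illustration, not the report's model; the specific functions and constants are placeholders) of how far apart the implied data requirements end up under different assumed forms:

```python
import math

def data_required(horizon: float, form: str, base_data: float = 1.0) -> float:
    """Illustrative training-data requirement as a function of effective
    horizon length, under different assumed functional forms.
    All constants are placeholders, not estimates."""
    if form == "log":
        return base_data * math.log2(horizon + 1)
    if form == "inv_sqrt":
        return base_data / math.sqrt(horizon)
    if form == "linear":
        return base_data * horizon
    if form == "exponential":
        return base_data * 2 ** horizon
    raise ValueError(f"unknown form: {form}")

for form in ("log", "inv_sqrt", "linear", "exponential"):
    estimates = [data_required(h, form) for h in (1, 10, 100)]
    print(form, [f"{x:.3g}" for x in estimates])
```

Even normalized to roughly agree at horizon 1, the forms diverge by many orders of magnitude by horizon 100, which is the sense in which the linearity assumption could be doing significant work.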

[As a broader point, I think this 'functional form uncertainty' is a big deal for my timelines estimates. A lot of people (rightfully!) dismissed the standard RL algorithms of 5 years ago for making AGI because of exponential training data requirements, but my sense is that further algorithmic improvement is mostly not "it's 10% faster" but "the base of the exponent is smaller" or "it's no longer exponential", which might change whether or not it makes sense to dismiss them.]
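
As a toy illustration of that last point (the numbers here are entirely made up, just to show the shape of the effect): if samples needed scale like base ** horizon, a constant-factor speedup barely moves the requirement, while shrinking the base moves it by many orders of magnitude.

```python
# Toy numbers only: suppose samples needed ~ base ** horizon.
horizon = 40

for base in (2.0, 1.5, 1.1):
    print(f"base {base}: ~{base ** horizon:.2e} samples")

# A flat "10% faster" improvement at base 2 is just a 0.9x factor: it leaves
# you in the same regime, unlike lowering the base itself.
print(f"10% speedup at base 2.0: ~{0.9 * 2.0 ** horizon:.2e} samples")
```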

Draft report on AI timelines

A simple, well-funded example is autonomous vehicles, on which considerably more than the training budget of AlphaStar has been spent, and which are not there yet.

I am aware of other examples that do seem to be happening, but I'm not sure what the cutoff for 'massive' should be. For example, a 'call center bot' is moderately valuable (while not nearly as transformative as autonomous vehicles), and I believe there are many different companies attempting to do something like that, altho I don't know how their total ML expenditure compares to AlphaStar's. (The company I'm most familiar with in this space, Apprente, got acquired by McDonald's last year, which I presume is mostly interested in the ability to automate drive-thru orders.)

Another example that seems relevant to me is robotic hands (plus image classification) at a sufficient level that warehouse pickers could be replaced by robots.

Open & Welcome Thread - September 2020

I think you're denying him an important chance to do error correction via that decision. (This is a particularly important concept in CR/FI)

I agree that if we wanted to extend him more opportunities/resources/etc., we could, and that a ban is a decision to not do that.  But it seems to me like you're focusing on the benefit to him / "is there any chance he would get better?", as opposed to the benefit to the community / "is it reasonable to expect that he would get better?". 

As stewards of the community, we need to make decisions taking into account both the direct impact (on curi for being banned or not) and the indirect impact (on other people deciding whether or not to use the site, or their experience being better or worse).

Open & Welcome Thread - September 2020

So why hasn't civilization figured that out already? Or is not teaching moral uncertainty some kind of Chesterton's Fence, and teaching it widely would make the world even worse off on expectation?

This is sort of a rehash of sibling comments, but I think there are two factors to consider here.

The first is the rules. It is very important that people drive on the correct side of the road and have no uncertainty about which side that is; it is not very important whether they distinguish between "correct for <country> in <year>" and "correct everywhere and for all time."

The second is something like the goal. At one point, people thought it was very important that society have a shared goal, and worked hard to make it expansive; things like "freedom of religion" are what civilization figured out in order to have narrow shared goals (like "keep the peace") rather than expansive ones (like "get as many people as possible to Catholic Heaven"). It is unclear to me whether we're better off with moral uncertainty as a generator of "narrow shared goals", or whether narrow shared goals are even what we should be going for.

Open & Welcome Thread - September 2020

Sometimes people are warned, and sometimes they aren't, depending on the circumstances. By volume, the vast majority of our bans are spammers, who aren't warned. Of users who have posted more than 3 posts to the site, I believe over half (and probably closer to 80%?) are warned, and many are warned and then not banned. [See this list.]

Vaniver's Shortform

My boyfriend: "I want a version of the Dune fear mantra but as applied to ugh fields instead"

Me:

I must not flinch.
Flinch is the goal-killer.
Flinch is the little death that brings total unproductivity.
I will face my flinch.
I will permit it to pass over me and through me.
And when it has gone past I will turn the inner eye to see its path.
Where the flinch has gone there will be nothing. Only I will remain.

Tho they later shortened it, and I think that one was better:

I will not flinch.
Flinch is the goal-killer.
I will face my flinch.
I will let it pass through me.
When the flinch has gone,
there shall be nothing.
Only I will remain.

Him: Nice, that feels like flinch towards

[AN #115]: AI safety research problems in the AI-GA framework

Currently this is fixed manually for each crosspost by converting it to draft-js and then deleting some extra stuff. I'm not sure how high a priority it is to make that automatic.
