Toby_Ord

Comments
Jacob_Hilton's Shortform
Toby_Ord · 10h

Thanks for this Jacob — excellent analysis. 

I'm a huge fan of Bradley-Terry models. I'm quite sure they are the natural way of representing noisy contests like chess ability, and that Elo is an inferior way. The key thing with Bradley-Terry is that each competitor has a raw ability score (e.g. A and B), and that when they have a contest the odds of A beating B are just A:B. I think of it as each player putting a number of tickets of their colour into a hat (in proportion to their ability score), with one then drawn at random to determine the winner. This is an even simpler interpretation than the one from the Hex paper and makes the 2/3 result even more intuitive.
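To make the tickets-in-a-hat picture concrete, here is a minimal sketch (assuming the 2/3 figure corresponds to a 2:1 ability ratio, which is just an illustrative choice):

```python
import random

def bt_win_prob(a, b):
    # Bradley-Terry: the odds of A beating B are a:b, so P(A wins) = a/(a+b).
    return a / (a + b)

def hat_draw(a, b):
    # Tickets-in-a-hat: A puts in a tickets, B puts in b; one is drawn at random.
    return "A" if random.random() < a / (a + b) else "B"

a, b = 2.0, 1.0  # illustrative ability scores in a 2:1 ratio
wins = sum(hat_draw(a, b) == "A" for _ in range(100_000))
print(bt_win_prob(a, b), wins / 100_000)  # both ~0.667
```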

Elo then takes the log base 10, multiplies by 400, and adds 1200 or so to make the numbers usually positive, injecting three (!) arbitrary constants into the mix in order to give an additive scale that matched the pre-Elo chess rating scale. But the natural interpretation of these contests is a multiplicative scale (the ratio of the numbers is the odds ratio of winning), so it should have been left alone. Linear progress in Elo is really exponential progress in the raw quantity.
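A quick consistency check between the two scales (a sketch; the offset is the rough 1200 mentioned above, and it cancels out of every win probability):

```python
import math

OFFSET = 1200  # only rating differences matter, so this constant cancels

def elo_from_ability(x):
    # Elo = 400 * log10(raw ability) + offset: the three arbitrary constants.
    return 400 * math.log10(x) + OFFSET

def win_prob_from_elo(r_a, r_b):
    # Standard Elo win probability formula.
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

a, b = 2.0, 1.0
print(a / (a + b))                                                   # 0.666...
print(win_prob_from_elo(elo_from_ability(a), elo_from_ability(b)))   # 0.666...
# Adding a constant to raw ability scores changes win probabilities;
# multiplying them all by a constant doesn't. The scale is natively multiplicative.
```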

I like your idea of assuming random difficulties for the different tasks (from some distribution that could be tweaked), as clearly that is part of the real underlying phenomenon. However, it is weird that you compare the highest number the agent draws from the hat to the highest number the task draws. It would be more natural to have to take on the tasks one by one, in a gauntlet of challenges of varying difficulty: e.g. the probability of success is $\prod_{i=1}^n p_i$ instead of my $\prod_{i=1}^n p = p^n$, where each $p_i$ is a random variable drawn from some natural distribution over $[0,1]$, modified by the agent's skill, representing the probability of succeeding at that subtask. There should be limiting cases where all the $p_i$ are equal (my case) and where success is driven by the hardest subtask. But I'm not sure what distribution this creates.
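Here is a rough Monte Carlo sketch of that gauntlet. The choice of Beta(skill, 1) for the subtask success probabilities is purely hypothetical, just one stand-in for a natural distribution over [0,1] shifted by skill:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauntlet_success_probs(n, skill, tasks=100_000):
    # Per-task success probability prod(p_i) for a length-n gauntlet,
    # with each p_i ~ Beta(skill, 1) (a hypothetical choice of distribution).
    p = rng.beta(skill, 1.0, size=(tasks, n))
    return p.prod(axis=1)

skill = 4.0
p_bar = skill / (skill + 1)  # mean of Beta(skill, 1)
for n in (1, 4, 16):
    q = gauntlet_success_probs(n, skill)
    print(n, q.mean(), p_bar ** n, np.median(q))
```

With independent $p_i$ the mean success rate matches the constant-$p$ case with $p$ set to the mean, but the distribution across tasks spreads out and becomes increasingly right-skewed (median below mean) as $n$ grows, which is one way to see how the random-difficulty version differs from the constant-hazard one.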

That said, I like where you are going with this and how you eliminate one of the parameters.

I definitely see my constant hazard rate model as a first-order approximation to what is going on, and not the full story. I'm surprised it works as well as it does, because the underlying phenomenon has more structure than this. So I see it as something of a null hypothesis for other approaches to beat, and do expect it to eventually be beaten.
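For reference, the continuous form of the constant hazard rate model, as I understand it (with hazard $\lambda$ and task length $t$):

$$S(t) = e^{-\lambda t}, \qquad S(T_{50}) = \tfrac{1}{2} \;\Rightarrow\; T_{50} = \frac{\ln 2}{\lambda},$$

so halving the hazard rate doubles the time horizon at any fixed success threshold, and the discrete version $p^n$ corresponds to $p = e^{-\lambda}$.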

MONA: Managed Myopia with Approval Feedback
Toby_Ord · 9mo

Thanks — this looks promising.

One thing I noticed is that there is an interesting analogy between your model and a fairly standard model in economics, where society consists of a representative agent in each time period (representing something like a generation, but without overlap), each trying to maximise its own utility. These agents can plan based on the utilities of subsequent generations (e.g. predicting that the next generation will undo this generation's policies on some topic), but they don't inherently value those utilities. This is then understood via the perspective of a planner who wants to maximise the (discounted) sum of future utilities, even though each agent in the model is only trying to maximise its own utility.

This framework is rich enough to exhibit various intergenerational policy challenges: an intergenerational prisoner's dilemma, where you can defect or cooperate on the following generation; the desire of a generation to tie the hands of future generations; or even the desire to stop future generations tying the hands of the generations that follow them.
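A toy numerical version of the prisoner's dilemma flavour (all parameters made up for illustration):

```python
# Cooperating costs the current generation c and gives b to the next one.
c, b, beta = 1.0, 3.0, 0.9  # hypothetical cost, benefit, planner's discount

# A generation that only values its own utility never pays c for a benefit
# it will not receive, so defection is dominant for every generation.
myopic_choice = "defect" if c > 0 else "cooperate"

# The planner maximising the discounted sum of utilities compares the cost
# now against the discounted benefit to the next generation.
planner_choice = "cooperate" if beta * b > c else "defect"

print(myopic_choice, planner_choice)  # defect cooperate
```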

Better impossibility result for unbounded utilities
Toby_Ord · 3y

This is an interesting theorem which helps illuminate the relationship between unbounded utilities and St Petersburg gambles. I particularly appreciate that you don't make the explicit assumption that the values of gambles must be representable by real numbers, which is very common but unhelpful in a setting like this. However, I do worry a bit about the argument structure.

The St Petersburg gamble is a famously paradox-riddled case. That is, it is a very difficult case where it isn't clear what to say, and many theories seem to produce outlandish results. When this happens, it isn't so impressive to say that we can rule out an opposing theory because it would lead to strange results in that paradox-riddled situation. It strikes me as similar to rejecting a rival theory because it leads to strange results in variable population-size cases (when actually, all theories do), or because it leads to strange results in infinite population cases (when again, all theories do).

Even if one had a proof that an alternative theory doesn't lead to strange conclusions in the St Petersburg gamble, I don't think this would count all that much in its favour, as it seems plausible to me that various rules of decision theory that were developed in the cleaner cases of finite possibility spaces (or well-behaved infinite spaces) need to be tweaked to account for more pathological possibility spaces. For a simple example, I'm sympathetic to the sure thing principle, but it directly implies that the St Petersburg gamble is better than itself, because an unresolved gamble is better than a resolved one, no matter how the latter was resolved. My guess is that this means the sure thing principle needs to have its scope limited to exclude gambles whose value is higher than that of any of their resolutions.
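For concreteness, here is the standard St Petersburg gamble in simulation (nothing specific to the theorem, just the divergence that makes every resolved outcome worse than the unresolved gamble):

```python
import random

def st_petersburg():
    # Flip until heads; the payoff doubles on each tails: 2**k with probability 2**-k.
    payoff = 2
    while random.random() < 0.5:
        payoff *= 2
    return payoff

for n in (10**2, 10**4, 10**6):
    print(n, sum(st_petersburg() for _ in range(n)) / n)
# The sample mean keeps drifting upward (roughly like log2(n)): the expected
# value diverges, so any resolved, finite outcome falls short of the gamble.
```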

Is the potential astronomical waste in our universe too small to care about?
Toby_Ord · 11y

Regarding your question, I don't see theoretical reasons why one shouldn't be making deals like that (assuming one can and would stick to them etc). I'm not sure which decision theory to apply to them though.

Is the potential astronomical waste in our universe too small to care about?
Toby_Ord · 11y

The Moral Parliament idea generally has a problem regarding time. If it is thought of as making decisions for the next action (or other bounded time period), with a new distribution of votes etc. when the next choice comes up, then there are intertemporal swaps (and thus Pareto improvements according to each theory) that it won't be able to achieve. This is pretty bad, as it at least appears to be Pareto dominated by another method. However, if it is making one decision for all time, over all policies for resolving future decisions, then (1) it is even harder to apply in real life than it looked, and (2) it doesn't seem able to deal with cases where you learn more about ethics (i.e. update your credence function over moral theories), at least not without quite a bit of extra explanation of how that works. I suppose the best answer may well be that the policies over which the representatives are arguing include branches dealing with all ways the credences could change, weighted by their probabilities. This is even more messy.

My guess is that of these two broad options (making one bounded decision at a time vs deciding everything all at once), the latter is better. But either way it is a bit less intuitive than it first appears.

A possible tax efficient swap mechanism for charity
Toby_Ord · 11y

This is a good idea, though not a new one. Others have abandoned the idea of a formal system for this on the grounds that:

1) It may be illegal.

2) Quite a few people think it is illegal or morally dubious (whether or not it is actually illegal or immoral).

It would be insane to proceed with this without confirming (1). If illegal, it would open you up to criminal prosecution, and more importantly, seriously hurt the movements you are trying to help. I think that whether or not it turns out to be illegal, (2) is sufficient reason to not pursue it. It may cause serious reputational damage to the movement which I'd expect to easily outweigh the financial benefits.

I also think that the 10% to 20% boost is extremely optimistic. That would only be achieved if almost everyone was using it and they all wanted to spend most of their money funding charities that don't operate in their countries. I'd expect something more like a boost of a few percent.
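A stylised version of that arithmetic (all numbers hypothetical; the real boost depends on each country's tax relief rate and on how much giving can actually be matched):

```python
def overall_boost(matched_fraction, relief_rate):
    # Only the matched fraction of donations gains tax relief via the swap,
    # so the boost to money reaching charities is roughly their product.
    return matched_fraction * relief_rate

print(overall_boost(0.5, 0.3))  # 0.15: the optimistic headline figure
print(overall_boost(0.1, 0.3))  # 0.03: "a few percent"
```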

Note that there are also very good alternatives. One example is a large effort to encourage people to informally do this in a non-matched way, by donating to the subset of effective charities that are tax deductible in their country. This could get most of the benefits for none of the costs.

The metaphor/myth of general intelligence
Toby_Ord · 11y

This is a really nice and useful article. I particularly like the list of problems AI experts assumed would be AI-complete, but turned out not to be.

I'd add that if we are trying to reach the conclusion that "we should be more worried about non-general intelligences than we currently are", then you don't need it to be true that general intelligences are really difficult. It would be enough that "there is a reasonable chance we will encounter a dangerous non-general one before a dangerous general one". I'd be inclined to believe that even without any of the theorising about possibility.

I think one reason for the focus on 'general' in the AI Safety community is that it is a stand-in for the observation that we are not worried about path planners or chess programs or self-driving cars etc. One way to say this is that these are specialised systems, not general ones. But you rightly point out that it doesn't follow that we should only be worried about completely general systems.

Some alternatives to “Friendly AI”
Toby_Ord · 11y

Thanks for bringing this up Luke. I think the term 'friendly AI' has become something of an albatross around our necks as it can't be taken seriously by people who take themselves seriously. This leaves people studying this area without a usable name for what they are doing. For example, I talk with parts of the UK government about the risks of AGI. I could never use the term 'friendly AI' in such contexts -- at least without seriously undermining my own points. As far as I recall, the term was not originally selected with the purpose of getting traction with policy makers or academics, so we shouldn't be too surprised if we can see something that looks superior for such purposes. I'm glad to hear from your post that 'AGI safety' hasn't rubbed people up the wrong way, as feared.

It seems from the poll that there is a front runner, which is what I tend to use already. It is not too late to change which term is promoted by MIRI / FHI etc. I think we should.

Can noise have power?
Toby_Ord · 11y

This is quite possibly the best LW comment I've ever read. An excellent point with a really concise explanation. In fact it is one of the most interesting points I've seen within Kolmogorov complexity too. Well done on independently deriving the result!

Increasing the pool of people with outstanding accomplishments
Toby_Ord · 12y

Without good ways to overcome selection bias, it is unclear that data like this can provide any evidence of an outsized impact from unconventional approaches. I would expect a list of achievements as impressive as the above whether or not there was any correlation between the two.
