Imagine you’re an algebra student and your teacher pretends not to know algebra. Even though the teacher does in fact know it, you as the student will not learn.
This is very cool and valuable work but I was also distracted by how funny I found this example.
I wouldn't consider it a common phrase, but I also wouldn't be surprised at all to hear someone say it given a sensible context.
One flaw in the setup is that the person opposing you could generate a random sequence beforehand and simply follow it when choosing options in the "game." I assume the offer to play the game is no longer available and/or you would not knowingly choose to play it against someone using this strategy, but if you would, I'll take the $25.
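To make that concrete, here's a rough sketch of what I mean (the payoff details of the original game aren't spelled out here, so treat this as illustrative only): against a pre-committed, truly random sequence, no predictor can do better than chance.

```python
import random

def predictor_accuracy(rounds: int = 100_000) -> float:
    """Opponent pre-generates a uniformly random sequence and just follows it."""
    opponent = [random.choice(["A", "B"]) for _ in range(rounds)]
    # Stand-in predictor: always guess "A". Any strategy fares the same (~50%)
    # against a sequence that was fixed at random before the game started.
    predictions = ["A"] * rounds
    hits = sum(p == o for p, o in zip(predictions, opponent))
    return hits / rounds

print(f"Accuracy vs. pre-committed random sequence: {predictor_accuracy():.3f}")
```

So unless the game's odds already price in a 50% hit rate, the predictor side of the bet loses in expectation.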
I don't think this analogy works on multiple levels. As far as I know, there isn't some sort of known probability that scaling laws will continue to be followed as new models are released. While it is true that a new model continuing to follow scaling laws is increased evidence in favor of future models continuing to follow scaling laws, thus shortening timelines, it's not really clear how much evidence it would be.
This is important because, unlike a coin flip, there are a lot of other details around a new model release that could plausibly affect someone's timelines. A model's capabilities are complex, human reactions to them likely more so, and none of that is captured in a yes/no description of whether it's better than the previous one or follows scaling laws.
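To gesture at why the size of the update is unclear: in odds form, the update is just the prior odds multiplied by a likelihood ratio, and it's that ratio that nobody actually knows.

$$
\underbrace{\frac{P(\text{scaling holds} \mid \text{new model fits})}{P(\text{scaling breaks} \mid \text{new model fits})}}_{\text{posterior odds}}
= \underbrace{\frac{P(\text{new model fits} \mid \text{scaling holds})}{P(\text{new model fits} \mid \text{scaling breaks})}}_{\text{likelihood ratio}}
\times
\underbrace{\frac{P(\text{scaling holds})}{P(\text{scaling breaks})}}_{\text{prior odds}}
$$

With a coin flip you know the likelihood ratio exactly; here the first factor on the right is anyone's guess, so two people can watch the same release and update their timelines by very different amounts.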
Also, following your analogy would differ from the original comment, since it shifts the question to whether the new AI model follows scaling laws rather than just whether it is better than the previous one (it seems to me that there could be a model that is better than the previous one yet still markedly underperforms relative to what scaling laws would predict).
If there are any obvious mistakes I'm making here, I'd love to know; I'm still pretty new to the space.
Can you expand on this? I'm not sure what you mean but am curious about it.
Two years later, you could get this with Veo 2:
The picture(s) here seem to be missing.
My strong downvotes are giving +1, which is a little confusing.
I'm not sure the example you provide is actually an example of appealing to consequences. To me it seems more like looking at an action at different levels of abstraction rather than starting to think about consequences, although I do think the divide can be unclear. I don't do philosophy, so my ideas of how these things are defined are certainly messy and may not map well onto their technical usage, but I'm reminded of the bit in the Sequences on Reductionism: you can certainly have an accurate model of a plane that deals only with particle fields and fundamental forces, but that doesn't mean the plane doesn't have wings.
This is to say I think that you can have different moral judgements about an action at different levels of abstraction, but that's something that can happen before and/or separately from thinking about the consequences of that action.
Are you expecting the Cost/Productivity ratio of AGI in the future to be roughly the same as it is for the agents Sam is currently proposing? I would expect that as time passes, the capabilities of such agents will vastly increase while they also get cheaper. This seems to generally be the case with technology, and previous technology had no potential means of self-improving on a short timescale. The potential floor for AI "wages" is also incredibly low compared to humans.
It's definitely also worth keeping in mind that AI labor should be much easier to scale than human labor, in part because of the hiring issue, but a relatively high(?) price on initial agents isn't enough to update me away from the massive potential they have to undercut human labor.
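As a purely illustrative back-of-the-envelope (every number here is hypothetical, not a claim about actual pricing): even an agent initially priced at parity with a human gets cheap fast if inference costs keep falling, and there's no minimum wage or subsistence cost anchoring the price from below.

```python
# Hypothetical numbers purely for illustration; not claims about real agent pricing.
human_cost_per_hour = 30.0        # assumed fully-loaded human cost, USD/hr
agent_cost_per_hour = 30.0        # suppose the initial agent is priced at parity
annual_cost_multiplier = 0.5      # suppose the effective cost halves each year

for year in range(6):
    cost = agent_cost_per_hour * annual_cost_multiplier ** year
    print(f"year {year}: agent ~${cost:.2f}/hr vs. human ${human_cost_per_hour:.2f}/hr")
```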
I'm guessing Zvi is referencing these parts: the definition of "large developer" and a further section later in the bill, on pages 3 and 5 respectively.
Non-large developers setting out to train a frontier model as described above also have to fill out an SSP, but they don't have to follow paragraphs C or D of the SSP definition (making a detailed test procedure), which wasn't really part of your question, but now we know.
ETA: I've just now seen Expertium's comment pointing to a more recent version of the bill, which makes this comment mostly superfluous.