Phiwip

Comments
RTFB: The RAISE Act
Phiwip3mo10

I'm guessing Zvi is referencing these parts, in the definition of "large developer" and in a further section later, on pages 3 and 5 respectively.

"Large developer" means a person that  has  trained  at  least  one frontier  model, the compute cost of which exceeds five million dollars, and has spent over one hundred  million  dollars  in  compute  costs  in aggregate  in training frontier models...

...Any person who is not a large developer, but who sets out to train a frontier model that if completed as planned would qualify such person as a large developer (i.e., at the end of the training, such person will have spent five million dollars in compute costs on one frontier model and one hundred million dollars in compute costs in aggregate in training frontier models, excluding accredited colleges and universities to the extent such colleges and universities are engaging in academic research) shall, before training such model:

Non-large developers setting out to train a frontier model as described above also have to fill out an SSP, but they don't have to follow paragraphs C or D of the SSP definition (making a detailed test procedure). That wasn't really part of your question, but now we know. (Rough sketch of how I read the thresholds below.)
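
To make the two thresholds concrete, here is a minimal sketch of how I read them combining. The function name and example figures are purely illustrative, not anything from the bill itself:

```python
def is_large_developer(frontier_model_compute_costs):
    """My reading of the 'large developer' test quoted above: at least one
    frontier model whose compute cost exceeds $5M, AND over $100M in compute
    costs in aggregate across all frontier models trained."""
    has_5m_model = any(cost > 5_000_000 for cost in frontier_model_compute_costs)
    over_100m_aggregate = sum(frontier_model_compute_costs) > 100_000_000
    return has_5m_model and over_100m_aggregate

# e.g. one $6M model plus many smaller runs totalling $40M: not yet a large developer
print(is_large_developer([6_000_000] + [1_000_000] * 34))  # False
```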

ETA: I just now saw Expertium's comment pointing to a more recent version of the bill, which makes this comment mostly superfluous.

Reply
Distillation Robustifies Unlearning
Phiwip3mo1111

Imagine you’re an algebra student and your teacher pretends not to know algebra. Despite the fact that the teacher does know it themselves, you as a student will not learn. 

This is very cool and valuable work but I was also distracted by how funny I found this example.

Reply
Spaghetti Towers
Phiwip3mo10

I wouldn't consider it a common phrase, but I also wouldn't be surprised at all to hear someone say it given a sensible context.

Reply
Rational Agents Cooperate in the Prisoner's Dilemma
Phiwip4mo20

One flaw in the setup is that the person opposing you could generate a random sequence beforehand and simply follow that when choosing options in the "game." I assume the offer to play the game is not still available and/or you would not knowingly choose to play it against someone using this strategy, but if you would I'll take the $25.
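
For concreteness, here is a rough sketch of the kind of pre-committed opponent I mean (names are just illustrative):

```python
import random

def precommitted_opponent(num_rounds):
    """Generate the whole move sequence up front and simply play it back,
    so nothing you do during the 'game' can influence or predict it."""
    return [random.choice(["cooperate", "defect"]) for _ in range(num_rounds)]

moves = precommitted_opponent(100)
# In round t the opponent plays moves[t], regardless of your history of play.
```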

Reply
xpostah's Shortform
Phiwip5mo21

I don't think this analogy works on multiple levels. As far as I know, there isn't some sort of known probability that scaling laws will continue to be followed as new models are released. While it is true that a new model continuing to follow scaling laws is increased evidence in favor of future models continuing to follow scaling laws, thus shortening timelines, it's not really clear how much evidence it would be.

This is important because, unlike a coin flip, there are a lot of other details regarding a new model release that could plausibly affect someone's timelines. A model's capabilities are complex, human reactions to them likely more so, and that isn't covered in a yes/no description of if it's better than the previous one or follows scaling laws.
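
As a toy illustration of what I mean by "not clear how much evidence": the size of the update depends entirely on the likelihood ratio you assign to the observation, which is exactly the part that isn't pinned down here. A rough sketch, with made-up numbers:

```python
def posterior_short_timelines(prior, p_obs_given_short, p_obs_given_long):
    """Bayes' rule: how much 'new model follows scaling laws' moves you depends
    on how likely that observation is under each hypothesis."""
    numerator = p_obs_given_short * prior
    return numerator / (numerator + p_obs_given_long * (1 - prior))

prior = 0.3
# If the observation is nearly as likely either way, the update is small...
print(posterior_short_timelines(prior, 0.9, 0.8))  # ~0.33
# ...while more lopsided likelihoods produce a much bigger shift.
print(posterior_short_timelines(prior, 0.9, 0.3))  # ~0.56
```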

Also, following your analogy would differ from the original comment, since it moves to whether the new model follows scaling laws rather than just whether it is better than the previous one (it seems to me a model could be better than the previous one yet still markedly underperform relative to what scaling laws would predict).

If there's any obvious mistakes I'm making here I'd love to know, I'm still pretty new to the space.

Reply
Cole Wyeth's Shortform
Phiwip5mo40

Can you expand on this? I'm not sure what you mean but am curious about it.

Reply
The case for AGI by 2030
Phiwip5mo10

Two years later, you could get this with Veo 2:

The picture(s) here seem to be missing.

Reply
Shortform
Phiwip5mo44

My strong downvotes are giving +1, which is a little confusing.

Reply
Mis-Understandings's Shortform
Phiwip6mo10

I'm not sure the example you provide is actually an example of appealing to consequences. To me it seems more like looking at an action at different levels of abstraction rather than starting to think about consequences, although I do think the divide can be unclear. I don't do philosophy, so my ideas of how things may be defined are certainly messy and may not map well onto their technical usage, but I'm reminded of the bit in the Sequences regarding Reductionism: you certainly can have an accurate model of a plane dealing just with particle fields and fundamental forces, but that doesn't mean the plane doesn't have wings.

This is to say I think that you can have different moral judgements about an action at different levels of abstraction, but that's something that can happen before and/or separately from thinking about the consequences of that action.

Reply
Mis-Understandings's Shortform
Phiwip6mo80

Are you expecting the Cost/Productivity ratio of AGI in the future to be roughly the same as it is for the agents Sam is currently proposing? I would expect that as time passes, the capabilities of such agents will vastly increase while they also get cheaper. This seems to generally be the case with technology, and previous technology had no potential means of self-improving on a short timescale. The potential floor for AI "wages" is also incredibly low compared to humans.

It definitely is worth also keeping in mind that AI labor should be much easier to scale than human labor, in part because of the hiring issue, but a relatively high(?) price on initial agents isn't enough to update me away from the massive potential AI labor has to undercut human labor.

Reply