The study seems to be about what experts predict to be possible, not what is actually possible, afaict?
Is there a mechanism to explicitly run a proposed agreement by the regulator to get their OK?
You'd need the alternative workable approach to not be basically runnable on GPUs, which is maybe plausible, but seems optimistic?
(E.g. anything that can run on a computer would most likely benefit quite a bit from cheap GPU compute, even if it's overall more complex and the current optimizations aren't as targeted at it.)
I agree with the general concern, but it'd clearly be a move in the right direction on that front?
With this kind of proposal, I'm more worried that it could lead to a unilateral slowdown just after having spurred China to be much more aggressive on AI.
This is the "stacked S-curves" effect often seen in the maturing of (usually ordinary) technologies. It's perhaps slightly unusual that it's more pronounced and "discrete" right now (relatively few innovations leading to large amounts of progress).
The other angles are probably already out there but haven't been given a chance to shine while the current paradigm can still be sufficiently leveraged, so I'm not very hopeful about progress stalling by itself.
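For what it's worth, here's a toy numerical sketch of what I mean (all parameters invented, not fitted to anything): summing a few widely spaced logistic curves produces the "discrete"-looking jumps, while summing many overlapping ones produces the smooth ramp more typical of ordinary technologies.

```python
# Toy illustration: total progress as a sum of logistic S-curves,
# one per innovation. Few, widely spaced innovations make progress
# look "discrete"; many overlapping ones smooth it into a steady ramp.
import math

def logistic(t, midpoint, ceiling=1.0, steepness=1.5):
    """One innovation's contribution: slow start, fast middle, plateau."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

def stacked(t, midpoints, ceiling=1.0):
    """Total progress at time t from all innovations combined."""
    return sum(logistic(t, m, ceiling) for m in midpoints)

few_big = [2.0, 8.0]                       # two big innovations, each worth 1.0
many_small = [i * 0.5 for i in range(20)]  # twenty small ones, each worth 0.1

for t in range(12):
    print(t,
          round(stacked(t, few_big), 2),            # visible jumps
          round(stacked(t, many_small, 0.1), 2))    # smooth ramp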
We really need the update! I was going to share this with someone who has just now been hit with the full emotional force of realizing what's going on with AI... but even this passage right at the beginning doesn't seem so applicable anymore:
> Some combination of ‘we run out of training data and ways to improve the systems, and AI systems max out at not that much more powerful than current ones’ and ‘turns out there are regulatory and other barriers that prevent AI from impacting that much of life or the economy that much’ could mean that things during our life...
Narration doesn't work on this.
I think it's unclear whether this was overall bad for Anthropic/Amodei if you factor in the reputational and ideological boost they got ("aura farming" according to roon).
I recently thought something like "Community Notes, but for the internet" would be awesome, but you'd need a critical mass of people.
Using the kind of thing presented by OP for bootstrapping, combined with some mechanism that (in the near term) uses humans as the ultimate arbiters of reliability, could be pretty fun.
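A minimal sketch of the human-arbiter part (everything here, the names and the multiplicative update rule, is my own invention, not anything from the OP): AI raters vote on a claim's reliability, and occasional human rulings act as ground truth that recalibrates how much each rater's vote counts.

```python
# Hypothetical sketch: weighted reliability votes from AI raters, with
# occasional human rulings that recalibrate each rater's trust weight.
from collections import defaultdict

class ReliabilityAggregator:
    def __init__(self, lr=0.5):
        self.weights = defaultdict(lambda: 1.0)  # per-rater trust weight
        self.lr = lr  # how strongly one human ruling shifts trust

    def score(self, votes):
        """Weighted average of rater votes (1 = reliable, 0 = not)."""
        total = sum(self.weights[r] for r, _ in votes)
        return sum(self.weights[r] * v for r, v in votes) / total

    def human_ruling(self, votes, verdict):
        """A human arbiter settles the claim; raters who agreed with the
        verdict gain weight, raters who disagreed lose weight."""
        for rater, vote in votes:
            agreed = (vote >= 0.5) == verdict
            self.weights[rater] *= (1 + self.lr) if agreed else (1 - self.lr)

# Two AI raters disagree; a human verdict recalibrates their influence.
agg = ReliabilityAggregator()
votes = [("model_a", 0.9), ("model_b", 0.2)]
print(agg.score(votes))                 # 0.55 before any human feedback
agg.human_ruling(votes, verdict=True)
print(agg.score(votes))                 # ~0.73: model_a now counts for more
```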
Sure, assuming the development of your cure doesn't have substantial negative externalities, which is the whole point of the AI debate. I understand that your stance is "the risks are not that high", but it's worth pointing out that this is a core assumption on which the rest of your position rests.
I'd venture an uninformed guess that in 95% or so of these cases the problem isn't "taking ideas seriously" but rather people suspending proper judgement due to some emotional or social effect.
One point that I tend to believe is true, but that I don't see raised much:
The straightforward argument (machines that are smarter than us might try to take over) is intuitively clear to many people, but for whatever reason many have developed memetic antibodies against it (e.g. talk of the AI bubble, "AI risk is all sci-fi", "technological progress is good", "we just won't program it that way", etc.).
In my personal experience, the people I talk to who have a relatively basic education and are not terminally online are much more intuitively receptive to it.
Sorry if that was weirdly obscure. I was asking because the principal reason I go out of my way to avoid rain is that I'm worried my phone would get wet and potentially die (and I've been somewhat sad about having to forgo the experience of braving the rain at points). But it's possible that this is not a big issue with current devices (and maybe never was)!
One general point I've heard in this regard is that Japan's debt is mostly held by large Japanese companies, and so carries much smaller risk for the government.
Do you carry a smartphone with you on those occasions?
I would not go so far as to say that simply having proximity to OpenAI is problematic, as long as the instructor manages to convey that their decision to associate with a leading AI company is controversial (or at least the more general point that many reasonable people find what the AI companies do to be highly irresponsible). I somewhat trust that this is handled reasonably in this case.
Overall, I think the worry here is less that people who might only be somewhat safety-pilled give courses like this, and more that people who are very safety-pilled don't have the opportunity to do so (due to selection effects in CS/ML academia).
I don't have fully formed thoughts on this, but I think there's a reasonable point to be made: if we both grant AIs moral patienthood/rights and go about creating them at will without thinking it through very well, then we create a moral catastrophe one way or another.
I tentatively disagree with OP that the conclusion is we should just flat-out not grant AIs moral weight (although I think this is a sensible default to fall back on as a modus operandi), but it also seems optimistic to assert that, if we did so, this wouldn't have some kind of horrendous implications for where we're headed and what's currently happening (I'm not saying it would, just that I don't know either way).
I think this would benefit from examples (maybe just pointing at the top-level post/belief that was unpleasantly attacked, without calling out specific responses).
I think it's a good argument, but Anthropic doesn't seem quite aligned enough to make it work. E.g. they don't seem to have pushed for a coordinated Pause to any real extent (and if they don't think one would be a good idea, they haven't clarified their position, as far as I know).