Am I missing something? It seems like defense surpasses offense in the conventional sense. But if Russia and Ukraine both had nuclear weapons, no drone would be able to prevent a strike once the missiles were launched, right? Likewise if I flew an airplane to drop a bunker-buster on a dam (e.g. F-35s are still useful in the Russia/Ukraine war).
Maybe in the limiting case of drone warfare, defense does dominate, but it seems to me that we are some time away from that, in manufacturing capacity alone, if nothing else.
To me, Benacerraf's epistemological argument is basically a knockdown against almost any form of platonism. You don't have to agree, because I think at this level of metaphysics there are a lot of intuitions doing the work, but for me it makes platonism entirely implausible. I'm not too decided on ontological status right now; maybe I'm some sort of eliminative structuralist or something? But I could be moved. Obviously when I do maths I act as though I'm a platonist, and when I talk about morality I act as though I'm a realist, but in both cases I am not.
But yes, I think most of the debates between e.g. Woodin, Hamkins, etc. about pluralism/non-pluralism are for the most part better thought of as methodological debates with a philosophical undercurrent.
Yeah, like, to be clear, I didn't assign a 0% probability at this capability level, but I also think I wouldn't have been that high. But you're right that it's difficult to say in retrospect, since I didn't preregister my guesses on a per-capability-level basis at the time. I still think it's a smaller update than many of the ones I'm hearing people make.
Okay, thanks. This is very useful! I agree that it is perhaps a stronger update against some models of misalignment that people had in 2022; you're right. I think maybe I was committing a bit of a typical-mind fallacy here.
Interesting to note the mental dynamics I may have employed here. It is hard for me not to adopt the viewpoint "Yes, this doesn't change my view, which I actually did hold all along, and which is now clearly one of the reasonable 'misalignment exists' views", when in fact it is an update against other views: views that have fallen out of vogue precisely because they have been updated against over time, and so have dropped out of my present-day mental picture.
I am pretty confused about people who have been around the AI safety ecosystem for a while updating towards "alignment is actually likely by default using RLHF". But maybe I am missing something.
Like 3 years ago, it was pretty obvious that scaling was going to make RLHF "work" or "seem to work" more effectively for a decent amount of time, and probably for quite a long time. Then the risk is that later you get alignment-faking during RLHF training, or, at the extreme end, gradient-hacking, or just that your value function is misspecified and comes apart at the tails (as seems pretty likely with current reward functions). Okay, there are other options, but it seems like basically all of these were ~understood at the time.
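As a toy, one-dimensional sketch of what I mean by "comes apart at the tails" (the functions and numbers here are entirely made up for illustration; this is just Goodhart's law in miniature):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" objective: improves with x at first, then collapses.
def true_reward(x):
    return x - 0.01 * x**3

# Hypothetical proxy / learned reward: fit to the training regime,
# where "more x" always looked better.
def proxy_reward(x):
    return x

# On the training distribution (x in [0, 2]) the proxy looks essentially perfect.
x_train = rng.uniform(0.0, 2.0, size=1_000)
corr = np.corrcoef(true_reward(x_train), proxy_reward(x_train))[0, 1]
print(f"train-distribution correlation: {corr:.3f}")  # ~1.000

# Optimise the proxy hard over a wider range: its optimum sits deep in the
# region where the true reward has already fallen off a cliff.
x_search = np.linspace(0.0, 20.0, 20_001)
x_star = x_search[np.argmax(proxy_reward(x_search))]
print(f"proxy-optimal x = {x_star:.1f}, true reward there = {true_reward(x_star):.1f}")  # 20.0, -60.0
print(f"best achievable true reward ~ {true_reward(x_search).max():.1f}")  # ~3.8
```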
Yet, as we've continued to scale and models like Opus 3 have come out, people have seemed to update towards "actually maybe RLHF just does work," because they have seen RLHF "seem to work". But this was totally predictable 3 years ago, no? I think I actually did predict something like this happening, but I only really expected it to affect "normies" and "people who start to take notice of AI at about this time." Don't get me wrong, the fact that RLHF is still working is a positive update for me, but not a massive one, because it was priced in that it would work for quite a while. Am I missing something that makes "RLHF seems to work" a rational thing to update on?
I mean, there have been developments in how RLHF/RLAIF/Constitutional AI works, but nothing super fundamental or anything, afaik? So surely your beliefs should be basically the same as they were 3 years ago, plus the observation "RLHF still appears to work at this capability level", which is only a pretty minor update in my mind. Would be glad if someone could tell me whether or not I'm missing something.
Ah okay, I think I understand, if I'm remembering my type theory correctly. I think this is downstream of "standard type theory", i.e. Martin-Löf type theory, not accepting the excluded middle? Which does also mean rejecting (full) choice, since constructively choice implies excluded middle.
EDIT: But fwiw, I think the excluded middle is much less controversial than Choice (it should technically be strictly less controversial). That might make for a less interesting post, and I'm sure philosophers have already written it. Though I think a post defending rejection of the excluded middle from a type theory perspective actually could be quite good, because lots of people don't seem to understand the arguments from the other side here, and think that side is just being ridiculous.
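(For reference, the standard result behind "rejecting excluded middle means rejecting choice" here is Diaconescu's theorem; a rough statement, nothing original on my part:)

```latex
% Diaconescu / Goodman-Myhill: over a constructive set theory (e.g. IZF),
% the axiom of choice implies the law of excluded middle, so rejecting LEM
% forces rejecting set-level AC.
\[
\mathrm{IZF} \vdash \mathrm{AC} \rightarrow \mathrm{LEM},
\qquad \text{hence} \qquad
\neg\,\mathrm{LEM} \;\Rightarrow\; \neg\,\mathrm{AC}.
\]
% Proof idea: for a proposition $P$, apply choice to the inhabited sets
% $A = \{x \in \{0,1\} : x = 0 \lor P\}$ and $B = \{x \in \{0,1\} : x = 1 \lor P\}$;
% comparing the chosen elements decides $P \lor \neg P$.
```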
I think I basically agree that this is how one should consider this.
But I think there is a reasonable defence of "ZF-universes as somewhat transcendent entities", and it is that we do virtually all of our actual maths in ZF-universes, by saying that ultimately we will be able to appeal to the ZF axioms. This makes ZF-objects pretty different from groups. E.g. I think there's a pretty tight analogy between forcing in ZF and Galois extensions (it's just that forcing is much more complicated), but the consequences of forcing for how we do the rest of maths can be somewhat deep (e.g. the independence of CH from ZFC, consequences for Turing computability, etc). So the mystical reputation is somewhat deserved. Woodin would defend a much more complicated and involved version of this, as I discussed in my post about the constructible universe.
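(To spell out the CH example, since it's the prototypical deep consequence of forcing; this is just the standard textbook statement:)

```latex
% Gödel (1940): the constructible universe L satisfies ZFC + CH.
% Cohen (1963): forcing over any model of ZFC produces a model of ZFC + not-CH.
\[
\mathrm{Con}(\mathrm{ZFC}) \;\Longrightarrow\;
\mathrm{Con}(\mathrm{ZFC} + \mathrm{CH})
\;\text{ and }\;
\mathrm{Con}(\mathrm{ZFC} + \neg\mathrm{CH}),
\]
% so ZFC can neither prove nor refute CH.
```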
But I agree that ultimately, on our current understanding, ZF-universes are just another kind of mathematical object; they happen to be the objects we use to do the rest of maths with, and these days we can step outside them with large cardinal axioms, if we'd like, and analyse their further consequences. It's pretty similar to how we started viewing logic and logics after Gödel's results (i.e. clearly first-order logic is very useful because of compactness/completeness, but that doesn't mean "second-order logic is wrong").
Ah I really need to write a megasequence about the large cardinal axioms! They're awesome (I wrote a thesis on them).
If you're talking about the surreals or hyperreals, the issue is basically that there's not one canonical model of infinitesimals; you can construct them in many different ways. I'll hopefully end up writing more about the surreals and hyperreals at some point, but unfortunately they don't solve as many issues as you'd hope, and they actually introduce some problems of their own.
As a motivating idea here, note that you need the Boolean Prime Ideal Theorem (itself a weak form of choice, not provable in ZF alone) to show that the hyperreals even exist in the first place, if you're starting from the natural numbers as "mathematically/ontologically basic". (Maybe there's another way to define them, but none immediately comes to mind; there is another way to define the surreals, but that has its own issues.)
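(To spell out the construction I have in mind, the usual ultrapower; this is textbook material:)

```latex
% Fix a non-principal ultrafilter U on N. Its existence follows from the
% Boolean Prime Ideal Theorem / ultrafilter lemma and is not provable in ZF alone.
\[
{}^{*}\mathbb{R} \;=\; \mathbb{R}^{\mathbb{N}} / \mathcal{U},
\qquad
(a_n) \sim (b_n) \iff \{\, n \in \mathbb{N} : a_n = b_n \,\} \in \mathcal{U}.
\]
% The class of (1, 1/2, 1/3, ...) is then a positive infinitesimal, and different
% choices of U need not give canonically isomorphic models, hence "no one
% canonical model of infinitesimals".
```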
Sure, but Ukraine wanted F-35s, right? I assume because they thought they would be useful. As to the rest, it seems like you could claim that America 'seized' Japanese territory after a nuclear strike (rewriting the Japanese constitution, occupation, now a staunch ally, etc). Such a strike only has to break the will of the people fighting, or break the ability of command structures to function effectively; you don't have to glass the entire country you want to invade.