Comments

gwern (195)

And in a way, they ought to be rolling in even more compute than it looks, because they are so much more focused: Anthropic isn't doing image generation, it isn't doing voice synthesis, it isn't doing video generation... (As far as we know they aren't researching those, and they definitely aren't serving them to customers the way OA or Google do.) It does text LLMs. That's it.

But nevertheless, an hour ago, working on a little literary project, I hit Anthropic switching my Claude to 'concise' responses to save compute. (Ironically, I think that may have made the outputs better, not worse, for that project, because Claude tends to 'overwrite', especially in what I was working on.)

gwern (245)

Yes, basically. It is well-written and funny (of course), but a lot of it is wrong. What was, say, the last "article explaining Bayes" you saw on LW, which is his central example of the staleness and repetition killing LW? Would I find 3 or 4 new articles on how "Bayes's theorem is like a burrito" if I went over to the Main page right now...?* (Personally, I wouldn't mind reviving some more Bayes on LW these days, and I have an idea for one myself.)

And saying we weren't weird to begin with but have gotten weirder...? I have no idea how he could have gotten that idea - trust me when I say that people on LW used to be a lot weirder, or hey, no need to do that - just go crack open a copy of Great Mambo Chicken or ask a question like 'was a larger percentage of LW signed up for cryonics in 2009 or in 2024?' Sorry, everyone who joined post-MoR, but you're just a lot more normal and less weird than the OG LWers like Hanson or Clippy or Yudkowsky or even Roko. (Yes, you still have a shot at a normal life & happiness, but your posts are not remotely as unhinged, so who's to say who's better off in the end?)

* that was rhetorical, but of course I checked anyway, and of the first 30 or 40 posts, the only one that even comes close to being about Bayesianism seems to be https://www.lesswrong.com/posts/KSdqxrrEootGSpKKE/the-solomonoff-prior-is-malign-is-a-special-case-of-a - which is not very close at all.

gwern (51)

To copy over my Twitter response:

I think it's a very brave claim that the country with some of the consistently highest growth rates in the world, and which is far more able & willing to repress savings [and consumption] to drive investment, would obviously lose a GDP growth race so badly as to render it entirely harmless.
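To make the compounding concrete (a toy calculation of my own; the GDP figures and growth rates are round illustrative assumptions, not forecasts):

```python
# Toy compounding comparison: a modest growth-rate edge beats a head start.
# All numbers are illustrative assumptions, not data.
us_gdp, cn_gdp = 27.0, 18.0   # trillions USD, rough current magnitudes
us_g, cn_g = 0.02, 0.05       # assumed long-run real growth rates

for year in range(0, 21, 5):
    us = us_gdp * (1 + us_g) ** year
    cn = cn_gdp * (1 + cn_g) ** year
    print(f"year {year:2d}: US {us:5.1f}T  China {cn:5.1f}T")
# By year 20: US ~40.1T vs China ~47.8T - the faster grower overtakes
# despite starting a third behind.
```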

gwern (60)

No, I don't miss it. I think it's just a terrible idea, and if that is the exit plan, I would greatly appreciate hawks being explicit about it, because I expect everyone else to find it (along with most of the other exit plans that would actually work) appalling and thus temper their enthusiasm for an arms race.

"OK, let me try this again. I'm just having a little trouble wrapping my mind around this, how this arms race business ends well. None of us are racist genocidal maniacs who want to conquer the world or murder millions of innocent people, which is what your military advantage seems to require in order to actually cash out as any kind of definitive long-term solution to the problem that the CCP can just catch up a bit later; so, why exactly would we execute such a plan if we put ourselves in a position where we are left only with that choice or almost as bad alternatives?"

"Oh, well, obviously our AGIs will (almost by definition) be so persuasive and compelling at brainwashing us, the masters they ostensibly serve, that no matter what they tell us to do, even something as horrific as that, we will have no choice but to obey. They will simply be superhumanly good at manipulating us into anything that they see fit, no matter how evil or extreme, so there will be no problem about convincing us to do the necessary liquidations. We may not know exactly how they will do that, but we can be sure of it in advance and count on it as part of the plan. So you see, it all will work out in the end just fine! Great plan, huh? So, how many trillions of dollars can we sign you up for?"

gwern (144)

OA has indirectly confirmed it is a right-to-be-forgotten thing in https://www.theguardian.com/technology/2024/dec/03/chatgpts-refusal-to-acknowledge-david-mayer-down-to-glitch-says-openai

ChatGPT’s developer, OpenAI, has provided some clarity on the situation by stating that the Mayer issue was due to a system glitch. “One of our tools mistakenly flagged this name and prevented it from appearing in responses, which it shouldn’t have. We’re working on a fix,” said an OpenAI spokesperson.

...OpenAI’s Europe privacy policy makes clear that users can delete their personal data from its products, in a process also known as the “right to be forgotten”, where someone removes personal information from the internet.

OpenAI declined to comment on whether the “Mayer” glitch was related to a right to be forgotten procedure.

Good example of the redactor's dilemma and the need for Glomarizing: by confirming that they have a tool to flag names and hide them, and then by neither confirming nor denying that this was related to a right-to-be-forgotten order (a meta-gag), they confirm that it's a right-to-be-forgotten bug.

Similar to when OA people were refusing to confirm or deny signing OA NDAs which forbade them from discussing whether they had signed an OA NDA... That was all the evidence you needed to know that there was a meta-gag order (as was eventually confirmed more directly).
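To spell out the inference pattern in both cases (a toy Bayes update; every number here is a made-up illustration, not anything OA disclosed):

```python
# Toy Bayesian reading of the Glomar response; all numbers are invented.
prior = 0.5                 # prior: the "Mayer" block is an RTBF order
p_obs_if_rtbf = 0.9         # under a legal gag, "confirm the tool + refuse to
                            # confirm/deny the RTBF link" is the expected move
p_obs_if_not = 0.1          # otherwise a flat denial would be easy and costless

posterior = (p_obs_if_rtbf * prior) / (
    p_obs_if_rtbf * prior + p_obs_if_not * (1 - prior)
)
print(f"P(RTBF | observed non-denial) = {posterior:.2f}")  # -> 0.90
```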

gwern (96)

It would also be odd as a glitch token. These are space-separated names, so most tokenizers will tokenize them separately; and glitch tokens appear to be due to undertraining, but how could that possibly be the case for a phrase like "David Mayer", which has so many instances across the Internet, none of them with any apparent reason to be filtered out by data-curation processes the way glitch-token sources often are?
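A quick check of the tokenization point (a minimal sketch using OpenAI's open-source tiktoken library; the exact splits vary by tokenizer and are my expectation, not guaranteed):

```python
# pip install tiktoken; the splits below are what I'd expect, not guaranteed.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era BPE vocabulary
ids = enc.encode("David Mayer")
print([enc.decode([i]) for i in ids])
# Expected: something like ['David', ' Mayer'] - ordinary, well-trained
# tokens, not one rare fused token of the ' SolidGoldMagikarp' sort.
```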

gwern (105)

The original comment you wrote appeared to be a response to "AI China hawks" like Leopold Aschenbrenner. Those people do accept the AI-is-extremely-powerful premise, and it is hardly a fringe position when Trump's daughter is literally retweeting Leopold's manifesto.

But would she be retweeting it if Leopold were being up front about how the victory scenario entails something like 'melt all GPUs and conquer and occupy China perpetually' (or whichever of the viable strategies he actually has in mind, assuming he has one), instead of coyly referring to a 'decisive military advantage' - which doesn't actually make sense or provide an exit plan?

gwern (147)

The standard LW & rationalist thesis (which AFAICT you agree with) is that sufficiently superintelligent AI is a magic wand that allows you to achieve whatever outcome you want.

The standard LW & rationalist thesis is accepted by few people anywhere in the world, especially among policy and decision-makers, and it's hard to imagine that it will be widely and uncontroversially accepted anywhere until it is a fait accompli - and even then I expect many people will continue to argue fallbacks about "the ghost in the machine is outsourced human labor" or "you can't trust the research outputs" or "it's just canned lab demos" or "it'll fail to generalize out of distribution". Hence, we need not concern ourselves here with what we happen to think; what matters is what the decision-makers believe.

So one answer would be to prevent the CCP from doing potentially nasty things to you while they have AGI supremacy. Another answer might be to turn the CCP into a nice liberal democracy friendly to the United States. Both of these are within the range of things the United States has done historically when it has had the opportunity.

It is certainly a viable strategy, if one were to execute it fully rather than partially. But I don't think people are very interested in biting these sorts of bullets without a Pearl Harbor or 9/11:

HAWK: "Here's our Plan A, you'll love it!

'We should launch an unprovoked and optional AI arms race, whose best-case scenario and 'winning' requires the USA to commit to, halfway around the world, the total conquest, liquidation, and complete reconstruction of the second-largest/most powerful nuclearized country on earth, taking over a country with 4.25x more people than itself, which will fiercely resist this humiliation and colonization, likely involving megadeaths, and trying to turn it into a nice liberal democracy (which we have failed to do in many countries far smaller & weaker than us, eg. Haiti, Afghanistan, or Iraq), and where if we ever fail in this task, that means they will then be highly motivated to do the same to us, and likely far more motivated than we were when we began, potentially creating our country's most bitter foe ever.'"

EVERYONE ELSE: "...what's Plan B?"

gwern (20)

I bet there are plenty of amusics who understand that other people get a lot out of music emotionally, but think that descriptions like that are hyperbole: https://en.wikipedia.org/wiki/Amusia#Social_and_emotional

gwern (103)

Benjamin Todd reports back from "a two-week trip in China" on "Why a US AI 'Manhattan Project' could backfire: notes from conversations in China" (cf Dwarkesh), hitting very similar points about lack of funding/will despite considerable competence, and that:

So what might trigger a wake up? Most people said they didn’t know. But one suggestion was that the fastest way would be a high-profile US state-led AI project (especially if its explicit goal is US dominance…).

This means calls for a US "Manhattan Project" for AGI might easily be self-defeating. If maintaining a technological lead is your goal, better to stfu and hope the status quo persists as long as possible. (Or if you do go ahead, you need much stricter export restrictions.)
