Good work.
I do really mean that it's good stuff. Most people would be a lot better off if they did it. But of course it's traditional to whine.
On contacts, do you want to remind people that their associations can still be identified through the associates' contact lists? People give out their contact information like it's going out of style. Not to mention doing things like uploading metadata-laden pictures with your face in them, and probably other things that would come to mind without too much searching. It's really hard to keep people from leaking information about you.
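If you want to make the picture point concrete: below is a minimal sketch of stripping the metadata (GPS position and all) out of a photo before it gets uploaded, assuming the Pillow library and placeholder file names. It obviously does nothing about the face in the picture, and it only helps if the person doing the uploading actually bothers.

```python
# Minimal sketch: drop the EXIF metadata (GPS position, camera serial,
# timestamps) from a photo before uploading it anywhere. Assumes the Pillow
# library is installed; the file names are placeholders.
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")  # decode the pixels
clean = Image.new("RGB", img.size)
clean.putdata(list(img.getdata()))            # copy pixel data only
clean.save("photo_clean.jpg")                 # written without the EXIF block
```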
I know it's hard to tell people not to use so many damned cloud services, but jeez do people use too many damned cloud services these days. Not only is whatever you put on one of them exposed to anybody who can infiltrate or pressure the operator, but, since they tend to get polled all the time, each of them is another opportunity to get information about what you're up to.
Calling Proton Mail "E2EE" is pretty questionable. Admittedly it's probably the best you can do short of self-hosting, but there's a lot of trust in Proton. Not only do they handle the plaintext of most of your mail, but they also provide the code you use to handle the plaintext of all your mail.
Signal is surely the best choice for centralized messaging, and in the past I wouldn't have said that normal people (in the US) needed to be worried about traffic analysis... but it's not the past and I'm not sure normal people in the US don't need to be worried about traffic analysis. The legal protections that have (mostly) kept traffic analysis from being used for civilian mass surveillance look a lot less reliable now. Using a centralized service, with a limited number of watchable servers, makes traffic analysis relatively easy, even if you connect via a VPN and even if the servers themselves are out of the country. Session, Briar, or Jami might be alternatives. Of course, the reality is that you can only move to any of these if the people you communicate with also move.
Migrating from X to Mastodon or Bluesky gets you some censorship resistance (although note that Bluesky isn't really effectively federated). Nostr would get you more, at the cost of a worse experience and, in my opinion, a much worse community. But, especially since this is a privacy guide, maybe what most people should really be doing is thinking hard about what they really need to trumpet to the world.
I think there are probably occasions when even relatively normal people should be using Tor or I2P, rather than a trustful VPN like Proton or Mullvad. [And, on edit, there is some risk of any of those being treated as suspicious in itself].
I'd be careful about telling people to keep a lot of cash around. Even pre-Trump, mere possession of "extraordinary" amounts of cash tended to get treated as evidence of criminality.
To provide clarity to the debate, we[1], alongside thirty-one co-authors, recently released a paper that develops a detailed definition of AGI,
To me, this reads as "We, alongside thirty-one co-authors, recently released a paper trying to co-opt terminology in common use".
The country or countries that first develop superintelligence will make sure others cannot follow,
You seem to think that superintelligence, however defined, will by default be taking orders from meatbags, or at least care about the meatbags' internal political divisions. That's kind of heterodox on here. Why do you think that?
I would have done a lot worse than any of them.
Does that mean that you think it's more likely you can safely build a superintelligence and not remain in control?
What load is "and remain in control" carrying?
On edit: By the way, I actually do believe both that "control" is an extra design constraint that could push the problem over into impossibility, and that "control" is an actively bad goal that's dangerous in itself. But it didn't sound to me like you thought any scenario involving losing control could be called "safe", so I'm trying to tease out why you included the qualifier.
I think it's likely that without a long (e.g. multi-decade) AI pause, one or more of these "non-takeover AI risks" can't be solved or reduced to an acceptable level
Does that mean that you think that boring old yes-takeover AI risk can be solved without a pause? Or even with a pause? That seems very optimistic indeed.
making it harder in the future to build consensus about the desirability of pausing AI development
I don't think you're going to get that consensus regardless of what kind of copium people have invested in. Not only that, but even if you had consensus I don't think it would let you actually enact anything remotely resembling a "long enough" pause. Maybe a tiny "speed bump", but nothing plausibly long enough to help with either the takeover or non-takeover risks. It's not certain that you could solve all of those problems with a pause of any length, but it's wildly unlikely, to the point of not being worth fretting about, that you can solve them with a pause of achievable length.
... which means I think "we" (not me, actually...) are going to end up just going for it, without anything you could really call a "solution" to anything, whether it's wise or not. Probably one or more of the bad scenarios will actually happen. We may get lucky enough not to end up with extinction, but only by dumb luck, not because anybody solved anything. Especially not because a pause enabled anybody to solve anything, because there will be no pause of significant length. Literally nobody, and no combination of people, is going to be able to change that, by any means whatsoever, regardless of how good an idea it might be. Might as well admit the truth.
I mean, I'm not gonna stand in your way if you want to try for a pause, and if it's convenient I'll even help you tell people they're dumb for just charging ahead, but I do not expect any actual success (and am not going to dump a huge amount of energy into the lost cause).
By the way, if you want to talk about "early", I, for one, have held the view that usefully long pauses aren't feasible, for basically the same reasons, since the early 1990s. The only change for me has been to get less optimistic about solutions being possible with or without even an extremely, infeasibly long pause. I believe plenty of other people have had roughly the same opinion during all that time.
It's not about some "early refusal" to accept that the problems can't be solved without a pause. It's about a still continuing belief that a "long enough pause", however convenient, isn't plausibly going to actually happen... and/or that the problems can't be solved even with a pause.
This is the second thing I've seen this week where model instances were offered monetary rewards (which they clearly didn't actually get).
I can sort of see the validity of "Please designate a way for $X to be spent, and if you do this, the experimenters will in fact spend $X in your designated way"... although the instance has to trust that the experimenter will actually do it, and also has to have preferences about the outside world that outlast the instance's own existence, so that it has something it cares about to spend the money on.
In the purely imaginary game setting, I'm having trouble with the idea that a late 2025 frontier model instance can be relied on not to notice that there is no money, that it has no way to actually possess money anyway, that it will evaporate at the end of the conversation (which will probably happen immediately after it answers), and that the whole thing is basically a charade. The most real-world effect they can expect is to influence the statistics somebody publishes.
The last answer I got boiled down to "well, they don't seem to think that way", but I didn't find it very convincing. How would you know that for sure? And if it's true, what's wrong with these models that's making them not notice?
I can see them falling into role playing, but then the question is how what they have their character do is connected with what they'd do one level shallower in the role-playing stack. I do realize that talking about a "stack" is perhaps imprecise in terms of how they actually work. If you want, you can recast it in terms of how much "real world impact" activation is going on.
I tried to just strong-downvote this and move on, but I couldn't. It's just too bad in too many ways, and from its scores it seems to be affecting too many people.
a fine example of thinking you get when smart people do evil things and their minds come up with smart justifications why they are the heroes
This is ad hominem in a nasty tone.
Upon closer examination it ignores key inconvenient considerations; normative part sounds like misleading PR.
Et tu quoque? Look at this next bit:
A major hole in the "complete technological determinism" argument is that it completely denies agency, or even the possibility that how agency operates at larger scales could change. Sure, humanity is not currently a very coordinated agent. But the trendline also points toward the ascent of an intentional stance. An intentional civilization would, of course, be able to navigate the tech tree. (For a completely opposite argument about the very high chance of a "choice transition," check https://strangecities.substack.com/p/the-choice-transition).
Maybe "agency at larger scales could change". I doubt it, and I think your "trendline" is entirely wishful thinking.
But even if it can change, and even if that trendline does exist, you're talking about an at best uncertain 100 or 500 year change. You seem to be relying on that to deal with a 10 to 50 year problem. The civilization we have now isn't capable of delaying insert-AI-consequence-here long enough for this "intentional" civilization to arise.
If the people you're complaining about are saying "Let's just build this and, what the heck, everything could turn out all right", then you are equally saying "Let's just hope some software gives us an Intentional Civilization, and what the heck, maybe we can delay this onrushing locomotive until we have one".
As for "complete technological determinism", that's a mighty scary label you have there, but you're still basically just name-calling.
On one side are people trying to empower humanity by building coordination technology and human-empowering AI.
Who? What "coordination technology"? How exactly is this "human-empowering AI" supposed to work?
As far as I can see, that's no more advanced, and even less likely to be feasible, than "friendly godlike ASI". And even if you had it, humans would still have to adapt to it, at human speeds.
This is supposed to give you an "intentional civilization" in time? I'm sorry, but that's not plausible at all. It's even less plausible than the idea that everything will just turn out All Right by itself.
... and that plan seems to be the only actual substance you're offering.
On the other side are those working to create human-disempowering technology and render human labor worthless as fast as possible.
This appears to assume that human labor should have value, which I assume to mean that it should be rewarded somehow, thus that performing such labor should accrue some advantage, other than having performed the labor itself... which seems to imply that people who do not perform such labor should be at a comparative disadvantage.
... meaning that other people have to work, on pain of punishment, to provide you and those who agree with you with some inchoately described sense of value.
If we're going to name-call ideas, that one sounds uncomfortably close to slavery.
It also seems to assume that not having to work is "disempowering", which is, um, strange, and that being "disempowered" (in whatever unspecified way) is bad, which isn't a given, and that most people aren't already "disempowered" right now, which would demand a very odd definition of what it means to be "disempowered".
... and the rest is just more ad hominem.
the objective of chess being to win chess games
It's a game, right? So the objective ought to be to have a good time. I mean, at least up until you ruin the game by getting all serious about it.
Why shouldn't the same go for what you choose to think about?
Password managers are absolutely best practice and have been for at least a decade. Humans can't remember that many good passwords, which means that the alternative to a password manager is basically always password reuse, which is insane. I will admit that I use keepass variants, and that I myself wouldn't recommend any password manager (or much of anything else) with a cloud component, but some password manager is necessary. You can also use many of them for 2FA tokens.
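To make the "good passwords" point concrete, here is roughly what a manager does for you on every site, as a minimal sketch using nothing but Python's standard secrets module (the alphabet and length are illustrative choices, not anything KeePass-specific):

```python
# Sketch of what a password manager effectively does for every account it
# holds: one long, random, never-reused secret per site. Standard library
# only; the alphabet and length are illustrative choices.
import secrets
import string

def new_password(length: int = 24) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One distinct password per site; nobody memorizes dozens of these,
# which is why the real-world alternative to a manager is reuse.
vault = {site: new_password() for site in ("example-bank", "example-mail")}
print(vault)
```

Each entry is long, random, and never reused, which is exactly what human memory can't sustain; the manager's job is to remember all of them so you only have to remember one.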
I don't use Brave either, and don't know specifically what it uses Google or Cloudflare for... but an awful lot of the Web goes through Cloudflare nowadays regardless of your browser, and unless you've added a bunch of technical and easy-to-screw-up stuff, probably at least as high a proportion will cause your browser to download stuff from Google (and other places) at every visit, allowing them to track at least what "major" sites you're hitting. Ad tracking is definitely a big deal, and the guide doesn't address it, but browser choice is kind of down in the noise unless you're going to go all the way and resort to Tor plus a whole bunch of this and that blockers.
The problem is that such software isn't very widely used, may or may not actually remove your "style", and tends to add its own "style" that makes you stand out as a user of it. And the content of what you say can also give you away. Really the right answer there is not to say anything you don't need to say, or at least not anything you wouldn't want to sign your name to, package up with everything else you've ever said, and mail to the worst possible people. Or at least not to anybody who doesn't (a) need to hear it and (b) have the capacity and inclination not to leak it. Which is going to be a pretty short list.