jbash
Comments

Wei Dai's Shortform
jbash · 2d

Confining myself to actual questions...

I guess you might object that your reasoning only applies to value-related claims, not to anything strictly value-neutral: but why not?

Mostly because I don't (or didn't) see this as a discussion about epistemology.

In that context, I tend to accept in principle that I Can't Know Anything... but then to fall back on the observation that I'm going to have to act like my reasoning works regardless of whether it really does; I'm going to have to act on my sensory input as if it reflected some kind of objective reality regardless of whether it really does; and, not only that, but I'm going to have to act as though that reality were relatively lawful and understandable regardless of whether it really is. I'm stuck with all of that and there's not a lot of point in worrying about any of it.

That's actually what I also tend to do when I actually have to make ethical decisions: I rely mostly on my own intuitions or "ethical perceptions" or whatever, seasoned with a preference not to be too inconsistent.

BUT.

I perceive others to be acting as though their own reasoning and sensory input looked a lot like mine, almost all the time. We may occasionally reach different conclusions, but if we spend enough time on it, we can generally either come to agreement, or at least nail down the source of our disagreement in a pretty tractable way. There's not a lot of live controversy about what's going to happen if we drop that rock.

On the other hand, I don't perceive others to be acting nearly so much as though their ethical intuitions looked like mine, and if you distinguish "meta-intuitions" about how to reconcile different degree zero intuitions about how to act, the commonality is still less.

Yes, sure, we share a lot of things, but there's also enough difference to have a major practical effect. There truly are lots of people who'll say that God turning up and saying something was Right wouldn't (or would) make it Right, or that the effects of an action aren't dispositive about its Rightness, or that some kinds of ethical intuitions should be ignored (usually in favor of others), or whatever. They'll mean those things. They're not just saying them for the sake of argument; they're trying to live by them. The same sorts of differences exist for other kinds of values, but disputes about the ones people tend to call "ethical" seem to have the most practical impact.

Radical or not, skepticism that you're actually going to encounter, and that matters to people, seems a lot more salient than skepticism that never really comes up outside of academic exercises. Especially if you're starting from a context where you're trying to actually design some technology that you believe may affect everybody in ways that they care about, and especially if you think you might actually find yourself having disagreements with the technology itself.

As to your "(b) there's a bunch of empirical evidence against it" I honestly don't know what you're talking about there.

Nothing complicated. I was talking about the particular hypothetical statement I'd just described, not about any actual claim you might be making[1].

I'm just saying that if there were some actual code of ethics[2] that every "approximately rational" agent would adopt[3], and we in fact have such agents, then we should be seeing all of them adopting it. Our best candidates for existing approximately rational agents are humans, and they don't seem to have overwhelmingly adopted any particular code. That's a lot of empirical evidence against the existence of such a code[4].

The alternative, where you reject the idea that humans are approximately rational, thus rendering them irrelevant as evidence, is the other case I was talking about where "we have a lot of not-approximately-rational agents".


  1. I understand, and originally understood, that you did not say there was any stance that every approximately rational agent would adopt, and also that you did not say that you were looking for such a stance. It was just an example of the sort of thing one might be looking for, meant to illustrate a fine distinction about what qualified as ethical realism. ↩︎

  2. In the loose sense of some set of principles about how to act, how to be, how to encourage others to act or be, etc blah blah blah. ↩︎

  3. For some definition of "adopt"... to follow it, to try to follow it, to claim that it should be followed, whatever. But not "adopt" in the sense that we're all following a code that says "it's unethical to travel faster than light", or even in the sense that we're all following a particular code when we act as large numbers of other codes would also prescribe. If you're looking at actions, then I think you can only sanely count actions done at least partially because of the code. ↩︎ ↩︎

  4. As per footnote 3[3:1][5], I don't think, for example, the fact that most people don't regularly go on murder sprees is significant evidence of them having adopted a particular shared code. Whatever codes they have may share that particular prescription, but that doesn't make them the same code. ↩︎

  5. I'm sorry. I love footnotes. I love having a discussion system that does footnotes well. I try to be better, but my adherence to that code is imperfect... ↩︎

Wei Dai's Shortform
jbash · 3d

I reject the idea that I'm confused at all.

Tons of people have said "Ethical realism is false", for a very long time, without needing to invent the term "meta-ethics" to describe what they were doing. They just called it ethics. Often they went beyond that and offered systems they thought it was a good idea to adopt even so, and they called that ethics, too. None of that was because anybody was confused in any way.

"Meta-ethics" lies within the traditional scope of ethics, and it's intertwined enough with the fundamental concerns of ethics that it's not really worth separating it out... not often enough to call it a separate subject anyway. Maybe occasionally enough to use the words once in a great while.

Ethics (in philosophy as opposed to social sciences) is, roughly, "the study of what one Should Do(TM) (or maybe how one Should Be) (and why)". It's considered part of that problem to determine what meanings of "Should", what kinds of Doing or Being, and what kinds of whys, are in scope. Narrowing any of those without acknowledging what you're doing is considered cheating. It's not less cheating if you claim to have done it under some separate magisterium that you've named "meta-ethics". You're still narrowing what the rest of the world has always called ethical problems.

When you say "ethical realism is false", you're making a meta-ethical statement. You believe this statement is true, hence you perforce must believe in meta-ethical realism.

The phrase "Ethical realism", as normally used, refers to an idea about actual, object-level prescriptions: specifically the idea that you can get to them by pointing to some objective "Right stuff" floating around in a shared external reality. I'm actually using it kind of loosely, in that I really should not only deny that there's no objective external standard, but also separately deny that you can arrive at such prescriptions in a purely analytic way. I don't think that second one is technically usually considered to be part of ethical realism. Not only that, but I'm using the phrase to allude to other similar things that also aren't technically ethical realism (like the one described below).

But none of the things I'm talking about or alluding to refers to itself. In practice nobody gets confused about that, even without resorting to the term "meta-ethics", and definitely without talking about it like it's a really separate field.

To go ahead and use the term without accepting the idea that meta-ethics qualifies as a subject, the meta-ethical statement (technically I guess a degree 2 meta-ethical statement) that "ethical realism is false" is pretty close to analytic, in that even if you point to some actual thing in the world that you claim implies the Right ways to Be or Do, I can always deny that whatever you're pointing to matters... because there's no predefined standard for standards either. God can come down from heaven and say "This is the Way", and you can simultaneously prove that it leads to infinite universal flourishing, and also provide polls proving within epsilon that it's also a universal human intuition... and somebody can always deny that any of those makes it Right(TM).

But even if we were talking about a more ordinary sort of matter of fact, even if what you were looking for was not "official" ethical realism of the form "look here, this is Obviously Right as a brute part of reality", but "here's a proof that any even approximately rational agent[1] would adopt this code in practice", then (a) that's not what ethical realism means, (b) there's a bunch of empirical evidence against it, and essentially no evidence that it's true, and (c) if it is true, we obviously have a whole lot of not-approximately-rational agents running around, which sharply limits the utility of the fact. Close enough to false for any practical purpose.


  1. ... under whatever formal definition of rationality you happened to be trying to get people to accept, perhaps under the claim that that definition was itself Obviously Right, which is exactly the kind of cheating I'm complaining about... ↩︎

An Opinionated Guide to Privacy Despite Authoritarianism
jbash · 3d

(Maybe you agree?)

Yep.

An Opinionated Guide to Privacy Despite Authoritarianism
jbash · 3d

Yes, but the code is open source and independently audited. I don't see why I should call this out as a trust deficiency in particular.

... which is as good as it gets in most cases. I am not saying I have a better alternative for most people.

But the thing is that supply chains are hard. When you use Proton webmail, do you actually verify that the JavaScript they serve to you is actually the same JavaScript they had audited? Every time? And make sure it doesn't get changed somehow during the session? If you use the Proton proxy, do you actually rebuild it from source at every update? Even with reproducible builds (which I don't know whether they use or not), how many people actually check? Another person's checking does add some value for you, but there's a limit, especially if it's possible to guess who will check and who won't.
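To make "actually check" concrete, the bare-minimum version of that verification looks something like the sketch below: fetch the bundle the server is handing out right now and compare its hash against a digest you pinned yourself from the audited (ideally reproducibly built) source. The URL and digest here are placeholders, not Proton's real endpoints, and even doing this only tells you about the one copy you fetched at the one moment you fetched it.

```python
import hashlib
import urllib.request

# Placeholder values: substitute the real bundle URL and a digest you
# computed yourself from the audited / reproducibly built source tree.
BUNDLE_URL = "https://mail.example.com/static/app.js"
EXPECTED_SHA256 = "0" * 64  # not a real digest

def fetch_and_check(url: str, expected_sha256: str) -> bool:
    """Download the served bundle and compare it to the pinned digest."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        print(f"MISMATCH: served bundle hashes to {actual}")
        return False
    print("Served bundle matches the pinned digest.")
    return True

if __name__ == "__main__":
    fetch_and_check(BUNDLE_URL, EXPECTED_SHA256)
```

And that's the easy part; it says nothing about per-user or per-session substitution, or about anything the page loads after that first fetch.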

Worse, how many "normal" people can actually check? How many can even keep all the issues straight in their minds? Lots of professional programmers get simple PKI issues wrong all the time. PKI is a strict subset of what you have to worry about for supply chain.

So, yeah, you can use that stuff, but by the time you get to the point where you're actually getting much assurance out of the source code access or the audits, I think it's more complicated than self-hosting a mail server (which I've done for probably about 30 years). Of course, with self-hosting your deliverability goes to hell, and you're still relying on an incredible amount of software, but you get some protection from the sheer chaos of the environment; it's usually not so easy for the Bad Guys to actually get the data back reliably and inconspicuously, especially for untargeted surveillance.

Wei Dai's Shortform
jbash · 3d

Theoretical computer science, and AI theory in particular, is a revolutionary method to reframe philosophical problems in a way that finally makes them tractable.

As far as I can see, the kind of "reframing" you could do with those would basically remove all the parts of the problems that make anybody care about them, and turn any "solutions" into uninteresting formal exercises. You could also say that adopting a particular formalism is equivalent to redefining the problem such that that formalism's "solution" becomes the right one... which makes the whole thing kind of circular.

I submit that when framed in any way that addresses the reasons they matter to people, the "hard" philosophical problems in ethics (or meta-ethics, if you must distinguish it from ethics, which really seems like an unnecessary complication) simply have no solutions, period. There is no correct system of ethics (or aesthetics, or anything else with "values" in it). Ethical realism is false. Reality does not owe you a system of values, and it definitely doesn't feel like giving you one.

I'm not sure why people spend so much energy on what seems to me like an obviously pointless endeavor. Get your own values.

So if your idea of a satisfactory solution to AI "alignment" or "safety" or whatever requires a Universal, Correct system of ethics, you are definitely not going to get a satisfactory solution to your alignment problem, ever, full stop.

What there are are a bunch of irreconcilably contradictory pseudo-solutions, each of which some people think is obviously Correct. If you feed one of those pseudo-solutions into some implementation apparatus, you may get an alignment pseudo-solution that satisfies those particular people... or at least that they'll say satisfies them. It probably won't satisfy them when put into practice, though, because usually the reason they think their system is Correct seems to be that they refuse to think through all its implications.

An Opinionated Guide to Privacy Despite Authoritarianism
jbash · 4d

I don't know about others, but I was a little put off by the mention of "password managers" in the beginning since that's handing over the keys of your privacy to external powers.

Password managers are absolutely best practice and have been for at least a decade. Humans can't remember that many good passwords, which means that the alternative to a password manager is basically always password reuse, which is insane. I will admit that I use KeePass variants, and that I myself wouldn't recommend any password manager (or much of anything else) with a cloud component, but some password manager is necessary. You can also use many of them for 2FA tokens.
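For concreteness, what a manager is doing for you under the hood is roughly the sketch below: one fresh high-entropy password per site, which is exactly the thing humans can't do from memory. The site names are placeholders; this is just an illustration, not a recommendation to roll your own.

```python
import secrets
import string

# Illustrative only: a password manager generates and stores one
# high-entropy password per site, so nothing ever gets reused.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 24) -> str:
    """Generate one cryptographically random password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

if __name__ == "__main__":
    for site in ["bank.example", "mail.example", "forum.example"]:
        print(site, random_password())
```

Nobody is memorizing dozens of strings like that, which is the whole point.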

Brave uses Google and Cloudflare and that it has built-in telemetry

I don't use Brave either, and don't know specifically what it uses Google or Cloudflare for... but an awful lot of the Web goes through Cloudflare nowadays regardless of your browser, and unless you've added a bunch of technical and easy-to-screw-up stuff, probably at least as high a proportion will cause your browser to download stuff from Google (and other places) at every visit, allowing them to track at least what "major" sites you're hitting. Ad tracking is definitely a big deal, and the guide doesn't address it, but browser choice is kind of down in the noise unless you're going to go all the way and resort to Tor plus a whole bunch of this and that blockers.

using software to rewrite one's messages to get rid of one's personal 'fingerprint'

The problem is that such software isn't very widely used, may or may not actually remove your "style", and tends to add its own "style" that makes you stand out as a user of it. And the content of what you say can also give you away. Really the right answer there is not to say anything you don't need to say, or at least not anything you wouldn't want to sign your name to, package up with everything else you've ever said, and mail to the worst possible people. Or at least not to anybody who doesn't (a) need to hear it and (b) have the capacity and inclination not to leak it. Which is going to be a pretty short list.

An Opinionated Guide to Privacy Despite Authoritarianism
jbash · 5d

Good work.

I do really mean that it's good stuff. Most people would be a lot better off if they did it. But of course it's traditional to whine.

On contacts, do you want to remind people that their associations can still be identified through the associates' contact lists? People give out their contact information like it's going out of style. Not to mention doing things like uploading metadata-laden pictures with your face in them, and probably other things that would come to mind without too much searching. It's really hard to keep people from leaking information about you.

I know it's hard to tell people not to use so many damned cloud services, but jeez do people use too many damned cloud services these days. Not only is whatever you put on one of them exposed to anybody who can infiltrate or pressure the operator, but, since they tend to get polled all the time, each of them is another opportunity to get information about what you're up to.

Calling Proton Mail "E2EE" is pretty questionable. Admittedly it's probably the best you can do short of self-hosting, but there's a lot of trust in Proton. Not only do they handle the plaintext of most of your mail, but they also provide the code you use to handle the plaintext of all your mail.

Signal is surely the best choice for centralized messaging, and in the past I wouldn't have said that normal people (in the US) needed to be worried about traffic analysis... but it's not the past and I'm not sure normal people in the US don't need to be worried about traffic analysis. The legal protections that have (mostly) kept traffic analysis from being used for civilian mass surveillance look a lot less reliable now. Using a centralized service, with a limited number of watchable servers, makes it relatively easy to do that, even if you do it via a VPN and even if the servers themselves are out-of-country. Session, Briar, or Jami might be alternatives. Of course, the reality is that you can only move to any of these if the people you communicate with also move.

Migrating from X to Mastodon or Bluesky gets you some censorship resistance (although note that Bluesky isn't really effectively federated). Nostr would get you more, at the cost of a worse experience and, in my opinion, a much worse community. But, especially since this is a privacy guide, maybe what most people should really be doing is thinking hard about what they really need to trumpet to the world.

I think there are probably occasions when even relatively normal people should be using Tor or I2P, rather than a trustful VPN like Proton or Mullvad. [And, on edit, there is some risk of any of those being treated as suspicious in itself].

I'd be careful about telling people to keep a lot of cash around. Even pre-Trump, mere possession of "extraordinary" amounts of cash tended to get treated as evidence of criminality.

AGI's Last Bottlenecks
jbash · 11d

To provide clarity to the debate, we[1], alongside thirty-one co-authors, recently released a paper that develops a detailed definition of AGI,

To me, this reads as "We, alongside thirty-one co-authors, recently released a paper trying to co-opt terminology in common use".

The Perpetual Technological Cage
jbash · 13d

The country or countries that first develop superintelligence will make sure others cannot follow,

You seem to think that superintelligence, however defined, will by default be taking orders from meatbags, or at least care about the meatbags' internal political divisions. That's kind of heterodox on here. Why do you think that?

Can LLMs Coordinate? A Simple Schelling Point Experiment
jbash · 20d

I would have done a lot worse than any of them.
