That gives the Andromeda civilization 280,000 years
That's about one round trip across the Andromeda galaxy at the speed of light.
If I remember right, the present received wisdom is that if you succeed in sending a message like that, you're inviting somebody to wipe you out. So you may get active opposition.
OK, that gets you something. But suppose that you had a twin at Proxima Centauri, with the same tech level as we have. Could you send a message that your twin could receive? One big enough to carry the information in question here? How long would it take, and how much money would each of you have to invest in the equipment?
As I understand it, we're getting pretty good at sensing small signals these days, and we still find it challenging to notice entire planets. Scaling up obviously helps, but the cost scales right along with the capability. You can say, as you do elsewhere, that "advance civilization has larger receivers", but why would they waste resources on building such receivers?
Given the noise floor, how much energy do you think it would take to send that in an omnidirectional broadcast, while still making it narrowband enough to obviously be a signal?
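For concreteness, here's the sort of back-of-the-envelope arithmetic I have in mind. Every number in it is an assumption I picked just to make the scale visible: a Proxima-ish distance, a 1 Hz channel, a 20 K receiver behind a 100 m dish, and a modest detection threshold. A directional transmitter changes the answer enormously, but then it's no longer an omnidirectional broadcast.

```python
import math

# --- Every number below is an assumption chosen for illustration, not a claim ---
BOLTZMANN = 1.380649e-23      # J/K
LIGHT_YEAR_M = 9.4607e15      # metres per light year

distance_ly = 4.24            # assumed: roughly the distance to Proxima Centauri
bandwidth_hz = 1.0            # assumed: a very narrowband carrier
system_temp_k = 20.0          # assumed: a good cryogenic receiver
snr_required = 10.0           # assumed: detection threshold
dish_diameter_m = 100.0       # assumed: a large radio dish at the receiving end
aperture_efficiency = 0.7     # assumed

distance_m = distance_ly * LIGHT_YEAR_M
effective_area_m2 = aperture_efficiency * math.pi * (dish_diameter_m / 2) ** 2

# Receiver noise floor: N = k * T_sys * B
noise_power_w = BOLTZMANN * system_temp_k * bandwidth_hz

# Power that has to land in the dish to clear the chosen SNR
received_power_w = snr_required * noise_power_w

# Isotropic (omnidirectional) transmitter: the power spreads over a full sphere
required_tx_power_w = received_power_w * 4 * math.pi * distance_m ** 2 / effective_area_m2

print(f"Noise floor: {noise_power_w:.2e} W")
print(f"Required transmit power (isotropic): {required_tx_power_w:.2e} W "
      f"(~{required_tx_power_w / 1e9:.1f} GW)")
```

Under those assumptions you need on the order of ten gigawatts of continuous transmit power just to poke above the noise in a 1 Hz channel, and a 1 Hz channel is nowhere near big enough to move the amount of information in question here in any reasonable time.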
I believe, deeply, that my cause is just and the truth is on my side. And that means we can win.
No, it actually doesn't.
Confining myself to actual questions...
I guess you might object that your reasoning only applies to value-related claims, not to anything strictly value-neutral: but why not?
Mostly because I don't (or didn't) see this as a discussion about epistemology.
In that context, I tend to accept in principle that I Can't Know Anything... but then to fall back on the observation that I'm going to have to act like my reasoning works regardless of whether it really does; I'm going to have to act on my sensory input as if it reflected some kind of objective reality regardless of whether it really does; and, not only that, but I'm going to have to act as though that reality were relatively lawful and understandable regardless of whether it really is. I'm stuck with all of that and there's not a lot of point in worrying about any of it.
That's actually what I also tend to do when I actually have to make ethical decisions: I rely mostly on my own intuitions or "ethical perceptions" or whatever, seasoned with a preference not to be too inconsistent.
BUT.
I perceive others to be acting as though their own reasoning and sensory input looked a lot like mine, almost all the time. We may occasionally reach different conclusions, but if we spend enough time on it, we can generally either come to agreement, or at least nail down the source of our disagreement in a pretty tractable way. There's not a lot of live controversy about what's going to happen if we drop that rock.
On the other hand, I don't perceive others to be acting nearly so much as though their ethical intuitions looked like mine, and if you distinguish "meta-intuitions" about how to reconcile different degree zero intuitions about how to act, the commonality is still less.
Yes, sure, we share a lot of things, but there's also enough difference to have a major practical effect. There truly are lots of people who'll say that God turning up and saying something was Right wouldn't (or would) make it Right, or that the effects of an action aren't dispositive about its Rightness, or that some kinds of ethical intuitions should be ignored (usually in favor of others), or whatever. They'll mean those things. They're not just saying them for the sake of argument; they're trying to live by them. The same sorts of differences exist for other kinds of values, but disputes about the ones people tend to call "ethical" seem to have the most practical impact.
Radical or not, skepticism that you're actually going to encounter, and that matters to people, seems a lot more salient than skepticism that never really comes up outside of academic exercises. Especially if you're starting from a context where you're trying to actually design some technology that you believe may affect everybody in ways that they care about, and especially if you think you might actually find yourself having disagreements with the technology itself.
As to your "(b) there's a bunch of empirical evidence against it" I honestly don't know what you're talking about there.
Nothing complicated. I was talking about the particular hypothetical statement I'd just described, not about any actual claim you might be making[1].
I'm just saying that if there were some actual code of ethics[2] that every "approximately rational" agent would adopt[3], and we in fact have such agents, then we should be seeing all of them adopting it. Our best candidates for existing approximately rational agents are humans, and they don't seem to have overwhelmingly adopted any particular code. That's a lot of empirical evidence against the existence of such a code[4].
The alternative, where you reject the idea that humans are approximately rational, thus rendering them irrelevant as evidence, is the other case I was talking about where "we have a lot of not-approximately-rational agents".
I understand, and originally understood, that you did not say there was any stance that every approximately rational agent would adopt, and also that you did not say that you were looking for such a stance. It was just an example of the sort of thing one might be looking for, meant to illustrate a fine distinction about what qualified as ethical realism. ↩︎
In the loose sense of some set of principles about how to act, how to be, how to encourage others to act or be, etc blah blah blah. ↩︎
For some definition of "adopt"... to follow it, to try to follow it, to claim that it should be followed, whatever. But not "adopt" in the sense that we're all following a code that says "it's unethical to travel faster than light", or even in the sense that we're all following a particular code when we act as large numbers of other codes would also prescribe. If you're looking at actions, then I think you can only sanely count actions done at least partially because of the code. ↩︎ ↩︎
As per footnote 3[3:1][5], I don't think, for example, the fact that most people don't regularly go on murder sprees is significant evidence of them having adopted a particular shared code. Whatever codes they have may share that particular prescription, but that doesn't make them the same code. ↩︎
I'm sorry. I love footnotes. I love having a discussion system that does footnotes well. I try to be better, but my adherence to that code is imperfect... ↩︎
I reject the idea that I'm confused at all.
Tons of people have said "Ethical realism is false", for a very long time, without needing to invent the term "meta-ethics" to describe what they were doing. They just called it ethics. Often they went beyond that and offered systems they thought it was a good idea to adopt even so, and they called that ethics, too. None of that was because anybody was confused in any way.
"Meta-ethics" lies within the traditional scope of ethics, and it's intertwined enough with the fundamental concerns of ethics that it's not really worth separating it out... not often enough to call it a separate subject anyway. Maybe occasionally enough to use the words once in a great while.
Ethics (in philosophy as opposed to social sciences) is, roughly, "the study of what one Should Do(TM) (or maybe how one Should Be) (and why)". It's considered part of that problem to determine what meanings of "Should", what kinds of Doing or Being, and what kinds of whys, are in scope. Narrowing any of those without acknowledging what you're doing is considered cheating. It's not less cheating if you claim to have done it under some separate magisterium that you've named "meta-ethics". You're still narrowing what the rest of the world has always called ethical problems.
When you say "ethical realism is false", you're making a meta-ethical statement. You believe this statement is true, hence you perforce must believe in meta-ethical realism.
The phrase "Ethical realism", as normally used, refers to an idea about actual, object-level prescriptions: specifically the idea that you can get to them by pointing to some objective "Right stuff" floating around in a shared external reality. I'm actually using it kind of loosely, in that I really should not only deny that there's no objective external standard, but also separately deny that you can arrive at such prescriptions in a purely analytic way. I don't think that second one is technically usually considered to be part of ethical realism. Not only that, but I'm using the phrase to allude to other similar things that also aren't technically ethical realism (like the one described below).
But none of the things I'm talking about or alluding to refers to itself. In practice nobody gets confused about that, even without resorting to the term "meta-ethics", and definitely without talking about it like it's a really separate field.
To go ahead and use the term without accepting the idea that meta-ethics qualifies as a subject, the meta-ethical statement (technically I guess a degree 2 meta-ethical statement) that "ethical realism is false" is pretty close to analytic, in that even if you point to some actual thing in the world that you claim implies the Right ways to Be or Do, I can always deny that whatever you're pointing to matters... because there's no predefined standard for standards either. God can come down from heaven and say "This is the Way", and you can simultaneously prove that it leads to infinite universal flourishing, and also provide polls proving within epsilon that it's also a universal human intuition... and somebody can always deny that any of those makes it Right(TM).
But even if we were talking about a more ordinary sort of matter of fact, even if what you were looking for was not "official" ethical realism of the form "look here, this is Obviously Right as a brute part of reality", but "here's a proof that any even approximately rational agent[1] would adopt this code in practice", then (a) that's not what ethical realism means, (b) there's a bunch of empirical evidence against it, and essentially no evidence that it's true, and (c) if it is true, we obviously have a whole lot of not-approximately-rational agents running around, which sharply limits the utility of the fact. Close enough to false for any practical purpose.
... under whatever formal definition of rationality you happened to be trying to get people to accept, perhaps under the claim that that definition was itself Obviously Right, which is exactly the kind of cheating I'm complaining about... ↩︎
(Maybe you agree?)
Yep.
Yes, but the code is open source and independently audited. I don't see why I should call this out as a trust deficiency in particular.
... which is as good as it gets in most cases. I am not saying I have a better alternative for most people.
But the thing is that supply chains are hard. When you use Proton webmail, do you verify that the JavaScript they serve to you is actually the same JavaScript they had audited? Every time? And make sure it doesn't get changed somehow during the session? If you use the Proton proxy, do you actually rebuild it from source at every update? Even with reproducible builds (which I don't know whether they use or not), how many people actually check? Another person's checking does add some value for you, but there's a limit, especially if it's possible to guess who will check and who won't.
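To make concrete what "verify" would even mean here, it's something like the sketch below, run on every load, for every asset in the bundle. The URL and the pinned digest are placeholders I invented, not Proton's real asset URL or any published audit artifact; and even if you did this, you'd still need a trustworthy channel for the reference hash, and a way to catch code swapped in mid-session.

```python
import hashlib
import urllib.request

# Hypothetical placeholders: not Proton's real asset URL, not a real audit digest.
SCRIPT_URL = "https://mail.example.com/assets/app.js"
AUDITED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def served_script_matches_audit(url: str, expected_sha256: str) -> bool:
    """Download the script the server is serving right now and compare its
    SHA-256 digest against the digest of the version that was audited."""
    with urllib.request.urlopen(url) as response:
        served_bytes = response.read()
    digest = hashlib.sha256(served_bytes).hexdigest()
    return digest == expected_sha256

if __name__ == "__main__":
    ok = served_script_matches_audit(SCRIPT_URL, AUDITED_SHA256)
    print("matches audited version" if ok else "MISMATCH: served code differs from audit")
```

Essentially nobody does this, which is the point.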
Worse, how many "normal" people can actually check? How many can even keep all the issues straight in their minds? Lots of professional programmers get simple PKI issues wrong all the time. PKI is a strict subset of what you have to worry about for supply chain.
So, yeah, you can use that stuff, but by the time you get to the point where you're actually getting much assurance out of the source code access or the audits, I think it's more complicated than self-hosting a mail server (which I've done for probably about 30 years). Of course, with self-hosting your deliverability goes to hell, and you're still relying on an incredible amount of software, but you get some protection from the sheer chaos of the environment; it's usually not so easy for the Bad Guys to actually get the data back reliably and inconspicuously, especially for untargeted surveillance.
I predict that the practical effect of people internalizing this advice would be for them to just go along with the people around them and not make waves.