ACrackedPot

Comments

Instead of further elaborations on my crackpot nonsense, something short:

I expect that there is some distance from a magnetic source between 10^5 meters and 10^7 meters at which there will be magnetic anomalies; in particular, there will be a phenomenon by which the apparent field strength drops much faster than expected and passes through zero into the negative (reversed polarity).

I specifically expect this to be somewhere in the vicinity of 10^6 meters, although the specific distance will vary with the mass of the object.


There should be a second magnetic anomaly somewhere in the vicinity of 10^12 m (so between 10^11 and 10^13 m), although I suspect at that distance it will be too faint to detect.

More easily detected is its effect: because there is a repulsive field at work at these distances, mass should be scarce at this distance from the dominant "local" masses, a scarcity that should continue up to about 10^18 m (between 10^17 and 10^19, although errors really begin to compound here). At 10^18 m, I expect an unusually dense distribution of matter; this value, in the vicinity of 10^18 m, should be the most common distance between objects in the galaxy.

It should be possible to find large masses (say, black holes) orbiting each other at 10^18 m (accounting for relativistic changes in distance) which we might otherwise expect to fall into one another - that is, there should be otherwise-unexplainable orbital mechanics between large masses that are this distance apart.


I expect that there is some radius between 10^22 meters and 10^26 meters (vicinity of 10^24) which marks the largest possible size of a galaxy, and some distance between 10^28 and 10^32 meters (vicinity of 10^30) which marks the most common distance between galaxies.
 

Galaxies which are between the vicinity of 10^24 and the vicinity of 10^30 meters from one another should be moving apart, on average; galaxies which are farther apart than the vicinity of 10^30 meters should be falling into one another, on average.

Galaxies which are approximately 10^30 meters apart should be orbiting one another - neither moving towards nor away.
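
For reference, a minimal sketch (Python, purely illustrative) that just collects the scales claimed above and their stated uncertainty bands in one place; the numbers are the ones asserted in the preceding paragraphs, nothing is derived from a model, and the helper function is hypothetical:

```python
# Purely illustrative: the distance scales asserted above, with their stated
# uncertainty bands. Nothing here is derived; these are the claims as written.

PREDICTED_SCALES = [
    # (description, central scale [m], lower bound [m], upper bound [m])
    ("first magnetic anomaly (field passes through zero)",   1e6,  1e5,  1e7),
    ("second magnetic anomaly (likely too faint to detect)", 1e12, 1e11, 1e13),
    ("most common separation between objects in a galaxy",   1e18, 1e17, 1e19),
    ("largest possible radius of a galaxy",                   1e24, 1e22, 1e26),
    ("most common separation between galaxies",               1e30, 1e28, 1e32),
]

def in_predicted_band(distance_m: float, low: float, high: float) -> bool:
    """True if an observed distance falls inside one of the claimed bands."""
    return low <= distance_m <= high

for description, center, low, high in PREDICTED_SCALES:
    print(f"{description}: ~{center:.0e} m (band {low:.0e} to {high:.0e} m)")
```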

Yes, but then it sounds like those who have no such altruistic desire are just as justified as those who do. An alternative view of obligation, one which works very well with utilitarianism, is to reject personal identity as a psychological illusion. In that case there is no special difference between "my" suffering and "your" suffering, and my desire to minimize one of these rationally requires me to minimize the other. Many pantheists take such a view of ethics, and I believe its quasi-official name is "open individualism".

Yes.

I think this requires an assumption that there exists an obligation to end our own suffering; I find that a curious notion, because it presupposes that there is only one valid way to exist.

You would prefer that we had the ethical intuitions and views of the first human beings, or perhaps of their hominid ancestors?

What bearing do their ethical intuitions have on me?

(What bearing do my ethical intuitions have on future hominids?)

Where you see neutrality, he would see obligation.

 

In what sense is it an obligation?  By what mechanism am I obligated?  Do I get punished for not living up to it?

You use that word, but the only meaningful source of that obligation, as I see it, is the desire to be a good person.  Good, not neutral.

I disagree, and I think that you are more of a relativist than you are letting on. Ethics should be able to teach us things that we didn't already know, perhaps even things that we didn't want to acknowledge.

This is a point of divergence, and I find that what ethical systems "teach us" is an area full of skulls.  (However, I am, in fact, far LESS of a relativist than I am letting on; I am in fact a kind of absolutist.)

As for someone who murders fewer people than he saves, such a person would be superior to me (who saves nobody and kills nobody) and inferior to someone who saves many and kills nobody.

Question: Would a version of yourself who did not believe in your ethics, and saw "neutral" as a perfectly valid thing to be, be happier than the version of yourself that exists?

Utility, as measured, is necessarily relative.  By this I don't mean that it is theoretically impossible to have an objective measure of utility, only that it is practically impossible; in reality / in practice, we measure utility relative to a baseline.  When calculating the utility of doing something nice for somebody, it is impractical to calculate their current total utility, which would include the totality of their entire experience as summed up in their current experience.

Rule utilitarianism operates in the same fashion, only much more straightforwardly, considering the utility of an act as the average deviation from a relative position, which I think it is safe to call "normal".

Once we observe that utility is measured from a relative baseline, a normal, then it is meaningful to talk about acts which are of negative utility; the meaningful comparison is not to an absolute number, but to a relative measure, which any given act can fall beneath.

Insofar as we treat utilitarianism as having an absolute number which cannot be measured but which is the important criterion, "badness" itself is meaningless; badness compared to what?  Now, you might say that the correct point of measurement is the highest-positive-utility act; that utilitarianism says that all acts are measured relative to this.  But this is not a position I believe is universally supported; certainly Karl Popper argued against this view of utilitarianism, proposing the framework of negative utilitarianism (I think he invented it?) as a solution to problems he saw with this worldview.  Negative utilitarianism measures two distinct values, prioritizing one over the other.

Assuming we do stick to a relative measure, once we observe that acts, or the failure to act, can be of negative, neutral, or positive value with respect to that relative "normal", it is meaningful to talk about negative-utility acts as distinct from inaction relative to a positive-utility act.  We can then call negative-utility acts "evil", neutral acts "bad" (I think a better term here is "suboptimal"), and good acts "good", and in doing so recover an important part of the way most human beings experience ethics, which includes a component we call "evil", as distinct from "good", which itself is distinct from "neutral".
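
To make the classification concrete, here is a toy sketch (Python, with entirely hypothetical names and numbers); the only thing it is meant to show is that the comparison is to a relative baseline, the "normal", rather than to an absolute scale:

```python
# Toy illustration: utility judged against a relative baseline ("normal"),
# not against an absolute number. All names and values are hypothetical.

def classify_act(act_utility: float, baseline: float) -> str:
    """Classify an act by its utility relative to the baseline 'normal'."""
    delta = act_utility - baseline
    if delta < 0:
        return "evil"        # negative-utility act: falls below normal
    if delta == 0:
        return "suboptimal"  # neutral act: no better than the normal course
    return "good"            # positive-utility act: improves on normal

baseline = 0.0  # "normal": roughly, what would have happened anyway
print(classify_act(-3.0, baseline))  # evil
print(classify_act(0.0, baseline))   # suboptimal
print(classify_act(+7.0, baseline))  # good
```

The contrast with the view I am arguing against is in what `baseline` is: here it is "normal", not the highest-positive-utility act; measured against the best available act, everything short of the best falls together, which is exactly what loses the distinction between "evil" and merely "suboptimal".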

Or, to put all this another way: our moral intuitions do not in fact say that goodness and evil are fungible, or, in particular, that somebody who murders somebody for every seven lives he saves is anything like a good person; and insofar as utilitarianism doesn't acknowledge this, it fails to be a good representation of human ethics, which is to say, the ethics we actually care about.  It should add up to normality, after all.

I suspect there might be a qualia differential.

What is your internal experience of morality?

The tax should, in fact, cause some landlords / landowners to just abandon their land.  This is a critical piece of Georgism: the idea that land is being underutilized, in particular as an investment which is expected to pay off later in terms of higher land values / rents, but also in cases like parking lots, where the current value of the use of the land may exceed the current taxes (which capture only a portion of the combined value of the land and the improvements) while being lower than the Georgist taxes (which capture the entire value of the land and none of the value of the improvements).  In particular, Georgist taxes should eliminate current incentive structures which "reward" things like keeping an apartment building empty of tenants.

The point of indifference is not the point of indifference in the current economic system; it is the point of indifference in the Georgist economic system.
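
A back-of-the-envelope sketch of the abandonment condition, with entirely invented numbers; "current tax" is assumed to capture only part of the combined land-plus-improvements value, while the Georgist tax is assumed to capture the full rental value of the land and none of the improvements:

```python
# Hypothetical numbers only: the "keep it or give it up" comparison for an
# underutilized parcel (e.g. a surface parking lot) under the status quo
# versus a Georgist land-value tax.

def worth_keeping(annual_use_value: float, annual_tax: float) -> bool:
    """An owner holds the land only if its current use earns more than the tax on it."""
    return annual_use_value > annual_tax

use_value_of_parking_lot = 50_000   # what the current use earns per year (invented)
current_property_tax     = 30_000   # taxes a fraction of land value + improvements (invented)
georgist_land_tax        = 120_000  # taxes the full rental value of the land alone (invented)

print(worth_keeping(use_value_of_parking_lot, current_property_tax))  # True: the lot stays a lot
print(worth_keeping(use_value_of_parking_lot, georgist_land_tax))     # False: abandon, sell, or redevelop
```

And, as above, the relevant indifference point is where the use value equals the Georgist tax as assessed under Georgist conditions, not under today's prices.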

Related: https://www.lesswrong.com/posts/57sq9qA3wurjres4K/ruling-out-everything-else

I do not think the linked post goes anywhere near far enough.  In particular, it imagines that people share a common concept-space.  The extent to which thought is arbitrary is, basically, total.

I'm a crackpot.

Self-identified as such. Part of the reason I self-identify as a crackpot is to help create a kind of mental balance, a pushback against the internal pressure to dismiss people who don't accept my ideas: Hey, self, most people who have strong beliefs similar to (or about) the thing you have strong beliefs about are wrong, and the impulse to rage against the institution and the people in it for failing to grasp the obvious and simple ideas you are trying to show them is exactly the wrong impulse.

The "embitterment" impulse can be quite strong; when you have an idea which, from your perspective, is self-evident if you spend any amount of time considering it, the failure of other people to adopt that idea can look like a failure to even consider anything you have said.  Or it can look like everybody else is just unimaginative or unintelligent or unwilling to consider new ideas; oh, they're just putting in their 9-5, they don't actually care anymore.

Framing myself as a crackpot helps me anticipate and understand the reactions I get.  Additionally, framing myself as a crackpot serves as a useful signal to somebody reading: first, that if they have no patience for these kinds of ideas, they should move on.  And second, that if they do have the patience for these kinds of ideas, I have self-awareness of exactly what kind of idea this is, am unlikely to go off on insane rants against them for interpreting me as a crackpot, and, having warned them in advance, am aware that this may be an imposition and am not taking their time for granted.  (Also higher-level meta-signaling stuff that is harder to explain.)

Are you a crackpot?  I have no idea.  However, when people start talking about existential safety, my personal inclination is to tune them out, because they do pattern match to "Apocalyptic Thinkers".  The AI apocalypse basically reads to me like so many other apocalypse predictions.

Mind, you could be right; certainly I think I'm right, and I'm not going to be casting any stones about spreading ideas that you think are correct and important.

However, my personal recommendation is to adopt my own policy: self-awareness.

Why are you using what I presume is your real name here?

I'm not actually interested in whether or not it is your real name, mind; mostly I'd like to direct your attention to the fact that the choice of username was in fact a choice.  That choice imparts information.  By choosing the username that you did, you are, deliberately or not, engaging in a kind of signaling.

In particular, from a certain frame of reference, you are engaging in a specific kind of costly signaling, which may serve to elevate your relative local status by tying any reputational hits you may suffer as a result of missteps here to your real identity.  You are saying "This is me, I am not hiding behind false identities."  The overall effect is a costly signal which elevates your status with the tribe here.

If it isn't your real name, why are you using a false identity that looks like a real identity?

Hang on, though.  Let us say instead that you see false identities as a form of dishonesty; this isn't signaling, this is sticking to principles that are important to you.

Well, if that is the case, another question: Would you use this identity to say something that does carry strong reputational costs for your real identity?  Let us say that you would, you just don't have any such things to say.

Well, it is convenient for you, some might observe, that you are willing to stand up for principles that don't cost you anything.  (Hence some part of why signaling tends to be costly; it avoids this problem.)

I will observe there is an important political dispute about anonymity on the internet, which has major communal aspects.  The fewer users who insist on privacy, the more that commercial websites can exclude those who do.  Oh, you don't want us tracking you?  You don't get to use our website anymore.  Observe the trend in websites, such as Twitter, of becoming increasingly user-unfriendly to those who are not logged in, or of excluding them altogether.

"Everything is political" is an observation that this phenomenon is, basically, universal.

Once we observe that there -is- a political implication in your choice of username, we must ask whether you -ought- to do anything about it; a lot of people like to skip this question, but it is an important question.  Do you "owe" it to the people who prefer anonymity to remain anonymous yourself?  The pro-anonymity side would be really well served if everybody were forced to be anonymous; they are certainly better served if the choice is presented explicitly (hence the EU rules on website cookies) than if anonymity is opt-in rather than opt-out.

However, there are also people who don't want to be anonymous, or who don't want to interact with anonymous people; certainly there's the potential for some power imbalances there.

We've happened upon some kind of uneasy mostly-truce, where anonymity is contextual, and violating another person's anonymity is seen as a violation of the cultural norms of the internet.  This truce is eroding; as fewer and fewer people choose to be anonymous, a higher and higher proportion of anonymous actions are those which would impose costs on the speaker if the speaker chose not to be anonymous, which makes anonymity an increasingly sinister-looking choice.

Imagine being a writer in a group of blogs with a common user system, moderating comments.  To begin with, all the blogs allow anonymous comments.  However, after one too many abusive comments, a blog bans anonymous commenters;  some percentage of previously-anonymous commenters value commenting there enough to create accounts, reducing the number of "legitimate" anonymous comments in the ecosystem as a whole.  This makes anonymous comments look worse, prompting the next blog to turn them off, then the next.
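
The dynamic can be put as a toy feedback loop; every parameter below is invented, and the only point is that each ban shrinks the pool of "legitimate" anonymous comments, which makes the remaining anonymous comments look worse, which triggers the next ban:

```python
# Toy cascade: blogs in a shared ecosystem ban anonymous comments one by one.
# All parameters are invented; the shape of the dynamic is the point.

blogs_allowing_anon = 10
legit_anon_per_blog = 100   # well-behaved anonymous comments per permissive blog
abusive_anon_total  = 80    # abusive anonymous comments, concentrated on whatever blogs remain

while blogs_allowing_anon > 0:
    legit_total = legit_anon_per_blog * blogs_allowing_anon
    legit_share = legit_total / (legit_total + abusive_anon_total)
    print(f"{blogs_allowing_anon:2d} blogs allow anonymous comments; "
          f"{legit_share:.0%} of anonymous comments look legitimate")
    if legit_share > 0.95:
        break  # anonymous comments still look mostly fine; the truce holds
    # The blog with the worst experience bans anonymous comments. Its legitimate
    # anonymous commenters mostly create accounts; the abusive ones move elsewhere.
    blogs_allowing_anon -= 1
```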

Look, the pro-anonymity people say, you're making a choice to oppose an anonymous internet; you're against us.

Well, there's definitely an "is" there.  What's missing is the "ought", the idea that the political implications of an act create individual responsibility.  There's a very deep topic here, relating to the way certain preferences are also natural attractor states, whose satisfaction rules out opposing preferences, but this comment is already long enough.

I get the impression, reading this and the way you and the commenters classify people, that the magnitude of days is to some extent just an evaluation of somebody's intellectual ability and the internal complexity of their thoughts.

So if I said your article "Ruling Out Everything Else" is the 10-day version of a 10000-day idea, you might agree, or you might disagree, but I must observe that if you agree, it will be taken as a kind of intellectual humility, yes?  And as we examine the notion of humility in this context, I think the implication of superiority should be noticed: a 10000-day idea is a superior idea to a 10-day idea.  (Otherwise, what would there be to be humble about?)  And if you felt like it was a more profound article than that, you'd find it somewhat offensive, I think.

...

Except that it is equally plausible that none of that is actually true, and that you're pointing at something else, and this interpretation is just one that wasn't ruled out.  If another equally plausible interpretation is correct: A 10-day monk is wrong more often than a 1-day monk, yes?  A 100-day monk is wrong more often than a 10-day monk?  The number of days matters; when other commenters point out that you need to bounce an idea off reality to avoid being wrong, are they criticizing your point, or glimpsing a piece of it?  Is it accurate to say that a significant piece of the idea represented here is that the number of days is in some sense equivalent to a willingness to be wrong about more?
