cubefox

Posts

cubefox's Shortform (5 karma · 1y · 9 comments)
Is LLM Translation Without Rosetta Stone possible? [Question] (35 karma · 2y · 15 comments)
Are Intelligence and Generality Orthogonal? (18 karma · 3y · 16 comments)

Comments
[Thought Experiment] If Human Extinction "Improves the World," Should We Oppose It? Species Bias and the Utilitarian Challenge
cubefox10d20

From the standpoint of hedonic utilitarianism, assigning a higher value to a future with moderately happy humans than to a future with very happy AIs would indeed be a case of unjustified speciesism. However, in preference utilitarianism, specifically person-affecting preference utilitarianism, there is nothing wrong with preferring our descendants (who currently don't exist) to be human rather than AIs.

PS: It's a bit lame that this post had -27 karma without anybody providing a counterargument.

leogao's Shortform
cubefox11d50

This is also why various artists don't necessarily try to make Tolkien's Orthanc, Barad-dûr, Angband, etc. look ugly, but rather imposing and impressive in some way. Even H.R. Giger's biomechanical landscapes could be described as aesthetic. Or the crooked architecture in The Cabinet of Dr. Caligari (1920). Architecture is art, and art doesn't have to be beautiful or pleasant, just interesting. But presumably nobody would like to actually live in a Caligari-like environment. (Except perhaps people in the goth subculture?)

The Most Common Bad Argument In These Parts
cubefox13d*20

I don't think this is a fallacy. If it were, one of the most powerful and most common informal inference forms (IBE, a.k.a. Inference to the Best Explanation, or abduction) would be inadmissible. That would be absurd. Let me elaborate.

IBE works by listing all the potential explanations that come to mind, subjectively judging how good they are (using explanatory virtues like simplicity, fit, internal coherence, external coherence, unification, etc.), and then inferring that the best explanation is probably correct. This involves the assumption that the probability that the true explanation is not among those which were considered is small. Sometimes this assumption seems unreasonable, in which case IBE shouldn't be applied. That's mostly the case when all the considered explanations seem bad.

However, in many cases the "grain of truth" assumption (the true explanation is within the set of considered explanations) seems plausible. For example, I observe the door isn't locked. By far the best (least contrived) explanation I can think of seems to be that I forgot to lock it. But of course there is a near infinitude of explanations I didn't think of, so who is to say there isn't an unknown explanation which is even better than the one about my forgetfulness? Well, it just seems unlikely that there is such an explanation.

And IBE isn't just applicable to common everyday explanations. For example, the most common philosophical justification that the external world exists is an IBE. The best explanation for my experience of a table in front of me seems to be that there is a table in front of me. (Which interacts with light, which hits my eyes, which I probably also have, etc.)

Of course, in other cases, applications of IBE might be more controversial. However, in practice, if Alice makes an argument based on IBE, and Bob disagrees with its conclusion, this is commonly because Bob thinks Alice made a mistake when judging which of the explanations she considered is the best. In which case Bob can present reasons which suggest that, actually, explanation x is better than explanation y, contrary to what Alice assumed. Alice might be convinced by these reasons, or not, in which case she can provide the reasons why she still believes that y is better than x, and so on.

In short, in many or even most cases where someone disagrees with a particular application of IBE, their issue is not with IBE itself, but what the best explanation is. Which suggests the "grain of truth" assumption is often reasonable.

Most examples of bad reasoning, that are common amongst smart people, are almost good reasoning. Listing out all the ways something could happen is good, if and only if you actually list out all the ways something could happen

Well, that's clearly almost always impossible (there are almost infinitely many possible explanations for almost anything), so we can't make an exhaustive list. Moreover, "should" implies "can", so, by contraposition, if we can't list them, it's not the case that we should list them.

, or at least manage to grapple with most of the probability mass.

But that's backwards. IBE is a method which assigns probability to the best explanation based on how good it is (in terms of explanatory virtues) and based on being better than the other considered explanations. So IBE is a specific method for coming up with probabilities. It's not just stating your prior. You can't argue about purely subjective priors (that would be like arguing about taste) but you can make arguments about what makes some particular explanation good, or bad, or better than others. And if you happen to think that the "grain of truth" assumption is not plausible for a particular argument, you can also state that. (Though the fact that this is rather rarely done in practice suggests it's in general not such a bad assumption to make.)
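(To make this concrete, here is a toy sketch of the scoring idea — my own illustration with made-up explanation names and numbers, not anything from the post: judge each considered explanation's goodness, reserve some explicit probability for the true explanation not being among those considered, and normalize the rest.)

```python
# Toy sketch of IBE-style scoring (illustrative only; the scores are
# subjective judgments of explanatory goodness, not outputs of any procedure).

def ibe_probabilities(scores: dict[str, float], p_unconsidered: float = 0.05) -> dict[str, float]:
    """Turn goodness scores into rough probabilities.

    scores: explanation -> nonnegative goodness score (explanatory virtues).
    p_unconsidered: probability reserved for the case that the true
        explanation is not among those considered (the "grain of truth" hedge).
    """
    total = sum(scores.values())
    probs = {name: (1 - p_unconsidered) * s / total for name, s in scores.items()}
    probs["<some explanation I didn't think of>"] = p_unconsidered
    return probs

# Hypothetical door example: forgetting to lock it seems far less contrived
# than the alternatives, so it absorbs most of the probability mass.
print(ibe_probabilities({
    "I forgot to lock the door": 8.0,
    "Someone picked the lock": 1.0,
    "The lock is broken": 1.0,
}))
```

The point is just that the resulting probabilities come from comparing how good the considered explanations are, plus an explicit "grain of truth" term, rather than from a bare prior.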

jacob_drori's Shortform
cubefox17d20

Judging from the pictures, this could also be a quadratic fit.
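(For what it's worth, a quick way to check would be to compare the residuals of a linear and a quadratic polynomial fit; the sketch below uses placeholder data, since I don't have the actual numbers behind the pictures.)

```python
import numpy as np

# Placeholder data standing in for the points in the pictures.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.1, 3.9, 9.2, 15.8, 25.1, 35.9])

for deg in (1, 2):
    coeffs = np.polyfit(x, y, deg)          # least-squares polynomial fit
    residuals = y - np.polyval(coeffs, x)   # fitted values vs. data
    print(f"degree {deg}: sum of squared residuals = {np.sum(residuals**2):.3f}")
```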

Experiments With Sonnet 4.5's Fiction
cubefox21d20

Not sure whether you know this, but on Twitter roon mentioned that GPT-5 (non-thinking? thinking?) was optimized for creative writing. Eliezer dismissed an early story shared by Altman.

Buck's Shortform
cubefox22d20

By the way, "It seems" and "arguably" seem a bit less defensive than "I think" (which is purely subjective). Arguably.

Buck's Shortform
cubefox23d72

I hear a lot of scorn for the rationalist style where you caveat every sentence with "I think" or the like.

I think e.g. Eliezer (in the Sequences) and Scott Alexander don't hedge a lot, so this doesn't necessarily seem like a rationalist style. I do it a lot myself, though I'm fairly sure it makes readability worse.

You Should Get a Reusable Mask
cubefox23d31

We don't need to shave ahead of time anyway (we can do it when the pandemic is already here), so it doesn't compete with mental resources now.

Natália's Shortform
cubefox25d40

Impressive write-up! As a follow-up question, what's currently your favorite (hypothetical) explanation for the actual main cause of high obesity rates? Some environmental contaminant? Something else?

Why's equality in logic less flexible than in category theory?
cubefox1mo20

Yes, with Hilbert proof systems, since those have axioms / axiom schemata. (In natural deduction systems there are only inference rules like modus ponens, no logical axioms.) But semantically, a "primitive" identity symbol is commonly already interpreted as the real identity relation, which would imply the truth of all instances of those axiom schemata. Though syntactically, for the proof system, you indeed still need to handle equality in FOL, either with axioms (Hilbert) or with special inference rules (natural deduction).

In FOL, however, these syntactic rules are weaker than the full (semantic) notion of identity, because they only let you infer that "identical" objects have all their first-order definable properties in common. That doesn't cover all possible properties, and it also holds for relations weaker than identity ("first-order equivalence").

Eliezer mentioned the predicate "has finitely many predecessors" as an example of a property that is only second-order definable. So two distinct objects could have all their first-order definable properties in common while not being identical. The first-order theory wouldn't prove that they are different. The second-order definition of identity, on the other hand, ranges over all properties rather than over all first-order definable ones, so it captures real identity.
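(To spell out the contrast in symbols — my own paraphrase of the above: FOL handles equality via reflexivity plus a substitution schema with one instance per first-order formula, while the second-order definition quantifies over all properties.)

```latex
% FOL: reflexivity plus the substitution (Leibniz) schema,
% with one instance for each first-order formula \varphi:
\forall x\, (x = x)
\qquad
\forall x \forall y\, \bigl( x = y \rightarrow (\varphi(x) \rightarrow \varphi(y)) \bigr)

% Second-order definition of identity: quantify over all properties P:
x = y \;:\leftrightarrow\; \forall P\, \bigl( P(x) \leftrightarrow P(y) \bigr)
```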

Wikitag Contributions

Eurisko · 7 months ago
Eurisko · 7 months ago (+13/-56)
Eurisko · 7 months ago
Eurisko · 7 months ago (+252/-29)
Message to future AI · 8 months ago (+91)
Scoring Rule · 2 years ago (+15/-12)
Litany of Gendlin · 2 years ago