All of Nick_Tarleton's Comments + Replies

Upvoted, but weighing in the other direction: Average Joe also updates on things he shouldn't, like marketing. I expect the doctor to have moved forward some in resistance to BS (though in practice, not as much as he would if he were consistently applying his education).

And the correct reaction (and the study's own conclusion) is that the sample is too small to say much of anything.

(Also, the "something else" was "conventional treatment", not another antiviral.)

1 · Jay Molstad · 3y
Well, we can say that 27/30 (90%) patients improved. With a very high level of confidence, we can say that this disease is less fatal than Ebola (which would have killed 26 or so).
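That claim is easy to check with a binomial tail computation; here's a minimal Python sketch (assuming an Ebola-like ~90% case fatality rate, i.e. ~10% survival — the exact rate is an assumption, not from the study):

```python
from math import comb

# Probability that 27 or more of 30 patients survive, if the true
# survival rate were Ebola-like (~10% survival, ~90% fatality).
n, p_survive = 30, 0.10
p_tail = sum(
    comb(n, k) * p_survive**k * (1 - p_survive) ** (n - k)
    for k in range(27, n + 1)
)
print(p_tail)  # astronomically small, so the Ebola-like hypothesis is rejected
```

The tail probability is so tiny that even a small, uncontrolled sample supports "less fatal than Ebola" with very high confidence — which is about all it supports.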

I find the 'backfired through distrust'/'damaged their own credibility' claim plausible, it agrees with my prejudices, and I think I see evidence of similar things happening elsewhere; but the article doesn't contain evidence that it happened in this case, and even though it's a priori likely and worth pointing out, the claim that it did happen should come with evidence. (This is a nitpick, but I think it's an important nitpick in the spirit of sharing likelihood ratios, not posterior beliefs.)

Yeah. I regularly model headlines like this as being part of the later levels of simulacra. The article argued that it should backfire, but it also said that it already had. If the article catches on, then it will become true to the majority of people who read it. It's trying to create the news that it's reporting on. It's trying to make something true by saying it is.

I think a lot of articles are like that these days. They're trying to report on what's part of social reality, but social reality depends on what goes viral on twitter/fb/etc, so they work to

…
I'd say this isn't just nitpicking, it's pretty directly challenging the core claim. Or at least, if the essay didn't want to be making that its core claim, it should have picked a different title. (I say that while generally endorsing the article.)

if there's a domain where the model gives two incompatible predictions, then as soon as that's noticed it has to be rectified in some way.

What do you mean by "rectified", and are you sure you mean "rectified" rather than, say, "flagged for attention"? (A bounded approximate Bayesian approaches consistency by trying to be accurate, but doesn't try to be consistent. I believe 'immediately update your model somehow when you notice an inconsistency' is a bad policy for a human [and part of a weak-man version of rationalism that harms people who try to follo

…
The next paragraph applies there: you can rectify it by saying it's a conflict between hypotheses / heuristics, even if you can't get solid evidence on which is more likely to be correct. Cases where you notice an inconsistency are often juicy opportunities to become more accurate.

On the other hand:

We found that viable virus could be detected... up to 4 hours on copper...

Extract from that paper. HCoV-19 is an (unusual?) name for the nCoV/SARS-CoV-2 virus responsible for COVID-19. It's the one in red.

Here's a study using a different coronavirus.

Brasses containing at least 70% copper were very effective at inactivating HuCoV-229E (Fig. 2A), and the rate of inactivation was directly proportional to the percentage of copper. Approximately 10³ PFU in a simulated wet-droplet contamination (20 µl per cm²) was inactivated in less than 60 min. Analysis of the early contact time points revealed a lag in inactivation of approximately 10 min followed by very rapid loss of infectivity (Fig. 2B).
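The lag-then-rapid-decay kinetics described there can be sketched as a toy model (the lag time and rate constant below are illustrative guesses, not values fitted to the paper's data):

```python
import math

def surviving_pfu(t_min, initial=1e3, lag=10.0, k=0.2):
    """Toy lag-then-exponential-decay model of viral inactivation on brass.

    For the first `lag` minutes nothing happens; after that, titer falls
    exponentially with rate constant `k` (per minute). Both parameters are
    illustrative assumptions, not fitted values.
    """
    if t_min <= lag:
        return initial
    return initial * math.exp(-k * (t_min - lag))
```

With these guessed parameters, ~10³ PFU is untouched at 5 minutes but falls below a single PFU by the 60-minute mark, matching the qualitative shape the study reports.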

On the other hand:

That paper only looks at bacteria and does not knowably carry over to viruses.

I don't see you as having come close to establishing, beyond the (I claim weak) argument from the single-word framing, that the actual amount or parts of structure or framing that Dragon Army has inherited from militaries are optimized for attacking the outgroup to a degree that makes worrying justified.

This definitely doesn't establish that. And this seems like a terrible context in which to continue to elaborate on all my criticisms of Duncan's projects, so I'm not going to do that. My main criticisms of Dragon Army are on the Dragon Army thread, albeit worded conservatively in a way that may not make it clear how these things are related to the "army" framing. If you want to discuss that, some other venue seems right at this point, this discussion is already way too broad in scope.
Since Benquo says he thinks sports are good, I'd be curious whether he is also worried about sports teams with names that suggest violence. Many teams are named after parties in a violent historical conflict or violent animals: Patriots, Braves, Panthers, Raptors, Bulls, Sharks, Warriors, Cavaliers, Rangers, Raiders, Blackhawks, Predators, Tigers, Pirates, Timberwolves...

Doesn't work in incognito mode either. There appears to be an issue when the site is accessed over HTTPS — over HTTP it sends back a reasonable-looking 301 redirect, but on port 443 the TCP connection just hangs.
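That kind of hang is easy to distinguish from an outright refusal with a short TCP probe; a sketch in Python, with `example.com` standing in for the affected host (which isn't named here):

```python
import socket

def probe(host, port, timeout=5.0):
    """Report whether a TCP connection to host:port completes within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "connected"
    except socket.timeout:
        return "hung (no response before timeout)"
    except OSError as e:
        return f"failed: {e}"

# Hypothetical usage against the affected site:
#   probe("example.com", 80)   -> "connected" if the HTTP side is up
#   probe("example.com", 443)  -> "hung ..." if the server accepts nothing on 443
```

"hung" (timeout with no response) versus "failed" (active refusal) points at different misconfigurations: a firewall silently dropping packets versus nothing listening on the port.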

Similar meta: none of the links to currently work due to, well, being to rather than

hmm. I can fix these links, but fyi if you clear your browser cache they should work for you. (If not, lemme know)

Further-semi-aside: "common knowledge that we will coordinate to resist abusers" is actively bad and dangerous to victims if it isn't true. If we won't coordinate to resist abusers, making that fact (/ a model of when we will or won't) common knowledge is doing good in the short run by not creating a false sense of security, and in the long run by allowing the pattern to be deliberately changed.

3 · clone of saturn · 5y
I don't think it's that simple. First, if abusers and victims exist then the situation just is actively dangerous. Hypocrisy is unavoidable but it's less bad if non-abusers can operate openly and abusers need to keep secrets than vice versa. Second, I don't think the pattern can be deliberately changed except by creating a sense of security that starts out false but becomes true once enough people have it.

This post may not have been quite correct Bayesianism (... though I don't think I see any false statements in its body?), but regardless there are one or more steel versions of it that are important to say, including:

  • persistent abuse can harm people in ways that make them more volatile, less careful, more likely to say things that are false in some details, etc.; this needs to be corrected for if you want to reach accurate beliefs about what's happened to someone
  • arguments are soldiers; if there are legitimate reasons (that people are responding to) to a
…

IMO, the "legitimate influence" part of this comment is important and good enough to be a top-level post.

OK, give me some time and maybe I'll post it, expanded with some related notions that are less relevant to the original context but which I think are worth writing about...

This is simply instrumentally wrong, at least for most people in most environments. Maybe people and an environment could be shaped so that this was a good strategy, but the shaping would actually have to be done and it's not clear what the advantage would be.

My consistent experience of your comments is one of people giving [what I believe to be, believing that I understand what they're saying] the actual best explanations they can, and you not understanding things that I believe to be comprehensible and continuing to ask for explanations and evidence that, on their model, they shouldn't necessarily be able to provide.

(to be upfront, I may not be interested in explaining this further, due to limited time and investment + it seeming like a large tangent to this thread)

0 · Said Achmiz · 5y
I never said that I was talking about conversations here on LessWrong. I do interact with people—even “rationalists”!—elsewhere.

I don't see how we anything like know that deep NNs with ‘sufficient training data’ would be sufficient for all problems. We've seen them be sufficient for many different problems and can expect them to be sufficient for many more, but all?

A tangential note on third-party technical contributions to LW (if that's a thing you care about): the uncertainty about whether changes will be accepted, uncertainty about and lack of visibility into how that decision is made or even who makes it, and lack of a known process for making pull requests or getting feedback on ideas are incredibly anti-motivating.

This is probably the single most important obstacle to making a better LW on the technical side.

Other possible implications of this scenario have been discussed on LW before.

This shouldn't lead to rejection of the mainstream position, exactly, but rejection of the evidential value of mainstream belief, and reversion to your prior belief / agnosticism about the object-level question.

Solving that problem seems to require some flavor of Paul's "indirect normativity", but that's broken and might be unfixable as I've discussed with you before.

Do you have a link to this discussion?

Yes, see this post. Most of the discussion happened in PM exchanges, but I think you can still get the idea. Feel free to PM me for explanations if you want.

Why not go a step further and say that 1 copy is the same as 0, if you think there's a non-moral fact of the matter? The abstract computation doesn't notice whether it's instantiated or not. (I'm not saying this isn't itself really confused - it seems like it worsens and doesn't dissolve the question of why I observe an orderly universe - but it does seem to be where the GAZP points.)

Hrm... The whole exist vs. non-exist thing is odd and confusing in and of itself. But so far it seems to me that an algorithm can meaningfully note "there exists an algorithm doing/perceiving X", where X represents whatever it itself is doing/perceiving/thinking/etc. But there doesn't seem to be any difference between 1 and N of them as far as that goes.

Where Recursive Justification Hits Bottom and its comments should be linked for their discussion of anti-inductive priors.

(Edit: Oh, this is where the first quote in the post came from.)

Measuring optimization power requires a prior over environments. Anti-inductive minds optimize effectively in anti-inductive worlds.

(Yes, this partially contradicts my previous comment. And yes, the idea of a world or a proper probability distribution that's anti-inductive in the long run doesn't make sense as far as I can tell; but you can still define a prior/measure that orders any finite set of hypotheses/worlds however you like.)

I agree with the message, but I'm not sure whether I think things with a binomial monkey prior, or an anti-inductive prior, or that don't implement (a dynamic like) modus ponens on some level even if they don't do anything interesting with verbalized logical propositions, deserve to be called "minds".

Have Eliezer's views (or anyone else's who was involved) on the Anthropic Trilemma changed since that discussion in 2009?

I wonder if it would be fair to characterize the dispute summarized in/following from this comment on that post (and elsewhere) as over whether the resolutions to (wrong) questions about anticipation/anthropics/consciousness/etc. will have the character of science/meaningful non-moral philosophy (crisp, simple, derivable, reaching consensus across human reasoners to the extent that settled science does), or that of morality (comparatively fuzzy, necessarily complex, not always resolvable in principled ways, not obviously on track to reach consensus).
8 · Eliezer Yudkowsky · 10y
There's no brief answer. I've been slowly gravitating towards, but am not yet convinced, by the suspicion that making a computer out of twice as much material causes there to be twice as much person inside. Reason: No exact point where splitting a flat computer in half becomes a separate causal process, similarity to behavior of Born probabilities. But that's not an update to the anthropic trilemma per se.

So my guess is that a given dollar is probably more valuable at CFAR right this instant, and we hope this changes very soon (due to CFAR having its own support base)...

an added dollar of marginal spending is more valuable at CFAR (in my estimates).

Is this still your view?

I didn't, and still don't... but now I'm a little bit disturbed that I don't, and want to look a lot more closely at Hermione for ways she's awesome.

Quirrell scans, to me, as more awesome along the "probably knows far more Secret Eldritch Lore than you" and "stereotype of a winner" axes, until I remember that Hermione is, canonically, also both of those things. (Eldritch Lore is something one can know, so she knows it. And she's more academically successful than anyone I've ever known in real life.)

So when I look more closely, the thing my brain is valuing is a script it follows where Hermione is both obviously unskillful about standard human things (feminism, kissing boys, Science M…

Upvoted; whatever its relationship to what the OP actually meant, this is good.

Saying that it's good because it's vague, because it's harder to screw up when you don't know what you're talking about, is contrary to the spirit of LessWrong.

Reminding yourself of your confusion, and avoiding privileging hypotheses, by using vague terms as long as you remember that they're vague doesn't seem so bad.

I kept expecting someone to object that "this Turing machine never halts" doesn't count as a prediction, since you can never have observed it to run forever.

Technically speaking, you can observe the loop encoded in the Turing machine's code somewhere -- every nonhalting Turing machine has some kind of loop. The Halting theorems say that you cannot write down any finite program which will recognize and identify any infinite loop, or deductively prove the absence thereof, in bounded time. However, human beings don't have finite programs, and don't work by deduction, so I suspect, with a sketch of mathematical grounding, that these problems simply don't apply to us in the same way they apply to regular Turing machines. EDIT: To clarify, human minds aren't "magic" or anything: the analogy between us and regular Turing machines with finite input and program tape just isn't accurate. We're a lot closer to inductive Turing machines or generalized Turing machines. We exhibit nonhalting behavior by design and have more-or-less infinite input tapes.
Actually, I think "This Turing machine halts" is the more obviously troublesome one. It gives no computable expectation of when we might observe a halt. (Any computable probability distribution we would assign to halting time would put too much mass at small times as compared to the true distribution of halting times.)
If you take this objection seriously, then you should also take issue with predictions like "nobody will ever transmit information faster than the speed of light", or things like it. After all, you can never actually observe the laws of physics to have been stable and universal for all time. If nothing else, you can consider each as being a compact specification of an infinite sequence of testable predictions: "doesn't halt after one step", "doesn't halt after two steps",... "doesn't halt after n steps".

Then the statement "this Turing machine halts for every input" doesn't count as a prediction either, because you can never have observed it for every input, even if the machine is just "halt". And the statement "this Turing machine eventually halts" is borderline, because its negation doesn't count as a prediction. What does this give us?

"This Turing machine won't halt in 3^^^3 steps" is a falsifiable prediction. Replace 3^^^3 with whatever number is enough to guarantee whatever result you need. Edit: But you're right.

But the blogger's position is one that is often met with hostility round these parts, for reasons that are unclear to me.

I think some of it is a defensive reaction to perceived possible vaguely-defined moral demands/condemnation. Here's a long comment I wrote about that in a different context.

Also simple contrarianism, though that's not much of an explanation absent a theory of why this is the thing people are contrarian against.

the parts of social engineering that I think LW is worst at.

What are those?

More sympathetically, people might (well, I'm sure some people do) see avoiding stereotype-based jokes as a step towards there being things you can't say, and prefer some additional risk of saying harmful things to moving in that direction (possibly down a slippery slope).
On the object level, it isn't a success of rational discussion that assertions like "privilege is a social dynamic which exists" turn immediately to the defensive reaction you mentioned. Reversing the discrimination is an extreme remedy, and like all extreme remedies, it gets deserved push-back. But there's no sustained discussion of middle ground positions. Although I may be mindkilled about this, I think that I am open to discussion of less extreme ways of reducing the pernicious effects of the privilege social dynamic. But even if one thinks that this social dynamic is not pernicious, it boggles my mind that people don't acknowledge the dynamic occurs.

It's a LessWrongian prejudice that the only game anyone would want to play is Highly Competent But Criminally Underappreciated Backroom Boffin.

Yes. The general case of this prejudice is probably something like 'behavior morally should be evaluated according to its stated far-mode purpose; other purposes are possible and important, but dirty'. Of course, this has the large upside of making us seriously evaluate things according to their stated purpose at all....

Out of curiosity, what are the connotations of the word "rube" that make you suspicious?

Low status, contemptibility, etc. I expect making status hierarchies salient to make people less rational (hence fully generic suspicion), and I had the specific hypothesis that you might see people using 'signaling' models as judging others as contemptible and be offended by this.

Relatedly, I dislike calling the behavior in question "pandering", since I expect using condemnatory terms for phenomena to make them aversive to look at closely, and to…

Now that you mention it, I think this does occur, although I think most of the judgement is directed at the 'signaller' (or in my language 'panderer') for being vain or duplicitous, although I don't like saying I'm offended by it ("Offense is a sign of a weak and bourgeois mind" says my inner Dali.) I think that 'pandering' does carry the connotations of how 'signalling' is used, but I'm happy to accept alternatives. One I can think of right away is "appealing to", and I'd be happy to switch from 'pandering' to 'appealing' if you like.

I have a hard time telling whether you're trying to say that 'signaling' models are inaccurate, or just that calling them 'signaling' is misleading. I agree with the latter insofar as 'signaling' means this specific economic model, because the behaviors in question aren't directed at economically rational agents. I also can't tell if you dislike models that postulate stupidity (the strong status connotations of the word "rube" make me suspicious).

If you mean the former: I think you greatly overestimate median rationality in your take on the manag…

I just mean the latter. I think explanations involving pandering can work. The trouble I have with models that postulate stupidity is that they need people to be stupid in a convenient direction. Stupidity is a much larger target than intelligence, after all. I think explanations involving pandering work if you can explain (like you did with the affect heuristic) why these tricks will work on people. Out of curiosity, what are the connotations of the word "rube" that make you suspicious?

I and the one person currently in the room with me immediately took "by all means necessary" to suggest violence. I think you're in a minority in how you interpret it.

OK, I'll update on that.

Police seek and preserve public favour not by catering to public opinion, but by constantly demonstrating absolute impartial service to the law.

I know this is meant to be an ideal for the police, but it could also be read as a descriptive claim about public favor, and it's worth noting that that claim is sometimes false: how often do people approve of police bashing the heads of $OUTGROUP?

This is true -- and it's also the case that sometimes the law supports abuse of an outgroup. I don't know enough about Peel's era to have an opinion about how those issues played out for his police force.

"Apply decision theory to the set of actions you can perform at that point" is underspecified — are you computing counterfactuals the way CDT does, or EDT, TDT, etc?

This question sounds like a fuzzier way of asking which decision theory to use, but maybe I've missed the point.

Can you give an example of circular preferences that aren't contextual and therefore only superficially circular (like Benja's Alice and coin-flipping examples are contextual and only superficially irrational), and that you endorse, rather than regarding as bugs that should be resolved somehow? I'm pretty sure that any time I feel like I have intransitive preferences, it's because of things like framing effects or loss aversion that I would rather not be subject to.
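For concreteness, "intransitive preferences" here means a cycle in the strict-preference relation, which is exactly a money-pump opportunity; a sketch of detecting one (the pair encoding is just an illustrative convention):

```python
def find_preference_cycle(prefers):
    """Detect a cycle in a strict preference relation.

    prefers is a set of (better, worse) pairs. A cycle means the agent can
    be money-pumped: traded around the loop, paying at each step.
    Returns one cycle as a list of options, or None if preferences are acyclic.
    """
    graph = {}
    for better, worse in prefers:
        graph.setdefault(worse, []).append(better)  # edge: worse -> better

    def dfs(node, path, seen):
        if node in path:                        # back-edge: found a cycle
            return path[path.index(node):] + [node]
        if node in seen:                        # already fully explored
            return None
        seen.add(node)
        for nxt in graph.get(node, []):
            cycle = dfs(nxt, path + [node], seen)
            if cycle:
                return cycle
        return None

    for start in list(graph):
        cycle = dfs(start, [], set())
        if cycle:
            return cycle
    return None

# Transitive preferences have no cycle; A > B > C plus C > A does.
find_preference_cycle({("a", "b"), ("b", "c")})              # None
find_preference_cycle({("a", "b"), ("b", "c"), ("c", "a")})  # returns a cycle
```

Contextual preferences of the Alice/coin-flip sort wouldn't trip this check, because once the context is folded into the options the relation becomes acyclic again.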

I'd like to know what you think of this (unfortunately long) piece arguing (persuasively IMO) that Mystery/Roissy-style PUA is solving the wrong problem and a memetic hazard.

The right thing for these guys to do would be to deal with these core issues of low self-worth feelings and their inferiority feelings so that they can fix them once and for all. What pickup teaches them to do however is not to fix feelings but instead to switch from their current faulty coping strategy, which is surrender, to another faulty coping strategy of overcompensation. Using

…
Ursula Le Guin, The Dispossessed
Don't know about him, but I fully agree with it; I've read a fair amount of rants about this problem. I've also had my own story of dealing with low self-worth and alienation, although it didn't end in a heterosexual relationship :) (sorry about the troll toll, btw)

An elite intellectual community can^H^H^H has to mostly reject newcomers, but those it does accept it has to invest in very effectively (while avoiding the Objectivist failure mode).

I think part of the problem is that LW has elements of both a ground for elite intellectual discussion and a ground for a movement, and these goals seem hard or impossible to serve with the same forum.

I agree that laziness and expecting people to "just know" is also part of the problem. Upvoted for the quote.

I'm not entirely sure that expecting people to "just know" is a huge problem here, as on the Internet appropriate behavior can be inferred relatively easily by reading past posts and comments-- hence the common instruction to "lurk more." One could construe this as a filter, but if so, who is it excluding? People with low situational awareness?

I'm just wondering whether this script (something/someone is responsible for the good/bad stuff that happens to me) is equivalent to an alief in supernatural.

I'm not sure this is a meaningful question. "Alief" is a very fuzzy category.

Possibly (this is total speculation) Eliezer is talking about the feeling of one's entire motivational system (or some large part of it), while you're talking about the feeling of some much narrower system that you identify as computing morality; so his conception of a Clippified human wouldn't share your terminal-ish drives to eat tasty food, be near friends, etc., and the qualia that correspond to wanting those things.

9Eliezer Yudkowsky11y
The Clippified human categorizes foods into a similar metric of similarity - still believes that fish tastes more like steak than like chocolate - but of course is not motivated to eat except insofar as staying alive helps to make more paperclips. They have taste, but not tastiness. Actually that might make a surprisingly good metaphor for a lot of the difficulty that some people have with comprehending how Clippy can understand your pain and not care - maybe I'll try it on the other end of that Facebook conversation.

How much does the perception that science and engineering became uncool come from bias in what gets recorded, and in particular the fact that most of us attended high school within the last decade or two?

The three interpretations I mean are:

  • (1) People's behavior is accurately predicted by modeling them as status-maximizing agents.
  • (2) People's subjective experience of well-being is accurately predicted by modeling it as proportional to status.
  • (3) A person is well-off, in the sense that an altruist should care about, in proportion to their status.

Is that clearer?

Yes, thank you. As far as I can tell, (1) and (2) are closest to the meaning I inferred. I understand that we can consider them separately, but IMO (2) implies (1). If an agent seeks to maximize its sense of well-being (as it would be reasonable to assume humans do), then we would expect the agent to take actions which it believes will achieve this effect. Its beliefs could be wrong, of course, but since the agent is descended from a long line of evolutionarily successful agents, we can expect it to be right a lot more often than it's wrong. Thus, if the agent's sense of well-being can be accurately predicted as being proportional to its status (regardless of whether the agent itself is aware of this or not), then it would be reasonable to assume that the agent will take actions that, on average, lead to raising its status.
A parallel point: Corporations do not act directly, they always act through their officers, directors, and misc employees. Yet it is perfectly coherent to say "Papa John's Pizza, Inc. negligently hit my car." Everyone knows that means something like "A Papa John's delivery driver drove negligently and hit my car." In short, the usage you complain of is isomorphic to "Powerful members of past society have been unwilling or unable to take the necessary steps to prevent human suffering." Pretending you misunderstood me is logically rude.

Konkvistador believes that humans are driven primarily by their desire to achieve a higher status, and that this is in fact one of our terminal goals.

This needs to be considered separately as (1) a descriptive statement about actions (2) a descriptive statement about subjective experience (3) a normative statement about the utilitarian good. It seems much more accurate as (1) than (2) or (3), and I think Konkvistador means it as (1); meanwhile, statements about "quality of life" could mean (2) or (3) but not (1).

I don't understand what (1) means, can you explain ?

If we measure quality of life solely in terms of status

Is there a reason we might want to do this? It feels like your comments in this thread unjustifiably privilege this model.

Again, as far as I understand, Konkvistador believes that humans are driven primarily by their desire to achieve a higher status, and that this is in fact one of our terminal goals. If we assume that this is true, then I believe my comments are correct. Is that actually true, though? Are humans driven primarily by their desire to achieve a higher status (in addition to the desires directly related to physical survival, of course)? I don't know, but maybe Konkvistador has some evidence for the proposition -- assuming, of course, that I'm not misinterpreting his viewpoint.

I've heard people say the meta-ethics sequence was more or less a failure since not that many people really understood it, but if these last posts were taken as prerequisite reading, it would be at least a bit easier to understand where Eliezer's coming from.

Agreed, and disappointed that this comment was downvoted.
