mike_hawke

pearls before swine >:[

Comments

Is this different from logical validity? If not, how do they relate?
https://en.wikipedia.org/wiki/Validity_(logic)
https://www.lesswrong.com/tag/valid-argument 

I could believe that adding the word "local" might help people communicate the concept more clearly, but I'm curious if it's doing anything else here.

there’s the ever-present, gnawing worry that haunts me, whispering that I might be fundamentally mistaken about something else.

I think negative visualization is useful for this. I made a list of implications for my beliefs & actions, conditional on the totally hypothetical case in which a particular political opinion of mine (no, I won't say which one) is wrong.

I noticed that I had some bucket errors along the lines of "I will have to admit to those nasty outgroup memers that I've been evil+dumb all along, and accept their righteous judgment!" Once I had written it explicitly, the correction pretty much wrote itself: good-vs-evil is oversimplified at best, being wrong doesn't make you dumb, and hateful memers deserve no one's attention, regardless of what faction anyone is in.

I liked the length, readability, and importance; happy to spend my reading budget on this.

Here are some thoughts I had:

  • You said, "the belief that persistent good faith disagreements are common would seem to be in bad faith!" and this tripped my alarm for gratuitous meta-leveling. Is that point essential to your thesis? Unless I read too quickly, it seems like you gave a bunch of reasons why that belief is wrong, then pointed out that it would seem to be in bad faith, but then didn't really flesh out what the agenda/angle was. Was that intentional? Am I just stumbling over a joke I don't understand?
  • I would be interested to read a whole post about how full-contact psychoanalysis can go well or poorly. I've seen it go well, but usually in ways that are noticeably bounded, so I think I'll challenge the word "full" here. You meant this as an idealization/limiting case, right?
  • I feel like there is an implicit call to action here, which may not be right for everyone. I anticipate that early adopters of Assuming Bad Faith will pay noticeable extra costs, and possibly late adopters too. I don't have anything in particular in mind, just Chesterton's Fence and hidden-order type heuristics, plus some experience seeing intentional norm-setting go awry.

Important but frustrating rationalist skill: getting halfway through a comment and then deleting it because you realized it was wrong

What are your thoughts about this objection to evals? Have you already addressed it somewhere?

if we can squash every scary AI that is not quite smart enough to do a treacherous turn, and we don't structurally eliminate treacherous turns, then the first deployed AI that causes major damage will do it via a treacherous turn. We have no warning shots.

Less importantly, why are these things in quotation marks?:

[...] doing things like “incorporate a company” or “exploit arbitrages in stock prices” or “design and synthesize DNA” without needing any human assistance or oversight.

A prairie is qualitatively different from a billiard table or an asteroid belt: If you tried to use basic kinematics and free body diagrams to describe a prairie ecosystem, you would find that most of the interesting action was left unexplained. To handwave away air resistance and viscosity is to handwave away all the birds. To handwave away friction is to handwave away basically every other mobile life form. And I think it only gets worse if you move from a prairie to a rainforest--floating spores, flying snakes, geckos, soft but breakable eggs, all manner of sticky appendages, etc.
Simple dynamics don’t even get you a decent first approximation of these systems unless you zoom way out and take very coarse averages. (“The biomass generally stays within roughly 10m of ground level, because of gravity.” “These tightly coupled populations of predators and prey roughly trace out this orbit in phase space every X time interval.”) (But I'm interested in counterexamples if you have them.)
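For concreteness, here's a minimal sketch of the kind of coarse, zoomed-out regularity I mean: the textbook Lotka-Volterra predator-prey model, whose trajectories stay on closed orbits in (prey, predator) phase space. All the parameter values below are made up for illustration; nothing is calibrated to a real ecosystem.

```python
# Toy sketch: textbook Lotka-Volterra predator-prey dynamics.
# Parameters are illustrative, not fit to any real populations.
import numpy as np

ALPHA, BETA, DELTA, GAMMA = 1.0, 0.1, 0.075, 1.5  # made-up rate constants

def derivs(state):
    x, y = state  # x = prey, y = predators
    return np.array([ALPHA * x - BETA * x * y,
                     DELTA * x * y - GAMMA * y])

def rk4(state, dt=0.01, steps=5000):
    """Integrate with classic fourth-order Runge-Kutta."""
    path = [state]
    for _ in range(steps):
        k1 = derivs(state)
        k2 = derivs(state + 0.5 * dt * k1)
        k3 = derivs(state + 0.5 * dt * k2)
        k4 = derivs(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(state)
    return np.array(path)

def invariant(state):
    """Conserved quantity whose level curves are the closed orbits."""
    x, y = state
    return DELTA * x - GAMMA * np.log(x) + BETA * y - ALPHA * np.log(y)

orbit = rk4(np.array([10.0, 5.0]))
# The invariant barely drifts: the trajectory stays on one closed orbit,
# which is the coarse "orbit in phase space" regularity described above.
print(invariant(orbit[0]), invariant(orbit[-1]))
```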
...
Anyway, this feels related to the fact that we can’t develop good models for human interactions, either descriptive or prescriptive. When I try to do virtue ethics, I find that all my virtues turn to swiss cheese after a day’s worth of exception handling. When I try to take actions based on first principles of game theory I end up feeling like a maladjusted sociopath. When I try to incorporate the good parts of economic/evopsych cynicism into my view of human affairs, I end up with more questions than answers.

…the question does sometimes haunt me, as to whether in the alternative Everett branches of Earth, we could identify a distinct cluster of “successful” Earths, and we’re not in it.


This Failing Earth, Eliezer Yudkowsky

Does anyone else wonder similar things about the EA/rationality scene? If we could scan across Tegmark III, would we see large clusters of nearby Earths that have rationality & EA communities that embarrass us and lay bare our own low standards?

I wonder if this post would have gotten a better reception if the stooge had been a Scientologist or a conspiracy theorist or something, instead of just a hapless normie.

I assume that the whole flat earth thing will lose its contrarian luster and fall out of style in the next few years. But suppose that's wrong. How soon until there are significant numbers of flat-earther kids enrolling in kindergarten? Will they be like existing fringe religious minorities? Will they mostly be homeschooled? My real best guess is that flat-earthers don't have kids, so this won't happen.

Some smart, scrupulous, rational news junkie should write a periodic report on the state of anti-epistemology. I sort of worry that memeplexes, including anti-epistemic ones, have tipping points at which they become popular (or dominant) very suddenly.
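To illustrate the shape of that worry, here's a toy sketch (all numbers made up, and this is the simplest possible contagion model, not a claim about real meme dynamics): logistic adoption sits near zero for a long stretch and then shoots up all at once.

```python
# Toy logistic-adoption model; numbers are made up for illustration.
# The memeplex looks marginal for ages, then takes over very suddenly.
import numpy as np

def logistic_adoption(x0=1e-4, r=0.5, steps=60):
    """x[t+1] = x[t] + r * x[t] * (1 - x[t]), with x = adoption fraction."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + r * x * (1.0 - x))
    return np.array(xs)

adoption = logistic_adoption()
for t in range(0, 61, 10):
    # Near-zero for many steps, then most of the jump happens at once.
    print(f"t={t:2d}  adoption={adoption[t]:.4f}")
```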

I followed a link to an article about how Facebook was used to facilitate a genocide in Myanmar. I got a few paragraphs into it and then thought, "Wait, the New York Times is telling me a scandalous but murky story about Big Tech and world events...and I’m just filing that away as 'known facts of public record.' Isn’t this Gell-Mann amnesia?"

So then I felt myself searching for reasons why the NYT could be trusted more about this kind of thing, but found it difficult to come up with a single specific reason that I actually believed. So then I supposed that it was worth reading anyway, since the basic facts were important, and I wasn’t at that much risk from whatever biased framing the NYT might take. But I realized that I didn't really believe that either--I imagined a future in which I turned out to have been utterly misled by the article, and that hypothetical future felt entirely plausible.

So I didn’t read it.

It was an effortful and unrewarding decision, but I endorse it, and I’m hopeful that it will be easier next time. For news stories of this sort, I expect to fall short of my own epistemic standards unless I check 3 or 4 diverse sources. But I didn’t want to do an hour of responsible research; I wanted to spend a leisurely 10 minutes on a single, highly consumable, authoritatively-voiced article and then enjoy the feeling of being informed.
