Wiki Contributions


I don't think I ever ran into that when I was younger. Meeting in houses is the original way Christians met, so I think it would be weird to complain about it. I found it pretty common for people to make fun of the opposite. If you're spending your church money on a big fancy building, does that really show your dedication to church teachings like charity*?

Also, people might accuse a really small church group of being culty, but a small church group with a big fancy building feels much cultier than the same group meeting in a house.

I was only really exposed to Evangelical Christianity so it's possible this is very different among other groups like Catholics.

* Churches typically justify this in terms of practicality (more spaces to work with) and marketing/evangelism.

You should probably take reverse causation into account here. I doubt the effect of the school is nearly as strong as you think, since people who want finance jobs are drawn to the schools known for getting people finance jobs. Add to that that the schools known for certain things are outliers: if you go to a random state school, the students will have much more varied interests.

Any chance you can link to that discussion? I'm really curious.

When people talk about p(doom) they generally mean the extinction risk directly from AI going rogue. The way I see it, that extinction-level risk is mostly self-replicating AI, and an AI that can design and build silicon chips (or whatever equivalent) can also build guns, and an AI designed to operate a gun doesn't seem more likely to be good at building silicon chips.

I do worry that AI in direct control of nuclear weapons would be an extinction risk, but for standard software engineering reasons (all software is terrible), not for AI-safety reasons. The good news is that I don't really think there's any good reason to put nuclear weapons directly in the hands of AI. The practical nuclear deterrent is submarines and they don't need particularly fast reactions to be effective.

Answer by Brendan Long, Apr 07, 2024

While military robots might be bad for other reasons, I don't really see the path from this to doom. If AI-powered weaponry doesn't work as expected, it might kill some people, but it can't repair or replicate itself or make long-term plans, so it's not really an extinction risk.

I don't think there's anything misleading about that. Building AI that kills everyone means you never get to build the immortality-granting AI.

You could imagine a similar situation in medicine: I think engineering a virus that spreads rapidly among humans and rewrites our DNA to solve all of our health issues and make us smarter would be really good, and I might think it's the most important thing for the world to be working on; but at the same time, I think the number of engineered super-pandemics should remain at zero until we're very, very confident.

It's worth noticing that MIRI has been working on AI safety research (trying to speed up safe AI) for decades and only recently got into politics.

You could argue that Eliezer and some other rationalists are slowing down AGI and that's bad because they're wrong about the risks, but that's not a particularly controversial argument here (for example, see this recent highly-upvoted post). There are fewer (recent) posts about how great safe AGI would be, but I assume that's because it's really obvious.

I would be more worried about getting kicked out of parties because you think "the NRC is a good thing".

More seriously, your opinion on this doesn't sound very e/acc to me. Isn't their position that we should accelerate AGI even if we know it will kill everyone, because boo government yay entropy? I think rationalists generally agree that speeding up the development of AGI (that doesn't kill all of us) is extremely important, and I think a lot of us don't think current AI is particularly dangerous.

To be fair, the one-in-a-million legislators who make it to the federal level probably are very good at politics. It's kind of unreasonable to hold them to the standard of knowing (and demonstrating their knowledge of) things about economics or healthcare when their job is to win popularity contests by saying transparently ridiculous things.

I'm not downvoting because this has already been downvoted far enough, but downvoting doesn't mean you think the post has committed a logical fallacy. It means you want to see less of that on LessWrong. In this case, I would downvote because complaining about the voting system isn't interesting or novel.

I realized after asking that my default prompt makes ChatGPT really verbose so I changed the prompt to:

Identify types of human cells using the following marker genes. Identify one cell type for each row. Only provide the cell type name and no other commentary.

And it gave me:

  1. Embryonic stem cells
  2. Induced pluripotent stem cells
  3. Endoderm
  4. Granulosa cells
  5. Oocytes
  6. Pituitary gland cells
  7. Germ cells
  8. Leydig cells
  9. Neurons
  10. Meiotic cells
  11. Sertoli cells
  12. Neural progenitor cells

For number 9, it's interesting that if I let it give commentary, it says:

CASC3, PGAP1, SLC6A16, CNTNAP4, NPHP1 - This set of genes does not point to a well-defined cell type but could suggest Neuronal Cells or specific types of Neural Precursors based on the presence of neural development and function genes.
