This is more about expanding the question into slightly more specific ones:
Currently it seems like there are many people who are not scared enough, but I wonder if sentiment could quickly go the other way?
A worst-case scenario for societal collapse is that some "essential" workers are infected and others decide that it is too risky to keep working, and there are not enough people to replace them. Figuring out which sectors might be most likely to have critical labor shortages seems important.
An example of a "labor" shortage might be a lack of volunteers for blood donations.
Other than that, logistical supply bottlenecks seem more of an issue?
It seems likely that supply will be more important than demand until the recovery phase, and then a big question will be to what extent people make persistent changes in their preferences. Going without stuff for a while might cause some reconsideration about how important it actually is. An example might be that more people learn to cook and decide they like it, or maybe they try Soylent or whatever. Or perhaps exercising in a gym is less important for people who get into an exercise routine at home or outside?
Maybe private ownership of cars and suburban living (enforcing social distance) get a boost, along with increased remote work making it more practical. The costs of lower density living might not seem so pressing?
Yeah, I don't see it changing that drastically; more likely it will be a lot of smaller and yet significant changes that make old movies look dated. Something like how the airports changed after 9/11, or more trivially, that time when all the men in America stopped wearing hats.
I'm wondering what's a way to keep better tabs on what people are talking about in the rationalist community without reading everything? There is a lot of speculation, but sometimes very useful signal.
I feel like I'm reasonably in touch from reading Slate Star Codex and occasionally checking in here, and yet the first thing I saw that really got my attention was "Seeing the Smoke" getting posted on Hacker News. I guess I'm not following the right people yet?
I'm wondering if anyone can recommend some recordings that they like on YouTube or Spotify of this sort of music? I don't know if I've heard it before.
I'm just a lurker, but as an FYI, on The Well, hidden comments were marked <hidden> (and clickable) and deleted comments were marked <scribbled> and it seemed to work out fine. I suppose with more noise, this could be collapsed to one line: <5 scribbled>.
I mean things like using mathematical proofs to ensure that Internet-exposed services have no bugs that a hostile agent might exploit. We don't need to be able to build an AI to improve defences.
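To make the idea concrete, here is a toy sketch in Lean 4 (my illustration, not anything from the original discussion) of what "proving the absence of a bug class" can look like: instead of checking at runtime that an array index is in bounds, the function demands a mathematical proof of the bound, so an out-of-bounds access is impossible by construction rather than merely untested.

```lean
-- A minimal sketch: `safeGet` cannot be called without a proof `h`
-- that the index is in bounds, so no caller can trigger an
-- out-of-bounds access, regardless of how clever the attacker is.
def safeGet (xs : List Nat) (i : Nat) (h : i < xs.length) : Nat :=
  xs.get ⟨i, h⟩

-- The bound 2 < 3 is discharged by `by decide` -- a proof checked at
-- compile time, not a runtime check that could be bypassed.
#eval safeGet [10, 20, 30] 2 (by decide)  -- 30
```

Real verified systems (e.g. proven-correct protocol implementations) scale this same discipline up from one index check to whole services.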
I think odds are good that, assuming general AI happens at all, someone will build a hostile AI and connect it to the Internet. I think a proper understanding of the security mindset is that the assumption "nobody will connect a hostile AI to the Internet" is something we should stop relying on. (In particular, maintaining secrecy and international cooperation seems unlikely. We shouldn't assume they will work.)
We should be looking for defenses that don't depend on the IQ level of the attacker, similar to how mathematical proofs are independent of IQ. AI alignment is an important research problem, but doesn't seem directly relevant for this.
In particular, I don't see why you think "routing through alignment" is important for making sound mathematical proofs. Narrow AI should be sufficient for making advances in mathematics.
Even if there's no "friendly part," it seems unlikely that someone who learns the basic principles behind building a friendly AI will be unable to build an unfriendly AI by accident. I'm happy that we're making progress with safe languages, but there is no practical programming language in which it's the least bit difficult to write a bad program.
It would make more sense to assume that at some point, a hostile AI will get an Internet connection, and figure out what needs to be done about that.
I'm happy to see a demonstration that Eliezer has a good understanding of the top-level issues involving computer security.
One thing I wonder though, is why making Internet security better across the board isn't a more important goal in the rationality community? Although very difficult (for reasons illustrated here), it seems immediately useful and also a good prerequisite for any sort of AI security. If we can't secure the Internet against nation-state level attacks, what hope is there against an AI that falls into the wrong hands?
In particular, building "friendly AI" and assuming it will remain friendly seems naive at best, since it will be copied and then the friendly part will be modified by hostile actors.
It seems like someone with a security mindset will want to avoid making any assumption of friendliness and instead work on making critical systems that are simple enough to be mathematically proven secure. I wonder why this quote (from the previous post) isn't treated as a serious plan: "If your system literally has no misbehavior modes at all, it doesn't matter if you have IQ 140 and the enemy has IQ 160—it's not an arm-wrestling contest."
We are far from being able to build these systems but it still seems like a more plausible research project than ensuring that nobody in the world makes unfriendly AI.
Thanks! Bug filed. Regarding the Intercom chat bubble, I did post one comment a while back (accidentally in the wrong chat room for Lesswrong), but got no response, and I don't see any other responses in either chat room. Also, the indicator always says "away". To the naive user it looks abandoned. Are you sure it's working? Maybe the old chat room should be deleted?