I don't think I see the problem. Chevron deference is, as you say, about whether courts defer to agencies' interpretations of statutes. It comes up when an agency thinks one interpretation is best, and a court thinks a different interpretation is the best reading of the statute, but the agency's preferred interpretation is still a plausible reading. In that case, under Chevron, the court defers to the agency's interpretation. Do away with Chevron, and the court will follow what it thinks is the best reading of the statute. This is, I should note, the background rule - what courts usually do, and what they did before Chevron. Chevron is an anomaly.
In terms of implications, I think it is true that agencies will tend to interpret their mandates broadly, and so doing away with Chevron deference will, at the margin, reduce the scope of some agencies' powers. But I don't see how it could lead to the end of the administrative state as we know it. Agencies will still have jobs to do that are authorized by statute, and courts will still let agencies do those jobs.
So what does AI regulation look like? If it looks like Congress passing a new statute to either create a new agency or authorize an existing agency to regulate AI, then whether Chevron gets overturned seems irrelevant - Congress is quite capable of writing a statute that authorizes someone to regulate AI, with or without Chevron. If it looks like an existing agency correctly reading an existing statute to authorize it to regulate some aspect of AI, then again, that should work fine with or without Chevron. If, on the other hand, it looks like an existing agency over-reading an existing statute to claim authority it does not have to regulate AI, then (1) that seems horribly undemocratic, though if the fate of humanity is on the line then I guess that's ok, and (2) maybe the agency does it anyway, it takes years to get fought out in court, and that buys us the time we need. But if the court ruling causes the agency not to try to regulate AI, or if the years-long court fight doesn't buy enough time, we might actually have a problem here.

I think this argument needs more details fleshed out. What particular agency do we think might over-read what particular statute to regulate AI? If we aren't already targeting a particular agency with arguments about a particular statute, with a reasonable chance of getting them to regulate for AI safety rather than AI ethics, then worrying about the courts seems pointless.
In the models making the news and scaring people now, there aren't separately identifiable components for modeling the world and seeking the goal. It's all inscrutable model weights. Maybe if we understood those weights better we could separate them out. But maybe we couldn't. Maybe it's all a big jumble as actually implemented. That would make it incoherent to speak about the relative intelligence of the world model and the goal seeker. So how would this line of thinking apply to that?
For the less cryptographically inclined, or those predicting the failure of computing technology, there is always the old school method: write your prediction on a piece of paper, literally seal it in an envelope, and mail it to yourself. The postal marking they put over the stamp includes the date.
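For comparison, here is a minimal sketch of what the cryptographic route can look like, assuming a simple hash-based commitment (the prediction text and nonce below are just placeholders): publish the hash now, and reveal the prediction and nonce later when you want to prove what you predicted.

```python
import hashlib
import secrets

# Placeholder prediction text - replace with your own.
prediction = b"My prediction: X will happen by 2030."

# Random nonce, so short or guessable predictions can't be brute-forced from the hash.
nonce = secrets.token_hex(16).encode()

# Publish this hash publicly now; it reveals nothing about the prediction itself.
commitment = hashlib.sha256(nonce + prediction).hexdigest()
print("commitment:", commitment)

# Later, reveal `prediction` and `nonce`; anyone can recompute the
# SHA-256 hash and check that it matches the published commitment.
```

Note that the hash only proves the prediction existed no later than when the hash was published somewhere timestamped (a public post, for instance) - the envelope's postmark plays the same role in the old school version.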
I think many people should be less afraid of lawsuits, though I'm not sure I'd say "almost everyone."
I wouldn't draw much from the infrequency of lawsuits being filed. Many disputes are resolved in the shadow of the legal system, without an actual court being involved. For example, I read a number of cases in law school where one person sued another after a car accident. Yet when I actually got into a car accident myself, no lawsuit was ever filed. I talked to my insurance company, the other driver presumably talked to their insurance company, the two companies talked to each other, money flowed around, things were paid for. Much more efficient than bringing everybody into a courtroom, empaneling a jury, and paying lawyers in fancy suits to make arguments. The insurance companies knew what the law was, and knew who would have to pay money to whom, so they were able to skip over the whole courtroom battle step and go directly to the payment step.

This is what usually happens when an area of law is mature - the potential parties, sometimes with good advice from their attorneys, reach similar conclusions about the likely outcome of a potential lawsuit, and this allows them to reach an agreement outside of court. Lawsuits are much more likely to happen when the law is more ambiguous, and therefore the parties can have significantly different estimates of the outcome of the suit. So the frequency of lawsuits is often a measure of how much disagreement there is about an area of law. Other times it reflects a requirement to actually go to court to do something (like debt collection or mortgage foreclosure). But I don't think it is a good measure of the likelihood of having to pay out money for some arguable violation of the law.
Also, many contracts contain arbitration clauses, which also prevent conflicts from making it into a courtroom.
The notion of lawyers being overly conservative is also, I think, an incomplete description of that dynamic. A good lawyer will tell you how much you can expect a potential lawsuit to cost, and therefore whether it is more or less than the expected benefit of the action. If your lawyer won't do this, you should fire them and hire someone else. As an illustration, think about universities violating the free speech and due process rights of their students, and getting sued for it. They do this because the cost of not doing it (in public relations, angry students/faculty/donors, Title IX lawsuits) is more than the cost of a potential constitutional lawsuit, and they know it. How do they know it? Because their lawyers told them so.
I think sometimes people don't want to take the advice of lawyers they perceive as overly conservative, even when they should. People trying to build something or make a deal will often get very excited about it, and only want to see the ways it can go well. Lawyers have seen, or at least studied, many past conflicts, and so they can often see more clearly what conflicts might arise as a result of some project, and advise clients on how to avoid them. That is often what clients pay lawyers for. But to the client, it can often feel like the lawyer is putting an unnecessary damper on the shiny project they are excited about.
There is also the moral aspect. Laws often have a moral point behind them. Sometimes when people refrain from doing things to avoid being sued, they are refraining from doing immoral things. And sometimes when people disregard legal advice, do a thing, and get sued, they actually did an immoral thing. To take an example that I watched closely at the time, and that connects to one of Alyssa's examples: during the 2014-2015 school year, Rolling Stone published an article, based on a single young woman's account, about gang rape being used as a form of fraternity initiation at UVA. Rolling Stone did not do the sort of fact checking that is standard in journalism. (If memory serves, the Columbia School of Journalism put out a report detailing the failures here.) Over the course of several months, the story fell apart; it turned out to be a complete fabrication. Rolling Stone was sued, and had to pay out. I can imagine Rolling Stone's lawyers advising them not to publish that article without doing more fact checking, and those lawyers would have been right on the law. But more fact checking also would have been the morally correct thing to do. Even in the case of abuse/rape, defamation law does have a moral point to make - you shouldn't make up stories about being abused/raped and present them as the truth.
Finally, as an ex-lawyer, I unreservedly endorse Alyssa's advice not to take on six figures of debt to go to law school without researching the job market.
When you steal a newspaper from a kiosk, you are taking paper and ink that do not belong to you. The newspaper is harmed because it now has less paper and ink. When you bypass a paywall, the newspaper still has all the same computers and servers that it had before, it hasn't lost any physical object.
When I hear the words "intelligence" and "wisdom", I think of things that are necessarily properties of individual humans, not groups of humans. Yet some of the specifics you list seem to be clearly about groups. So at the very least I would use a different word for that, though I'm not sure which one. I also suspect that work on optimizing group decision making will look rather different from work on optimizing individual decision making, possibly to the point that we should think of them as separate cause areas.
When I think about some of humanity's greatest advances in this area, I think of things like probability theory, causal inference, and expected values - things that I associate with academic departments of mathematics and economics (and not philosophy). This makes me wonder how nascent this field really is.
I find this position rather disturbing, especially coming from someone working at a university. I have spent the last sixish years working mostly with high school students, occasionally with university students, as a tutor and classroom teacher. I can think of many high school students who are more ready to make adult decisions than many adults I know, whose vulnerability comes primarily from the inferior status our society assigns them, rather than any inherent characteristic of youth.
As a legal matter (and I believe the law is correct here), your implication that someone acts in loco parentis with respect to college students is simply not correct (with the possible exception of the rare genius kid who attends college at an unusually young age). College students are full adults, both legally and morally, and should be treated as such. College graduates even more so. You have no right to impose a special concern on adults just because they are 18-30.
I think one of the particular strengths of the rationalist/EA community is that we are generally pretty good at treating young adults as full adults, and taking them and their ideas seriously.
Shooting a civilian is murder, whether or not the action is correct.
Shooting a civilian is not murder if it is self-defense or defense of others, which I think is a very good approximation to the set of circumstances where shooting a civilian is the correct choice.
It did show up in the podcast, which I believe is just filtered by upvotes?