With David Sacks being the AI/Crypto czar, we likely won't be getting any US regulation on AI in the coming years.
It seems to me like David Sacks's perspective on the issue is that AI regulation is just another aspect of the censorship industrial complex.
To convince him of AI regulation, you would likely need an idea of how to regulate AI without furthering the censorship industrial complex. The lack of criticism of the censorship industrial complex in current AI safety discourse is a big problem, because it means no such policy proposals are available.
From my conversations with Vassar, I think there's a sense of "There's a lot that's possible to do in the world if you just ignore social conventions" that comes downstream of accepting what Vassar says. A person who previously didn't take any psychedelics because of social conventions might become more open to taking psychedelics and thinking about whether it makes sense to take them.
Michael Vassar has lots of different ideas and is willing to share them in a relatively unfiltered way. Some of them are ideas for experiments that could be done.
Without knowing the concrete facts of what happened (I only talked to Michael when he was in Berlin):
Let's say Michael suggests that doing a certain "psychological technique" might be a valuable experiment. Alice does the experiment, and it has an outcome. Michael thinks it was a bad outcome. Alice, however, thinks the outcome is great and continues doing the technique.
If you conclude from that that Michael is bad because he proposed an experiment that had a bad outcome, you are judging people who experiment with the unknown for their love of experimenting with the unknown.
If you want to criticize Michael because he's too open to experimentation, do that more explicitly, because then you need to actually argue the core of the issue. Michael is a person who thinks that various Chesterton's fences are no reason to avoid experimentation.
Michael is also very open about talking to anyone, even if the person might be "bad", so you might also criticize him for speaking with Olivia in the first place instead of kicking Olivia out of the conversations he had.
Given that Ziz was actually a student at CFAR, calling Ziz a CFARian and blaming CFAR for Ziz would make a lot more sense than blaming Michael for Olivia. Jessica suggests that Olivia was also trying to study under Anna Salamon, so Olivia was probably at CFAR at some point and might also be called a CFARian.
How do you know that Michael Vassar or Jessica Taylor have been aggressive about asserting their point of view in the presence of people who take psychedelics?
What kind of student-teacher relationship did Vassar and Olivia have, and for how long did they have it?
Did you come to "conspiratorial interpretations" of the behavior of your family in that process?
But I have observed this all directly.
This post feels like it's written on an unnecessarily high level of abstraction. What are the actual events you observed directly? What did you see with your own eyes or hear with your own ears?
Did you do any targeted work to change beliefs while under the influence of drugs?
Especially processes like belief reporting or internal double crux that were facilitated by another person?
Elizabeth wrote in "Truthseeking is the ground in which other principles grow" about how it's good to pursue goals with good feedback loops in order to stay aligned.
It seems to me like SecureBio focusing on a potential pandemic is a goal with a worse feedback loop than focusing on the normal variation in viruses. Knowing which flu and coronavirus variants are the most common and growing the fastest seems like a straightforward problem that could be solved by the NAObservatory.
What's the core reason why the NAObservatory currently doesn't provide that data, and when in the future would you expect that kind of data to be easily accessible from the NAObservatory website?
If you take Eliezer's early writing, the idea is that AI should be aligned with Coherent Extrapolated Volition. That's a different goal from aligning AI with the views of credentialed experts or the leadership of AI companies.
"How do you regulate AI companies so that they aren't enforcing Californian values on the rest of the United States and the world?" is an alignment question. If you have a good answer to that question, it would be easier to convince someone worried about those companies having enforced Californian values via censorship industrial complex doing the same thing with AI to regulate AI companies.
If you ignore the alignment questions that people like David Sachs care about, it's hard to convince them that you are sincere about the other alignment questions.