I should have clarified a bit: I was using the term 'military industrial complex' to zero in on the more technocratic underbelly of the American Defence/Intelligence community and its private contractors. I don't have any special knowledge of the area, so forgive me, but essentially I mean DARPA and the like, or any agency with a large black budget.
Whatever they are doing does not need to have any connection to what the public-facing government says in press briefings. It is perfectly possible that right now a priority for some of these agencies is funding a massive AI project while the WH laughs off AI safety; that is how classified projects work. This actually illustrates the problem a bit: the entire system is set up to keep things hidden for national defence, in which case having a dialogue about AI Risk is virtually impossible.
Why is there so little mention of the potential role of the military industrial complex in developing AGI, rather than a public AI lab? The money, the will, and the history are all there (ARPANET was the precursor to the internet). I am vaguely aware there isn't much to suggest the MIC is on the cutting edge of AI, but there wouldn't be if it were all black budget projects. If that is the case, it presumably implies a very difficult situation, because the broader alignment community would have no idea when crucial thresholds were being crossed.
Disclaimer: I am myself a newer user who joined last year.
I think trying to change downvoting norms and behaviours could help a lot here and save you some workload on the moderation end. Generally, poor-quality posters will leave if you ignore and downvote them. Recently there has been an uptick in these posts, and many of the ones I have seen are upvoted and engaged with. To me, that says users here are too hesitant to downvote. Of course, that raises the questions of how to change this and whether doing so is undesirable, since it will broadly repel many new users, some of whom will not be "bad". Overall, though, I think encouraging existing users to downvote should help keep the well-kept garden.
No, that was just a joke Lex was making. I don't know the exact timestamps, but in most of the instances where he was questioned on his own positions or his estimates of the situation, Lex seemed uncomfortable to me, including the alien civilisation example. At one point I recall actually switching to the video, and Lex had his head in his hands, which in terms of body language seems pretty universally a desperate pose.
There were definitely parts where I thought Lex seemed uncomfortable, not just around specific concepts but whenever the questions got turned back towards what he himself thought. Lex started podcasting very much in the Joe Rogan sphere of influence, to the extent that I think he uses a similar style: very open, letting the other person speak and have a platform, but perhaps at the cost of being a bit wishy-washy. Nevertheless, it's a huge podcast with a lot of reach.
This is why I don't place much confidence in projections from people like Sam Altman about how the population will be affected by TAI either. You have to consider that they are very likely to be completely out of touch with the average person, and so have absolutely terrible intuitions about how ordinary people respond to anything, let alone the long-term implications TAI will have for them. If you got some normal people together and made sure they took the proposition of TAI and everything it entails seriously (such as widespread joblessness), I suspect you would encounter a lot more fear and apprehension about the kinds of behaviours and ways of living it is going to produce.
I think what stands out to me the most is big tech/big money now getting seriously involved. That has a lot of potential for acceleration just because of the funding implications. I frequent some financial/stock websites and have noticed AI becoming not just a major buzzword; there are even sentiments along the lines of 'AI could boost productivity and offset a potential recession in the near future'. The rapid release of LLMs seems to have jump-started public interest in AI; what remains to be seen is what form that interest takes. I am personally unsure whether it will mostly be caution and regulation, panic, or the opposite. The way things are nowadays, I guess there will be a significant fraction of the public completely happy with accelerating AI capabilities and angry at anyone who disagrees.
I think it may be necessary to accept that, at first, there will need to be a stage of general AI wariness in public opinion before AI Safety and specific facets of the topic are explored. In a sense, the public has not yet fully digested 'AI is a serious risk', or perhaps even 'AI will be transformative to human life in the relatively near future'. I don't think that phase can simply be skipped, and it will probably be useful to get as many people as possible broadly on board before the more specific messaging, because if they are not, they will reject your messaging immediately, perhaps becoming further entrenched in the process.
If this is the case, then the current sentiment of general anxiety about AI is not too bad, or at least it is better than dismissive sentiment.
I had never heard of Standpoint epistemology prior to this post, but I have encountered plenty of thinking that seems similar to what it espouses. One thing I cannot figure out at all is how this functionally differs from surveying a specific demographic on an issue. How, exactly, is whatever this is more useful? In fact, to me it seems likely to be functionally worse: compared to a survey, the sample size is tiny and there is absolutely no control group. As someone else pointed out, we get no sense of how any other group would respond to the same questions.
I don't really have an issue with the proposition that there is value in considering different groups' experiences; what I do have an issue with is that it seems bound to devolve into a myopic consideration of a very small number of people's experiences.
The best evidence addressing both your claims would probably come from the military, since they have both state-of-the-art sensors and reliable witnesses. The recent surge in UFO coverage is almost all related to branches of the military (mostly the Navy?), so the simple explanation is that it's classified to varying degrees. My understanding is that there is the publicly released material, which is somewhat underwhelming; then some evidence Congress and the like have seen during briefings; and then probably more hush-hush material above that, reserved for non-civilians. The members of Congress who were briefed seem to have kept making noise on the topic, so presumably there is more convincing evidence not yet public.
I have no idea where Hanson got those figures from, but from your post it seems you would be able to rule out most civilian sightings anyway, because there is no such thing as a perfectly reliable human witness, and to date the camera and sensor quality available to the average person is actually pretty poor (especially compared to government/military hardware).