Wei_Dai's Comments

Have epistemic conditions always been this bad?

(I was waiting for a mod to chime in so I don't have to, but ...)

If such a person wanders across this place and finds a lot of discussion of theoretical computer science and decision theory, they will keep wandering.

I believe this is one of the reasons for confining political topics to "personal blogposts", which are not shown by default on the front page. My understanding is that they're prepared to impose further measures to reduce engagement with political discussions if these start to get out of hand. I guess (this is just speaking for myself) that if worst comes to worst, we can always impose a hard ban on political topics.

(By "worst comes to worst" I mean in the sense of political discussions getting out of hand on LW. A worse problem, that I worry more about, is LW getting "canceled" by outsiders, in which case even banning political topics may be too late. I think we may want to pre-emptively impose more safeguards for that reason, like maybe making object-level political posts only visible to users over some karma threshold?)

The Epistemology of AI risk

It’s unfortunate I used the word “optimism” in my comment, since my primary disagreement is whether the traditional sources of AI risk are compelling.

May I beseech you to be more careful about using "optimism" and words like it in the future? I'm really worried about strategy researchers and decision makers getting the wrong impression from AI safety researchers about how hard the overall AI risk problem is. I keep seeing people say they're "optimistic" (or other words to that effect) when they mean optimistic about some sub-problem of AI risk rather than AI risk as a whole, without making that clear. In many cases it's pretty predictable that people outside technical AI safety research (or even inside, like in this case) will misinterpret that as optimism about AI risk overall.

Have epistemic conditions always been this bad?

Here's the latest story I found about Texas schools and evolution. After reading it, I think the religious influence described is trivial compared to what's happening in "progressive" school districts. (I'm not going to link to or describe in detail what I'm seeing, for fear of drawing unwanted attention, but I'll send it to you via PM.)

Moral public goods

I’m not convinced this is the case. Do you have some comparisons of international spending on different public goods, or lobbying for such spending?

I don't think such a comparison would make sense, since different public goods have different room for funding. For example, the World Bank has a bigger budget than the WHO, but development/anti-poverty has a lot more room for funding (or less steeply diminishing returns) than preventing global pandemics.

My sense that there's little effort at coordination for global poverty comes from this kind of comparison:

US unilateral foreign aid (not counting private charitable donations):

Total U.S. official development assistance, known as ODA, rose to $26.8 billion in 2008 from $21.78 billion in 2007 and $23.5 billion in 2006.

US donation to the World Bank (which is apparently determined by negotiation among the members) in 2007: $3.7 billion (this covers 2 years, I believe).
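Putting these figures side by side (a back-of-the-envelope calculation using only the numbers quoted above, and assuming the World Bank pledge is spread evenly over its two years):

```python
# Back-of-the-envelope comparison of the aid figures quoted above.
# Assumption: the $3.7B World Bank pledge is spread evenly over two years.

us_oda_2008 = 26.8          # unilateral US ODA in 2008, in $ billions
world_bank_pledge = 3.7     # US pledge to the World Bank, in $ billions
pledge_years = 2

world_bank_per_year = world_bank_pledge / pledge_years  # ~$1.85B/year
ratio = us_oda_2008 / world_bank_per_year               # ~14.5

print(f"Unilateral aid is roughly {ratio:.0f}x the coordinated contribution.")
```

So even on generous assumptions, the coordinated channel is roughly an order of magnitude smaller than the unilateral one.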

Lanrian mentioned an effort to coordinate foreign aid (ODA), but it seems very weak compared to other public-good coordination efforts, because there is no enforcement mechanism (not even public shaming; when was the last time you heard anything about this?). According to this document:

But no other DAC country has met the target since it was established, and the weighted average of DAC members’ ODA has never exceeded 0.4% of GNP.

I guess "public goods" is part of what's happening, given that some non-zero level of coordination exists, but it seems like a relatively small part and I'm not sure that it explains what you want it to explain, or even what it is that you want to explain (since you didn't answer the question I asked previously about this).

ETA: I added a statement to my top-level comment to correct "I don’t think I’ve ever heard of any efforts to coordinate internationally on foreign aid."

Have epistemic conditions always been this bad?

I didn't forget about Christianity, but I think we'd have to go back pretty far to see as much influence from it in journalism, academia, and K-12 education (our main epistemic institutions) as we see today from leftist ideology. I'm curious if you have a different take on this.

"blacklisting communists" was in response to a rather obvious real threat, and I'd be worried if the same or similar dynamics is now happening absent such a threat (as that implies the bad dynamics and epistemic conditions it imposes might never go away or might keep recurring for no good reason).

I think the next step is “actually go do real empiricism” before trying to Do Something About It.

Sure, and I'm hoping that someone has ideas about how to do such empiricism for people in our position (i.e., not academics who might be able to apply for a grant to study this).

Have epistemic conditions always been this bad?

I looked at the loyalty oaths that people were compelled to sign in the 1950s, and honestly they don't seem that bad compared to today's equivalents:

Typically, a loyalty oath has wording similar to that mentioned in the U.S. Supreme Court decision of Garner v. Board of Public Works:[6]

I further swear (or affirm) that I do not advise, advocate or teach, and have not within the period beginning five (5) years prior to the effective date of the ordinance requiring the making of this oath or affirmation, advised, advocated or taught, the overthrow by force, violence or other unlawful means, of the Government of the United States of America or of the State of California and that I am not now and have not, within said period, been or become a member of or affiliated with any group, society, association, organization or party which advises, advocates or teaches, or has, within said period, advised, advocated or taught, the overthrow by force, violence or other unlawful means of the Government of the United States of America, or of the State of California. I further swear (or affirm) that I will not, while I am in the service of the City of Los Angeles, advise, advocate or teach, or be or become a member of or affiliated with any group, association, society, organization or party which advises, advocates or teaches, or has within said period, advised, advocated or taught, the overthrow by force, violence or other unlawful means, of the Government of the United States of America or of the State of California . . . .

To the extent there were excesses during the Red Scares, they seem kind of understandable given that Communists had literally just violently taken over a number of the biggest countries on Earth. ETA: If we now have a tendency to impose similar excesses on ourselves even in the absence of such threats, that bodes ill for future epistemic conditions.

The Satanic Panic, from my skim of the Wikipedia page, apparently had little or no influence on academia and government. Do you really think it's comparable in seriousness to what's happening today? (If so, I'll take a closer look.)

Have epistemic conditions always been this bad?

Also, given that many views that EA endorse could easily fall outside of the window of what’s considered appropriate speech one day (such as reducing wild animal suffering, negative utilitarianism, genetic enhancement), it is probably better to push for a blanket acceptance of free speech rather than just hope that future people will tolerate our ideas.

I think it was better to push for a blanket acceptance of free speech, but now that we're already in the process of sliding down the slippery slope, I'm pretty skeptical that this still makes sense. Not sure if you also meant "was", but if not, can you explain more? For example, would you endorse making LW a "free speech zone", or push for blanket acceptance of free speech elsewhere?

Have epistemic conditions always been this bad?

As for the exceptions, I see no reason to believe they’re particularly more widespread now than in the past (for instance, my parents have stories of weaponized conformity in EST meetings they briefly attended in the 70s).

Do you think that epistemic conditions were better in the 90s/00s? If so, maybe it's just that I spent most of my teenage/adult life in that period and think of it as normal, when it was actually a rare golden age. If not, any idea why things feel so much worse to me recently?

I’m not interested in getting deeply into this conversation here; it would take pages of writing to say everything I think, and that writing would be relatively slow because I’d have to measure my words in various ways to make it through this minefield.

I'd be really interested in getting the full case from you. Maybe you could consider writing an article for some larger publication to make the effort worthwhile?

The Epistemology of AI risk

My feeling is that the way the most prominent AI risk people currently make their case doesn't emphasize the disjunctive nature of AI risk enough, and focuses too much on one particular line of argument that they're especially confident in (e.g., intelligence explosion / fast takeoff). As you say, "If they decide to hear out a first round of arguments but don’t find them compelling enough, they drop out of the process." Well, that doesn't tell me much if they only heard one line of argument in that first round.
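To illustrate why disjunctiveness matters, here's a toy calculation. The risk sources and probabilities below are made up for illustration (and the independence assumption is unrealistic); the point is just that several individually modest risks can combine into a large total risk:

```python
# Toy illustration of disjunctive risk. The sources and probabilities are
# invented, and independence is assumed only to keep the arithmetic simple.

risk_sources = {
    "intelligence explosion / fast takeoff": 0.05,
    "value misspecification": 0.05,
    "multipolar coordination failure": 0.05,
    "misuse by bad actors": 0.05,
}

# P(no disaster) is the product of each source independently not firing.
p_no_disaster = 1.0
for p in risk_sources.values():
    p_no_disaster *= 1 - p

total_risk = 1 - p_no_disaster
print(f"Total risk: {total_risk:.1%}")  # ~18.5%, much more than any single 5%
```

Dismissing each line of argument down to ~5% credence still leaves a substantial total, which is why hearing only one line of argument in that "first round" can be so misleading.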

The Epistemology of AI risk

For what it’s worth, I have “engaged with the arguments” but am still skeptical of the main arguments. I also don’t think that my optimism is very unusual for people who work on the problem, either.

I'm curious if you've seen The Main Sources of AI Risk? Have you considered all of those sources/kinds of risk, and do you still think that the total AI-related x-risk is not very large?
