
While writing well is one of the aspects the OP focuses on, your reply doesn't address the broader point: EY (and those of similar repute and demeanor) pairs catastrophic predictions with a stark lack of effective exposition and discussion of the issue and its potential solutions for a broader audience. To add insult to injury, he seems to actively demoralize dissenters in a very conspicuous and perverse manner, which detracts from his credibility and subtly but surely nudges people further from taking his ideas (and similar ones) seriously. He gets frustrated by people not understanding him; hence the title of the OP, which implies the source of his frustration is his own murkiness, not a lack of faculty in the people listening to him. To me, the most obvious examples of this are his guest appearances on podcasts (namely Lex Fridman's and Dwarkesh Patel's, the only two I've listened to). Neither of these hosts is dumb, yet by the end of their respective episodes, both hosts were confused or otherwise put off, and there was palpable friction between them and EY. Considering these are very popular podcasts, it is reasonable to assume he agreed to appear on them to reach a wider audience. He does other things to reach wider audiences as well, e.g. his Twitter account and the article he wrote for Time Magazine. Others like him do similar things to reach wider audiences.

Since I've laid this out, you can probably predict my thoughts on the cost-benefit analysis you did. Given that EY and similar folk are predicting outcomes as unfavorable as human extinction and are actively trying to recruit people from a wider audience to work on these problems, is it really a reasonable cost to continue going about it as they have?

Considering the potential impact on the field of AI alignment and on the recruitment of individuals who may contribute meaningfully to the challenges it currently faces, I would argue that the cost of improving communication is more than justified. EY and similar figures should strive to balance efficiency in communication with the need for clarity, especially when the stakes are so high.

I am pleasantly surprised that someone wrote a post on this issue. For anyone who has listened to EY's appearances on the Lex Fridman and/or Dwarkesh Patel podcasts, it is blindingly obvious that EY's ego occludes any semblance of clarity, effort, or persuasion in his exposition of AI alignment (AIA). When his interlocutors pose a genuine question about AIA or about why he thinks a certain way, there is a 50% chance he responds with some abstract, lurid hyperbole that answers the question neither directly nor indirectly, and a 50% chance he either nitpicks some part of the question he views as impertinent or wrong in some minute way, misconstrues the question and becomes defensive because he detects dissent, or declares "you would need to have read/understood ____ to understand my response to that question", none of which answers the question. After listening to these episodes, I began to wonder whether AIA is a harder problem than getting EY to answer a question directly.

This should serve as a poignant reminder not only to EY but to any of us who think we are smarter than average. No matter how smart you are (within the reasonable limits of human intellect) or perceive yourself to be, the truest measure of intelligence lies not just in the ideas you conceive but in your ability to communicate and collaborate effectively with others.

As you partially pointed out, it doesn't look like you two are actually disagreeing here, which is why reading these posts made me double over in laughter. Both of you have, in multiple instances, made hidden or explicit assumptions that change the answer to whether information is free/beneficial under a given set of circumstances, and you are arguing from answers to different circumstances and assumptions. When the circumstances and assumptions are held fixed, your answers seem to agree.

I read the part of the cited press release that concerned the migratory waterfowl problem. Using this study as evidence for the existence of what you are calling scope insensitivity is a perfunctory maneuver. The section makes no mention of how the question was framed, the population from which participants were sampled, how participants were assigned to their respective groups, and so forth. Were they told there would be a linear relationship between the amount of money spent and the extent of relief provided to the waterfowl, as is often the case in similar conundrums involving pedestrian quantities?

Factors outside the experimental design can also reasonably be assumed to influence participants' answers. The authors did attempt to attribute significance to some explanatory variables, but they left out variables that should be obvious. Do these people care at all about migratory waterfowl? How much money do they think a single waterfowl's life is worth? Do they think saving migratory waterfowl will have a significantly positive effect on anything other than migratory waterfowl? What is the largest sum of cash they have ever seen or possessed at once? Are they aware that this is just an experimental survey and that their answers will have no influence on actual migratory waterfowl populations? I could devise 10,000 more questions relevant to this scenario that could drive these people's decisions toward apparent insensitivity without any innate, indiscriminate insensitivity being involved.

Indeed, I'd wager that if a stranger approached me and asked, "If you had to, how much money would you spend to save 5,000 chickens in Whocaresville?", my answer would probably not change whether it was 5,000 chickens or 5 trillion. Not because I'm insensitive to the magnitudes presented to me, but because I do not particularly care about chickens (especially those with no tangible relationship to me); I have never seen more than about $1,000 in cash that belongs to me (so my internal scale of salient monetary values would inevitably cap my upper limit on spending, regardless of the number of chickens); I have no idea how much money I should spend per chicken even if I did care; and, most of all, I would likely realize the question is entirely fantastical. My answer would likely change if I were asked how much money I'd spend funding varying numbers of gene therapy research labs, for instance.

The questions and arguments I've posed here apply equally to the Toronto study, and even to the human-life studies. Note that I have not claimed that so-called scope insensitivity does not exist, but that these studies provide tenuous evidence of it at best, because their results lack defensible generalizability. If the claim were that people are insensitive to the magnitude of imagined outcomes (as in all of these studies, which posit made-up scenarios) that they have no personal reason to care about, I would agree based on the presented evidence. Paradoxically, your concluding statement addresses effective altruists, the group that scope insensitivity would be least likely to betide, since effective altruists spend real money in the real world and cause real changes they ostensibly care about. At the very least, we have little reason to assume it would betide them on the basis of the aforementioned studies, because the studies meet none of the conditions in the previous sentence.

A tangential rant on your comment about visualizing things: a specious statement has been made here, though I must admit you're not the first to claim this, nor will you be the last, and it's probably not your fault that you think it. What law precludes every member of Homo sapiens from visualizing large quantities of objects? If you can visualize yourself flying in a helicopter over a football field blanketed by a single layer of individually visible chickens, you have successfully "visualized" roughly 57,000 chickens. Mind you, this is a different question from how people abstractly interpret such quantities, which I believe to be the more important and useful question; that may be what you were getting at, but if so, you have not made it easy to infer. That small digression aside, I see what you're insinuating with this point. I wonder, then, whether anyone has studied the effect of exposing participants to visuals of gargantuan quantities before asking them questions about spending.
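(A quick back-of-envelope check on that figure, under an assumption of my own: a US football field including end zones is 360 ft × 160 ft = 57,600 sq ft, so at a density of about one chicken per square foot it holds roughly 57,600 chickens, i.e., roughly 57,000. The one-chicken-per-square-foot packing is my assumption; nothing above specifies a density.)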

I've now spent over an hour writing a comment that no one will read or care about, about an issue that doesn't matter. Clicking submit now...