All of Raemon's Comments + Replies

Frame Control

Note: I'd be excited to frontpage and curate a post similar-to-this. This particular post feels a bit too embedded in a live conflict for that to feel right to me. 

I recognize that it's pretty hard to write a post like this without examples and the best examples will often necessarily involve recent conflict / live-politics / be-a-bit-aiming-to-persuade. I'm sure shipping this out the door was already a sizeable chunk of effort. But I think there could be a hypothetically idealized split-into-two posts version, where one post simply outlined the model, and the other post applied it to recent events.

I feel like this is clearly frontpage material, so I would second Aella's questions about what changes would make that make sense.

I'm slightly confused, because (unless I'm missing one) only one of my examples given was in reference to the live conflict. Unless maybe you mean the generalized timing of the post as a whole, or the other examples given for other events/people unrelated to the community but still ongoing? I am probably not down to post another two separate posts, as writing this was a lot of effort, and I'd probably feel sad if someone else did it for me. Would it just make more sense for me to unlink or remove the one example?

Frame Control

I actually originally wrote "Manipulators can be weak", and changed it at the last minute (not sure why)

Frame Control

I'm particularly frustrated by the thing where, inevitably, the concept of frame control is going to get weaponized (both by people who are explicitly using it to frame control, and people who are just vaguely ineptly wielding it as a synonym for 'bad').

I don't have a full answer. But I'm reminded of a comment by Johnswentworth that feels like it tackles something relevant. This was originally a review of Power Buys You Distance From the Crime. Hopefully the quote below gets across the idea:

When this post first came out, I said something felt off about it.

... (read more)
Aella (9 points, 1d): I like the rule, and if it's possible to come up with engagement guidelines that have asymmetrical results for frame control I would really like that. I couldn't think of any clear, overarching ones while writing this post, but will continue to think about this. And you're right that the concept of frame control will inevitably get weaponized. I am afraid of this happening as a result of my post, and I'm not really sure how to handle that.

I think it would be helpful for the culture to be more open to persistent long-running disagreements that no one is trying to resolve. If we have to come to an agreement, my refusal to update on your evidence or beliefs in some sense compels you to change instead, and can be viewed as selfish/anti-social/controlling (some of the behaviors Aella points to can be frame control, or can be a person who, in an open and honest way, doesn't care about your opinion). If we're allowed to just believe different things, then my refusal to update comes across as much ... (read more)

I'm particularly frustrated by the thing where, inevitably, the concept of frame control is going to get weaponized (both by people who are explicitly using it to frame control, and people who are just vaguely ineptly wielding it as a synonym for 'bad').

I think a not-sufficient-but-definitely-useful piece of an immune system that ameliorates this is:

"New concepts and labels are hypotheses, not convictions."

i.e. this essay should make it more possible for people to say "is this an instance of frame control?" or "I'm worried this might be, or be tantamount t... (read more)

Frame Control

This is an important concept that is tricky to describe. Some thoughts:

Minor vs Major Frame Control

Lots of relationships and minor interactions have low-key frame control going on pretty frequently. I think it's useful to be able to name that without implying that it's (necessarily) that big a deal. I find myself wanting separate words for "social moves that control the frame", "moves that control the frame in subtle ways", and "moves that control the frame pervasively in a way that is unsettlingly unhealthy."

This is harder because even the most pervasive... (read more)

MalcolmOcean (6 points, 1d): I'd edit "victims" to "weak" in the second header, since I think that expresses your point way clearer. You're not just pointing at the common-ish (& true!) refrains of "abusers are traumatized" or "abusers were once victims" but more specifically "abusers may be doing a bunch of frame control from the role of weak & vulnerable person".
Rational Humanist Music

I didn't actually listen to this song at the time, and much later I discovered it and it is indeed excellent.

I currently translate AGI-related texts to Russian. Is that useful?
Answer by Raemon, Nov 27, 2021 (11 points)

I think this is pretty useful. I feel a bit awkward that I kinda have no idea how good the translations are, but I expect it's worthwhile and potentially quite important.

How To Get Into Independent Research On Alignment/Agency

Curated. This post matched my own models of how folk tend to get into independent alignment research, and I've seen some people whose models I trust more endorse the post as well. Scaling good independent alignment research seems very important.

I do like that the post also specifies who shouldn't go into independent research.

Split and Commit

Curated. I've gotten value from the Split and Commit concept over the years and am glad to see a more succinct writeup. I think "have multiple hypotheses" and "have at least a rough sense of what you might do in worlds where either hypothesis is true" seems like a useful heuristic to avoid some common human rationality foibles.

I felt like the opening examples were a bit distractingly political and I think there are probably some ways to improve, but that felt relatively minor.

Zach Stein-Perlman (4 points, 5d): I'm curious what examples you or others who found the opening examples distracting would prefer. Something like those examples is standard for describing moral progress, at least in my experience, so I'm curious if you would frame moral progress differently or just use other examples.
Yudkowsky and Christiano discuss "Takeoff Speeds"
Raemon (36 points, Ω17, 5d)

So... I totally think there are people who sort of nod along with Paul, using it as an excuse to believe in a rosier world where things are more comprehensible and they can imagine themselves doing useful things without having a plan for solving the actual hard problems. Those types of people exist. I think there's some important work to be done in confronting them with the hard problem at hand.

But, also... Paul's world AFAICT isn't actually rosier. It's potentially more frightening to me. In Smooth Takeoff world, you can't carefully plan your pivotal act ... (read more)

Explicit and Implicit Communication

[Admin note]. The CIA strategy doc link no longer works, so I updated it to point to https://web.archive.org/web/20200214014530/https://www.cia.gov/news-information/featured-story-archive/2012-featured-story-archive/CleanedUOSSSimpleSabotage_sm.pdf

(Meanwhile: the previous link has one of the most ominous 404 pages I've ever seen)

[linkpost] Why Going to the Doctor Sucks (WaitButWhy)

I think the biggest wins are among people who have something subtly wrong with them that can be fixed.

In Strategies for Personal Growth terms, you have a problem that can be fixed with healing.

A Brief Introduction to Container Logistics

Curated. Some things I appreciated here:

  1. I generally think "an introduction to a field, written by someone with a lot of experience in that field" is a type of LessWrong post I'd like to see more of on current margins.
  2. I especially like such posts when they're tied into understanding a potentially actionable problem or plan (such as how this post was a response to Zvi's proposal about identifying and acting on low-hanging-civilizational-adequacy-fruit).
[Book Review] "Sorceror's Apprentice" by Tahir Shah

Oh neat. I’ve been on adventures and think they are good for me and found this post to roughly hold constant my desire for more of them.

Competence/Confidence

How’d you feel about ‘dancing’, or ‘flirting’ as skills where confidence matters separately from competence?

Competence/Confidence

I initially was glazing over looking at the graphs without knowing what to do. Then I remembered I was supposed to be applying this to a particular skill. Then I tried doing that, but it was effortful and I stopped.

Options: have more handholding about what it means to apply a graph to a skill, and/or handholding on what sort of skills are relevant. (I think Logan's example was less relevant than usual)

[Book Review] "Sorceror's Apprentice" by Tahir Shah

This was a very entertaining read. I’m… trying to figure out if I also learned something, but whether I did or not this was a well written version of whatever it is.

I have learned that I do not want to go on an adventure.

"Acquisition of Chess Knowledge in AlphaZero": probing AZ over time

Copy of abstract for the too-lazy-to-click:
 

What is being learned by superhuman neural network agents such as AlphaZero? This question is of both scientific and practical interest. If the representations of strong neural networks bear no resemblance to human concepts, our ability to understand faithful explanations of their decisions will be restricted, ultimately limiting what we can achieve with neural network interpretability. In this work we provide evidence that human knowledge is acquired by the AlphaZero neural network as it trains on the game

... (read more)
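To make the title's "probing" concrete, here is a minimal sketch of the general technique, using synthetic data and invented shapes rather than the paper's actual setup: fit a linear probe from a layer's activations to a human-defined concept, and check how well it predicts on held-out positions.

```python
# Minimal probing sketch (synthetic data; shapes and names are invented
# for illustration, not taken from the paper).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_positions, d_activations = 5000, 256

# Stand-ins for real data: a layer's activations on chess positions, and a
# human-defined concept score (e.g. material balance) for each position.
activations = rng.normal(size=(n_positions, d_activations))
concept = activations @ rng.normal(size=d_activations) + rng.normal(size=n_positions)

X_tr, X_te, y_tr, y_te = train_test_split(activations, concept, random_state=0)
probe = Ridge(alpha=1.0).fit(X_tr, y_tr)

# High held-out R^2 suggests the layer linearly encodes the concept.
print(f"probe R^2: {probe.score(X_te, y_te):.2f}")
```

Repeating this across layers and training checkpoints is what gives the "over time" picture the title refers to.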
Forecasting: Zeroth and First Order

(fixed, properly using iframe now)

jsteinhardt (2 points, 11d): Awesome, thanks a lot!
Forecasting: Zeroth and First Order

I added the OP as a linkpost url, and added the iframe as an image. I'll look more into how we handle iframes and see if there's a better option there.

Raemon (4 points, 11d): (fixed, properly using iframe now)
Coordination Skills I Wish I Had For the Pandemic

A subskill of numerical-emotional-literacy here that's maybe worth highlighting is "being comfortable with orders of magnitude." 

One of the issues I saw was that even after microcovid came out, some people were hung up on getting the numbers exactly right, like they were trying to have certainty of exactly what was happening to them. When really what was most important was having a rough sense of what order of magnitude of risk they were taking. A thing that'd have saved a lot of cognitive energy is doing a little up-front calculation to get a sens... (read more)
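To illustrate the order-of-magnitude point, here is a minimal sketch. The 10,000-microcovids-per-year budget is microcovid.org's (roughly 1%/year) default, if I recall correctly, and all the activity numbers are made-up placeholders rather than real risk estimates:

```python
# Order-of-magnitude risk budgeting (placeholder numbers, not real estimates).
ANNUAL_BUDGET_MICROCOVIDS = 10_000              # ~1% annual covid risk
WEEKLY_BUDGET = ANNUAL_BUDGET_MICROCOVIDS / 52  # ~190 microcovids/week

activities = {
    "outdoor walk with a friend": 3,
    "masked grocery run": 10,
    "indoor dinner party": 300,
}

for name, cost in activities.items():
    share = cost / WEEKLY_BUDGET
    print(f"{name}: ~{cost} microcovids ({share:.0%} of a weekly budget)")
```

The useful output isn't the exact percentages; it's noticing that the dinner party is ~100x the walk, which is where the actual decision lives.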

Coordination Skills I Wish I Had For the Pandemic

Alas – I think I had hoped to come up with 2-3 different examples of object-level-knowledge or skills that'd have been useful, but it was harder to find succinct handles for them. Ended up just ending the sentence there.

Ngo and Yudkowsky on alignment difficulty

Pivotal in this case is a technical term (whose article opens with an explicit bid for people not to stretch the definition of the term). It's not (by definition) limited to 'solving the alignment problem', but there are constraints on what counts as pivotal.

divia's Shortform

I think that's still "a turn" in some sense. Things still happen in discrete steps (i.e. people make their decision, then reveal their decision), instead of a continuous back-and-forth.
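A minimal sketch of that distinction, with hypothetical stag-hunt payoffs: even "simultaneous" play resolves as one discrete commit-then-reveal step, not a continuous back-and-forth.

```python
# One discrete "turn" of a simultaneous-move game (made-up payoffs).
def play_round(strategy_a, strategy_b):
    # Both players commit before either sees the other's choice...
    move_a, move_b = strategy_a(), strategy_b()
    # ...then the reveal resolves in a single discrete step.
    payoffs = {("stag", "stag"): (3, 3), ("stag", "hare"): (0, 1),
               ("hare", "stag"): (1, 0), ("hare", "hare"): (1, 1)}
    return payoffs[(move_a, move_b)]

print(play_round(lambda: "stag", lambda: "hare"))  # (0, 1)
```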

divia's Shortform

Something that's come up for me since our last chat was (starting to) read Elinor Ostrom's Governing the Commons, which leans very heavily into "the actual games people are actually playing have very little in common with the simplified games people use as their core metaphors." Writing a good book review of that is on my list of things to do.

(Elinor's book is basically a very long, methodical rant at all the people either saying "Tragedy of the commons, therefore, Government Ownership Of Things" or "Tragedy of the commons, therefore, private property rights.")

Speaking of Stag Hunts

Are you opening them in incognito browsers? They seem to work straightforwardly for me in non-logged-in browsers, and I don't know what might be different for you.

Ben Pace (2 points, 20d): This is groundhog day, Ray; we just found out that it doesn't work on Opera and Firefox [https://www.lesswrong.com/posts/D5BP9CxKHkcjA7gLv/speaking-of-stag-hunts?commentId=36yHW5YwruFMyBTAX]. (And apparently Chrome Incognito on Windows? I'm confused about the exact line there, because it works on my Chrome Incognito on Mac.)
Daniel Kokotajlo (2 points, 21d): I don't think so? It's possible that it did and I forgot.
Zoe Curzi's Experience with Leverage Research

I had thought about saying this earlier, for fairness/completeness, but didn't get around to it. I've heard some people feel wary of speaking positively of Leverage out of a vague worry of reprisal.

So... I do want to note 

a) I got a lot of personal value from interacting with Geoff personally. In some sense I'm an agent who tries to do ambitious things because of him. He looked at my early projects (Solstice in particular), he understood them, and told me he thought they were valuable. This was an experience that would later feed into my thoughts in ... (read more)

Piggybacking with additional accurate (albeit somewhat-tangential) positive statements, with a hope of making it seem more possible to say true positive and negative things about Leverage (since I've written mostly negative things, and am writing another negative thing as we speak):

The 2014 EA Retreat, run by Leverage, is still by far the best multi-org EA or rationalist event I've ever been to, and I think it had lots of important positive effects on EA.

Speaking of Stag Hunts

Something that previously seemed some-manner-of-cruxy between me and Duncan (but I'm not 100% sure about the flavor of the crux) is "LessWrong whose primary job is to be a rationality dojo" vs "LessWrong whose primary job is to output intellectual progress."

Where, certainly, there's good reason to think the Intellectual Progress machine might benefit from a rationality dojo embedded in it. But, that's just one of the ideas for how to improve rate-of-intellectual progress. And my other background models point more towards other things as being mor... (read more)

Duncan_Sabien (3 points, 21d): Strong agreement with this, assuming I've understood it. High confidence that it overlaps with what Vaniver laid out, and with my interpretation of what Ben was saying in the recent interaction I described under Vaniver's comment. EDIT: One clarification that popped up under a Vaniver subthread: I think the pendulum should swing more in the direction laid out in the OP. I do not think that the pendulum should swing all the way there, nor that "the interventions gestured at by the OP" are sufficient. Just that they're something like necessary.
Speaking of Stag Hunts

Some thoughts on resource bottlenecks and strategy.

There's a lot I like about the set of goals Duncan is aiming for here, and IMO the primary question is one of prioritization.

I do think some high-level things have changed since 2018-or-so. Back when I wrote Meta-tations on Moderation, the default outcome was that LW withered and died, and it was really important people move from FB to LW. Nowadays, LW seems broadly healthy, the team has more buy-in, and I think it's easier to do highly opinionated moderation more frequently for various reasons.

On the othe... (read more)

Chris_Leong (2 points, 17d): I guess where I'd like to see more moderator intervention would largely be in directing the conversation. For example, by creating threads for the community to discuss topics that you think it would be important for us to talk about.

Small addition: LW 1.0 made it so you had to have 10 karma before making a top-level post (maybe just on Main? I don't remember but probably you do). I think this probably matters a lot less now that new posts automatically have to be approved, and mods have to manually promote things to frontpage. But I don't know, theoretically you could gate fraught discussions like the recent ones to users above a certain karma threshold? Some of the lowest-quality comments on those posts wouldn't have happened in that case.

Speaking of Stag Hunts

I assumed this post was mostly aimed at the LW team (maybe with some opportunity for other people to weigh in). I think periodically posting posts dedicated to arguing that the moderation policy should change is fine and good.

Worth noting that I've had different disagreements with you and Duncan. In both cases I think the discussion is much subtler than "increase standards: yay/nay?". It matters a lot which standards, and what they're supposed to be doing, and how different things trade off against each other.

Duncan_Sabien (2 points, 22d): I feel a little shy about the word "aimed" ... I think that I have aimed posts at the LW team before (e.g. my moderating LessWrong post) but while I was happy and excited about the idea of you guys seeing and engaging with this one, it wasn't a stealth message to the team. It really was meant for the broader LW audience to see and have opinions on.
Said Achmiz (4 points, 22d): Yes, yes, that’s all fine, but the critical point of contention here is (and previously has been) the fact that increasing standards (in one way or another) would result in many current participants leaving. To me, this is fine and even desirable. Whereas the LW mod team has consistently expressed their opposition to this outcome…
Picture Frames, Window Frames and Frameworks

Man, on one hand this post feels like it communicates something important to me. But also, a couple years later I sure am mostly like "man, I wrote this to satisfy two particular people, and I spent like 10+ hours on it, and it mostly didn't seem like it helped anyone more than Noticing Frames already did."

I think it was a genuine thing-worth-noticing that "frame" was actually three different metaphors, but I probably could have communicated the whole thing in 3 paragraphs and 3 pictures.

Curious if anyone found this post actively helpful.

EfficientZero: human ALE sample-efficiency w/MuZero+self-supervised

Update: I originally posted this question over here, then realized this post existed and maybe I should just post the question here. But then it turned out people had already started answering my question-post, so I'm declaring that the canonical place to answer the question.

EfficientZero: human ALE sample-efficiency w/MuZero+self-supervised

Can someone give a rough explanation of how this compares to the recent Deepmind atari-playing AI:

https://www.lesswrong.com/posts/mTGrrX8SZJ2tQDuqz/deepmind-generally-capable-agents-emerge-from-open-ended?commentId=bosARaWtGfR836shY#bosARaWtGfR836shY

And, for that matter, how both of them compare to the older deepmind paper:

https://deepmind.com/research/publications/2019/playing-atari-deep-reinforcement-learning

Are they accomplishing qualitatively different things? The same thing but better?

What's the difference between newer Atari-playing AI and the older Deepmind one (from 2014)?

Sorry, was being kinda lazy and hoping someone had already thought about this.

This was the newer Deepmind one:

https://www.lesswrong.com/posts/mTGrrX8SZJ2tQDuqz/deepmind-generally-capable-agents-emerge-from-open-ended?commentId=bosARaWtGfR836shY#bosARaWtGfR836shY

I was motivated to post by this algorithm from China I heard about today:

https://www.facebook.com/nellwatson/posts/10159870157893559

I think this is the older deepmind paper:

https://deepmind.com/research/publications/2019/playing-atari-deep-reinforcement-learning

flodorner (3 points, 24d): The first thing you mention does not learn to play Atari, and is in general trained quite differently from Atari-playing AIs (as it relies on self-play to kind of automatically generate a curriculum of harder and harder tasks, at least for some of the more competitive tasks in XLand).
Feature Selection

Curated. I found this post to do a neat job of "be pretty compelling as fiction", "illustrate concepts about information theory", and "be interestingly, surprisingly hard sci-fi". Hitting all three notes at once is something I'd like to see more of at current margins on LessWrong.

Ruby (5 points, 1mo): Also conveying the simultaneous alienness and reliability of a classifier/optimizer.
Feature Selection

I liked how throughout the piece some things seemed like a conceit of the narrative device, but, no, it all just checked out in the end.

I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

(I haven't caught up on the entire thread, apologies if this is a repeat)

Assuming the "qualia is a misguided pseudoconcept" is true, do you have a sense of why people think that it's real? i.e. taking the evidence of "Somehow, people end up saying sentences about how they have a sense of what it is like to perceive things. Why is that? What process would generate people saying words like that?" (This is not meant to be a gotcha, it just seems like a good question to ask)

Lance Bush (1 point, 1mo): No worries, it's not a gotcha at all, and I already have some thoughts about this. I was more interested in this topic back about seven or eight years ago, when I was actually studying it. I moved on to psychology and metaethics, and haven't been actively reading about this stuff since about 2014. I'm not sure it'd be ideal to try to dredge all that up, but I can roughly point towards something like Robbins and Jack (2006) as an example of the kind of research I'd employ to develop a type of debunking explanation for qualia intuitions. I am not necessarily claiming their specific account is correct, or rigorous, or sufficient all on its own, but it points to the kind of work cognitive scientists and philosophers could do that is at least in the ballpark.

Roughly, they attempt to offer an empirical explanation for the persistence of the explanatory gap (the problem of accounting for consciousness by appeal to physical or at least nonconscious phenomena). Its persistence could be due to quirks in the way human cognition works. If so, it may be difficult to dispel certain kinds of introspective illusions.

Roughly, suppose we have multiple, distinct "mapping systems" that each independently operate to populate their own maps of the territory. Each of these systems evolved and currently functions to facilitate adaptive behavior. However, we may discover that when we go to formulate comprehensive and rigorous theories about how the world is, these maps seem to provide us with conflicting or confusing information. Suppose one of these mapping systems was a "physical stuff" map. It populates our world with objects, and we have the overwhelming impression that there is "physical stuff" out there, that we can detect using our senses. But suppose also we have an "important agents that I need to treat well" system, that detects and highlights certain agents within the world for whom it would be important to treat appropriately, a kind of "VIP agency mapping system" that…
Unlock the Door

I think this depends radically on where you live, how obvious the front door's unlocked status is, etc.

A different way this can be handled btw is to install an electronic lock with a combination-code, which you can give to your friends.

I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

Have we even checked tho? (Maybe the answer is yes, but it hadn't occurred to me before just now that this was a dimension people might vary on. Or, actually I think it had, but I hadn't had a person in front of me actually claiming it)

Lance Bush (1 point, 1mo): See above; I posted a link to a recent study. There hasn't been much work on this. While my views may be atypical, so too might the views popular among contemporary analytic philosophers. A commitment to the notion that there is a legitimate hard problem of consciousness, that we "have qualia," and so on might all be idiosyncrasies of the specific way philosophers think, and may even result from unique historical contingencies, such that, were there many more philosophers like Quine and Dennett in the field, such views might not be so popular. Some philosophical positions seem to rise and fall over time. Moral realism was less popular a few decades ago, but has enjoyed a recent resurgence, for instance. This suggests that the perspectives of philosophers might result in part from trends or fashions distinctive of particular points in time.
I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

Note that it's plausible to me that this is a Typical Mind thing and actually there's just a lot of people going around without the perception of phenomenal consciousness.

Like, Lance, do you not feel like you experience that things seem ways? Or just that they don't seem to be ways in ways that seem robustly meaningful or something?

Lance Bush (1 point, 1mo): I don't know what that means, so I'm not sure. What would it mean for something to seem a certain way? I don't think it's this. It's more that when people try to push me to have qualia intuitions, I can introspect, report on the contents of my mental states, and then they want me to locate something extra. But there never is anything extra, and they can never explain what they're talking about, other than to use examples that don't help me at all, or metaphors that I don't understand. Nobody seems capable of directly explaining what they mean. And when pressed, they insist that the concept in question is "unanalyzable" or inexplicable or otherwise maintain that they cannot explain it.

Despite his fame, the majority of students who take Dennett's courses that I encountered do not accept his views at all, and take qualia quite seriously. I had conversations that would last well over an hour where I would have one or more of them try to get me to grok what they're talking about, and they never succeeded. I've had people make the following kinds of claims: (1) I am pretending to not get it so that I can signal my intellectual unconventionality. (2) I do get it, but I don't realize that I get it. (3) I may be neurologically atypical. (4) I am too "caught in the grip" of a philosophical theory, and this has rendered me unable to get it.

One or more of these could be true, but I'm not sure how I'd find out, or what I might do about it if I did. But I am strangely drawn to a much more disturbing possibility, that an outside view would suggest is pretty unlikely: (5) all of these people are confused, qualia is a pseudoconcept, and the whole discussion predicated on it is fundamentally misguided.

I find myself drawn to this view, in spite of it entailing that a majority of people in academic philosophy, or who encounter it, are deeply mistaken. I should note, though, that I specialize in metaethics in particular. Most moral philosophers are moral realists (about 60%) a…
Jemist (4 points, 1mo): Having now had a lot of different conversations on consciousness, I'm coming to a slightly disturbing belief that this might be the case. I have no idea what this implies for any of my downstream-of-consciousness views.
TAG (9 points, 1mo): But the qualiaphilic claim is typical, statistically. Even if Lance's and Dennett's claims to zombiehood are sincere, they are not typical.
Deleted comments archive?

A) However bad you think the current content you see on the site is, I assure you the content from new users we delete is worse. (for comparison, recall that the entire rest of the internet exists, and the state that it's in. Many new users just haven't internalized the site culture at all)

B) I think it's plausible we should raise standards higher than we currently have, but doing a good job of it requires a lot more attention and manpower than we currently have.

Said Achmiz (2 points, 1mo): Oh, ok, I see. I definitely had something different in mind when I read “poor epistemics” and “low content” in Ruby’s comment, but if you guys are talking about the sort of stuff that random drive-by trolls and such post, yeah, that makes a lot more sense. Thanks for clarifying!
I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

I do think that's a central unifying piece. Relevant pieces include How An Algorithm Feels From Inside, and "Intelligence, Preferences and Morality have to come from somewhere, from non-mysterious things that are fundamentally not intelligence, preferences, morality, etc. You need some way to explain how this comes to be, and there are constraints on what sort of answer makes sense."

I think much of the sequences are laying out different confusions people have about this and addressing them.

I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

I don't have a strong take on whether his position is true, but I do think a lot of the sequences are laying out background that informs his beliefs.

Rafael Harth (4 points, 1mo): Does this come down to the thing Scott has described here? [https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=tF9DpnXayLNLP7jj8] If so, I can repeat that I'm a huge fan of the sequences, I agree with almost everything in them, even though I think humans are atoms. On the other hand, it has been years since I've read them (and I had far fewer philosophical thoughts & probably worse reading comprehension than I do now). It's possible that there is more background in there than I recall.
An Unexpected Victory: Container Stacking at the Port of Long Beach

The "good news" is that the red queen race has been running for a long time already, so I don't necessarily think we'd make anything worse by joining in. Just, this is one particular instance of the red-queen-race-that-is-running, and should be evaluated as such.

An Unexpected Victory: Container Stacking at the Port of Long Beach

One thing that sticks out as a concern for scaling this up is the attention economy. It seems like this worked, in part, by Ryan getting a bunch of people ready to signal boost his tweet, and then leveraging that to get attention from a bunch of people to call the governor or whatever (I haven't recapped the details atm).

But, that basic technique is used all the time. And the problem is that people are doing it in multiple directions, sometimes at cross purposes. It's also pretty easy to convince me that a given cause is "good", but then the issue is how it... (read more)

This is an important dimension of the problem; a rambly explanation of my intuitions about this:

It seems to me that if the basic technique of recruiting attention is used all the time, it cannot be a distinctive feature of the success in this case; almost all forms of attention appeals fail, and I go as far as to say the very largest fail the most frequently.

My model of how attention works in problems like this is that it has a threshold, after which further attention doesn't help. This is how special interests work in politics: it doesn't matter whether s... (read more)
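A minimal sketch of that threshold model (made-up numbers, purely illustrative; not a claim about real attention dynamics):

```python
# Toy model: attention helps up to a threshold, then stops helping.
def effect(attention: float, threshold: float = 100.0) -> float:
    return min(attention, threshold)

for a in (10, 100, 1_000, 10_000):
    print(f"attention={a:>6} -> effect={effect(a):.0f}")
# 10x-ing attention from 1,000 to 10,000 changes nothing; getting from
# 10 to 100 is where the leverage is.
```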

Duncan_Sabien (3 points, 1mo): Yeah. This reminds me of e.g. the guy who bought a billboard with his last few thousand dollars, to try to get a job. It worked! But it seems dangerously likely to spark a red queen race.