We've just shipped the user @mentions feature, and it seemed like a good opportunity to use it here to make sure prize-winners know to update their payment information. (To receive a prize you'll need a PayPal account, and to enter your email and/or PayPal address here.) [edit: was initially the wrong link, but fixed now]
Folks listed below, I'll be sending prize info shortly.
@A Ray , @Akash , @DirectedEvolution , @1a3orn @Daniel Kokotajlo @nostalgebraist @Alex_Altair , @Leon Lang @Rafael Harth @adamShimi @Valentine @Yoav Ravid @LoganStrohl @Zack_M_Davis @DragonGod @Vanessa Kosoy @Neel Nanda @Steven Byrnes @TurnTrout @Vaniver @Coafos @Alex Flint @Zach Stein-Perlman @Adam Jermyn @the gears to ascenscion @Darmani @Eli Tyre @johnswentworth @Double @Srdjan Miletic @Unnamed @iceman
(Note: admins are allowed to send lots of @mentions, but generally our plan is to limit it to 3 for most users in most contexts. Please do not @ people in a spammy fashion. Users can disable @mention notifications if they find them annoying. We'll be looking into various options to ensure this doesn't get out of hand. Let us know if there are any issues.)
Update payment info at:
Just to confirm: can we still review posts in hopes of receiving prize money?
There is a post I wanted to review but never got around to and eventually gave up on, but I would be motivated to write the review that I never wrote if we can still win the prize money for reviews.
[I'm a poor student without a job, and $100 for a couple hours work is a pretty big deal for me.]
Added my paypal info.
I did not expect to receive a prize lol; I never set out to write a review. 😅
You did! (it was a self-review, but I still think those are valuable)
https://www.lesswrong.com/posts/9cbEPEuCa9E7uHMXT/catching-the-spark?commentId=7Np68dFByXquQH8Gf
Warning: Long rant incoming, one you probably won't benefit from reading unless you are Raemon, and in fact I'm a bit embarrassed to have written it:
I admit I feel some dismay at seeing Nostalgebraist's review and especially Shimi/Collman/Gyrodiot's reviews appear on this list. I respect all of these people as thinkers and upvoted their reviews, IIRC, and also I am genuinely honored and flattered that they not only read my post but took the time to review it. I won't object if you pay them money for their reviews; I wish them well. In fact I'll feel guilty if this comment of mine gets in the way of their reward, and I hope that it doesn't.
But I am having to do some serious soul-searching upon receiving the evidence that their reviews have stood the test of time and helped you understand my original post -- because I think they both miss the point of the original post. Now I'm wondering what I did wrong, how I could have been so unclear in the OP, that so many people misunderstood...
Quoting from the original post:
I describe a hypothetical scenario that concretizes the question “what could be built with 2020’s algorithms/ideas/etc. but a trillion times more compute?” Then I give some answers to that question. Then I ask: How likely is it that some sort of TAI would happen in this scenario? This second question is a useful operationalization of the (IMO) most important, most-commonly-discussed timelines crux: “Can we get TAI just by throwing more compute at the problem?” I consider this operationalization to be the main contribution of this post; it directly plugs into Ajeya’s timelines model and is quantitatively more cruxy than anything else I know of. The secondary contribution of this post is my set of answers to the first question: They serve as intuition pumps for my answer to the second, which strongly supports my views on timelines.
I literally said right at the front (admittedly behind spoiler screen) what the main and secondary points of the post were. And the subtitle said it too: "Big Timelines Crux Operationalized."
The Shimi/Collman/Gyrodiot review most seriously misunderstands the OP; see this quote from the review:
The relevance of this work appears to rely mostly on the hypothesis that the +12 OOMs of magnitude of compute and all relevant resources could plausibly be obtained in a short time frame. If not, then the arguments made by Daniel wouldn’t have the consequence of making people have shorter timelines.
The main point of the post was to focus the discussion on the big crux, not to argue for short timelines. The secondary point was an intuition pump for short timelines -- but it does NOT depend on it being at all plausible for us to achieve +12 OOMs in the real world anytime soon! I said very clearly that the +12 OOMs thing was a hypothetical, involving magic! I brought this up in the comments; see discussion. You quote a passage that seems to be making the same mistake:
Another issue with this hypothesis is that it assumes, under the hood, exactly the kind of breakthrough that Daniel is trying so hard to remove from the software side. Our cursory look at Ajeya’s report (focused on the speed-up instead of the cost reduction) showed that almost all the hardware improvement forecasted came from breakthrough into currently not working (or not scalable) hardware. Even without mentioning the issue that none of these technologies look like they can provide anywhere near the improvement expected, there is still the fact that getting these orders of magnitude of compute requires many hardware breakthroughs, which contradicts Daniel’s stance on not needing new technology or ideas, just scaling.
To be fair to the authors, I didn't spell out as much as I could have why it doesn't matter whether we ever achieve +12 OOMs in real life anytime soon. I mean, I did spell it out, but I didn't spell it out in as much detail as I could have -- I relied on the readers being somewhat familiar with Ajeya's model, I guess. In response to a conversation with Adam Shimi after the review went up, I wrote the "Master Argument" google doc, which you may have seen by now. It explains Ajeya's model and then explains how having 80% by +12 OOMs gets you much shorter timelines than having just 50%. The key, I guess, is that if you move 30% of your mass from above 12 to below 12, unless you are crazy you will move a bunch of it to the 0-6 OOM range. You won't pile it all up in the 6-12 OOM range. In retrospect I should have said more about that in the OP.
Anyhow. On to Nostalgebraist's review:
...to be honest I'm not sure I understand it. The part of it where it's talking about what the main point of Fun With +12 OOMs is... well, maybe it's something interesting that I said, and maybe it's equivalent to the main point under some transformation, but it's certainly not how I think of the main point. I think the main point is "here's this big timelines crux we all should be debating: What is the probability that +12 OOMs would be enough?" and the secondary point is "Here are some intuition pumps that +12 OOMs would be enough."
Part of Nostalgebraist's review was a critique of my secondary point. That part I agree with; there's a LOT more that needs to be said (and a lot more I could have said, believe me!) about why +12 OOMs is probably enough, than just the 5 intuition pumps I gave. There's a lot more I could do to make those 5 pumps pump harder, too. I hope someone one day finds the time to write all that stuff.
Side note: Zach Stein-Perlman's review of Fun with +12 OOMs is great, I think he understood the original post quite well. The others... again, I appreciate them, they said some interesting things and some useful things, but it annoys me that they don't seem to have understood the main point. And, as I said at the beginning, it makes me a bit defensive and soul-searchy. What did I do wrong? I thought I was being so clear, signposting everything, etc.!?! Yet multiple smart people I respect read it closely enough that they were motivated to review it, and came away with a different impression!
I think Nostalgebraist's review might not deserve this reaction from me, actually. Like I said, maybe what they think the main takeaway is, is also what I thought it was, just described differently. And anyhow it's possible that they understood perfectly what I thought the main takeaway was, and just disagreed with me about it -- maybe they think that the most interesting and novel contribution isn't what I thought it was! Fair enough. I may be making a mistake by dragging them into this, and I probably shouldn't be wasting time writing this anyway. Their review of Ajeya's Bio Anchors report rankled me in the same way, though, but more so -- I think it misunderstood the whole point of the report, and I feel more confident in this claim than in the claims I made above.
Thanks for sharing. I definitely appreciate it all as user-feedback.
I think I have some high-level thoughts that don't depend much on the details of this particular post and these particular reviews, and then some object-level thoughts.
At a high level:
By default, serious in-depth reviews are a lot of work, and AFAICT fairly unrewarding. A lot of what I was trying to do with this post and prizes is correct an ecosystem incentives-issue where people aren't rewarded for doing a sort of "intellectual grunt work" that's important but underappreciated. (Part of what I appreciated about Shimi-et-al was them initiating a process for peer review in general, not just for this one particular post)
In general my posting a review here means I got something out of it, but not that I endorse everything in it. I'm also doing all this with a bit of limited time and trying to cover a lot of breadth, so I'm not too surprised if there are significant criticisms to be made of some reviews.
I also think, well, if the system is working, reviewers should sometimes say things the authors don't like, and that's okay. I wouldn't argue the current system is that great (including this post and prizes, and my current approach to aggregating them). But I don't currently think anything necessarily went wrong here.
But, being misunderstood sucks, and I do empathize/sympathize. I've appreciated your work on the review this year and I definitely appreciate +12 OOMs as a post. (I noticed +12 OOMs getting a disproportionate amount of review attention, and in the culture-I-hope-for this feels like a compliment, even if parts of the process are frustrating)
Some object-level thoughts:
I agree that Shimi-et-al's argument about "The relevance of this work appears to rely mostly on the hypothesis that the +12 OOMs of magnitude of compute and all relevant resources could plausibly be obtained in a short time frame" isn't a fair characterization of what you wrote. (In an ideal world I'd have read more of the back-and-forth-between you and Shimi on their review, and incorporated that into my commentary here)
I think I mostly appreciated their review for digging into the details of the examples in the second half.
I had stated that Zach Stein-Perlman's review made a similar point to Nostalgebraist's. Looking back, I'm not sure whether I stand by that. I don't think I'd have derived Nostalgebraist's point of "The impetus to ask 'what does future compute enable?' rather than 'how much compute might TAI require?' influenced my own view of Bio Anchors" from Zach's review if that's all I had to go on.
I said, reading Nostalgebraist's review, "I feel like I understand the point for the first time." I did notice that he didn't frame it the same way you did, and I'm not sure whether I endorse my phrasing. Maybe Nostalgebraist's interpretation is more of its own thing than a point you made. But I did feel like it added another layer to your post, and somehow made things feel more crisp to me as a useful meta-level insight than Zach's (or your) summary did.
I may have more thoughts, but wanted to post this for now.
Just chiming in to say huge +1 to the idea of rewarding people for doing reviews, it's an awesome and very pro-social thing to do and I'm honored that so many people chose my post to review. I endorse rewarding Shimi et al, and Nostalgebraist, in particular.
Also: I happen to be having a related conversation that also gives some context on how I conceived of the OP at least & what I hoped to accomplish with it.
I had written this previously in The Review Phase comment threads, but to make it easier for people to see the individual reviews I gave prizes to:
This comment is incomplete, and I will likely edit some of the prizes slightly. But I'd fallen behind on awarding prizes for reviews, and I want to highlight that yes, the Lightcone team will give you money for reviewing stuff. So I wanted to ship this rough version for now to give a sense of what sort of reviews I found valuable.
Thanks to the large number of people who've stepped up to review so far. (I'd still be excited for EffortReviews that explore the details of some of the higher-ranked posts.)
Next round of prize announcements:
- $200 to Nostalgebraist for this review of Fun with +12 OOMs of Compute. I think I didn't really get the underlying point of the post until I read this.
- $200 to AllAmericanBreakfast for both his review and followup comment on Core Pathways of Aging.
- $200 to Akash for his review of ARC's first technical report: Eliciting Latent Knowledge, breaking out a lot of details about how ELK affected Akash and people in his cohort, and ways he thinks ELK could be improved.
- $150 to Alex Altair's review of Taboo Outside View.
- $150 to Yoav Ravid for their review of What do GDP Growth Curves Really Mean? I really liked that they did a mini-distillation of the older good comments about the post.
- $150 to 1a3orn for their self-review of EfficientZero: How It Works, in particular for grading their conclusions and predictions.
- $100 to A Ray for his review of Coase's "Nature of the Firm" on Polyamory, for explaining something subtly off about the post's analogy.
- $100 to A Ray for his review of Politics is way too meta
- $100 to A Ray for his review of Cryonics Signup Guide #1: Overview
- $100 to Leon Lang for his fairly detailed review of Utility Maximization = Description Length Minimization.
- $100 to Valentine for his self review of Creating a Truly Formidable Art.
- $100 to Zach Davis for his self-review of Feature Selection, in particular for situating it more clearly in the context of his other work, and previous work it was building on.
- $100 to Dragon God for his review of EfficientZero: How It Works, which dug into some details and found a potential error.
- $100 to Vanessa Kosoy for her self-review of Infra-Bayesianism. I appreciated her laying out what developments had occurred in the intervening year.
- $100 to Neel Nanda for his self-review of Intentionally Making Close Friends
- $100 to Steven Byrnes review of Robin Hanson's Grabby Aliens model explained
- $100 to TurnTrout for his self-review of Lessons I've Learned From Self-Teaching. I love that he had some data on how his attempts had fared.
- $100 to Adam Shimi for his review of Biology-Inspired AGI Timelines: The Trick That Never Works. I found this particularly valuable for breaking out some of the mistakes Eliezer is pointing out in a fairly distilled fashion. (I personally had trouble actually taking away the points from the post itself.)
- $100 to Vaniver for his review of ARC's first technical report: Eliciting Latent Knowledge.
- $100 to la3orn for their review of Coordination Schemes are Capital Investments.
- $100 to Rafael Harth for their review of [Book review] Gödel, Escher, Bach: an in-depth explainer
- $100 to MondSemmel for his review of Where did the 5 micron number come from? Nowhere good?
- $100 to Logan Strohl for their self-review of Catching the Spark. I like that Logan brought a lot more experience to bear on an earlier topic, including noticing a place where their original frame was maybe 50% wrong.
- $100 to Daniel Kokotajlo for his self review of What 2026 looks like.
- $75 to Alex Flint for his self-review of Agency in Conway's Game of Life. I appreciated the link to the Game of Life community where people were researching a question Alex had posted. (My impression is they were working on that independent of Alex's work)
- $75 to Adam Shimi for his review of What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes.
- $75 to Rafael Harth for their review of Dear Self: We Need to Talk About Social Media. I was particularly interested in the articulation of how "vibe" comes across in posts, and why that matters. I did feel a bit surprised that Rafael found the particular vibe in Elizabeth's piece and not in many other LessWrong pieces (maybe he did, and just noticed it here?), but I found it a helpful frame for thinking about what value I get from posts.
- $100 to Leon Lang for their review of the Telephone Theorem.
Honorable Mentions:
These reviews didn't go into much detail, but added at least a couple new arguments or frames that I found useful, and each get $50
- Zach Stein-Perlman for their review of Fun with +12 OOMs of Compute. I appreciated him laying out the specific reasoning for how it improved his thinking.
- Adam Jermyn for their review of Your Cheerful Price.
- gears to ascension for their review of Your Cheerful Price.
- AllAmericanBreakfast for their review of Oliver Sipple. I particularly liked the evidence-evaluation investigation of "on one hand, it'd be weird to have two unrelated phenomena happening at the same time, so maybe the story just checks out. On the other hand, the two phenomena are only promoted to our attention because of..."
- Daniel Kokotajlo for his self-review of Birds, Brains, Planes and AI.
- Darmani for their self-review of Leaky Delegation: You Are Not A Commodity, with details about which of their attempts at employing their own advice hadn't worked out
- Eli Tyre for his self-review of How do we prepare for the final crunch time?
- John Wentworth on Highlights from the Autobiography of Andrew Carnegie
- Double's review of Against Neutrality About Creating Happy Lives.
- Alex Altair for his review of Lessons I've Learned From Self Teaching.
- Valentine for his review of Gravity Turn
- Daniel Kokotajlo for his review of Comments on Carlsmith's “Is power-seeking AI an existential risk?”
- Daniel Kokotajlo for his review of Fixing The Good Regulator Theorem
- A Ray for his review of Rationalism Before the Sequences. Reading the review got me thinking more about the value of good bibliographies.
- Coafos for his review of Social Behavior Curves, Equilibria and Radicalism
- Coafos for his review of Curing Insanity with Malaria.
- Srdjan Miletic for his review of https://www.lesswrong.com/posts/gziZACDg6EBpGZbJe/almost-everyone-should-be-less-afraid-of-lawsuits?commentId=i6B94ofbtKBdKridn
- Akash for his review of How To Get Into Independent Research On Alignment/Agency
- Unnamed for their review of Taboo Outside View
- iceman's review of Unnatural Categories are Optimized for Deception. I didn't feel like I learned new things from it exactly, but I do appreciate iceman laying out his reasoning, what he found valuable, and some specific reasoning that he may have done better because of it.
One more round of prize updates:
I'm hopefully sending the prizes to our accountants today, and they should be resolved within a week or so
Oh, also meant to give a $200 prize to @Akash for his review of PR is Corrosive, Reputation is not (along with the full post he wrote, inspired by it. I'm thinking of this as roughly $100 for the review itself and another $100 for the post)
We've had a ton of reviews for the LessWrong 2021 Review, and I wanted to take a moment to: a) share highlights from reviews that felt particularly valuable to me, b) announce the prizes so far, to give people a better idea of how prizes are awarded.
Tl;dr on prizes: we've awarded $4,425 in prizes so far. I've been aiming to pretty consistently give at least a $50-$100 prize for all reviews that put in some effort, and am happy to pay more for good reviews. The largest prize so far has been $200 but I'd be excited to give $500+ to in-depth reviews of posts that were ranked highly in the preliminary voting.
If you'd consider doing reviews but are uncertain about the payoff, PM me and most likely we can work something out.
Reviews I liked
Nostalgebraist on Fun with +12 OOMs of Compute
After reading Nostalgebraist's review of Fun with +12 OOMs of Compute, I felt like I actually understood the main point for the first time:
(I think Zach Stein-Perlman's review makes a similar point more succinctly, although I found nostalgebraist's opening section better for driving this point home).
Nostalgebraist notes that the actual details of the post are pretty handwavy:
AdamShimi, Joe_Collman, and Gyrodiot on Fun with +12 OOMs of Compute
Meanwhile, Adam Shimi had previously led a round of in-depth peer review in 2021, including delving into the same +12 OOMs post. Adam, Joe, and Gyrodiot's Review of "Fun with +12 OOMs of Compute" was also nominated for the LessWrong 2021 Review as a top-level post, but I wanted to draw attention to it here as a review as well.
Some of their notes on the details:
I recommend reading their post in full. I'm likely to give this post a retroactive "review prize" that's more in the $400 - $1000 range but I haven't finished thinking about what amount makes sense.
Habryka on the 2021 MIRI Dialogues
Alex Ray on Coase's "Nature of the Firm" on Polyamory
Alex Ray left a few interesting reviews. I found his discussion on 1a3orn's essay, "Coase's "Nature of the Firm" on Polyamory", pretty valuable from the standpoint of "what makes an analogy work, or not?".
gears to ascension and Adam Jermyn on Your Cheerful Price
Eliezer's Your Cheerful Price has the highest number of reviews so far – it seemed like a lot of people had the post directly influence their lives. Several reviews mostly said "yep, this was useful," but I'm including two reviews that went into some detail:
gears to ascension:
Adam Jermyn:
Adam Shimi on What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes
AllAmericanBreakfast (DirectedEvolution) on Core Pathways of Aging
AllAmericanBreakfast had both a high-level review and some nitty-gritty details examining John Wentworth's piece on the mechanics of aging. The high-level overview:
And then digs into a claim that senolytics rapidly wear off:
John Wentworth replies, and I think the back-and-forth was worthwhile.
1a3orn's Self-Review of EfficientZero: How It Works
I particularly liked this for grading how his predictions bore out.
Akash on ARC's first technical report: Eliciting Latent Knowledge
Prizes
Prize Philosophy
It's a fair amount of work to review things, and my aim has been to pay people roughly commensurate with the time/value they're putting in. My policy has been to give $50 prizes for reviews which give at least some details on the gears of why a person liked or didn't like a post, $100 for reviews that made a pretty good-faith effort to evaluate a post or provide significant new information, and then more (using my judgment) if the review seems to be some combination of higher-value and higher-effort.
So far no one has written any particularly in-depth reviews during the Review Phase. In my ideal world, the top-scoring essays would all receive a fairly comprehensive review that engages with both the gritty details of the post and the high-level "how does this fit into the surrounding landscape?". If you're interested in contributing a high-effort review for the top-ranked posts so far, you can view the top-ranked posts here and see if any of them call out to you. I'd be happy to pay $300-$1000 for very comprehensive reviews.
In the past 1-2 years there have been some posts that were extensively reviewing other posts (i.e. Reviews of “Is power-seeking AI an existential risk?”, MIRI comments on Cotra's "Case for Aligning Narrowly Superhuman Models", and Review of "Fun with +12 OOMs of Compute", etc). I am leaning towards awarding those posts with retroactive prizes but am still thinking through it.
Prizes So Far
Here's the overall prize totals so far. You can see the complete list of review-prizes here, which includes some comments on what I found valuable.
Thanks to everyone who has reviewed so far. Reminder that I'd like the top-scoring posts to get more in-depth reviews.