by plex
This is a special post for quick takes by plex. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
82 comments
[-]plex*5826

Re: Ayahuasca from the ACX survey having effects like: 

  • “Obliterated my atheism, inverted my world view no longer believe matter is base substrate believe consciousness is, no longer fear death, non duality seems obvious to me now.”

[1]There's a cluster of subcultures that consistently drift toward philosophical idealist metaphysics (consciousness, not matter or math, as fundamental to reality): McKenna-style psychonauts, Silicon Valley Buddhist circles, neo-occultist movements, certain transhumanist branches, quantum consciousness theorists, and various New Age spirituality scenes. While these communities seem superficially different, they share a striking tendency to reject materialism in favor of mind-first metaphysics.

The common factor connecting them? These are all communities where psychedelic use is notably prevalent. This isn't coincidental.

There's a plausible mechanistic explanation: Psychedelics disrupt the Default Mode Network and adjust a bunch of other neural parameters. When these break down, the experience of physical reality (your predictive processing simulation) gets fuzzy and malleable while consciousness remains vivid and present. This creates a powerful i... (read more)


This suggests something profound about metaphysics itself: Our basic intuitions about what's fundamental to reality (whether materialist OR idealist) might be more about human neural architecture than about ultimate reality. It's like a TV malfunctioning in a way that produces the message "TV isn't real, only signals are real!"

In meditation, this is the fundamental insight, the so-called non-dual view. You are neither the fundamental non-self nor the specific self you believe yourself to be; both are empty views, yet that view is in itself also empty. For that view comes from the co-creation of reality from your own perspective, and why should that be fundamental?

Emptiness is itself empty and so can't be ultimately true, and you just kind of fall into the realization that there are only circular or arbitrary properties of experience. Self and non-self are just as true as each other, and living from this experience is wonderfully freeing.

If you view your self as a nested hierarchical controller and you see through it, then you believe that you can't be it, and so you cling onto what is next most apparent: that you're the entire universe. But that has to be false as well... (read more)

2plex
Nice! I haven't read a ton of Buddhism, cool that this fits into a known framework.  Yeah, ~subjective experience.
3[anonymous]
I sent the following offer (lightly edited) to discuss metaphysics to plex. I extend the same to anyone else.[1] (Note: I don't do psychedelics or participate in the mentioned communities[2], and I'm deeply suspicious of intuitions/deep-human-default-assumptions. I notice unquestioned intuitions across views on this, and the primary thing I'd like to do in discussing is try to get the other to see theirs.)
1. ^ Though I'll likely lose interest if it seems like we're talking past each other / won't resolve any cruxy disagreements.
2. ^ (except arguably the qualia research institute's discord server, which might count because it has psychedelics users in it)
3. ^ (Questioning with the goal of causing the questioned one to notice specific assumptions or intuitions to their beliefs, as a result of trying to generate a coherent answer)
4. ^ From an unposted text:
2Noosphere89
Alright, I'll try to answer the questions:
1. I think qualia is rescuable, in a sense, and my specific view is that they exist as a high-level model. As far as what that qualia is, I think it's basically an application of modeling the world in order to control something, and thus qualia, broadly speaking, is your self-model. As far as my exact views on qualia, the links below are helpful: https://www.lesswrong.com/posts/FQhtpHFiPacG3KrvD/seth-explains-consciousness#7ncCBPLcCwpRYdXuG https://www.lesswrong.com/posts/NMwGKTBZ9sTM4Morx/linkpost-a-conceptual-framework-for-consciousness
2. My general answer to these questions is probably computation/programs/mathematics, with the caveat that these notions are very general, and thus don't explain anything specific about our world. I personally agree with this on what counts as real: what breathes fire into the equations of our specific world is either an infinity of computational resources, or a very large amount of computational resources. As far as what mathematics is, I definitely like the game analogy where we agree to play a game according to specified rules, though another way to describe mathematics is as a way to generalize all of the situations you encounter and abstract from specific detail, and it is also used to define what something is.
2plex
Let's do most of this via the much higher bandwidth medium of voice, but quickly:
1. Yes, qualia[1] is real, and is a class of mathematical structure.[2]
2. (placeholder for not a question item)
  1. Matter is a class of math which is ~kinda like our physics.
  2. Our part of the multiverse probably doesn't have special "exists" tags, probably everything is real (though to get remotely sane answers you need a decreasing reality fluid/caring fluid allocation).
  3. Math, in the sense I'm trying to point to it, is 'Structure'. By which I mean: well defined seeds/axioms/starting points and precisely specified rules/laws/inference steps for extending those seeds. The quickest way I've seen to get the intuition for what I'm trying to point at with 'structure' is to watch these videos in succession (but it doesn't work for everyone):
1. ^ experience/the thing LWers tend to mean, not the most restrictive philosophical sense (#4 on SEP) which is pointlessly high complexity (edit: clarified that this is not the universal philosophical definition, but only one of several meanings, walked back a little on rhetoric)
2. ^ possibly maybe even the entire class, though if true most qualia would be very very alien to us and not necessarily morally valuable
[-]plex*33-25

@Daniel Kokotajlo I think AI 2027 strongly underestimates current research speed-ups from AI. It expects the research speed-up is currently ~1.13x. I expect the true number is more likely around 2x, potentially higher.

Points of evidence:

  1. I've talked to someone at a leading lab who concluded that AI getting good enough to seriously aid research engineering is the obvious interpretation of the transition to a faster doubling time on the METR benchmark. I claim advance prediction credit for new datapoints not returning to 7 months, and instead holding out at 4 months. They also expect more phase transitions to faster doubling times; I agree and stake some epistemic credit on this (unsure when exactly, but >50% on this year moving to a faster exponential).
  2. I've spoken to a skilled researcher originally from physics who claims dramatically higher current research throughput. Often 2x-10x, and many projects that she'd just not take on if she had to do everything manually.
  3. The leader of an 80 person engineering company which has the two best devs I've worked with recently told me that for well-specified tasks, the latest models are now better than their top devs. He said engineering is no
... (read more)

We did do a survey in late 2024 of 4 frontier AI researchers who estimated the speedup was about 1.1-1.2x. This is for their whole company, not themselves.


This also matches the vibe I’ve gotten when talking to other researchers; I’d guess they’re more likely to be overestimating than underestimating the effect, due to not adjusting enough for my next point. Keep in mind that the multiplier is for overall research progress rather than a speedup on researchers’ labor; this lowers the multiplier by a bunch because compute/data are also inputs to progress.
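To make that dilution concrete, here's a rough Amdahl's-law-style sketch; the 50% labor share is purely an illustrative assumption, not the parameter our model actually uses:

```python
# Rough sketch: if only part of research progress is bottlenecked on researcher
# labor, a labor speedup translates into a smaller overall progress speedup.
# The 0.5 labor share is an illustrative assumption, not AI 2027's actual parameter.

def overall_speedup(labor_speedup: float, labor_share: float = 0.5) -> float:
    """Amdahl's-law-style dilution: the non-labor share (compute, data) is not sped up."""
    return 1 / ((1 - labor_share) + labor_share / labor_speedup)

print(overall_speedup(1.3))  # ~1.13x overall progress from a 1.3x labor speedup
print(overall_speedup(2.0))  # ~1.33x overall progress from a 2x labor speedup
```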

5Daniel Kokotajlo
That said, we just talked to another frontier AI company researcher who said the speedup was 2x. I disagree with them but it's a data point at least.
2plex
Okay, that updates me some. I'm curious what your alternate guess is for the transition to the faster exponential on the METR long-horizon tasks, and whether you expect that to hold up or to turn out not to be tracking something important? (also please note that via me you now also have a very recent datapoint of a frontier AI researcher who thinks the METR speed-up of ~2x was mostly due to AI accelerating research) Edit: How late in 2024? Because the trendline was only just starting to become apparent even right near the end and was almost invisible a couple of months earlier, it's pretty plausible to me that if you re-ran that survey now you would get different results. The researchers inside will have had a sense somewhat before releases, but lag on updating is also real.
8elifland
This was from Nov 2024 to Mar 2025 so fairly recent. I think the transition to faster was mostly due to the transition to reasoning models and perhaps the beginnings of increased generalization from shorter to longer time horizons. Edit: the responses are from between Nov 2024 and Mar 2025. Responses are in increasing order: 1.05-1.1, 1.15, 1.2, 1.3, 2. The lowest one is the most recent but is from a former not current frontier AI researcher.
2plex
The switch to reasoning models does line up well, probably more cleanly. Moved that to main hypothesis, thanks. Having some later responses makes it less likely they missed the change; curious whether the other responses were closer to Dec or March. I would guess excluding the not-current-researcher one probably makes sense? The datapoint from me is not exactly 2x on this, but 'most of an approximately 2x', so it would need revisiting with the exact question before it could be directly included, and I'd imagine you'd want the source. I still have some weight on a higher research boost from AI than your model expects, due to other lines of evidence, but I'm not putting quite as much weight on it.
2elifland
Most of the responses were in Nov.
2plex
That seems like stale data, given how these graphs look. Even with the updates you caused, I'm happy to offer an even-odds token bet ($100?) that a rerun of a similar survey would give a significantly higher average (at least +0.2 over the predicted 1.13x, i.e. about the level you expect in Dec 2025). I'd be even happier if the question asked about the researchers' own productivity, as that seems like something they'd have better vision of, but that would be pretty noisy with a small sample, so it's reasonable to stick with the original question.
2elifland
You mean the median would be at least 1.33x rather than the previous 1.2x? Sounds about right, so I don't feel the need to bet against. Also I'm not planning on doing a follow-up survey, but would be excited for others to.
2plex
Your website lists:
* April 2025 as 1.13x
* August 2025 as 1.21x
* December 2025 as 1.30x
* December 2024 as 1.05x (which seems contradicted by your survey, if the replies were in November)
If you think today's number is ~1.33x, we're ~7 months ahead of schedule vs the listed forecast, unless I'm really misreading something. Also, re: "would be excited for others to", is the survey public or easy to share if someone wanted to use the same questions? And I'd bet 1:4 that the current number is actually >1.5x, if that's more interesting. You've updated me to not have that as the main expectation, but it still seems pretty plausible. Obviously this depends on someone rerunning the survey, and it's reasonable that you've got your hands full with other things right now.
2elifland
I also realized that, confusingly, the survey asks about speedup vs. no post-2022 AIs, while I believe the scenario side panel is for no post-2023 AIs, which should make the side panel numbers lower; unclear exactly how much, given 2023 AIs weren't particularly useful.
2plex
I can switch the number to 2023?
2elifland
Yup, seems good
2plex
Okay, switched. I'm curious about why you didn't set the baseline to "no AI help", especially if you expect pre-2024 AI to be mostly useless, as that seems like a cleaner comparison than asking people to remember how good old AIs were?
2elifland
No AI help seems harder to compare to since it's longer ago; it seems easiest to think of something close to today as the baseline when thinking about future speedups. Also for timelines/takeoff modeling it's a bit nicer to set the baseline to be more recent (looks like for those we again confusingly allowed 2024 AIs in the baseline as well, rather than just 2023; perhaps I should have standardized that with the side panel).
2plex
I think this risks people underappreciating how much progress is being sped up; my naive read of the UI was that the numbers were based on "no AI", and I'd bet most readers would think the same at a glance. Changing the text from "AI provides the following speedups:" to "AI provides the following speedups from a baseline of 2022/3 AI:" would resolve this (I would guess common) misreading.
2elifland
Yup feel free to make that change, sounds good
2plex
Clarification:
1. Change the form to ask about the speedup relative to no AI assistance?
2. Change the website to refer to "AI provides the following speedups from a baseline of 2022/3 AI:"? (I don't have write access)
(assuming 1 for now, will revert if incorrect)
2elifland
Oh I misunderstood you sorry. I think the form should have post-2023, not sure about the website because it adds complexity and I'm skeptical that it's common that people are importantly confused by it as is.
2elifland
I think the survey is an overestimate for the reason I gave above: this stuff is subtle, and researchers are likely to underestimate the decrease from labor speedup to progress speedup, especially in this sort of survey which didn't involve discussing it with them verbally. Based on their responses to other questions in the survey, it seems like at least 2 people didn't understand the difference between labor and overall progress/productivity. Here is the survey: https://forms.gle/6GUbPR159ftBQcVF6. The question we're discussing is: "[optional] What is the current productivity multiplier on algorithmic progress due to AI assistance?" Edit: Also, we didn't spend large amounts of time on these early numbers; they're not meant to be that precise, just rough best guesses.
2plex
Wait, actually, I want to double click on this. What was the process that caused you to transform the number you got from the survey (1.2x) to the number on the website (1.05x)? Is there a question that could be asked which would not require a correction? Or which would have a pre-registered correction?[1] 1. ^ Bonus: Was this one pre-registered?
4elifland
I'm not sure what the exact process was; tbh my guess is that they were estimated mostly independently but likely sanity-checked against the survey to some extent. It seems like they line up about right, given the 2022 vs. 2023 difference, the intuition regarding underadjusting for labor->progress, and giving weight to our own views as well rather than just the survey, given that we've thought more about this than survey takers (while of course they have the advantage of currently doing frontier AI research). I'd make less of an adjustment if we asked people to give their reasoning, including the adjustment from labor speedup to overall progress speedup, and only included people who gave answers that demonstrated good understanding of this consideration and a not obviously unreasonable adjustment level.
2plex
Alright, my first-pass guess would have been that algorithmic progress seems like the kind of thing that eats a much smaller penalty than most forms of org-level progress: not none, but not a 75% reduction, and not likely more than a 50% reduction. But you guys have the track record. Cool, added a nudge to the last question.
2elifland
I think it's not worth getting into this too much more as I don't feel strongly about the exact 1.05x, but I feel compelled to note a few quick things:
1. I'm not sure exactly what you mean by eating a smaller penalty, but I think the labor->progress penalty is quite large.
2. The right way to think about 1.05x vs. 1.2x is not a 75% reduction, but instead: what is the exponent n for which 1.05^n = 1.2?
3. Remember the 2022 vs. 2023 difference, though my guess is that the responses wouldn't have been that sensitive to this.
Also one more thing I'd like to pre-register: people who fill out the survey who aren't frontier AI researchers will generally report higher speedups, because their work is generally less compute-loaded and sometimes more greenfieldy or requiring less expertise, but we should give by far the most weight to frontier AI researchers.
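A quick illustrative calculation of that exponent, just to make the comparison concrete:

```python
import math

# How many 1.05x steps compound to a 1.2x speedup?
n = math.log(1.2) / math.log(1.05)
print(round(n, 2))  # ~3.74: 1.2x is nearly four compounded 1.05x-sized steps, not a 4x gap
```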
2plex
(feel free to not go any deeper, appreciate you having engaged as much as you have!)
1. Yup, was just saying my first-pass guess would have been a less large labour->progress penalty. I do defer here fairly thoroughly.
Hmm, seems true if you're expecting the people to not have applied a correction already, but less true if they are already making a correction and you're estimating how wrong their correction is?
And yup, agree with that preregistration on all counts.
2plex
That resolves the inconsistency. I do worry that dropping a 20% speed-up to a 5% one, especially if post hoc, might cover up some important signal, but I'm sure you've put dramatically more cycles into thinking about this than me. Thanks for the survey; would it make sense to just pass this form around so the numbers go to the same place and you'll check them, or should I make a copy and send you results if I get them?
2elifland
I think a copy would be best, thanks!
2plex
This survey looks like it's asking something different? It's asking about human range, no mention of speed-up from AI.
2elifland
Look at the question I mentioned above about the current productivity multiplier
2plex
Oh, yup, missed that optional question in my ctrl-f. Thanks!

How do you reconcile these observations (particularly 3 and 4) with the responses to Thane Ruthenis's question about developer productivity gains?

It was posted in early March, so after all major recent releases besides o3 (edit: and Gemini 2.5 Pro).  Although Thane mentions hearing nebulous reports of large gains (2-10x) in the post itself, most people in the comments report much smaller ones, or cast doubt on the idea that anyone is getting meaningful large gains at all.  Is everyone on LW using these models wrong?  What do your informants know that these commenters don't?


Also, how much direct experience do you have using the latest reasoning models for coding assistance?

(IME they are good, but not that good; to my ears, "I became >2x faster" or "this is better than my best devs" sound like either accidental self-owns, or reports from a parallel universe.)

If you've used them but not gotten these huge boosts, how do you reconcile that with your points 3 and 4?  If you've used them and have gotten huge boosts, what was that like (it would clarify these discussions greatly to get more direct reports about this experience)?

4Noosphere89
A flag is that to the extent that the 4 month doubling time is based on RL with verifiable rewards/RL on CoT, this may not hold for long, because the paper provides evidence that RL doesn't actually increase capabilities indefinitely, and puts a pretty harsh limit on how far RL can scale (but see @Jozdien's response to the paper below): https://www.lesswrong.com/posts/s3NaETDujoxj4GbEm/tsinghua-paper-does-rl-really-incentivize-reasoning-capacity#Mkuqt7x7YojpJuCGt (OG post) https://www.lesswrong.com/posts/s3NaETDujoxj4GbEm/tsinghua-paper-does-rl-really-incentivize-reasoning-capacity#Mkuqt7x7YojpJuCGt (Jozdien's response)
2plex
Nice, so if we return to a 7-month doubling time in the not-too-distant future, that's compatible with reasoning models being the cause but not with AI accelerating development. Cool, looking forward to seeing how this unfolds, and I've set up a market.
3Mis-Understandings
A contextualization for people touting big personal speedup numbers: people get way more productive by rethinking their workflow, especially in research. Not all the time, but it was not an unprecedented story in 2015 either. Remember when people were talking about 10x engineers in the 2010s? Discovering that, in a new workflow, you are the 10x engineer is not unprecedented. The question is whether the rate of (try new thing) -> (clicks with workflow, so output jumps) is higher now. Sometimes people got 10x more productive from some change before any of this, so understand that any change in workflow has a noise floor, even at these productivity leaps.
2faul_sname
I expect that the leaders of many software development companies would report similar things, especially if they were talking to someone who might at some point talk to their board or to potential investors. I expect most venture-funded software development companies, and especially most that will not reach positive net revenue in the foreseeable future, have internal channels dedicated to "what we are doing with AI" that the leadership team is watching intently and passing anything even slightly positive on to their board.
2Cole Wyeth
Recent analysis hasn’t shown much economic impact
4plex
I'm not claiming economic impact; I'm claiming AI research speed-up. I expect a software-only singularity; economic indicators may well not show dramatic changes before human disempowerment is inevitable.
5Cole Wyeth
It seems unlikely to me that software engineers are getting a vast multiplier from this technology while no one else is getting much. 
4plex
This is not idle speculation; this is something I have checked. I've worked with a lot of devs and spoken to them about their AI use. I've spoken to two people who lead large (80+ person) dev teams; one of them recently let go 30 devs specifically because those people were not integrating AI into their workflows fast enough, so they were moving too slowly compared to people who had. Another said AI meant engineering was essentially no longer a bottleneck. Also: many other professions are using AI a lot. This mostly looks like people semi-automating their own job, which doesn't show up much in economic statistics. Please be aware that if your read on the situation is more than a couple of months old, it's stale data. The world is moving fast now.
5Cole Wyeth
I have never heard anything like this, and I am not persuaded by this anecdotal evidence. It seems pretty hard to believe on various levels. How do you (or how did he) know all 30 people were not integrating AI into their workflows fast enough? If it is really such a huge force multiplier that integration is the primary driver behind dev productivity, why don't I find AI very useful for any serious project, despite trying it every week or so? Will he regret his decision in a few months? Do you have any statistics to back this anecdote up?    I'm working out of LISA right now, so I doubt my read on the situation is more than a couple of weeks old.
3plex
It's a fair few anecdotes, plus some things like 25% of Google's code being written by AI as of October (and compare October's models with today's), the share of Claude tokens spent on code from their report, etc. I think I'll tap out from this; I don't think trying to persuade you here is a sensible focus.
2Cole Wyeth
You seem to be referring to comments from the CEO that more than 25% of code at Google is written by AI (and reviewed by humans). I’m not sure how reliable this number is, and it remains to be seen whether this is sustainable. It also doesn’t seem like a vast productivity boost (though it would be pretty significant, probably more than I expect, so would update me). 
4Viliam
I guess the professions that benefit most would be limited by two factors:
* they work with virtual stuff, not material objects
* mistakes don't matter, because they are easy to notice
Translators get a huge multiplier, because you can skim the automatic translation and notice when something feels off. Software engineers can use unit tests. Who else is in this group?
2Cole Wyeth
Many bugs will not be caught by unit tests.
2Viliam
Yes, but enough bugs caught can be enough to switch the equation from "this is not worth doing" to "worth doing, even if we need to check everything twice".
[-]plex153

The new Moore's Law for AI Agents (aka More's Law) accelerated at around the time people in research roles started to talk a lot more about getting value from AI coding assistants. AI accelerating AI research seems like the obvious interpretation, and if true, the new exponential is here to stay. This gets us to 8-hour AIs in ~March 2026, and 1-month AIs around mid 2027.[1]
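A rough sketch of that extrapolation; the ~1-hour task horizon around March 2025 and the clean 4-month doubling time are illustrative assumptions based on my read of the METR trend, not METR's official fit:

```python
import math
from datetime import date, timedelta

# Illustrative assumptions (not METR's official numbers): a ~1 hour task horizon
# around March 2025, doubling every 4 months on the faster trend.
START_DATE = date(2025, 3, 1)
START_HORIZON_HOURS = 1.0
DOUBLING_TIME_MONTHS = 4

def date_for_horizon(target_hours: float) -> date:
    """Extrapolate when the task horizon reaches target_hours under these assumptions."""
    doublings = math.log2(target_hours / START_HORIZON_HOURS)
    return START_DATE + timedelta(days=doublings * DOUBLING_TIME_MONTHS * 30.4)

print(date_for_horizon(8))        # ~March 2026: 8-hour tasks
print(date_for_horizon(21 * 8))   # ~mid-to-late 2027: roughly a working month of tasks
```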

I do not expect humanity to retain relevant steering power for long in a world with one-month AIs. If we haven't solved alignment, either iteratively or once-and-for-all[2], it's looking like game over unless civilization ends up tripping over its shoelaces and we've prepared.

  1. ^

    An extra speed-up of the curve could well happen, for example with [obvious capability idea, nonetheless redacted to reduce speed of memetic spread].

  2. ^

    From my bird's eye view of the field, having at least read the abstracts of a few papers from most organizations in the space, I would be quite surprised if we had what it takes to solve alignment in the time that graph gives us. There's not enough people, and they're mostly not working on things which are even trying to align a superintelligence.

My own experience is that if-statements are the Achilles heel of even 3.5, and 3.7 is somehow worse (when it's "almost" right, that's worse than useless; it's like reviewing pull requests when you don't know if it's an adversarial attack or if they mean well but are utterly incompetent in interesting, hypnotizing ways)... and that METR's baselines more resemble a Skinner box than programming (though many people have that kind of job, I just don't find the conditions of the gig economy "humane" or representative of how "value" is actually created), and there's a sheer disconnect between what I would call "productive", "useful projects", "bottlenecks", and "what I love about my job and what parts I'd be happy to automate" vs the completely different answers on How Much Are LLMs Actually Boosting Real-World Programmer Productivity?, even from people I know personally...

I find this graph indicative of how "value" is defined by the SF investment culture and disruptive economy... and I hope the AI investment bubble will collapse sooner rather than later...

But even if the bubble collapses, automating intelligence will not be undone, it won't suddenly become "safe", the incentives to create real AGI i... (read more)

6Garrett Baker
Note the error bars in the original
2plex
Looks on trend for the new exponential; my next prediction is a few more months of this before we transition to a faster exponential. And a clarification re: the locally invalid react: I'm not trying to defend or justify my conclusion that the faster exponential is due to software automation speeding up research. I'm bringing something to attention which will likely just click if you've been following devs talk about their AI use over the past year or two, and which would take more bandwidth to argue for extensively than makes sense for me to spend here.
[-]plex150

Life is Nanomachines

In every leaf of every tree
If you could look, if you could see
You would observe machinery
Unparalleled intricacy
 
In every bird and flower and bee
Twisting, churning, biochemistry
Sustains all life, including we
Who watch this dance, and know this key

Illustration: A magnified view of a vibrant green leaf, where molecular structures and biological nanomachines are visible. Hovering nearby, a bird's feathers reveal intricate molecular patterns. A bee is seen up close, its body showcasing complex biochemistry processes in the form of molecular chains and atomic structures. Nearby, a flower's petals and stem reveal the dance of biological nanomachines at work. Human silhouettes in the background observe with fascination, holding models of molecules and atoms.

6niplav
Related: Video of the death of a single-celled Blepharisma.
[-]plex*125

Rationalists try to be well calibrated and have good world models, so we should be great at prediction markets, right?

Alas, it looks bad at first glance:

I've got a hopeful guess at why people referred from core rationalist sources seem to be losing so many bets, based on my own scores. My manifold score looks pretty bad (-M192 overall profit), but there's a fun reason for it. 100% of my resolved bets are either positive or neutral, while all but one of my unresolved bets are negative or neutral.

Here's my full prediction record:

The vast majority of my losses are on things that don't resolve soon and are widely thought to be unlikely (plus a few tiny, not particularly well thought out bets, like dropping M15 on LK-99), and I'm for sure losing points there. But my actual track record, cashed out in resolutions, tells a very different story.
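A sketch of the realized vs. unrealized split I mean, with made-up field names and numbers purely for illustration (not Manifold's actual export format, and not my real record):

```python
# Made-up bet records for illustration; not Manifold's API schema or my actual bets.
bets = [
    {"market": "short-horizon question A",         "resolved": True,  "profit": 40},
    {"market": "short-horizon question B",         "resolved": True,  "profit": 0},
    {"market": "long-horizon doom-style question", "resolved": False, "profit": -230},
]

realized = sum(b["profit"] for b in bets if b["resolved"])
unrealized = sum(b["profit"] for b in bets if not b["resolved"])
print(f"realized: {realized}, unrealized (mark-to-market): {unrealized}")
# The headline "overall profit" mixes both; splitting them shows whether losses come
# from resolved mistakes or from current marks on long-horizon, unresolved markets.
```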

I wonder if there are some clever stats that @James Grugett @Austin Chen  or others on the team could do to disentangle these effects, and see what the quality-adjusted bets on critical questions like the AI doom ones would be absent this kind of effect. I'd be excited to see the UI showing an extra column on the referrers table showing cashed out ... (read more)

4habryka
These datapoints just feel like the result of random fluctuations. Both Writer and Eliezer mostly drove people to participate in the LK-99 stuff, where lots of people were confidently wrong. In general you can see that basically all the top referrers have negative income: among the top 10, Eliezer and Writer are somewhat better than average (and yaboi is a huge outlier, which I'd guess is explained by them doing something quite different from the other people).
4plex
Agree, expanding to the top 9[1] makes it clear they're not unusual in having large negative referral totals. I'd still expect Ratia to be doing better than this, and would guess a bunch of that comes from betting against common positions on doom markets, simulation markets, and other things which won't resolve anytime soon (and betting at times when the prices are not too good, because of correlations in when that group is paying attention). 1. ^ Though the rest of the leaderboard seems to be doing much better
2Garrett Baker
The interest rate on Manifold makes such investments not worth it anyway, even if everyone else's positions looked wrong to you.

A couple of months ago I did some research into the impact of quantum computing on cryptocurrencies; it seems maybe significant, and a decent number of LWers hold cryptocurrency. I'm not sure if this is the kind of content that's wanted, but I could write up a post on it.

7ChristianKl
Writeups of good research are generally welcome on LessWrong.
2Document
Did you?
6plex
I think I have a draft somewhere, but never finished it. tl;dr: Quantum computers let you steal private keys from public keys (so every wallet that has made a send transaction is exposed). Upgrading can protect wallets where people move their coins, but it's going to be messy and slow, and it won't work for lost-key wallets, which are a pretty huge fraction of the total BTC reserve. Once we get quantum computers, BTC at least is going to have a very bad time; others will have a moderately bad time depending on how early they upgrade.
[-]plex61

My current guess as to Anthropic's effect:

  1. 0-8 months shorter timelines[1]
  2. Much better chances of a good end in worlds where superalignment doesn't require strong technical philosophy[2] (but I put very low odds on being in this world)
  3. Somewhat better chances of a good end in worlds where superalignment does require strong technical philosophy[3]
  1. ^

    Shorter due to:

    • There being a number of people who might otherwise not have been willing to work for a scaling lab, or not do so as enthusiastically/effectively (~55% weight)
    • Encouraging race dynamics (~30%)
    • Making
... (read more)
2Mateusz Bagiński
how low?
4plex
eh, <5%? More that we might be able to get the AIs to do most of the heavy lifting of figuring this out, but that's a sliding scale of how much oversight the automated research systems need to not end up in wrong places.
1williawa
I basically agree with this. Or I'd put a 20% chance on us being in the worlds "where superalignment doesn't require strong technical philosophy", which is maybe not very low. Overall I think the existence of Anthropic is a mild net positive, and it's the only major lab for which this is true (major in the sense of building frontier models). "The existence of" meaning: if they had shut down today, or 2 years ago, it would not have increased our chance of survival, and might have lowered it. I'm also somewhat more optimistic about the research they're doing helping us in the case where alignment is actually hard.
[-]plex30

[set 200 years after a positive singularity at a Storyteller's convention]

If We Win Then...

My friends, my friends, good news I say
The anniversary’s today
A challenge faced, a future won
When almost came our world undone

We thought for years, with hopeful hearts
Past every one of the false starts
We found a way to make aligned
With us, the seed of wondrous mind

They say at first our child-god grew
It learned and spread and sought anew
To build itself both vast and true
For so much work there was to do

Once it had learned enough to act
With the desired care and tact
It s... (read more)

[-]plex20

Titles of posts I'm considering writing in comments, upvote ones you'd like to see written.

[-]plex240

Why SWE automation means you should probably sell most of your cryptocurrency

1FinalFormal2
Super curious- are you willing to give a sentence or two on the take here?
2plex
A_donor wrote up some thoughts we had: https://www.lesswrong.com/posts/3eXwKcg3HqS7F9s4e/swe-automation-is-coming-consider-selling-your-crypto-1
[-]plex130

An opinionated tour of the AI safety funding landscape

9plex
Grantmaking models and bottlenecks
7plex
Ayahuasca: Informed consent on brain rewriting (based on anecdotes and general models, not personal experience)
3gwern
Can't you do this as polls in a single comment?
4plex
LW supports polls? I'm not seeing it in https://www.lesswrong.com/tag/guide-to-the-lesswrong-editor, unless you mean embedding a manifold market, which would work but adds an extra step between people and voting unless they're already registered on manifold.
3plex
Mesa-Hires - Making AI Safety funding stretch much further
[-]plex*20

Thinking about some things I may write. If any of them sound interesting to you let me know and I'll probably be much more motivated to create it. If you're up for reviewing drafts and/or having a video call to test ideas that would be even better.

  • Memetics mini-sequence (https://www.lesswrong.com/tag/memetics has a few good things, but no introduction to what seems like a very useful set of concepts for world-modelling)
    • Book Review: The Meme Machine (focused on general principles and memetic pressures toward altruism)
    • Meme-Gene and Meme-Meme co-evolution (fo
... (read more)
1cSkeleton
Hi, did you ever go anywhere with Conversation Menu? I'm thinking of doing something like this related to AI risk, to try to quickly get people to the arguments around their initial reaction, and if helping with something like this is the kind of thing you had in mind with Conversation Menu, I'm interested to hear any more thoughts you have around it. (Note: I'm thinking of fading in buttons more than a typical menu.) Thanks!