Re: Ayahuasca from the ACX survey having effects like:
There's a cluster of subcultures that consistently drift toward philosophical idealist metaphysics (consciousness, not matter or math, as fundamental to reality): McKenna-style psychonauts, Silicon Valley Buddhist circles, neo-occultist movements, certain transhumanist branches, quantum consciousness theorists, and various New Age spirituality scenes. While these communities seem superficially different, they share a striking tendency to reject materialism in favor of mind-first metaphysics.
The common factor connecting them? These are all communities where psychedelic use is notably prevalent. This isn't coincidental.
There's a plausible mechanistic explanation: Psychedelics disrupt the Default Mode Network and adjust a bunch of other neural parameters. When these break down, the experience of physical reality (your predictive processing simulation) gets fuzzy and malleable while consciousness remains vivid and present. This creates a powerful i...
This suggests something profound about metaphysics itself: Our basic intuitions about what's fundamental to reality (whether materialist OR idealist) might be more about human neural architecture than about ultimate reality. It's like a TV malfunctioning in a way that produces the message "TV isn't real, only signals are real!"
In meditation, this is the fundamental insight, the so-called non-dual view. You are neither the fundamental non-self nor the specific self you believe yourself to be; both are empty views, yet that view in itself is also empty, for it comes from the co-creation of reality from your own perspective, and why should that be fundamental?
Emptiness is itself empty and so can't be true either, and you just kind of fall into the realization that there are only circular or arbitrary properties of experience. Self and non-self are equally true, and living from this realization is wonderfully freeing.
If you view your self as a nested hierarchical controller and then see through it, you conclude that you can't be it, and so you cling to the next most apparent thing: that you're the entire universe. But that has to be false as well...
@Daniel Kokotajlo I think AI 2027 strongly underestimates current research speed-ups from AI. It puts the current research speed-up at ~1.13x; I expect the true number is more likely around 2x, potentially higher.
Points of evidence:
We did do a survey in late 2024 of 4 frontier AI researchers, who estimated the speedup at about 1.1-1.2x. That estimate is for their whole company, not for themselves individually.
This also matches the vibe I’ve gotten when talking to other researchers; I’d guess they’re more likely to be overestimating than underestimating the effect, due to not adjusting enough for my next point. Keep in mind that the multiplier is for overall research progress rather than a speedup on researchers’ labor; this lowers the multiplier by a bunch because compute and data are also inputs to progress.
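To give a sense of how much that adjustment matters, here is a minimal worked sketch. It assumes, purely for illustration (this is not AI 2027's actual model), that research progress is Cobb-Douglas in researcher labor and compute/data, so a speedup s on labor alone multiplies overall progress by s^alpha, where alpha is labor's share:

```python
def overall_progress_multiplier(labor_speedup: float, labor_share: float) -> float:
    """Overall research-progress multiplier when only researcher labor is sped up."""
    return labor_speedup ** labor_share

for labor_speedup in (1.2, 2.0):
    for labor_share in (0.3, 0.5, 0.7):  # hypothetical shares of progress attributable to labor
        m = overall_progress_multiplier(labor_speedup, labor_share)
        print(f"labor x{labor_speedup:.1f}, labor share {labor_share:.1f} -> overall progress x{m:.2f}")
```

Under these hypothetical shares, a 2x speedup on researcher labor corresponds to only about a 1.2-1.6x multiplier on overall progress, which is why the two kinds of numbers shouldn't be compared directly.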
How do you reconcile these observations (particularly 3 and 4) with the responses to Thane Ruthenis's question about developer productivity gains?
It was posted in early March, so after all major recent releases besides o3 (edit: and Gemini 2.5 Pro). Although Thane mentions hearing nebulous reports of large gains (2-10x) in the post itself, most people in the comments report much smaller ones, or cast doubt on the idea that anyone is getting meaningful large gains at all. Is everyone on LW using these models wrong? What do your informants know that these commenters don't?
Also, how much direct experience do you have using the latest reasoning models for coding assistance?
(IME they are good, but not that good; to my ears, "I became >2x faster" or "this is better than my best devs" sound like either accidental self-owns, or reports from a parallel universe.)
If you've used them but not gotten these huge boosts, how do you reconcile that with your points 3 and 4? If you've used them and have gotten huge boosts, what was that like (it would clarify these discussions greatly to get more direct reports about this experience)?
The new Moore's Law for AI Agents (aka More's Law) accelerated around the time people in research roles started talking a lot more about getting value from AI coding assistants. AI accelerating AI research seems like the obvious interpretation, and if true, the new exponential is here to stay. This gets us to 8-hour AIs in ~March 2026, and 1-month AIs around mid-2027.[1]
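A back-of-the-envelope version of that extrapolation, under illustrative assumptions of mine (a ~1-hour task horizon as of early 2025 and a ~4-month doubling time on the faster trend; these are not METR's published numbers):

```python
from datetime import date, timedelta
import math

start_date = date(2025, 3, 1)   # assumed reference point for the trend
start_horizon_hours = 1.0       # assumed current task horizon
doubling_months = 4.0           # assumed post-acceleration doubling time

def horizon_reached(target_hours: float) -> date:
    """Date at which the task horizon reaches target_hours under steady doubling."""
    doublings = math.log2(target_hours / start_horizon_hours)
    return start_date + timedelta(days=doublings * doubling_months * 30.44)

print("8-hour tasks: ", horizon_reached(8))     # roughly early 2026
print("1-month tasks:", horizon_reached(167))   # ~167 working hours; roughly mid-to-late 2027
```

Shifting the assumed starting horizon or doubling time moves these dates by months, not years.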
I do not expect humanity to retain relevant steering power for long in a world with one-month AIs. If we haven't solved alignment, either iteratively or once-and-for-all[2], it's looking like game over unless civilization ends up tripping over its shoelaces and we've prepared.
An extra speed-up of the curve could well happen, for example with [obvious capability idea, nonetheless redacted to reduce speed of memetic spread].
From my bird's eye view of the field, having at least read the abstracts of a few papers from most organizations in the space, I would be quite surprised if we had what it takes to solve alignment in the time that graph gives us. There aren't enough people, and most of them aren't working on things which are even trying to align a superintelligence.
My own experience is that if-statements are an Achilles heel even for 3.5, and 3.7 is somehow worse (when it's "almost" right, that's worse than useless; it's like reviewing pull requests when you don't know whether they're an adversarial attack or well-meaning but utterly incompetent in interesting, hypnotizing ways)... METR's baselines also resemble a Skinner box more than programming (many people do have that kind of job, but I don't find gig-economy conditions "humane" or representative of how "value" is actually created). And there's a sheer disconnect between what I would count as "productive", "useful projects", "bottlenecks", and "what I love about my job and what parts I'd be happy to automate" versus the completely different answers on How Much Are LLMs Actually Boosting Real-World Programmer Productivity?, even from people I know personally...
I find this graph indicative of how "value" is defined by the SF investment culture and disruptive economy... and I hope the AI investment bubble will collapse sooner rather than later...
But even if the bubble collapses, automating intelligence will not be undone, it won't suddenly become "safe", the incentives to create real AGI i...
Life is Nanomachines
In every leaf of every tree
If you could look, if you could see
You would observe machinery
Unparalleled intricacy
In every bird and flower and bee
Twisting, churning, biochemistry
Sustains all life, including we
Who watch this dance, and know this key
Rationalists try to be well calibrated and have good world models, so we should be great at prediction markets, right?
Alas, it looks bad at first glance:
I've got a hopeful guess at why people referred from core rationalist sources seem to be losing so many bets, based on my own scores. My Manifold score looks pretty bad (-M192 overall profit), but there's a fun reason for it. 100% of my resolved bets are either positive or neutral, while all but one of my unresolved bets are negative or neutral.
Here's my full prediction record:
The vast majority of my losses are on things that don't resolve soon and are widely thought to be unlikely (plus a few tiny, not particularly well thought out bets like dropping M15 on LK-99), and I'm for sure losing points there. But my actual track record, cashed out in resolutions, tells a very different story.
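Here's a minimal sketch of the split I mean, with made-up numbers (this is not Manifold's API, just the shape of the calculation):

```python
from dataclasses import dataclass

@dataclass
class Bet:
    market: str
    profit: float    # realized profit if resolved, current mark-to-market otherwise
    resolved: bool

bets = [  # hypothetical portfolio for illustration
    Bet("near-term question A", +40.0, True),
    Bet("near-term question B", +25.0, True),
    Bet("near-term question C", 0.0, True),
    Bet("long-horizon doom-flavored market", -140.0, False),
    Bet("another far-future long shot", -110.0, False),
]

realized = sum(b.profit for b in bets if b.resolved)
unrealized = sum(b.profit for b in bets if not b.resolved)
print(f"realized (resolved) profit:  {realized:+.0f}")
print(f"unrealized (open) positions: {unrealized:+.0f}")
print(f"headline overall profit:     {realized + unrealized:+.0f}")
```

A headline number that mixes the two can look bad even when every resolved bet went well.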
I wonder if there are some clever stats that @James Grugett @Austin Chen or others on the team could do to disentangle these effects, and see what the quality-adjusted bets on critical questions like the AI doom ones would be absent this kind of effect. I'd be excited to see the UI showing an extra column on the referrers table showing cashed out ...
A couple of months ago I did some research into the impact of quantum computing on cryptocurrencies; it seems like it could be significant, and a decent number of LWers hold cryptocurrency. I'm not sure if this is the kind of content that's wanted, but I could write up a post on it.
My current guess as to Anthropic's effect:
Shorter due to:
[set 200 years after a positive singularity at a Storyteller's convention]
My friends, my friends, good news I say
The anniversary’s today
A challenge faced, a future won
When almost came our world undone
We thought for years, with hopeful hearts
Past every one of the false starts
We found a way to make aligned
With us, the seed of wondrous mind
They say at first our child-god grew
It learned and spread and sought anew
To build itself both vast and true
For so much work there was to do
Once it had learned enough to act
With the desired care and tact
It s...
Thinking about some things I may write. If any of them sound interesting to you, let me know and I'll probably be much more motivated to create them. If you're up for reviewing drafts and/or having a video call to test ideas, that would be even better.