LESSWRONG

Davidmanheim

Sequences

Posts


Wikitag Contributions

Comments

Modeling Transformative AI Risk (MTAIR)
7 · Davidmanheim's Shortform · Ω · 6mo · 18
Far-UVC Light Update: No, LEDs are not around the corner (tweetstorm)
Davidmanheim · 18h · 10

But we weren't talking about 254 nm, we were talking about 222 nm, which could / should be skin-safe, at least.

The Industrial Explosion
Davidmanheim · 18h · 40

Yeah, I think Thomas was arguing in the opposite direction; he argued that you "underrate the capabilities of superintelligence," and I was responding to explain why that wasn't addressing the same scenario as your original post.

BIG-Bench Canary Contamination in GPT-4
Davidmanheim · 3d · Ω240

Flagging that I just found that Google Gemini also has this contamination: https://twitter.com/davidmanheim/status/1939597767082414295

The Industrial Explosion
Davidmanheim · 3d · 20

The macroscopic biotech that accomplishes what you're positing is addressed in the first part, and in the earlier comment where I note that you're assuming ASI-level understanding of bio for exploring an exponential design space for something that isn't guaranteed to be possible. The difficulty isn't unclear; it's understood not to be feasible.

The Industrial Explosion
Davidmanheim · 4d · 31

Given the premises, I guess I'm willing to grant that this isn't a silly extrapolation, and absent them, it seems like you basically agree with the post?

However, I have a few notes on why I'd reject your premises.

On your first idea, I think high-fidelity biology simulators require so much understanding of biology that they are subsequent to ASI, rather than a replacement. And even then, you're still trying to find something by searching an exponential design space - which is nontrivial even for AGI with feasible amounts of "unlimited" compute. Not only that, but the thing you're looking for needs to do a bunch of stuff that probably isn't feasible due to fundamental barriers (not identical to the ones listed there, but closely related to them).
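For a sense of scale on that exponential design space, here is a minimal back-of-the-envelope sketch (my illustrative numbers, not the comment's): even a single protein of length 100 already has 20^100, roughly 10^130, possible sequences, far beyond what any plausible compute budget could brute-force.

```python
# Back-of-the-envelope size of an exponential biological design space.
# Illustrative assumption: sequences of length 100 over the 20 standard amino acids.
import math

seq_length = 100                  # assumed sequence length
num_sequences = 20 ** seq_length  # size of the raw sequence space
print(f"20^{seq_length} is about 10^{math.log10(num_sequences):.0f}")  # -> about 10^130
```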

On your second idea, a software-only singularity assumes that there is a giant compute overhang for some specific buildable general AI that doesn't even require specialized hardware. Maybe so, but I'm skeptical; the brain can't be simulated directly via Deep NNs, which is what current hardware is optimized for. And if some other hardware architecture using currently feasible levels of compute is devised, there still needs to be a massive build-out of these new chips - which then allows "enough compute has been manufactured that nanotech-level things can be developed." But that means you again assume that arbitrary nanotech is feasible, which could be true, but as the other link notes, certainly isn't anything like obvious.

The Industrial Explosion
Davidmanheim · 4d · 40

How strong a superintelligence are you assuming, and what path did it follow? If it's already taken over mass production of chips to the extent that it can massively build out its own capabilities, we're past the point of industrial explosion. And if not, where did these capabilities (evidently far stronger than even the collective abilities of humanity, given what's being presumed) emerge from?

The Industrial Explosion
Davidmanheim · 4d · 4-2

I'm very confused by this response - if we're talking about strong quality superintelligence, as opposed to cooperative and/or speed superintelligence, then the entire idea of needing an industrial explosion is wrong, since (by assumption) the superintelligent AI system is able to do things that seem entirely magical to us.

The Industrial Explosion
Davidmanheim · 4d · 41

The idea that near-term AI will be able to design biological systems to do arbitrary tasks is a bit silly, based on everything we know about the question. That is, you'd need a very strong, ASI-level understanding of biology to accomplish this, at which point the question of industrial explosion is solidly irrelevant.

Intelligence Is Not Magic, But Your Threshold For "Magic" Is Pretty Low
Davidmanheim · 17d · 2119

Organizations can't spawn copies for a linear increase in cost, can't run at faster-than-human speeds, and generally suck at project management due to incentives. LLM agent systems seem poised to be insanely more powerful.
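As a toy illustration of that scaling contrast (my framing and assumptions, not the comment's): the cost of running N agent copies grows roughly linearly in N, while the number of pairwise communication channels in an N-person organization grows quadratically, which is one standard way to gesture at coordination overhead.

```python
# Toy scaling comparison (illustrative assumptions, not a real cost model):
# agent copies cost roughly N * (cost of one copy), while pairwise
# coordination channels in an organization grow as N * (N - 1) / 2.
def agent_cost(n_copies: int, cost_per_copy: float = 1.0) -> float:
    return n_copies * cost_per_copy          # ~linear in the number of copies

def org_channels(headcount: int) -> int:
    return headcount * (headcount - 1) // 2  # quadratic coordination overhead

for n in (10, 100, 1000):
    print(n, agent_cost(n), org_channels(n))
```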

Do you even have a system prompt? (PSA / repo)
Davidmanheim · 1mo · 105

Seems like an attempt to push the LLMs towards certain concept spaces, away from defaults, but I haven't seen it done before and don't have any idea how much it helps, if at all.

Garden Onboarding · 4y · (+28)
20 · The Fragility of Naive Dynamism · 1mo · 1
15 · Therapist in the Weights: Risks of Hyper-Introspection in Future AI Systems · 2mo · 1
9 · Grounded Ghosts in the Machine - Friston Blankets, Mirror Neurons, and the Quest for Cooperative AI · 3mo · 0
7 · Davidmanheim's Shortform · Ω · 6mo · 18
11 · Exploring Cooperation: The Path to Utopia · 6mo · 0
31 · Moderately Skeptical of "Risks of Mirror Biology" · 6mo · 3
17 · Most Minds are Irrational · Ω · 7mo · 4
9 · Refuting Searle's wall, Putnam's rock, and Johnson's popcorn · 7mo · 31
16 · Mitigating Geomagnetic Storm and EMP Risks to the Electrical Grid (Shallow Dive) · 7mo · 4
27 · Proveably Safe Self Driving Cars [Modulo Assumptions] · 10mo · 29