LESSWRONG

Vladimir_Nesov

Comments (sorted by newest)
Authors Have a Responsibility to Communicate Clearly
Vladimir_Nesov · 4d

The useful thing is the ideas you take away; it's rarely relevant whether the author intended them or not.

Roman Malov's Shortform
Vladimir_Nesov · 5d

> I'm claiming ... "simplicity ⇒ no free will"

Consider the ASP (Agent Simulates Predictor) problem, where the agent gets to decide whether it can be predicted, that is, whether there is a dependence of the predictor on the agent. The agent can destroy the dependence by knowing too much about the predictor and making use of that knowledge. So this "knowing too much" (about the predictor) is what destroys the dependence, but it's not just a consequence of the predictor being too simple; rather, it comes from letting an understanding of the predictor's behavior precede the agent's behavior. It's in the agent's interest not to let this happen, to avoid making use of this knowledge (in an unfortunate way), so that the dependence is maintained (and the agent gets to predictably one-box).

So here, when you call something simple as opposed to complicated, you are positing that its behavior is easy to understand, and so it's easy for something else to make use of knowledge of that behavior. But even when that's easy, it can be avoided intentionally. So even simple things can have free will (such as humans in the eyes of a superintelligence), from a point of view that decides to avoid knowing too much. That can be a good thing to decide, and as the ASP problem illustrates, it can influence the behavior in question (the behavior could be different when it isn't known, since the fact of not being known could itself be easily knowable to the agent producing that behavior).
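A toy sketch of the setup described above, under assumptions of my own: Newcomb-style illustrative payoffs, and a deliberately crude predictor that only checks whether the agent is the kind that conditions on a model of the predictor. This is not the formal ASP statement, just the shape of the dependence argument.

```python
# Toy sketch (illustrative assumptions, not the formal ASP problem):
# payoffs are Newcomb-style round numbers; the predictor is too simple to
# simulate the agent in full, but can tell what kind of agent it faces.

BIG = 1_000_000   # opaque box, filled only if one-boxing is predicted
SMALL = 1_000     # transparent box, always gained by two-boxing

def predictor(agent_conditions_on_predictor: bool) -> bool:
    # If the agent first derives the predictor's output and treats the box
    # contents as fixed, it will two-box, so the predictor leaves the opaque
    # box empty for that kind of agent. The prediction depends on the agent's
    # decision procedure, which is the dependence the agent can preserve or destroy.
    return not agent_conditions_on_predictor

def agent(conditions_on_predictor: bool) -> str:
    if conditions_on_predictor:
        # "Knowing too much": with the prediction treated as already settled,
        # two-boxing dominates.
        return "two-box"
    # Refraining from using knowledge of the predictor preserves the dependence,
    # so the agent gets to predictably one-box.
    return "one-box"

def payoff(conditions_on_predictor: bool) -> int:
    filled = predictor(conditions_on_predictor)
    choice = agent(conditions_on_predictor)
    return (BIG if filled else 0) + (SMALL if choice == "two-box" else 0)

print(payoff(True))   # 1_000: the dependence was destroyed
print(payoff(False))  # 1_000_000: the dependence was maintained
```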

Roman Malov's Shortform
Vladimir_Nesov · 5d

Just Turing machines / lambda terms, or something like that. And "behavior" is however you need to define it to make a sensible account of the dependence between "behaviors", or of how one of the "behaviors" produces a static analysis of the other. The intent is to capture a key building block of acausal consequentialism in a computational setting, which is one way of going about formulating free will in a deterministic world.

(You don't just control the physical world through your physical occurrence in it, but also, for example, through the way other people are reasoning about your possible behaviors, and so an account that simply looks for your occurrence in the world as a subterm/part misses an important aspect of what's going on. Turing machines also illustrate this, since they don't have subterm/part structure in the first place.)

Roman Malov's Shortform
Vladimir_Nesov · 5d

Some confusion remains appropriate, because for example there is still no satisfactory account of a sense in which the behavior of one program influences the behavior of another program (in the general case, without constructing these programs in particular ways), with neither necessarily occurring within the other at the level of syntax. In this situation, the first program could be said to control the second (especially if it understands what's happening to it), or the second program could be said to perform analysis of (reason about) the first.

Nina Panickssery's Shortform
Vladimir_Nesov · 6d

Like with AGI, risks are a reason to be careful, but not a reason to give up indefinitely on doing it right. I think superintelligence is very likely to precede uploading (unfortunately), and so if humanity is allowed to survive, the risks of making technical mistakes with uploading won't really be an issue.

I don't see how this has anything to do with "succession" though; there is a world of difference between developing options and forcing them on people who don't agree to take them.

The Industrial Explosion
Vladimir_Nesov · 6d

(It's useful to clearly distinguish between exploring what follows from some premises and holding views on whether the premises are important/likely/feasible. Issues with the latter are no reason at all to hesitate or hedge with the former.)

> But that means you again assume that arbitrary nanotech is feasible, which could be true, but as the other link notes, certainly isn't anything like obvious.

I mentioned arbitrary nanotech, but it's not doing any work there as an assumption. So its being infeasible doesn't change the point about macroscopic biotech possibly coming first, which technically remains the case even if nanotech doesn't follow at all.

Various claims that nanotech isn't feasible are indeed the major reason I thought about this macroscopic biotech thing: existing biology is a proof of concept, so some of the arguments against the feasibility of nanotech clearly don't transfer. It still needs to be designed, and the difficulty of that is unclear, but there seem to be fewer reasons to suspect it's not feasible (at a given level of capabilities).

The Industrial Explosion
Vladimir_Nesov · 7d

> That is, you'd need very strongly ASI-level understanding of biology to accomplish this

That's in some sense close to the premise, though I think fast high-fidelity chemistry/biology simulators (or specialized narrow AIs) should be sufficient to get this done even at near-human level, with enough subjective time and simulation compute. My point is that "fruit flies"/biorobots should be an entry on a list that contains both traditional robots and nanotech as relevant for post-AGI industry scaling. There are some perceived difficulties with proper nanotech that don't apply to this biorobot concept.

In the other direction, a sufficiently effective software-only singularity would directly produce strong ASIs on existing hardware, without needing more compute manufactured first, and so wouldn't need to bother with human labor or traditional robots, which again doesn't fit the list in this post. So the premise of the post is more that the software-only singularity somewhat fizzles, and then AGI-supercharged industry "slowly" scales to build more compute, until enough time has passed and enough compute has been manufactured that nanotech-level things can be developed. In this setting, the question is whether macroscopic biotech could be unlocked even earlier.

(So I'm not making a general/unconditional prediction in this thread. Outside the above premises I'm expecting a software-only singularity that produces strong ASI on existing hardware without having much use for scaling traditional industry first, though it might also start scaling initially for some months to 1-2 years, perhaps mostly to keep the humans distracted, or because AGIs were directly prompted by humans to make this happen.)

The Industrial Explosion
Vladimir_Nesov · 7d

Compute could be a bottleneck, not just for AI but also for simulations of physical world systems that are good enough to avoid too many real experiments and thus dramatically speed up progress in designing things that will actually do what they need to do.

Without scaling industry first you can't get much more compute. And if you can't immediately design far future tech without much more compute, then in the meantime you'd have to get by with hired human labor and clunky robots, building more compute, thus speeding up the next phase of the process.

The Industrial Explosion
Vladimir_Nesov · 8d

I'm not seeing a tradeoff. If you speed things up by a few years, that's also a few years earlier that local superintelligences get online at all of the stars in the reachable universe and start talking to each other at the speed of light, in particular propagating any globally applicable wisdom for the frontier of colonization, or observations made from star-sized telescopes and star-sized physics experiments, or conclusions reached by star-sized superintelligences, potentially making later hops of colonization more efficient.

So maybe launching drones to distant galaxies is not the appropriate first step in colonizing the universe; this doesn't change the point that the Sun should still be eaten in order to take whatever step is actually more useful faster. Not eating the Sun at all doesn't even result in producing Sun-sized value. The Sun really does need to be quite valuable for its own sake, compared to the marginal galaxies, for leaving it alone to be the better option.

The Industrial Explosion
Vladimir_Nesov · 8d

An additional hop to the nearby stars before starting the process would delay it by 10-50 years, which costs about 10 galaxies in expectation. This is somewhere between 1e8x and 1e14x more than the Solar System, depending on whether there is a way of using every part of the galaxy.

Mass is computation is people is value. Whether there is more than 1e8x-1e14x of diminishing returns in utility from additional galaxies after the first 4e9 galaxies is a question for aligned superphilosophers. I'm not making this call with any confidence, but I think it's very plausible that marginal utility remains high.
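For concreteness, here is a back-of-envelope sketch of how a range like "1e8x to 1e14x" can come out of roughly 10 forgone galaxies. The per-galaxy figures are assumed round numbers for illustration, not values taken from the comment.

```python
# Back-of-envelope sketch; the per-galaxy numbers below are assumptions
# chosen for illustration, not figures from the comment.

forgone_galaxies = 10  # galaxies lost to a 10-50 year delay (from the comment)

# Low end: only a small usable fraction of each galaxy, say ~1e7
# Solar-System-equivalents per galaxy.
low = forgone_galaxies * 1e7

# High end: essentially every part of the galaxy is usable, on the order of
# 1e13 Solar-System-equivalents per galaxy.
high = forgone_galaxies * 1e13

print(f"forgone resources relative to the Solar System: {low:.0e}x to {high:.0e}x")
# -> 1e+08x to 1e+14x
```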

Posts (sorted by new)
64 · Musings on AI Companies of 2025-2026 (Jun 2025) · 15d · 4
34 · Levels of Doom: Eutopia, Disempowerment, Extinction · 1mo · 0
181 · Slowdown After 2028: Compute, RLVR Uncertainty, MoE Data Wall · 2mo · 22
169 · Short Timelines Don't Devalue Long Horizon Research · Ω · 3mo · 24
19 · Technical Claims · 3mo · 0
148 · What o3 Becomes by 2028 · 6mo · 15
41 · Musings on Text Data Wall (Oct 2024) · 9mo · 2
10 · Vladimir_Nesov's Shortform · Ω · 9mo · 95
27 · Superintelligence Can't Solve the Problem of Deciding What You'll Do · 10mo · 11
83 · OpenAI o1, Llama 4, and AlphaZero of LLMs · 10mo · 25
Wikitag Contributions
Quantilization · 2y · (+13/-12)
Bayesianism · 2y · (+1/-2)
Bayesianism · 2y · (+7/-9)
Embedded Agency · 3y · (-630)
Conservation of Expected Evidence · 4y · (+21/-31)
Conservation of Expected Evidence · 4y · (+47/-47)
Ivermectin (drug) · 4y · (+5/-4)
Correspondence Bias · 4y · (+35/-36)
Illusion of Transparency · 4y · (+5/-6)
Incentives · 4y · (+6/-6)