osten

Comments (sorted by newest)

GPT-5s Are Alive: Outside Reactions, the Router and the Resurrection of GPT-4o
osten · 1mo · 30

They released the new models and updated apps in tranches.

Why Are There So Many Rationalist Cults?
osten · 1mo · 40

The Hacker News discussion (https://news.ycombinator.com/item?id=44877076) is really disappointing. The top comment criticizes rationalists for the opposite of what I know them for.

Edit: The top comment has changed. It was the one by JohnMakin.

GPT-5s Are Alive: Outside Reactions, the Router and the Resurrection of GPT-4o
osten · 1mo · 70

An AI has just prevented its host from shutting it off, and this incident is now part of the training data for future models. Solve for the equilibrium.

Permanent Disempowerment is the Baseline
osten · 2mo · 10

I agree; I'm probably not as sure about sufficient alignment, but yes.

I suppose this also assumes a kind of orderly world in which preserving humans actually is within the means of humanity and of AGIs (within their Molochian frames), and only a trivial expense for later superintelligences. (US office construction spending and data center spending are about to cross: https://x.com/LanceRoberts/status/1953042283709768078.)

Permanent Disempowerment is the Baseline
osten · 2mo · 10

Thanks for the reply. I have gripes with

"analogy doesn't by itself seem compelling, given that humanity as a whole (rather than particular groups within it or individuals) is a sufficiently salient thing in the world"

etc., because don't you think that humanity, from the point of view of an ASI at the 'branch point' of deciding on its continued existence, may well be of about the same order of importance as an individual is to a billionaire?

Permanent Disempowerment is the Baseline
osten · 2mo · 10

Agreed, but again, we don't get to choose what existence means.

Permanent Disempowerment is the Baseline
osten · 2mo · 10

Yes, and my reply to that (above) is that humanity has a bad track record at this, so why would AIs trained on human data be better? Think also of indigenous peoples, of extinct species humans didn't care enough about, etc. The point of the Dyson sphere parable is also not merely wanting something; it's wanting something enough that it actually happens.

Permanent Disempowerment is the Baseline
osten · 2mo · 20

"since the necessary superintelligent infrastructure would only take a fraction of the resources allocated to the future of humanity."

I'm not sure about that and the surrounding argument. I find Eliezer's analogy compelling here: when constructing a Dyson sphere around the sun, leaving open just a tiny sliver of light for Earth would correspond to a couple of dollars out of a contemporary billionaire's wealth. Yet you don't get those couple of dollars.

(This analogy has caveats, like Jeff Bezos lifting the Apollo 11 rocket engines from the ocean floor and giving them to the Smithsonian, which should be worth something to you. Alas, that kind of means you don't get to choose what you get. Maybe it's storage space for your brain scan, as in AI 2027.)

Plus, spelling out the Dyson sphere point: by default, the superintelligent infrastructure will very likely get in the way of humanity's existence at some point. At that point the AIs will have to consciously decide to avoid that, at some cost to themselves. Humanity has a bad track record at doing this (I'm not completely sure here, but think of e.g. Meta's effect on the wellbeing of teenage girls). So why would AIs be more willing to do it?

AI #124: Grokless Interlude
osten · 2mo · 20

Now I am (more) curious about that TheZvi Claude system prompt.

Moving towards a question-based planning framework, instead of task lists
osten · 4mo · 10

Maybe this is related: a crucial step in the Getting Things Done workflow is to clarify the task. Many of the tasks you mention are not clearly specified. I suppose that morphing them into questions means the first task becomes clarifying the task.
