Comments

Amarko · 1mo · 10

On the other hand, the more you get accustomed to a pleasurable stimulus, the less pleasure you receive from it over time (hedonic adaptation). Since this happens to both positive and negative emotions, it seems to me that there is a kind of symmetry here. To me this suggests that decreasing prediction error results in more neutral emotional states rather than pleasant states.

Amarko · 1mo · 34

I disagree that all prediction error equates to suffering. When you step into a warm shower you experience prediction error just as much as if you step into a cold shower, but I don't think the initial experience of a warm shower contains any discomfort for most people, whereas I expect the cold shower usually does.

Furthermore, life contains far more prediction error than suffering. Simply going for a walk produces a continuous stream of prediction error, most of which people feel pretty neutral about.

Amarko · 7mo · 10

This reminds me of a lot of discussions I've had with people where we seem to be talking past each other, but can't quite pin down what the disagreement is.

Usually we just end up talking about something else instead that we both seem to derive value from.

Amarko · 9mo · 54

It seems to me that the constraints of reality are implicit. I don't think "it can be done by a human" is satisfied by a method requiring time travel with a very specific form of paradox resolution. It sounds like you're arguing that the Church-Turing thesis is simply worded ambiguously.

Amarko · 9mo · 3-4

It looks like Deutschian CTCs (closed timelike curves) are similar to a computer that can produce all possible outputs in different realities, then selectively destroy the realities that don't solve the problem. It's not surprising that you could solve the halting problem in such a framework.
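
To make that picture concrete, here's a classical toy sketch of the "branch into all realities, post-select the consistent ones" pattern. This is my own illustration with made-up names, not Deutsch's actual formalism, and it uses a checkable toy problem (subset-sum) rather than the halting problem, which no classical program can actually decide:

```python
from itertools import product

def post_select(branches, is_consistent):
    """Keep only the 'realities' whose guess survives the consistency check."""
    return [b for b in branches if is_consistent(b)]

# Toy problem: does any subset of `nums` sum to `target`?
nums, target = [3, 7, 12, 6], 15

# Branch into every possible subset (one "reality" per bitmask)...
branches = product([0, 1], repeat=len(nums))

# ...then destroy the realities that don't solve the problem.
solutions = post_select(
    branches,
    lambda bits: sum(n for n, b in zip(nums, bits) if b) == target,
)
print(solutions)  # [(1, 0, 1, 0)] -> the subset {3, 12}
```

The CTC version is stronger than this classical simulation because the consistency condition does the selection "for free", so it isn't limited to problems whose answers we know how to check.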

Amarko · 9mo · 20

Our symbolic conception of numbers is already logarithmic, since a number's order of magnitude corresponds to its number of digits. I think an estimate of a product based on an imaginary slide rule would be roughly equivalent to estimating based on the number of digits and the first digit.
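
As a rough illustration of what I mean (a minimal sketch with made-up names, positive integers only), the "digit count plus first digit" estimate is just a coarse log10, and multiplying is adding the logs, exactly as on a slide rule:

```python
import math

def rough_log10(n):
    """Approximate log10(n) from the digit count and the leading digit."""
    s = str(n)
    return (len(s) - 1) + math.log10(int(s[0]))

def rough_product(x, y):
    """Estimate x * y the way a mental slide rule would: add the logs."""
    return 10 ** (rough_log10(x) + rough_log10(y))

print(rough_product(3874, 526))  # 1500000.0, i.e. treating it as 3000 * 500
print(3874 * 526)                # 2037724
```

Truncating each number to its leading digit loses at most a factor of two per factor, so the product estimate lands within a factor of four of the true answer and is usually much closer, about the precision you'd read off a small slide rule.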

Amarko · 10mo · 20

Similar to point 2: I find that reading a book in the morning, particularly a physical fiction book, helps my mood.

Amarko · 10mo · 42

I've definitely noticed the pattern of habits seeming to improve my life without feeling like they are improving my life. On a similar note, a lot of habits seem easy to maintain and obviously beneficial while I'm doing them, but once I stop I have no motivation to continue. I don't know why that is, but my hope is that if I notice this hard enough it will become easier for me to recognize that I should do the thing anyway.

Amarko · 11mo · 11

I read some of the post and skimmed the rest, but this seems to broadly agree with my current thoughts about AI doom, and I am happy to see someone fleshing out this argument in detail.

[I decided to dump my personal intuition about AI risk below. I don't have any specific facts to back it up.]

It seems to me that the possibility space of AIs that can and will be created is much larger than the ideal superintelligent "goal-maximiser" AI put forward in arguments for AI doom.

The tools we end up with depend more on the specific details of the underlying mechanics, and on how we can wrangle them to do what we want, than on our prior beliefs about how we would expect the tools to behave. I imagine that if you lived before aircraft and pictured a future in which humans could fly, you might think that humans would be flapping giant wings or be pedal-powered or something. While it would be great for that to exist, the limitations of the physics we know how to use require a different kind of mechanism, one with different strengths and weaknesses from what we would think of in advance.

There's no particular reason to think that the practical technologies available will lead to an AI capable of power-seeking just because power-seeking is a side effect of the "ideal" AI that some people want to create. The existing AI tools, as far as I can tell, don't provide much evidence in that direction. Even if a power-seeking AI eventually becomes practical to create, it may be far from the default, and by then we may have sufficiently intelligent non-power-seeking AI.

Amarko · 1y · 30

Perhaps they could be next to the "Reply" button, and fully contained in the comment's container?
