Signer

Comments

Discussion with Eliezer Yudkowsky on AGI interventions

Throwing more money at this problem does not obviously help

How much money are we talking about? Can't we just get a million kids and teach them the dath ilani way, at least to handle >20-year timelines?

I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

I think you can predict specific subjective states by observing that the same computations result in the same subjective states? I mean, in theory - or do you mean that for a theory to count as a reduction, it must be practical to predict a specific human's qualia? By that standard we don't have a physical reduction of billiard balls either.

I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

Humans can distinguish stimuli they are aware of from stimuli they are not aware of. Below-awareness stimuli are not ethically significant to humans - if someone pricks you with a needle and you don't feel the pain, then you don't feel the pain and don't care much. Therefore only systems that can implement awareness detectors are ethically significant.

Self-Integrity and the Drowning Child

The Watcher spoke on, then, about how most people have selfish and unselfish parts—not selfish and unselfish components in their utility function, but parts of themselves in some less Law-aspiring way than that.

I guess it's appropriate that children there learn about utility functions before learning about multiplication.

What do determinists here think about free will and Chalmer's hard problem of consciousness?

What actions would you take if you thought true freedom were likely, that you wouldn't take anyway? And just saying "true freedom would be nice, but it is unlikely" - i.e. valuing it, but not infinitely - is an option too.

What do determinists here think about free will and Chalmer's hard problem of consciousness?

If the real laws are deterministic, then everyone experiences them all the time^^. But if you mean "know, directly from experience, that they are deterministic", then no, only through indirect inference. But that's true of almost everything. And the universe experiences its own early history.

Is your point that such a concept of reality is not well-grounded either, so it is fine for unactualized possibilities to be vague too? What about the necessity of there being some territory, even if it is vague?

What do determinists here think about free will and Chalmer's hard problem of consciousness?

In my world the semantics of being real is "someone can experience it". And even without that, you can justify there being one kind of real stuff by the difficulty of having zero kinds of stuff. But if you want additional kinds of stuff, you need to specify how your category is different. Otherwise it's just artificial labeling, and we already call things like that "ethics".

What do determinists here think about free will and Chalmer's hard problem of consciousness?

I mean, people have said the same thing about god. It undermines achievements defined using the old ontology - that's what changing ontology means. But what prevents you from defining achievement as "someone achieves their goal when something they want happens"?

What do determinists here think about free will and Chalmer's hard problem of consciousness?

Ok, makes sense - I was thinking of a third category when you said "no content". Because what is the actual difference between unactualized possibilities and epistemic possibilities, i.e. "possibilities that someone thought about"? The way I see it, they are just a labeling without any properties that don't map to one of the other categories.

What do determinists here think about free will and Chalmer's hard problem of consciousness?

With regard to (2), how could a computation not feel some way from the inside?

So all computations feel like something - both computations on the level of a head and computations on the level of countries?