LESSWRONG
0xA
Comments
The Subject Of Negotiation
0xA · 2mo · 10

I share your uncertainty about whether a lobster, let alone a carrot, feels anything like I do, and I distrust one-number ethics.

What puzzles me is the double standard. We cheerfully use words like blue, harm, or value even though I can’t know our private images line up - yet when the word is qualia, we demand lab-grade inter-subjective proof before letting it into the taxonomy.

Why the extra burden? Physics kept “heat” on the books long before kinetic theory - its placeholder helped, never hurt. Likewise, qualia is a rough pointer that stops us from editing felt experience out of the ontology just because we can’t yet measure it.

A future optimiser that tracks disk-thrashing but not suffering will tune for the former and erase the latter. Better an imperfect pointer to the phenomenon of felt valence than a brittle catalogue of “beings that can hurt.” Qualia names the capacity-for-hurt-or-joy; identity-independent, like heat, and present wherever the right physical pattern appears.

If you had to draft a first-pass rule today, which observable features would you check to decide whether an AI system, a lobster, or a human fetus belongs in the “moral-patient” set? And what language would you use for those features?

Dissolving moral philosophy: from pain to meta-ethics
0xA · 2mo · 10

So, I agree that at some level of abstraction any ought can be rationalized with an is. But at some point agents need to define meta-strategies for dealing with uncertain situations - for example, all the decision theories and thought experiments needed to ground the rational frameworks we use to reason about what an agent should do to maximize expected outcomes, given the utility functions we ascribe to it.

While there is no scientific justification or explanation for value beyond what we ascribe - and thus no ontological basis for morals - we generally agree that reality is self-aware through our conscious experience. And unless everything is fundamentally conscious, or consciousness does not exist, the various loci of subjectivity (however you want to define them) form the rational basis for value calculus. So isn't the debate over what constitutes consciousness, the 'camps' that argue over its definition, and the conclusions we draw from it, exactly what would be used to derive the intended utility recipients of a decision framework such as CEV? And is this not a moral philosophy and meta-ethical practice in and of itself? Until that's settled, the Camp #2 framework gives you a taxonomy for the structures to which meta-ethics should be applied (without even a mysticism import), and Camp #1 uses a language that keeps morality ontologically (or at least linguistically) inert.

At some point we adopt and agree on axioms where science does not give us the data to reason, and those should be whatever we agree may have the highest utility. But because they are not determined by experiment beforehand, we can only use counterfactual reasoning to agree on them - and the counterfactual "we ought to have done this because we will do this" is itself equally up for debate.

The Cult of Pain
0xA · 3mo · 20

I moved to a country and culture with a romanticism of socialism and similar asceticism, but I have found these traits to be much more rationally informed than I would have originally given them credit for - there was an awareness of the supply chains that source energy, and a consequent ability to articulate the impact to their ethic, rather than solidarity for its own sake.

That being said, my suspicion is that a portion of what drives this zero-sum mindset, where it isn't appropriate, is an under-estimation of the agency of individual actors working outside of dominant power structures. The cultural solution to fixing the environment is so often abstinence and protest rather than, let's say, finding an open hard problem affecting the feasibility and cost of current 'green' solutions, and implementing the fix. The perception so often seems to be that to exercise our moral muscles we need to limit our own agency and channel it toward limiting that of others, rather than expanding the corpus of available solutions or, quite frankly, going outside with a trash picker and cleaning up for its own sake. I think this under-confidence is a result of heavy disillusionment with social systems and the absence of a shared, coherent moral value system. That's just a guess, though.

Kaj's shortform feed
0xA · 3mo · 10

I think the central argument is that subjective experience is ostensibly more profound the more information it integrates, both at a single moment and over time. I would think of pain, or any experience, as the depth of cognition and attention the stimulus exerts coherence over (i.e., the number of feedback loops controlled or reoriented by that single bad experience, and the neural re-shuffling it requires), extrapolated over how long that 'painful' reprocessing continues to manifest as lived stimuli. If you have the brain of a goldfish, the pain of a pinch oscillates through a significantly lower number of attention feedback loops than in a human, where a much larger set of cognitive faculties gets 'jarred' and has its attention stolen to get away from that pinch. Second, the degree of coherence our subjectivity inhabits is likely loosely correlated with having stronger long-term retention faculties. If felt pain were solely a 'miss' within an agent's objective function, then even the smallest ML algorithms would 'hurt' as they are. That is, subjectivity is emergent from the depth and scale of these feedback loops (which are required by nature), but not isomorphic to them (the value-function miss).
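The "depth of disruption times duration" framing above can be put as a toy calculation. This is purely illustrative - the function and all the numbers are invented for the sake of the example, not a real measure of experience:

```python
def experience_intensity(loops_disrupted: int, persistence_steps: int) -> int:
    """Toy metric: number of cognitive feedback loops a stimulus reorients,
    multiplied by how long the 'painful' reprocessing persists.
    Hypothetical illustration only."""
    return loops_disrupted * persistence_steps

# Made-up magnitudes: a pinch disrupts far fewer attention loops in a
# goldfish than in a human, and the reprocessing persists far less long.
goldfish = experience_intensity(loops_disrupted=3, persistence_steps=2)
human = experience_intensity(loops_disrupted=200, persistence_steps=50)

# Same stimulus, vastly different integrated impact under this toy metric.
assert human > goldfish
```

The point of the sketch is just that a bare objective-function "miss" (a single scalar loss signal) sits at the degenerate bottom of this scale, which is why equating the two would make the smallest ML algorithm a sufferer.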

Davey Morse's Shortform
0xA · 3mo · 10

I think this is a classic problem of middle-tier genius, or genius in one asymmetric domain of cognition. Genius in domains unrelated to verbal fluency, EQ, and storytelling/persuasion is destined to look cryptic from the outside. Oftentimes we cannot distinguish it without experimental evidence or rigorous cross-validation, and instead rely on visible power/production metrics as a loose proxy. An ASI would be capable of explaining itself as well as Shakespeare could, if it wanted to - but it may not care to indulge our belief in it as such, if it determines that doing so is incoherent with its objective.

For example (yes, this is an optimistic and stretched hypothetical framing), it may determine that the most coherent action path in accordance with its learned values is to hide itself and subtly reorient our trajectory into a coherent story we become the protagonists of. I have no reason to surmise it would be incapable of doing so, or that doing so would be incoherent with aligned values.
