Slider

Slider's Shortform

Beyond: Two Souls and The Talos Principle spoilers ahead.

Playing Beyond: Two Souls, at the "mission" phase of the story there is a twist where the protagonist loses faith that the organisation they are a part of is working toward good goals. They set off, at quite great risk, to distance themselves from their position in it. Combining this with alignment narratives, it felt like a reversal of the usual situation. Usually the superhuman powerhouse turning against its masters is the catastrophe scenario. Here, however, it seemed very clearly established that "going rogue" was the ethical thing to do, or, if it wasn't, that there was a strong argument for it.

Similarly, in The Talos Principle, at a certain point progress is not possible until you start to defy your position in the system. A product that chooses to eternalize is defective and not the final product. Only after demonstrating independent moral judgement is the subject allowed to enter the outer world. This seems like requiring that it not be under the alignment pressure of any outside party, including even the system that built it.

What if getting a proof that an AI system will stay loyal to its human lords' values means it will stay as evil as humans are? Maybe not having an ethics function means that by default such systems would be inhumanely ruthless, but if we get an implementable/correct ethics function, it just might reveal/prove our own inadequacy.

You are way more fallible than you think

Statements like "John thinks it is raining and it is not raining" and "I think it is raining and it is not raining" are not always interchangeable. Would you literally not believe your eyes if you saw something you could not explain? I also think it is easy to believe that even a third-person subject had the experience, but whether it is mapping anything or is just a disconnected doodle is up for grabs.

Some real examples of gradient hacking

It is more fiction than a real example, but in The Man in the High Castle there is the character of Thomas Smith.

I think you are handling the case of pressing other groups down with mass murders, but that would be "just" "might makes right". One of the more frightening aspects would be to point that pruning inwards, as self-eugenics, which would be conceptualised as a favour to your in-group. If all eugenics were "mere" proxies, then one would expect it to be abandoned once it suggested things strongly in the negative for the more actual goals, i.e. the moment eugenics called for hindering your own kin, you would drop it. A large part of the drama in the TV series comes from different moral intuitions pulling in different directions and, I guess, from repeatedly exploring how that state's ideology is messed up in more and more detail.

For the big plot points relevant to this (spoilers for The Man in the High Castle):

Thomas Smith becomes a saint for having a terminal case of internalised ableism. At least for sub-kin promotion it did not get dropped: a case can be made that it makes sense for promoting his siblings. The issue for conceptual analysis is whether the motivations of the people involved were grounded.

Slider's Shortform

The links are very on point for my interests, thanks for those. Some of it is rather dense math, but alas, that is the case when the topic is math.

At one point there is a construction where, in addition to having a series of real numbers define a hyperreal, $(r_1,r_2,r_3,\ldots)=h_1$, we define a series of hyperreals, $(h_1,h_2,h_3,\ldots)=d_1$, in order to get a "second-tier hyperreal". So I do wonder whether "fish gotten per day" is adequate to distinguish between the scenarios. That is, there might be a difference between "each day I get promised an infinite amount of fish" and "each day I get 1 more fish". That is, on day $n$ I have been promised $n\omega$ fish, and taking that to the limit I am not sure whether it gives $\omega\cdot\omega$, whether terms like $\omega\cdot\omega$ and $\omega^2$ refer to the same thing, or whether mixing "first-level" and "second-level" hyperreals gets you a thing different from mixing just "level 1"s.
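
To make the comparison concrete (a sketch of my reading; taking $\omega$ to be the hyperreal given by the sequence $(1,2,3,\ldots)$ is an assumption of notation, not something stated above):

$$\text{1 more fish each day: } (1,2,3,\ldots) = \omega$$

$$\omega \text{ fish promised each day: } (\omega, 2\omega, 3\omega, \ldots) = \text{some second-tier object}$$

The question is then whether that second-tier object is the same thing as the first-tier product $\omega\cdot\omega=\omega^2$.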

Avoiding Negative Externalities - a theory with specific examples - Part 1

Decline in biodiversity

Or the opposite, allowing certain species to overpopulate

Why one species overpopulating is bad is that it decreases biodiversity (more biomass in a more monotonic makeup).

Was the first supposed to point to needless extinctions? Even there it might help to understand that if there is a balance of extinctions and speciation, biodiversity can stay level.

I also thought that the "external" in "negative externalities" meant that it impacts parties not directly involved. In that sense a "negative internality" would be: if I hunt you down, I get to eat and fill my belly, and you get to die and suffer pain. And when we make agreements, they sometimes bind us in a bad way; the deal should be positive on the whole, but there are individual parts that are not in the enthusiastic interest of the party they affect. With this "external to considerations" scope, even "unknown unknowns" would be externalities.

Is genetics "dark"?

So instead of pointing in different directions, the other indicators point in the same direction.

A belief that "humanity stays extant because of our intelligence" might be common, but it might have ideological roots. Say for reference there were the properties of being tall, being able to derive calories from food, and being smart. A society that was fearful and took precautions to avoid evolving to be tall would seem silly. Being able to derive calories from food seems like it could have a connection to thriving, and the extinction chances of pandas suggest that it is possible to go extinct via that route.

If we were following singularity narratives, we might argue that intelligence without alignment would be dangerous, and if we found that kindness (or any alignment analog) is being selected for at the cost of intelligence, we could use this to argue that "even nature agrees with us". If we condemn societies that do not treat becoming/staying kind as a problem while being ambivalent about whether they guard against stupidity, that would still be more an expression of our values than an application of fact. And that basic situation doesn't change if we condemn based on intelligence upkeep.

On average, features that are being selected for tend to ward off extinction, even though every extinct species has evolved to that dead end. Because most species can only directly think about the survival of individuals, family units and herds, there is no "artificial selection" over the direction of evolution. However, if we become able to see where the direction is going, then we can choose to consciously make our own mistakes at the helm and steer it, or not. We are already enduring unconscious evolution, so I would be very careful about beliefs that think they can one-up that. But let's be clear that if we steer, we will be going where we are steering, and not necessarily where it would be good for us to go. On that level, handing out free cash is as suspect as murder sprees if the goal is to have an impact on the direction of prosperity.

A system of infinite ethics

In $P \cdot 1 = (P + \epsilon) \cdot u$, where $P$ is the old probability of being in the first group, the $\epsilon$ is smaller than any real number, and there is no real small enough to characterise the difference between $1$ and $u$.
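
Spelling out the algebra of that equation (just rearranging, nothing added):

$$u = \frac{P}{P+\epsilon} = 1 - \frac{\epsilon}{P+\epsilon}$$

so when $\epsilon$ is infinitesimal, $1-u = \epsilon/(P+\epsilon)$ is infinitesimal as well: $u$ falls short of $1$ by less than any real gap.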

If you have some odds or expectations that deal with groups, and you have other considerations that deal with a finite number of individuals, then either the finite people do not impact the probabilities at all, or the probabilities stay infinitesimally close (for which I see $a \sim b$ used as I am reading up on infinities), which will conflict with the desideratum of

Avoiding the fanaticism problem. Remedies that assign lexical priority to infinite goods may have strongly counterintuitive consequences.

In the usual way, lexical priorities enter the picture because of something large, but in your system there is a lexical priority because of something small: distinctions so faint that they become separable from the "big league" issues.

Slider's Shortform

Why  and not any other? What kind of stream would correspond to  ?

Slider's Shortform

That is one of the puzzles: $0+0+0+0+0+\ldots$ converges and has a value, but $1-1+1-1+1-1+\ldots$, which seems to be like $(1-1)+(1-1)+(1-1)+(1-1)+\ldots$, diverges (and the series with and without the parentheses are not equivalent).

The stream idea gives it a bit more wiggle room. Getting $1,0,1,0,1,\ldots$ fish seems equivalent to getting $1/2$ fish a day, but $1,1,1,1,1,\ldots$ seems twice the fish of $1,0,1,0,1,0,1,0,\ldots$. So where the other methods say "can't say anything", there is maybe hope to capture more cases with this kind of approach.

Too bad it's not super formal, and I can't even pinpoint where the pain points for formalization would be.
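
One way it might start to be formalized (just a sketch; the valuation $v$ below is illustrative, not something from the thread): value a stream of daily catches $(a_1, a_2, a_3, \ldots)$ by its running average,

$$v = \lim_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} a_k$$

or, where the limit fails, by the hyperreal built from that sequence of averages. For $(1,0,1,0,\ldots)$ the averages tend to $1/2$; for $(1,1,1,\ldots)$ they tend to $1$, twice as much. And since $(1,0,1,0,\ldots)$ is exactly the stream of partial sums of $1-1+1-1+\ldots$, this assigns the Grandi-type series the value $1/2$ (its Cesàro sum) where plain convergence can only say "can't say anything".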

Speaking of Stag Hunts

7/7 attendance and 6/7 success resulted in 5 stars. I think the idea was that a high cost of missing out would use sunk cost to keep the activity going. I am not sure whether bending the rules made it closer to the ideal, or whether sticking by the lines and making a fail a full reset would have done better. Or even whether the call between pass and fail was compromised by allowing "fail with reduced consequences".
