Anon User

I am actually agreeing - but I am saying that the way we'd accomplish it in practice is by relying on the meta-level as much as possible. E.g. by catching that the object-level conclusion does not reconcile with our meta-level principles, or by noticing that it's incongruent in some way.

The availability of emotionally-charged information is exactly what causes the issue. It's easier to avoid temptations by setting rules for yourself ahead of time than by relying on your ability to resist them when they are right in front of you.

Nope, it's now on you, not on them. Do not let shady people route their money through you; if you do, people now get to blame you. (Not saying it's necessarily a good thing, just how it is.)

Make adjustments based on (1)

You cannot trust the hardware not to skew this adjustment in all kinds of self-serving ways. The point is that the hardware will mess up any attempt to compute this at the object level; all you can do is compute at the meta-level (where the HW corruption is less pronounced), come up with firm rules, and then just stick to them at the object level.

The specific object-level algorithms do not matter - the corrupted HW starts with the answer it wants the computation to produce, and then backpropagates it to skew the computation towards the desired answer.

So, a shady character approaches you and offers a deal that would make you $1K richer. You know that the worst-case scenario, if you accept and they get caught, is that you will just have to return the $1K, so you have no incentive to refuse to deal with them. No, you need something like 3x damages to properly disincentivise people from turning a blind eye.
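A minimal sketch of the expected-value arithmetic behind that 3x figure; the 1-in-3 chance of getting caught is purely an illustrative assumption, not a number from the comment above:

```python
# Why mere restitution (1x) fails to deter, while 3x damages can.
# P_CAUGHT is a hypothetical catch probability chosen for illustration.

GAIN = 1_000        # the $1K the shady deal pays you
P_CAUGHT = 1 / 3    # assumed probability the scheme is exposed

def expected_profit(damages_multiplier: float) -> float:
    """Expected profit from accepting, if getting caught costs
    damages_multiplier times the original gain."""
    caught = GAIN - damages_multiplier * GAIN  # keep the $1K, pay the penalty
    not_caught = GAIN                          # keep the $1K outright
    return P_CAUGHT * caught + (1 - P_CAUGHT) * not_caught

print(expected_profit(1))  # ~666.67 > 0: merely returning the $1K still leaves you ahead on average
print(expected_profit(3))  # 0.0: 3x damages wipes out the expected profit
```

In general the deal stops being profitable in expectation once the multiplier reaches 1 / P_CAUGHT, which is where a 3x-style multiple comes from if wrongdoing is caught roughly a third of the time.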

Insufficiently tested, not ready to be placed in production. Test in a sandbox (city, or a small state) first.

For your last question - I think there are very few, if any, implications. Humans arguably occupy an extremely tiny region in the space of possible intelligent agent designs, while the orthogonality thesis applies to the space as a whole. If it were the case that goals and intelligence were correlated in humans, I'd expect that to be more reflective of how humans happen to be distributed in that space of possible designs, rather than telling us much about the properties of the space itself.

Yes, unfortunately you have to figure out how to do all that when there are politicians they consider to be on their side screaming "look, zombies, zombies!!!" and the people you are trying to calm down suspect that you might also be a secret zombie...

As for our own world, I predict that as people see just how possible it is to find common ground and build on it, they will lose their susceptibility to the polarization efforts of politicians.

But to what extent are the divisions driven by a genuine desire to address the issue(s), vs raw "us vs them" drives (think of the divisions between fandoms of rival sports teams), where the actual issues are just an excuse to think "we are better than them"?

Think of a spectrum between a world with overabundant resources, where trying to hoard them is stupid and "learn to be friends with everybody" is the right strategy, and a literal zombie apocalypse scenario, where anybody even thinking of being friendly to the zombies endangers not only themselves but their whole community, and hoarding is essential to survival. The reality of this world is that for quite a while resources did tend to become more abundant, so the "we are all in this together" mentality tended to win out over the "us vs them" one more often than not - but if somebody is truly in zombie-apocalypse survival mode, there is fairly little you can do to convince them to embrace the "zombies".

You do not seem to be addressing the misaligned incentives - such as politicians often being incentivised to exaggerate and perpetuate divisions rather than address them. Observe how in controversial areas (e.g. gun rights vs gun control), common-sense reforms that have broad public support still tend not to happen.
