Gurkenglas

I operate by Crocker's rules.

Comments

Convolution as smoothing

The Fourier transform, as a map between function spaces, is continuous, one-to-one, and maps Gaussians to Gaussians, so we can translate "convolving nice sequences of distributions tends towards Gaussians" into "multiplying nice sequences of functions tends towards Gaussians". The pointwise logarithm, as a map between function spaces, is continuous, one-to-one, maps Gaussians to parabolas, and turns products into sums, so we can translate further into "summing nice sequences of functions tends towards parabolas", which sounds less like "usually true" and more like "almost always false".

In the second space, the functions are continuous and vanish at infinity.
This doesn't work if the Fourier transform of some function is somewhere negative, but then the product of the function sequence has zeroes.
In the third space, the functions are continuous and diverge to minus infinity at infinity.
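
For concreteness, here is a minimal numerical sketch of the three pictures (my own illustration - the particular means, standard deviations, normalization conventions, and the use of numpy are assumptions, not anything from the post): convolving two Gaussian densities gives the Gaussian with summed means and variances, multiplying their Fourier transforms (characteristic functions) gives a Gaussian-shaped function again, and the pointwise log of a Gaussian is a downward parabola.

```python
# Illustrative check of: convolution of Gaussians -> Gaussian,
# product of their Fourier transforms -> Gaussian-shaped, log of a Gaussian -> parabola.
import numpy as np

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

def gaussian(x, mu, sigma):
    return np.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

f, g = gaussian(x, 1.0, 1.5), gaussian(x, -2.0, 2.0)

# 1. Convolving N(1, 1.5^2) with N(-2, 2^2) should give N(-1, 1.5^2 + 2^2).
conv = np.convolve(f, g, mode="same") * dx
print("convolution error:", np.max(np.abs(conv - gaussian(x, -1.0, np.hypot(1.5, 2.0)))))

# 2. In Fourier space convolution becomes multiplication, and Gaussians stay Gaussian:
#    the characteristic function of N(mu, s^2) is exp(i*mu*t - s^2*t^2/2).
t = np.linspace(-5, 5, 1001)
phi = lambda mu, s: np.exp(1j * mu * t - 0.5 * s**2 * t**2)
print("product error:", np.max(np.abs(phi(1.0, 1.5) * phi(-2.0, 2.0) - phi(-1.0, np.hypot(1.5, 2.0)))))

# 3. The pointwise log of a Gaussian is a downward parabola, so products become sums of parabolas.
coeffs = np.polyfit(x, np.log(gaussian(x, -1.0, np.hypot(1.5, 2.0))), 2)
print("leading coefficient of the log-density fit (negative, i.e. opening downward):", coeffs[0])
```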

Convolution as smoothing

and it doesn't much matter if you change the kernel each time

That's counterintuitive. Surely for every  there's an  that'll get you anywhere? If .

Changing the AI race payoff matrix

Indeed, players might follow a different strategy than the one they declare. A player can only verify another player's precommitment after pressing the button (or through old-fashioned espionage of their button setup). But I find it reasonable to expect that a player, seeing the shape of the AI race and what is needed to prevent mutual destruction, would actually design their AGI to use a decision theory that follows through on the precommitment. Humans may not be intuitively compelled by weird decision theories, but they can expect someone to write an AGI that uses them. Although even a human may find giving other players what they deserve more important than letting the world as we know it continue for another decade.

Compare to Dr. Strangelove's doomsday machine. We expect that a human in the loop would not follow through, but we can't expect that no human would build such a machine.

It’s not economically inefficient for a UBI to reduce recipient’s employment

The crazy distortions are the damage. People fear low-income people stopping their work because they fear that goods produced by low-income workers will become more expensive.

It’s not economically inefficient for a UBI to reduce recipient’s employment

Your argument proves too much: in medieval times, if more than 20% of people had stopped working in agriculture to buy food with their UBI, food prices would have gone up until they resumed - and that price rise would be the indicator of the damage done to society by people stopping their work.
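
As a toy illustration of that price mechanism (every number and functional form below is made up for the sketch; nothing is from the post): a food shortage pushes the price up, and the higher price pulls people back into farming until the shortage closes.

```python
# Toy price-adjustment loop: too many of the needed farmers have stopped working,
# and the food price rises until enough of them find farming worthwhile again.
needed = 0.8   # assumed fraction of the population that must farm to feed everyone
price = 1.0    # relative price of food, starting at the old equilibrium

def fraction_farming(price):
    # Stylized labor supply: the pricier food is, the more people find farming
    # worth more than living off their UBI alone.
    return min(1.0, 0.6 + 0.5 * (price - 1.0))

for _ in range(100):
    shortage = needed - fraction_farming(price)
    price += 0.5 * shortage  # price rises while food is short, falls if there is a glut

print(f"price settles near {price:.2f}, with {fraction_farming(price):.0%} of people farming again")
```

The size of that price rise is the measure of how much the work stoppage cost everyone else.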

It’s not economically inefficient for a UBI to reduce recipient’s employment

You can cause more than a dollar of damage to society for every dollar you spend, say by hiring people to drive around throwing eggs at people's houses. Though I guess that, in total, society is still better off by a hundred dollars than if you had received them via UBI.

Open & Welcome Thread – November 2020

Perhaps the police officer simply thinks that your average person will easily do dangerous things like shaking someone off their car without thinking much of it, but will not take a knife to another's guts. Therefore, the car incident would not mark the man as unusually dangerous.

There is no mathematically canonical way to distinguish between trade and blackmail, between act and omission, between different ways of assigning blame. The world where nobody jumps on cars is as safe as the one where nobody throws people off cars. We decide between them by human intuition, which differs by memetic background.

Some AI research areas and their relevance to existential safety

Yeah, I basically hope that enough people care about enough other people that some of the wealth ends up trickling down to everyone. Win probability is basically interchangeable with other people caring about you and your resources across the multiverse. Good thing the cosmos is so large.

I don't think making acausal trade work is that hard. All that is required is:

  1. That the winner cares about the counterfactual versions of himself that didn't win, or equivalently, is unsure whether they're being simulated by another winner. (huh, one could actually impact this through memetic work today, though messing with people's preferences like that doesn't sound friendly)
  2. That they think to simulate alternate winners before they expand too far to be simulated.

The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables

I'm not convinced that we can do nothing if the human wants ghosts to be happy. The AI would simply have to do what would make ghosts happy if they were real. In the worst case, the human's (coherent extrapolated) beliefs are your only source of information on how ghosts work. Any proper general solution to the pointers problem will surely handle this case. Apparently, each state of the agent corresponds to some probability distribution over worlds.
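
One minimal way to write that down (my own sketch, not the post's formalism): let $\Lambda$ be the human's latent variables (which may include ghost-states), $u(\Lambda)$ the human's values over them, and $P_H(\Lambda \mid s, a)$ the distribution that the human's (coherently extrapolated) belief state $s$ assigns to those latents given a plan $a$. The AI can then score plans by

$$\operatorname{score}(a) = \mathbb{E}_{\Lambda \sim P_H(\,\cdot\, \mid s, a)}\big[u(\Lambda)\big],$$

so latents with no physical referent still get optimized according to the only model that defines them: the human's.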

Some AI research areas and their relevance to existential safety

with the exception of people who decided to gamble on being part of the elite in outcome B

Game-theoretically, there's a better way. Assume that after winning the AI race, it is easy to figure out everyone else's win probability, utility function and what they would do if they won. Human utility functions have diminishing returns, so there's opportunity for acausal trade. Human ancestry gives a common notion of fairness, so the bargaining problem is easier than with aliens.

Most of us care at least a little even about those who would take all for themselves, so instead of giving them the choice between none and a lot, we can give them the choice between some and a lot - the smaller their win probability, the smaller the gap can be while still incentivizing cooperation.

Therefore, the AI race game is not all or nothing. The more win probability lands on parties that can bargain properly, the less multiversal utility is burned.
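
As a concrete version of that incentive calculation (the concave utility, the normalization of the winner-takes-all prize to 1, and the small "mercy" share a defector would still receive are all my assumptions, not numbers from the thread), here is a sketch of why diminishing returns make the trade attractive and why cooperation is cheap to buy from parties with a small win probability:

```python
# Toy incentive check for the acausal bargain described above.
import math

def u(x):
    # Any strictly concave utility encodes diminishing returns; sqrt is just a stand-in.
    return math.sqrt(x)

def min_guaranteed_share(p, mercy=0.0):
    """Smallest guaranteed share c such that taking the deal beats gambling on winning:
    u(c) >= p*u(1) + (1-p)*u(mercy), where `mercy` is what a defector would still be
    given by winners who care about them."""
    target = p * u(1.0) + (1 - p) * u(mercy)
    return target ** 2  # inverse of u for u = sqrt; change this if you change u

for p in (0.5, 0.1, 0.01):
    # Diminishing returns: a guaranteed p-sized share beats a p-chance at everything.
    print(f"p={p}: u(p)={u(p):.3f} > p*u(1)={p * u(1.0):.3f}", end="; ")
    print(f"minimal share to buy cooperation: {min_guaranteed_share(p):.4f}",
          f"(with mercy=0.02: {min_guaranteed_share(p, mercy=0.02):.4f})")
```

In these toy numbers, the minimal share, and its margin over the mercy share, shrinks quickly as the win probability drops.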
