## LessWrong

Donald Hobson

MMath, Cambridge. Currently studying postgrad at Edinburgh. D.P.Hobson@sms.ed.ac.uk

# Sequences

Logical Counterfactuals and Proposition graphs
Assorted Maths

It’s not economically inefficient for a UBI to reduce recipient’s employment

I don't claim that it's impossible. There are plenty of healthy people with jobs.

The question is, how high is getting fit on the person's list of important things to do?

It depends on how long the hours are, on the commute, and on other demands on their time.

It’s not economically inefficient for a UBI to reduce recipient’s employment

Another question is "what counts as work?" What are they doing instead of work?

Suppose a group of people are all given UBI, and they all quit their jobs stacking shelves.

They go on to do the following instead.

1. start writing a novel
2. look after their children (instead of using a nursery)
3. look after their ageing parents (instead of a nursing home)
4. learn how to play the guitar
5. make their (publicly visible) garden a spectacular display of flowers.
6. take (unpaid) positions on the local community council and the school board of governors
7. help out at the local donkey sanctuary
8. get themselves fit and healthy (exercise time + cooking-healthy-food time)

There are a variety of tasks like this: beneficial to society in some way, compared to sitting doing nothing, but not the prototypical concept of "work".

I would expect a significant proportion of people on UBI to do something in this category.

Do we say that UBI is discouraging work, and that these people are having positive effects by not working? Or do we say that they are now doing unpaid work?

Of course, the answer to these questions doesn't change reality, only how we describe it.

It’s not economically inefficient for a UBI to reduce recipient’s employment

If you receive $100 for work, that means you have already provided at least $100 in value to society. That society might gain additional benefit from how you spend your money is merely coincidental.

No, it means that there is at least one person prepared to pay $100 for the work. If you are manufacturing weapons that end up in the wrong hands, you might be doing quite a lot of harm to society overall. Your employer gains at least $100 in value, but the externalities could be anything.

Comparing Covid and Tobacco

The important number is not how many people covid did kill, but how many it would have killed if we hadn't tried to stop it.

Extreme example: suppose a meteor is headed for earth. We divert it at great cost and effort. Then people come along saying, look how much we spent on diverting the meteor, and it didn't kill anyone. The important question is how many people an undiverted meteor would have killed.

Spend twice as much effort every time you attempt to solve a problem

In computer science, this is a standard strategy for allocating blocks of memory.

Suppose you have some stream of data that will end at some point. This could come from a user input or a computation that you don't want to repeat. You want to store all the results in a contiguous block of memory. You can ask for a block of memory of any size you want. The strategy here is that whenever you run out of space, you ask for a block that's twice as big and move all your data to it.
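The amortized cost of this strategy can be seen in a small sketch. This is a minimal Python illustration (the `DoublingBuffer` name and the `copies` counter are just for demonstration), not how any particular allocator is implemented:

```python
# Growth-doubling strategy for a contiguous buffer: whenever the buffer
# is full, allocate one twice as large and move all the data across.
# Individual resizes are expensive, but the total number of element
# copies for n appends is bounded by about 2n (amortized O(1) per append).

class DoublingBuffer:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity
        self.copies = 0  # counts element moves, to show the amortized cost

    def append(self, item):
        if self.size == self.capacity:
            # Out of space: ask for a block twice as big and move the data.
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
                self.copies += 1
            self.data = new_data
        self.data[self.size] = item
        self.size += 1

buf = DoublingBuffer()
for i in range(1000):
    buf.append(i)
print(buf.size, buf.copies)  # 1000 appends cost 1023 copies in total
```

Doubling trades some wasted capacity (at most half the block is empty) for far fewer moves; growing by a fixed amount instead would cost O(n^2) copies overall.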

Examples of Measures

There is the Cantor distribution.

https://en.wikipedia.org/wiki/Cantor_distribution

One way of getting it is to take a coin, write 0 on one side and 2 on the other, and flip it infinitely many times. Reading the flips off as digits gives you a number in ternary (base 3).

If you have a set A to measure, then μ(A) = P(X ∈ A), where X is the random number constructed above.
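The construction is easy to simulate. A quick Python sketch (truncating the infinite coin-flip sequence at a finite number of digits, which approximates a sample to within 3^-digits):

```python
import random

# Approximate a sample from the Cantor distribution: each ternary digit
# is an independent coin flip labelled 0 or 2, and the sample is the
# base-3 number 0.d1 d2 d3...  Truncating at `digits` flips gives an
# approximation accurate to 3**-digits.

def cantor_sample(digits=40, rng=random):
    x = 0.0
    scale = 1.0
    for _ in range(digits):
        scale /= 3.0
        x += rng.choice((0, 2)) * scale
    return x

samples = [cantor_sample() for _ in range(10_000)]
# Every sample lies in [0, 1], and by the 0/2 symmetry the mean is 1/2.
print(min(samples), max(samples), sum(samples) / len(samples))
```

Every sample lands in the Cantor set (up to truncation), even though the Cantor set has Lebesgue measure zero: the distribution has no density and no point masses.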

There are also measurable cardinals. https://en.wikipedia.org/wiki/Measurable_cardinal

These are cardinals big enough to have a {0,1}-valued measure on their powerset.

ZFC can't prove whether or not they exist. If you know what ultrafilters are, these are ultrafilters that meet the stronger condition of being closed under countable intersection, not just finite intersection.

On Arguments for God

In fact, it'd actually be suspicious if all forty of these arguments came out against God. Surely we should expect the advantage to belong to the deists in at least one or two?

Good strong arguments are exactly the arguments you shouldn't expect to see for a position that is false.

You can construct arguments that would technically be large Bayesian updates, if you ignored the cherry-picking. You pray for heads and toss a coin 200 times. If you focus on just the 100 times it lands heads, you have an evidence factor of nearly 2^100:1 in favour of a god that grants prayers about coin-flips over randomness. Of course, you aren't allowed to select a subset of the evidence to update on and ignore the rest.
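The arithmetic is worth spelling out. A toy Python sketch (modelling the hypothetical prayer-granting god as making every flip come up heads, a simplifying assumption):

```python
# Bayes-factor calculation showing how cherry-picking evidence
# manufactures a huge update. Hypotheses: a fair coin, versus a
# (hypothetical) god modelled as granting every coin-flip prayer.
p_heads_fair = 0.5
p_heads_god = 1.0  # simplifying assumption: every prayer for heads works

# Cherry-picked: update only on the 100 heads, ignore the 100 tails.
cherry_factor = (p_heads_god / p_heads_fair) ** 100  # 2**100 : 1 "for" god

# Honest: update on all 200 flips. This god hypothesis assigns
# probability zero to any tails at all, so it is refuted outright.
likelihood_god = p_heads_god ** 100 * (1 - p_heads_god) ** 100
likelihood_fair = 0.5 ** 200

print(cherry_factor)   # about 1.27e30
print(likelihood_god)  # 0.0 -- the full evidence destroys the hypothesis
```

The same 200 flips support the god hypothesis astronomically or rule it out completely, depending only on which subset you look at, which is why selecting the evidence is forbidden.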

Once you realize why these sorts of arguments fail, the only arguments left for god are ones that make some sort of mistake.

Reality is very different from notions of god. There are good reasons not to expect any fully decent arguments for god to exist, and no reason half-decent arguments must exist.

Misalignment and misuse: whose values are manifest?

I think that you have a 4th failure mode: Moloch.

Model Depth as Panacea and Obfuscator

and halve it for every tails in the first two coins or tails in the second two.

Is this an error? Should that read heads?

Any work on honeypots (to detect treacherous turn attempts)?

There are several reasons that make this not a brilliant research opportunity. Firstly, what is and is not a honeypot is sensitively dependent on the AI's capabilities and situation. There is no such thing as a one-size-fits-all honeypot.

Say you plug in a device that pretends to provide internet access, but really just monitors for access attempts. You don't know what the AI's escape plan and capabilities are. Will it brainwash the human operators? Abuse its chip to hack physics? Not every escape plan will use the fake honeypot hole rather than one of the real holes. (And if the fake security hole is huge, blatant, and has no reason to be there, the AI will be suspicious.)

Secondly, for it to have a chance of working, we have to hide the existence of the honeypot, and even then the AI will suspect that making a honeypot is the sort of thing we might do.

Making a honeypot is another way of challenging the AI to a contest of wits, which is generally a bad idea.