They will in fact stop you from taking fancy laser measuring kit to the top of the Shard: https://www.youtube.com/watch?v=ckcdqlo3pYc&t=792s.
I don't think that's the only reason - if I value something linearly, I still don't want to play a game that almost certainly bankrupts me.
I still think that's because you intuitively know that bankruptcy is worse-than-linearly bad for you. If your utility function were truly linear then it's true by definition that you would trade an arbitrary chance of going bankrupt for a tiny chance of a sufficiently large reward.
...I mean, that's not obvious - the Kelly criterion gives you, in the example with the game, E(money) = $240, compared to $246.61 with the strategy that maximises expected money.
It bankrupts you with probability 1 - 0.6^300, but in the other 0.6^300 of cases you get a sweet sweet $25 × 2^300. This nets you an expected $1.42 × 10^25.
Whereas Kelly betting (staking 2p - 1 = 20% of your bankroll each round) only has an expected value of $25 × (0.6×1.2 + 0.4×0.8)^300 = $3,220,637.15.
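If anyone wants to check those two numbers, here's a quick sanity check in Python (nothing assumed beyond the uncapped game as described):

```python
# Recompute the two expected values quoted above (uncapped game).
p, rounds, start = 0.6, 300, 25.0

# Bet everything each round: either win all 300 flips or go bankrupt.
ev_all_in = start * (2 * p) ** rounds        # 25 * 1.2^300

# Kelly: stake 2p - 1 = 20% of the bankroll each round.
f = 2 * p - 1
ev_kelly = start * (p * (1 + f) + (1 - p) * (1 - f)) ** rounds

print(f"{ev_all_in:.3e}")    # ~1.42e+25
print(f"{ev_kelly:,.2f}")    # ~3,220,637.15
```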
Obviously humans don't have linear utility functions, but my point is that the Kelly criterion still isn't the right answer when you make the assumptions more realistic. You actually have to do the calculation with the actual utility function.
The answer is that you bet approximately Kelly.
No, it isn't. Gwern never says that anywhere, and it's not true. This is a good example of what I'm saying.
For clarity, the game is this. You start with $25 and you can bet any multiple of $0.01 up to the amount you have. A coin is flipped with a 60/40 bias in your favour. If you win you double the amount you bet, otherwise you lose it. There is a cap of $250, so after each bet you lose any money over this amount (so in fact you should never make a bet that could take you over). This continues for 300 rounds.
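For anyone curious, the optimal-play value can be checked with a dynamic program. A minimal sketch, at $1 granularity rather than the game's $0.01 (so it slightly underestimates the true optimum, which should land near the $246.61 quoted above):

```python
# Expected final wealth under optimal play, by backward induction.
# value[m] = best expected final wealth holding $m with r rounds left.
CAP, ROUNDS, P = 250, 300, 0.6

value = [float(m) for m in range(CAP + 1)]   # no rounds left: keep $m
for _ in range(ROUNDS):
    new = []
    for m in range(CAP + 1):
        best = value[m]                      # betting $0 is allowed
        # Never bet past the cap: winnings above $250 are confiscated,
        # so such bets are dominated.
        for bet in range(1, min(m, CAP - m) + 1):
            ev = P * value[m + bet] + (1 - P) * value[m - bet]
            if ev > best:
                best = ev
        new.append(best)
    value = new

print(value[25])   # ~246.6 at this granularity
```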
Bo...
If Bob wants to maximise his money at the end, then he really should bet it all every round. I don't see why you would want to use Kelly rather than maximising expected utility. Not maximising expected utility means that you expect to get less utility.
Can you be more precise about the exact situation Bob is in? How many rounds will he get to play? Is he trying to maximise money, or trying to beat Alice? I doubt the Kelly criterion will actually be his optimal strategy.
I tend to view the golden ratio as the least irrational irrational number. It fills in the next gap after all the rational numbers. In the same way, 1/2 is the noninteger which shares the most algebraic properties with the integers, even though it's furthest from them in a metric sense.
Nice idea! We can show directly that each term provides information about the next.
The density function of the distribution of the fractional part in the continued fraction algorithm converges to 1/[(1+x) ln(2)] (it seems this is also called the Gauss-Kuzmin distribution, since the two are so closely associated). So we can directly calculate the probability of getting a coefficient of n by integrating this from 1/(n+1) to 1/n, which gives -lg(1-1/(n+1)^2) as you say above. But we can also calculate the probability of getting an n followed by an m, by int...
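The single-coefficient integral is easy to check numerically (assuming scipy is available):

```python
# Integrating the limiting density 1/((1+x) ln 2) from 1/(n+1) to 1/n
# should reproduce -lg(1 - 1/(n+1)^2).
from math import log, log2
from scipy.integrate import quad

density = lambda x: 1 / ((1 + x) * log(2))
for n in range(1, 6):
    numeric, _ = quad(density, 1 / (n + 1), 1 / n)
    closed = -log2(1 - 1 / (n + 1) ** 2)
    print(n, round(numeric, 12), round(closed, 12))  # columns agree
```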
I've only been on Mastodon a bit longer than the current Twitter immigrants, but as far as I know there's no norm against linking. But the server admins are all a bit stressed by the increased load. So I can understand why they'd be annoyed by any link that brought new users. I've been holding off on inviting new users to the instance I'm on, because the server is only just coping as it is.
Apologies for asking an object level question, but I probably have Covid and I'm in the UK which is about to experience a nasty heatwave. Do we have a Covid survival guide somewhere?
(EDIT: I lived lol)
Is there a way to alter the structure of a futarchy to make it follow a decision theory other than EDT?
Or is that still too closely tied to the explore-exploit paradigm?
Right. The setup for my problem is the same as the 'Bernoulli bandit', but I only care about the information and not the reward. All I see on that page is about exploration-exploitation.
What's the term for statistical problems that are like exploration-exploitation, but without the exploitation? I tried searching for 'exploration' but that wasn't it.
In particular, suppose I have a bunch of machines which each succeed or fail independently with a probability that is fixed separately for each machine. And suppose I can pick machines to sample to see if they succeed or fail. How do I sample them if I want to become 99% certain that I've found the best machine, while using as few samples as possible?
The difference with exploration-exploitat...
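To make the problem concrete, here's a minimal sketch of one natural approach: round-robin sampling with Hoeffding-style elimination. The function names and constants are mine, and aren't tuned to give exactly 99%:

```python
import math
import random

def find_best(sample, k, delta=0.01):
    """Return the index of the best machine with error probability
    roughly delta. sample(i) returns True/False for one pull of i."""
    wins = [0] * k
    pulls = [0] * k
    alive = list(range(k))
    t = 0
    while len(alive) > 1:
        t += 1
        for i in alive:
            wins[i] += sample(i)
            pulls[i] += 1
        # Hoeffding radius, union-bounded over machines and rounds.
        def radius(i):
            return math.sqrt(math.log(4 * k * t * t / delta) / (2 * pulls[i]))
        mean = {i: wins[i] / pulls[i] for i in alive}
        leader = max(alive, key=mean.get)
        # Drop machines whose upper bound falls below the leader's lower bound.
        alive = [i for i in alive
                 if mean[i] + radius(i) >= mean[leader] - radius(leader)]
    return alive[0]

# Example with three hypothetical machines:
ps = [0.3, 0.5, 0.6]
print(find_best(lambda i: random.random() < ps[i], len(ps)))  # almost always 2
```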
The problem is that this measures their amount of knowledge about the questions as well as their calibration.
My model would be as follows. For a fixed source of questions, each person has a distribution describing how much they know about the questions. It describes how likely it is that a given question is one they should say p on. Each person also has a calibration function f, such that when they should say p they instead say f(p). Then by assigning priors over the spaces of these distributions and calibration functions, and applying Bayes' rule we get a...
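Even without the rest of that calculation, a drastically reduced version can be sketched: collapse the calibration function to a one-parameter family (my choice, purely for illustration) and put a grid prior on the parameter:

```python
import numpy as np

def f(p, t):
    # Hypothetical one-parameter calibration family: logit(f(p)) = t * logit(p).
    return p ** t / (p ** t + (1 - p) ** t)

ts = np.linspace(0.2, 3.0, 57)     # grid over the calibration parameter
post = np.ones_like(ts)            # flat prior on the grid

# One forecaster's (stated probability, did it happen?) record.
data = [(0.9, True), (0.9, True), (0.9, False), (0.8, True)]

for q, happened in data:
    # If they state q, the true probability is f^{-1}(q), which for
    # this family is just f(q, 1/t).
    p_true = f(q, 1 / ts)
    post *= p_true if happened else 1 - p_true
post /= post.sum()

print(ts[np.argmax(post)])         # most plausible calibration parameter
```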
I gave some more examples here: https://www.reddit.com/r/math/comments/9gyh3a/if_you_know_the_factors_of_an_integer_x_can_you/e68u5x4/
You just have to carefully do the algebra to get an inductive argument. The fact that the last digit is 5 is used directly.
Suppose n is a number that ends in 5, and such that the last N digits stay the same when you square it. We want to prove that the last N+1 digits stay the same when you square n^2.
We can write n = m*10^N + p, where p has N digits, and so n^2 = m^2*10^(2N) + 2mp*10^N + p^2. Note that since 2p ends in 0, the term 2mp*10^N is actually divisible by 10^(N+1). Then since the two larger terms are divisible by 10^(N+1), n^2 agrees with p^2 on its last N+1 digits...
If we start with 5 and keep squaring we get 5, 25, 625, 390625, 152587890625.... Note how some of the end digits are staying the same each time. If we continue this process we get a number ...8212890625 which is a solution of x^2 = x. We get another solution by subtracting this from 1 to get ...1787109376.
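You can watch the convergence in a few lines of Python, working mod 10^k:

```python
# Repeatedly square 5 modulo 10^k; the last digits stabilise.
k = 10
x = 5
for _ in range(20):
    x = x * x % 10 ** k

print(x)                   # 8212890625
print(x * x % 10 ** k)     # 8212890625 again: x solves x^2 = x
print((1 - x) % 10 ** k)   # 1787109376, the other nontrivial solution
```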
If not, then we'll essentially need another way to define the determinant for projective modules, because that's equivalent to defining an alternating map?
There are a lot of cases in mathematics where two notions can be stated in terms of each other, but it doesn't tell us which order to define things in.
The only other thought I have is that I have to use the fact that M is projective and finitely generated. This is equivalent to M being dualisable. So the definition is likely to use the dual M* somewhere.
I'm curious about this. I can see a reasonable way to define Λ^top(M) in terms of sheaves of modules over Spec(R): over each connected component, M has some constant dimension n, so we just let Λ^top(M) be Λ^n(M) over that component.
If we call this construction then the construction I'm thinking of is . Note that the result is locally 1-dimensional, so my construction is locally isomorphic to yours but globally twisted. It depends on M via more than just its...
I agree that 'credence' and 'frequency' are different things. But round here the word 'probability' does refer to credence rather than frequency. This isn't a mistake; it's just the way we're using words.
My intuition for exp is that it tells you how an infinitesimal change accumulates over finite time (think compound interest). So the above expression is equivalent to det(I + εA) = 1 + ε·tr(A) + O(ε^2). Thus we should think 'If I perturb the identity matrix, then the amount by which the unit cube grows is proportional to the extent to which each vector is being stretched in the direction it was already pointing'.
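A quick numerical check of both the finite and infinitesimal versions (assuming numpy and scipy are available):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Finite version: det(exp(A)) = exp(tr(A)).
print(np.linalg.det(expm(A)), np.exp(np.trace(A)))    # should match

# Infinitesimal version: det(I + eps*A) = 1 + eps*tr(A) + O(eps^2).
eps = 1e-6
print(np.linalg.det(np.eye(4) + eps * A), 1 + eps * np.trace(A))
```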
This is close to one thing I've been thinking about myself. The determinant is well defined for endomorphisms on finitely-generated projective modules over any ring. But the 'top exterior power' definition doesn't work there because such things do not have a dimension. There are two ways I've seen for nevertheless defining the determinant.
I think the determinant is more mathematically fundamental than the concept of volume. It just seems the other way around because we use volumes in everyday life.
https://www.lesswrong.com/posts/AAqTP6Q5aeWnoAYr4/?commentId=WJ5hegYjp98C4hcRt
I don't dispute what you say. I just suggest that the confusing term "in the worst case" be replaced by the more accurate phrase "supposing that the environment is an adversarial superintelligence who can perfectly read all of your mind except bits designated 'random'".
In this case P is the cumulative distribution function, so it has to approach 1 at infinity, rather than the area under the curve being 1. An example would be 1/(1+exp(-x)).
A simple way to do this is for ROB to output the pair of integers {n, n+1} with probability (1/K)((K-1)/(K+1))^|n|, where K is some large number. Then even if you know ROB's strategy, the best probability you have of winning is 1/2 + 1/(2K).
If you sample an event N times, the variance of your estimate of its probability is about 1/N, so the standard error is about 1/√N. So if we pick K >> √N, the 1/(2K) advantage is buried in the sampling noise, and our probability of success will be statistically indistinguishable from 1/2.
The only difficulty is implementing code to sample from a geometric distribution with a parameter so close to 1.
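That difficulty goes away if you invert the CDF instead of simulating coin flips; log1p keeps the precision when the parameter is close to 1. A sketch (the function names are mine):

```python
import math
import random

def sample_n(K):
    """Sample n in Z with P(n) = (1/K) * ((K-1)/(K+1))**abs(n)."""
    r = (K - 1) / (K + 1)
    if random.random() < 1 / K:          # P(n = 0) = (1-r)/(1+r) = 1/K
        return 0
    sign = 1 if random.random() < 0.5 else -1
    # Geometric tail by inverting the CDF: no loop, and log1p avoids
    # catastrophic cancellation in log(r) when r is very close to 1.
    log_r = math.log1p(-2 / (K + 1))     # = log((K-1)/(K+1))
    m = 1 + math.floor(math.log(1 - random.random()) / log_r)
    return sign * m

def rob_pair(K=10**9):
    n = sample_n(K)
    return (n, n + 1)
```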
The original problem doesn't say that ROB has access to your algorithm, or that ROB wants you to lose.
Note that Eliezer believes the opposite: https://www.lesswrong.com/posts/GYuKqAL95eaWTDje5/worse-than-random.
Right, I should have chosen a more Bayesian way to say it, like 'succeeds with probability greater than 1/2'.
The intended answer for this problem is the Frequentist Heresy in which ROB's decisions are treated as nonrandom even though they are unknown, while the output of our own RNG is treated as random, even though we know exactly what it is, because it was the output of some 'random process'.
Instead, use the Bayesian method. Let P({a,b}) be your prior for ROB's choice of numbers. Let x be the number TABI gives you. Compute P({a,b}|x) using Bayes' Theorem. From this you can calculate P(x=max(a,b)|x). Say that you have the highest number if this is over 1/2.
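A toy version of this recipe, with a made-up discrete prior (ten adjacent-integer pairs, uniform):

```python
from fractions import Fraction

# Hypothetical prior: ROB picks the pair {a, a+1} uniformly, a = 0..9.
prior = {(a, a + 1): Fraction(1, 10) for a in range(10)}

def p_max(x):
    """P(x = max(a,b) | TABI showed us x)."""
    # The 1/2 chance of being shown either element of the pair is the
    # same for all pairs, so it cancels out of Bayes' rule.
    post = {pair: pr for pair, pr in prior.items() if x in pair}
    total = sum(post.values())
    return sum(pr for pair, pr in post.items() if x == max(pair)) / total

print(p_max(0))    # 0   -- x = 0 is certainly the smaller number
print(p_max(5))    # 1/2 -- guess either way
print(p_max(10))   # 1   -- x = 10 is certainly the larger number
```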
Some related discussions: 1. https://www.conwaylife.com/forums/viewtopic.php?f=2&t=979 2. https://www.conwaylife.com/forums/viewtopic.php?f=7&t=2877 3. https://www.conwaylife.com/forums/viewtopic.php?p=86140#p86140
My own thoughts.
Patterns in GoL are generally not robust. Typically changing anything will cause the whole pattern to disintegrate in a catastrophic explosion and revert to the usual 'ash' of randomly placed small still lifes and oscillators along with some escaping gliders.
The pattern Eater 2 can eat gliders along 4 adjacent lanes.
Yes, I'm a big fan of the Entropic Uncertainty Principle. One thing to note about it is that the definition of entropy only uses the measure space structure of the reals, whereas the definition of variance uses the metric on the reals as well. So Heisenberg's principle uses more structure to say less stuff. And it's not like the extra structure is merely redundant either. You can say useful stuff using the metric structure, like Hardy's Uncertainty Principle. So Heisenberg's version is taking useful information and then just throwing it away.
I'd almos...
A good source for the technology available in the Game of Life is the draft of Nathaniel Johnston and Dave Greene's new book "Conway’s Game of Life: Mathematics and Construction".
If the probabilities of catching COVID on two occasions are x and y, then the probability of catching it at least once is 1 - (1 - x)(1 - y), which equals x + y - xy. So if x and y are large enough for xy to be significant, then splitting is better, because even though catching it the second time will increase your viral load, it's not going to make it twice as bad as it already was.
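For concreteness, with two hypothetical 30% exposures:

```python
x = y = 0.30
print(1 - (1 - x) * (1 - y))   # 0.51: chance of at least one catch
print(x + y)                   # 0.60: the naive sum, off by xy = 0.09
```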
The link still works for me. Perhaps you must first become a member of that discord? Invite link: https://discord.gg/nZ9JV5Be (valid for 7 days)
The weird thing is that there are two metrics involved: information can propagate through a nonempty universe at 1 cell per generation in the sense of the l_infinity metric, but it can only propagate into empty space at 1/2 a cell per generation in the sense of the l_1 metric.
You're probably right, but I can think of the following points.
Its rule is more complicated than Life's, so it's worse as an example of emergent complexity from simple rules (which was Conway's original motivation).
It's also a harder setting in which to demonstrate self-replication. Any self-replicator in Critters would have to be fed with some food source.
Yeah, although probably you'd want to include a 'buffer' at the edge of the region to protect the entity from gliders thrown out from the surroundings. A 1,000,000 cell thick border filled randomly with blocks at 0.1% density would do the job.
This is very much a heuristic, but good enough in this case.
Suppose we want to know how many times we expect to see a pattern with n cells in a random field of area A. Ignoring edge effects, there are A different offsets at which the pattern could appear. Each of these has a 1/2^n chance of being the pattern. So we expect at least one copy of the pattern if n < log_2(A).
In this case the area is (10^60)^2, so we expect patterns of size up to 398.631. In other words, we expect the ash to contain any pattern you can fit in a 20 by 20 box.
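The arithmetic, for anyone who wants it:

```python
from math import log2

area = (10 ** 60) ** 2          # cells in the random field
print(log2(area))               # 398.63...: largest pattern size we expect
print(log2(area) <= 20 * 20)    # True: such a pattern fits in a 20x20 box
```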
The glider moves at c/4 diagonally, while the c/2 ships move horizontally. A c/2 ship moving right and then down will reach its destination at the same time the c/4 glider does. In fact, gliders travel at the empty space speed limit.
Most glider guns in random ash will immediately be destroyed by the chaos they cause. Those that survive will eventually have their glider stream blocked by an eater, which neutralises them. But yes, such things could pose a nasty surprise for any AI trying to clean up the ash. When it removes the eater it will suddenly have a glider stream coming towards it! But this doesn't prove it's impossible to clear up the ash.
Making a 'ship sensor' is tricky. If it collides with something unexpected it will create more chaos that you'll have to clear up.
This sounds like you're treating the area as empty space, whereas the OP specifies that it's filled randomly outside the area where our AI starts.
My understanding was that we just want to succeed with high probability. The vast majority of configurations will not contain enemy AIs.
Yup, you want a bi-interpretation:
Bi-interpretation in weak set theories