
How do you find doing problems/exercises from these textbooks when you have prepared using Anki? And are you finding that earlier material seems obvious when reread?

Sorry if this is all coming across as critical and/or doubtful. I've tried to use Anki for theory before and failed dismally; the success you claim is very exciting and I'm trying to understand where I was going wrong.

So far I think I have focused too much on creating cards that can be memorised exactly (formulae and what-not), rather than having general concept cards that are used to develop fluency and familiarity (and later understanding), which sounds like what you are doing.

What process do you use to review cards? Do you look at a prompt until you can say exactly what is on the card? Or, if not verbatim, what tolerance do you have for missing details/small mistakes?

I'm a bit skeptical of what you claim because it is so different from my approach to becoming proficient at a piece of mathematics: usually I will work through progressively more complicated problems in excruciating detail. I don't claim that this is the most efficient method, and it would be nice to find a more efficient one, just that memorization methods have usually led to me agreeing with the mathematics rather than really understanding it.

But maybe you are using Anki differently from how I expect. How exactly do you review cards? Do you look at a prompt until you can say verbatim what is on the card? Or, if not verbatim, what tolerance do you have for missing details/small mistakes?

The attribution I have seen for the bull market is that investors are bullish on a return to normal via widespread vaccine distribution. If that is the case, it follows that the current market is highly dependent on investor sentiment, and that a rapid, negative change in the short-to-moderate-term outlook (due to the rise of a new, more problematic variant) would send the market down.

However, the above line of logic is easy to follow, and any investor who made or lost a lot of money last March will be on the lookout for the same thing to happen again. So the chance of a downturn could end up being overpriced by the market as people try to capitalize on a crash. Upside bets might end up being really cheap. The scenario in my head is something like this: the new strain spreads rapidly and a lot of panic spreads with it, but then it becomes apparent that the strain is not as harmful, or some good vaccine-related news comes out (I'm no expert on what the good news could look like, but I'm sure there is something), and the market recovers or rallies. That is why I thought a bet on vol could be good: if you buy a straddle (or strangle), you win on either a big upside or downside move.
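
To make the straddle payoff concrete, here is a toy sketch in Python; the strike and premiums are made-up numbers, purely for illustration:

```python
# Toy payoff for a long straddle: buy a call and a put at the same strike.
# All numbers below are hypothetical and for illustration only.

def straddle_pnl(spot_at_expiry, strike, call_premium, put_premium):
    call = max(spot_at_expiry - strike, 0.0)   # call payoff at expiry
    put = max(strike - spot_at_expiry, 0.0)    # put payoff at expiry
    return call + put - (call_premium + put_premium)

# Hypothetical example: strike 100, each leg costs 5, so breakevens are 90/110.
for s in (80, 90, 100, 110, 120):
    print(s, straddle_pnl(s, strike=100, call_premium=5.0, put_premium=5.0))
# Big moves in either direction pay off; small moves lose both premiums.
```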

Answer by ErrethAkbe · Dec 25, 2020

One could just wait until 'the market' (pick an ETF on your favourite index) drops by x%, buy back in (or buy calls), and cash out ~a year or so later. This would have been a good trade in late March/early April, and it has a couple of pros: limited downside, relatively unsophisticated (i.e. easy to execute and plan), and clear entry and exit signals. The cons are a lack of precision (I suspect a more targeted bet on e.g. vol could make more money, maybe buy the ATM straddle?) and the low leverage.
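
For concreteness, a minimal sketch of that entry/exit rule; the 20% threshold, the toy price series, and the roughly-one-year hold are all hypothetical choices for illustration:

```python
# Buy after an x% drawdown from the running peak, sell roughly a year later.
# Prices here are a toy list; real ETF data would be used in practice.

def dip_trade(prices, drop_pct=0.20, hold_days=252):
    peak = prices[0]
    for t, p in enumerate(prices):
        peak = max(peak, p)
        if p <= peak * (1 - drop_pct):            # entry signal: x% drawdown
            exit_t = min(t + hold_days, len(prices) - 1)
            return prices[exit_t] / p - 1.0       # return over the hold
    return 0.0  # signal never triggered; stayed in cash

print(dip_trade([100, 105, 84, 90, 100, 112], hold_days=3))
# Enters at 84 (20% below the 105 peak) and exits three steps later at 112.
```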

There is a rich field of research on statistical laws (such as the CLT) for deterministic systems, which I think might interest various commenters. Here one starts with a randomly chosen point x in some state space X, some dynamics T on X, and a real (or complex) valued observable function g : X -> R, and considers the statistics of Y_n = g(T^n(x)) for n >= 0 (i.e. we start at x, apply T some number of times, and then take a 'measurement' of the system via g). In some interesting circumstances these (completely deterministic) systems satisfy a CLT. Specifically, if we set S_n to be the sum of Y_0, ..., Y_{n-1}, then for e.g. expanding or hyperbolic T one can show that S_n is asymptotically a standard normal RV after appropriate scaling/translation to normalise. The key technical hypothesis is that an invariant measure mu for T exists such that the mu-correlations between Y_0 and Y_k decay summably fast in k.
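
As a concrete (and entirely illustrative) numerical sketch: take the logistic map T(x) = 4x(1-x), which is smoothly conjugate to the doubling map, and the observable g(x) = x - 1/2, which has mean zero with respect to the invariant measure. Both choices are mine, picked only because they are easy to simulate:

```python
import numpy as np

# Sketch: Birkhoff sums S_n of g(x) = x - 1/2 under the logistic map
# T(x) = 4x(1-x), rescaled by sqrt(n), look Gaussian even though T is
# completely deterministic. Map and observable are illustrative choices.

rng = np.random.default_rng(0)

def rescaled_birkhoff_sums(n_iter=1000, n_samples=20000):
    x = rng.random(n_samples)      # x_0 drawn uniformly from [0, 1)
    s = np.zeros(n_samples)
    for _ in range(n_iter):
        s += x - 0.5               # accumulate Y_k = g(T^k(x))
        x = 4.0 * x * (1.0 - x)    # apply the dynamics T
    return s / np.sqrt(n_iter)     # CLT scaling

samples = rescaled_birkhoff_sums()
print(samples.mean(), samples.std())
# A histogram of `samples` should look like a centred normal distribution.
```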

This also provides an example of a pathological case for the CLT. Specifically, if g is of the form g(x) = h(T(x)) - h(x) (a coboundary) with h uniformly bounded, then by telescoping the terms in S_n we see that S_n = h(T^n(x)) - h(x), which is uniformly bounded in n, so when one divides by sqrt(n) the only possible limit is 0. Thus the limiting distribution is a Dirac delta at 0.
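
The degenerate case is just as easy to see numerically (same illustrative map as above; h(x) = sin(2*pi*x) is an arbitrary bounded choice):

```python
import numpy as np

# Coboundary case: with g = h o T - h the sums telescope to
# S_n = h(T^n(x)) - h(x), which stays bounded, so S_n / sqrt(n) -> 0.

rng = np.random.default_rng(0)

def rescaled_coboundary_sums(n_iter=1000, n_samples=20000):
    x = rng.random(n_samples)
    s = np.zeros(n_samples)
    for _ in range(n_iter):
        tx = 4.0 * x * (1.0 - x)
        s += np.sin(2.0 * np.pi * tx) - np.sin(2.0 * np.pi * x)  # Y_k
        x = tx
    return s / np.sqrt(n_iter)

print(rescaled_coboundary_sums().std())  # tends to 0 as n_iter grows
```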

Regarding the topic of your last paragraph (how we can have choice in a deterministic universe): this is something Gary Drescher discusses extensively in his book Good and Real.

Firstly, he points out that determinism does not imply that choice is necessarily futile. Our 'choices' only happen because we engage in some kind of decision-making process. Even though the choice may be fixed in advance, it is still only taken because we engage in this process.

Additionally, Drescher proposes the notion of a subjunctive means-end link (a means-end link is a method of identifying what is a means to a particular end), wherein one can act for the sake of what would have to be the case if one took a particular action. For example, in Newcomb's problem one can pick just the single box because it would then have to be the case that the big box contains a million.

Putting these two things together might help make sense of how our actions affect these kinds of thought experiments.

I don't think it's a fair deduction to conclude that Goldbach's conjecture is "probably true" based on an estimate of the measure (or probability) of the set of possible counterexamples being small. The conjecture is either true or false, but more to the point, I think you are using the words probability and probable in two different ways (the measure-theoretic sense, and the sense of uncertainty about the truth value of a statement), which obfuscates (at least to me) what exactly the conclusion of your argument is.

There is of course a case to be made about whether it matters if Goldbach's conjecture should be considered true when the first counterexample is larger than any number that could plausibly manifest in physical reality. Maybe this is what you were getting at; I don't really have a strong or well-thought-out opinion either way on this.

Lastly, I wonder whether there are examples of heuristic calculations which make the wrong prediction about the truth value of the conjecture to which they pertain. I'm spitballing here, but it would be interesting to see what the heuristics for Fermat's Last Theorem say about Euler's sum of powers conjecture (of which Fermat's Last Theorem is the k = 2 case), since we know that conjecture is false for k = 4. More specifically, how can we tell a good heuristic from a bad one? I'm not sure, and I also don't mean to imply that heuristics are useless; rather, maybe they are useful because they (i) give one an idea of whether to try to prove something or look for a counterexample, and (ii) give a rough idea of why something should be true or false, and what direction a proof should go in (e.g. for Goldbach's conjecture, it seems like one needs precise statements about how the primes behave like random numbers; see the sketch below).
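
For what it's worth, here is a crude numerical sketch of the "primes as random numbers" heuristic for Goldbach, my own illustration rather than anyone's published calculation: if each m near n were "prime with probability 1/log(m)", an even n would have on the order of n / (2 log^2 n) representations as a sum of two primes. This deliberately ignores the Hardy-Littlewood correction factors, so expect the right order of magnitude, not the right constant:

```python
from math import log

# Count actual representations of an even n as p + q with p <= q prime,
# and compare with the naive random-model prediction n / (2 * log(n)**2).

def prime_flags(limit):
    flags = [False, False] + [True] * (limit - 1)
    for i in range(2, int(limit ** 0.5) + 1):
        if flags[i]:
            flags[i * i :: i] = [False] * len(flags[i * i :: i])
    return flags

N = 100_000
is_prime = prime_flags(N)

for n in (1_000, 10_000, 100_000):
    actual = sum(1 for p in range(2, n // 2 + 1)
                 if is_prime[p] and is_prime[n - p])
    predicted = n / (2 * log(n) ** 2)
    print(n, actual, round(predicted))
# The counts grow roughly as predicted, though the constant is off:
# exactly the kind of gap the singular-series corrections are meant to fill.
```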