All of rxs's Comments + Replies

I'm a bit confused about the Deribit trade. I can see that you can hedge your position with this trade, but I don't understand how you get the return?

The futures price will converge to the spot price as expiration draws near, but this is not necessarily the spot price you paid... I must be missing something... Any pointer?

The basic idea is this. Let's say you buy a bitcoin at 23k USD and sell a BTC futures contract for 25k. At the expiration date (or sooner) you will get 25k but will have to hand over the bitcoin you paid 23k for. No matter the spot price at that point, you will still have made 2k (minus fees). If bitcoin has gone up to 30k, you are giving away an asset worth 30k in return for 25k, but you still made a profit since you bought it for 23k. But be aware that the high bitcoin volatility can eat your margin account.
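The arithmetic above can be checked with a small sketch (function names are made up for illustration): the spot leg and the short futures leg move in opposite directions, so the price at expiry cancels out and only the basis you locked in remains.

```python
def legs_pnl(spot_buy, futures_sell, spot_at_expiry, fees=0.0):
    """P&L of a cash-and-carry trade: buy spot, sell the futures contract."""
    spot_leg = spot_at_expiry - spot_buy          # gain/loss on the bitcoin held
    futures_leg = futures_sell - spot_at_expiry   # gain/loss on the short future
    return spot_leg + futures_leg - fees

# The 23k/25k example: the profit is 2k wherever BTC ends up at expiry.
for expiry_price in (10_000, 23_000, 30_000):
    assert legs_pnl(23_000, 25_000, expiry_price) == 2_000
```

Note that `spot_at_expiry` appears once with each sign, which is exactly why the position is hedged: only fees and the margin requirements along the way can change the outcome.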

Is there an alternative for private predictions? I'd like to have all the nice goodies like updatable predictions in SciCast/Metaculus, but for private stuff?

Alternative question: Is there an offline version of PredictionBook (command line or GUI)?

For mobile, there's LW Predictions on Android.
You can set PB predictions to be private. Of course, this doesn't guarantee privacy, since there are so many ways to hack websites, and PB is not the best maintained codebase nor has it ever been audited... You could encrypt your private predictions, which would offer security but cost you the reminders and scoring. I don't know of any offline CLI versions, but the core functionality is pretty trivial, so you could hack one up easily.
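To give a sense of how trivial the core functionality is, here is a minimal sketch of an offline tracker (all names hypothetical, not PredictionBook's actual code): record calibrated probabilities against binary outcomes and compute the Brier score.

```python
predictions = []  # list of (probability_assigned, outcome), outcome in {0, 1}

def record(probability, outcome):
    """Log one resolved prediction."""
    predictions.append((probability, outcome))

def brier_score():
    """Mean squared error of probabilities vs. outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

record(0.8, 1)  # predicted 80%, it happened
record(0.3, 0)  # predicted 30%, it didn't
print(round(brier_score(), 3))  # 0.065
```

Persistence, reminders, and due dates are the only real extra work; the scoring itself is a few lines.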

Thanks, tried that. Not sure it worked, as I didn't learn anything concrete. We did spend 30 minutes in discussion, though (which he didn't need to do, as there was no further value he could extract from me).

Oh well, such is life...

If he's a headhunter, then he might value the relationship with you so that he can call you up when he has another job.

Any tips on eliciting good, honest personal feedback? I just got a rejection from a position I wanted and will have a call with the headhunter tomorrow. I'd like to extract some useful information out of it. Any tips on good question formulations?

E.g. in a survey, instead of asking "Do you use X?" I ask "In the past 3 months, how many times did you use X?" to get a less biased answer.

Any good questions/ideas?

The first answer here is pretty good, though it doesn't quite apply to my situation: (read more)

Headhunters will rarely be honest about this. I always recommend to clients that they ask for "brutal feedback" instead of just "feedback" to make sure they're getting good responses, but even so, it's the rare manager who will be.

New papers by Jan Leike and Marcus Hutter:

Solomonoff Induction Violates Nicod's Criterion

On the Computability of Solomonoff Induction and Knowledge-Seeking

Is there a reason to take magnesium citrate at night and not in the morning?

It helps with sleep, so I prefer to take it right before bed.

I suppose you've already checked the usual sources like Coursera, Udacity, YouTube courses, etc.? "Medicine" is extremely broad, but you can find some interesting intro courses to some of its aspects, e.g.:

Just some more general courses that sound interesting/useful:

Clinical Terminology for International and U.S. Students

Understanding Research: An Overview for Health Professionals (looks extremely useful!) (read more)

Those are some very good tips, thanks!

Reposting for visibility from the previous open thread as I posted on the last day of it (will not be reposting this anymore):

Speed reading doesn't register many hits here, but in a recent thread on subvocalization there are claims of speeds well above 500 WPM.

My standard reading speed is about 200 WPM (based on my eReader statistics, varies by content), I can push myself to maybe 240 but it is not enjoyable (I wouldn't read fiction at this speed) and 450-500 WPM with RSVP.

My aim this year is to get myself at 500+ WPM base (i.e. usable also for leisure rea... (read more)

I use Acceleread. It's an app for the iPhone and iPad, and very user friendly, with 10 minute lessons divided into 2 minute segments.
In my experience, subvocalization doesn't become a barrier until you hit maybe 900-1000 wpm. I still subvocalize, and I read at about 800 wpm with appropriate software and 500 wpm on dead trees, so it's definitely achievable. Over the span of several weeks, I increased my speed from ~250 wpm by spending 30 minutes a day practicing the techniques from Matt Fallshaw's presentation at the Effective Altruism Summit. Unfortunately, my notes are about 3000 miles away, right now.
I just read a lot. No system. Also, I don't normally read at 600 wpm - that was approaching the limit where I don't need to stop and think about what I'm reading, only stopping to consciously note and identify each individual word. On, say, a LW comment, where I actually need to think at least a little? Hmm. Heh. It came out as 550 wpm, not a big drop. Trying a harder one? 490.

Speed reading doesn't register many hits here, but in a recent thread on subvocalization there are claims of speeds well above 500 WPM.

My standard reading speed is about 200 WPM (based on my eReader statistics, varies by content), I can push myself to maybe 240 but it is not enjoyable (I wouldn't read fiction at this speed) and 450-500 WPM with RSVP.

My aim this year is to get myself at 500+ WPM base (i.e. usable also for leisure reading and without RSVP). Is this even possible? Claims seem to be contradictory.

Does anybody have recommendations on systems th... (read more)

I read around 600 wpm without ever taking speed reading lessons, so with training it should be very possible.

Category theory gets a few hits on LW, but doesn't seem to be recognized very widely. At first glance it seems to be relevant for Bayes nets, cognitive architectures, and several other topics. A recent textbook that seems very promising:

Category theory for scientists by David I. Spivak:

Abstract: There are many books designed to introduce category theory to either a mathematical audience or a computer science audience. In this book, our audience is the broader scientific community. We attempt to show that category theory can ... (read more)

These texts can work as an introductory undergraduate sequence (with "Sets for Mathematics" going after enough exposure to rigor, e.g. a real analysis course, maybe some set theory and logic, and Awodey's book after a bit of abstract algebra, maybe functional programming with types, as in Haskell/Standard ML/etc.):

* F. W. Lawvere & S. H. Schanuel (1991). Conceptual Mathematics: A First Introduction to Categories. Buffalo Workshop Press, Buffalo, NY, USA.
* F. W. Lawvere & R. Rosebrugh (2003). Sets for Mathematics. Cambridge University Press.
* S. Awodey (2006). Category Theory. Oxford Logic Guides. Oxford University Press, USA.

I'll try to come, too. Given the weather, Gasteig?

German speakers: trying to improve my German, I'm looking for good blog recommendations. Ideally dealing with similar topics as seen here (rationality, AI, philosophy), but any thoughtful, well written essays would do. Some good people to follow? Thomas Metzinger is a good reference point for what I like. Thank you!

Yep, I agree. This is definitely an (optimistic) lower limit. Good that these studies are gaining attention, though a systemic change would be needed to get us out of this.

Empirical estimates suggest most published medical research is true

OK, so now we need a meta-analysis of these meta-analyses...

Gelman's comments: One of the authors replies in the comments:
I don't think it works in the sense of refuting the earlier results by Ioannidis etc. Remember that much of that previous work is based on looking at replication rates and changes as sample sizes increase - so actually empirical in a meaningful way.

This simply aggregates all p-values, takes them at face value, and tries to infer what the false positive rate 'should' be. It doesn't seem to account in any way for the many systematic errors involved or biases or problems in the process, and only covers false positives, not false negatives (so it ignores issues of statistical power, which is a serious problem in psychology, anyway, although I think medical trials are better powered). I'd take their estimate of a 17% false positive rate as a lower bound.

I also question some other aspects; for example, they dismiss the idea that the false positive rate is increasing because it hits p=0.18 - but if you look at pg11, every journal sees a net increase in false positive rates from the beginning of their sample to the end, although there's enough variation that the beginning/end difference doesn't hit 0.05. So there is a clear trend here, and I have to wonder: if they looked at more than 5 journals over a decade, would the extra data make it hit significance? (A 0.5% increase each year is very troubling, since that implies very bad things for the long-term.)

I liked their data collection strategy, though; scraping - not just for hackers!
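To see what "aggregating p-values at face value" means, here is a toy simulation (not the paper's actual method; the model and function names are illustrative assumptions): null results give uniform p-values, so the density of large p-values lets you back out the null fraction, provided none of the systematic biases discussed above are present.

```python
import random

random.seed(0)

def simulate_pvalues(n, true_null_frac):
    """Toy model: nulls give Uniform(0,1) p-values; true effects give small p."""
    ps = []
    for _ in range(n):
        if random.random() < true_null_frac:
            ps.append(random.random())          # null hypothesis: uniform p
        else:
            ps.append(random.random() * 0.05)   # strong true effect: p < 0.05
    return ps

def estimate_null_fraction(ps, cutoff=0.5):
    """Among p > cutoff (almost) all are nulls; nulls are uniform, so
    scale the count up by 1 / (1 - cutoff) to estimate the null share."""
    return sum(p > cutoff for p in ps) / (len(ps) * (1 - cutoff))

ps = simulate_pvalues(100_000, true_null_frac=0.3)
print(round(estimate_null_fraction(ps), 2))  # close to 0.30
```

The estimate is only as good as the uniformity assumption: p-hacking and selective reporting thin out exactly the large p-values the estimator relies on, which is one concrete reason to read the 17% figure as a lower bound.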

Thanks! And thank you for the link!

In the Interview with Adam Ford, Michael Vassar mentions a series of papers on efficient market biases in the presence of risk by Brad DeLong and somebody whose name I cannot make out (Samus? Samuls?). Does anybody know which papers he is referring to?

@16:00 into the video

The other economist is Larry Summers. I believe this is one of the papers Vassar is referring to.

Michael Vassar - Darwinian Method - Interview with Adam Ford is pretty damn excellent.

The rest of Adam Ford's uploads seem very interesting too!