No problem. Looks like that will be the soonest I'll be able to make it as well.
Looks like SSC meetups are still ongoing: https://www.lesswrong.com/events/ifsZbNmHwxhCm7F4n/slate-star-codex-meetup?commentId=ZccQoDDQY2skHDsAY#ZccQoDDQY2skHDsAY
This whole time? Man, I haven't been looking hard enough. What's the algorithm, 2nd Saturdays at 1900?
Ahh, I think I did not think through what "rationality enhancement" might mean; perhaps my own recent search and the AI context of Yudkowsky's original intent skewed me a little. I was thinking of something like "understanding and applying concepts of rationality" in a way that might include "anticipating misaligned AI" or "anticipating AI-human feedback responses".
I like the way you've framed what's probably the useful question. I'll need to think about that a bit more.
Cool, thanks for sharing.
I posted about my academic research interests here; do you know their research well enough to give input on whether my interests would be compatible? I would love to find a way to do my PhD in Europe, but especially Germany.
Cool, that sounds like a pretty useful combination.
I'd love to. The soonest I'd be available in August would be at the end of the month. I'm sure we can find somewhere public that would work. What will you be studying?
A few observations.
First, it seems likely that the increase in positivity can be explained by fewer precautionary tests: fewer people are getting tested "just to be sure", and fewer people are being required by work/travel/etc. to get tested. Fewer negative tests means a higher positivity rate even without more cases.
Second, it seems likely to me that the "93%, 93%, 91%" numbers are calculated independently of each other. I.e., the vaccinated group was 93% less likely to contract than the unvaccinated group, 93% less likely to be hospitalized, and 91% less likely to die. So with Alpha, all probabilities were reduced roughly uniformly. Now consider a variant (Delta) against which the vaccine is not as effective at preventing symptoms of any level, but is still about as effective at preventing hospitalizations and deaths. This would decrease the likelihood of the vaccine preventing a positive test or symptoms, while not changing the hospitalization/death numbers much. This makes sense in my head, but perhaps there's something I'm missing?
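To sanity-check that picture, here's a small sketch with entirely made-up baseline numbers (the only figures taken from the data above are the 93%/93% targets). It shows how the marginal efficacy numbers are each computed against the unvaccinated group, and how efficacy against infection can fall while efficacy against hospitalization stays high, provided the vaccine still blocks progression conditional on infection:

```python
# Illustrative sketch; all baseline risks below are hypothetical,
# chosen only to reproduce the reported marginal efficacy figures.

def efficacy(risk_vax, risk_unvax):
    """1 - relative risk, i.e. '93% less likely than unvaccinated'."""
    return 1 - risk_vax / risk_unvax

# Unvaccinated baseline (made-up): chance of testing positive, and
# chance of hospitalization *given* a positive test.
p_pos_unvax = 0.10
p_hosp_given_pos = 0.05

# Alpha era (hypothetical): the vaccine blocks 93% of infections and
# adds no extra conditional protection, so both marginals come out 93%.
ve_infect_alpha = efficacy(0.007, p_pos_unvax)
ve_hosp_alpha = efficacy(0.007 * p_hosp_given_pos,
                         p_pos_unvax * p_hosp_given_pos)

# Delta (hypothetical): infection protection drops to 64%, but strong
# conditional protection against progression (vaccinated infections
# progress ~5x less often) keeps the marginal hospitalization
# efficacy near 93%.
p_pos_vax_delta = 0.036
rel_progression = 0.194
ve_infect_delta = efficacy(p_pos_vax_delta, p_pos_unvax)
ve_hosp_delta = efficacy(p_pos_vax_delta * p_hosp_given_pos * rel_progression,
                         p_pos_unvax * p_hosp_given_pos)

print(round(ve_infect_delta, 2), round(ve_hosp_delta, 2))  # 0.64 0.93
```

So the two 93% figures being equal under Alpha, and then diverging under Delta, is exactly what you'd expect if protection against progression held up while protection against infection weakened.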
Finally, a typo that tripped me up a bit:
We should also look at case counts in Israel. On June 18 they had 1.92 cases per million, right before things started rising, on June 14 it was 65.09, for R0 = 1.97. From previous data, we can presume that when Delta was a very small portion of Israeli cases, the control system adjusted things to something like R0 = 1, so we’ll keep that number in mind.
The second "June" should be "July", as in "July 14". (Small nitpick, I know, but it took me a minute to work out, so I figured I'd share.)
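For anyone else checking the arithmetic: with the corrected July 14 date, the growth from 1.92 to 65.09 cases per million spans 26 days, and assuming a serial interval of about 5 days (my assumption; the post doesn't state the one it used, but this value reproduces its figure) you recover R0 ≈ 1.97:

```python
import math

cases_start = 1.92    # per million, June 18
cases_end = 65.09     # per million, July 14 (the corrected date)
days = 26             # June 18 -> July 14
serial_interval = 5   # assumed, in days

growth = cases_end / cases_start
r0 = math.exp(math.log(growth) * serial_interval / days)
print(round(r0, 2))   # 1.97
```

With "June 14" as written, the day count comes out negative, which is why the typo trips up anyone trying to reproduce the number.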
I've started formalizing my research proposal, so I now have: "I intend to use computational game theory, system modeling, cognitive science, causal inference, and operations research methods to explore the ways in which AI systems can produce unintended consequences and to develop better methods to anticipate outer alignment failures."
Can anyone point me to existing university research along these lines? I've made some progress after finding this thread, and I'm now planning to contact FHI about their Research Scholars Programme, but I'm still finding it time-consuming to match specific ongoing research with a given university or professor. If anyone can point me to other university programs (or professors to contact) that would fit well with my interests, that would be super helpful.
Wait, isn't that an example of efficiency of scale being dependent on investment? You have to get a 1-foot rope and scissors, but once you have them, you can create two 1/2-foot ropes. I think the "given a 1-foot rope" is doing more work than you realize, because when I try to apply your example to the world above, I keep getting hung up on "but in the imaginary world above, when we account for economy of scale, if you just needed one 1/2-foot rope, you would just create a 1/2-foot rope, and that would take you half the time of creating a 1-foot rope." And for the David, I feel like "sure, but that doesn't explain why someone wouldn't just carve their own David if they wanted one." I think I'm bypassing some of the issue here, but I'm not entirely sure what it is.
It does, however, bring up another interesting reason for trade (and this may be part of how investment can be independent of efficiency of scale): shared resources. If a pair of scissors does not wear out according to how often I use them, and I only use them once per day, I can increase efficiency and decrease required investment by letting others use them when I'm not. This applies to the David as well: utility gained from the David is not zero-sum; multiple people can gain utility from it without decreasing the utility the others gain, so it does not make sense for everyone to carve their own. Any time a resource or product produces non-zero-sum benefits by existing, we have a reason for its use to be traded, or for trade to be involved in sharing it.
Applying this, if 5 people each carve a statue and put them in a sculpture garden in exchange for access to the garden, they can each enjoy five statues (alternatively, they could collaborate to build the statue in 1/5th the time and share in the enjoyment of it).
Not sure this is what you were getting at, but I think I've talked myself into thinking that when investment is independent of efficiency of scale, it's because of the non-zero-sum nature of some shared resources.