amcknight


Comments

[LINK] Updating Drake's Equation with values from modern astronomy

But the lower bound of this is still well below one. We can't use our existence in the light cone to infer there's at least about one per light cone. There can be arbitrarily many empty light cones.

[LINK] Updating Drake's Equation with values from modern astronomy

They use the number of stars in the observable universe instead of the number of stars in the whole universe, which invalidates their calculation. I wrote a little more here.
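A toy expected-count comparison makes the point concrete (all numbers below are assumed for illustration, not taken from the paper): a per-star probability low enough to give fewer than one expected civilization among the observable universe's stars can still give many civilizations in a much larger whole universe.

```python
# Toy comparison: why the choice of star count matters.
# All three numbers are hypothetical, chosen only to illustrate the gap.
p_civ_per_star = 1e-24  # assumed per-star probability of a civilization arising
n_observable = 1e22     # rough order of magnitude for stars in the observable universe
n_whole = 1e30          # hypothetical star count for a (much larger) whole universe

expected_observable = n_observable * p_civ_per_star  # well below 1
expected_whole = n_whole * p_civ_per_star            # far above 1

print(f"expected in observable universe: {expected_observable:g}")
print(f"expected in whole universe:      {expected_whole:g}")
```

With these assumed numbers the observable universe is expected to hold far fewer than one civilization, while the whole universe holds around a million, so a bound computed from the observable star count alone says little about the universe as a whole.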

CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype

Charity Science, which fundraises for GiveWell's top charities, needs $35k to keep going this year. They've been appealing to non-EAs from the skeptics community and lots of other folks, and effectively serve as a friendly front end for GiveWell. More here. (Full disclosure: I'm on their Board of Directors.)

Questions of Reasoning under Logical Uncertainty

A more precise way to avoid the oxymoron is "logically impossible epistemic possibility". I think "epistemic possibility" is used in philosophy in approximately the way you're using the term.

January Monthly Bragging Thread

Links are dead. Is there anywhere I can find your story now?

2014 Less Wrong Census/Survey

Done! Ahhh, another year another survey. I feel like I did one just a few months ago. I wish I knew my previous answers about gods, aliens, cryonics, and simulators.

The Octopus, the Dolphin and Us: a Great Filter tale

I don't have an answer, but here's a guess: for any given pre-civilizational state, I imagine there are many filters. If we model each filter as having a kill rate, then my (unreliable) statistical intuition tells me a prior on the kill-rate distribution should be log-normal. I think this suggests that most of the killing happens at the largest outlier, but someone better at statistics should check my assumptions.
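The intuition above can be checked with a quick simulation (the number of filters and the log-normal parameters are assumed purely for illustration): draw per-filter kill rates from a heavy-tailed log-normal prior and see what fraction of the total hazard comes from the single largest filter.

```python
import numpy as np

rng = np.random.default_rng(0)

n_filters = 10      # hypothetical number of independent filters
n_trials = 100_000  # sampled kill-rate vectors from the prior

# Assumed log-normal prior over each filter's kill rate (hazard).
hazards = rng.lognormal(mean=0.0, sigma=3.0, size=(n_trials, n_filters))

# Fraction of total hazard contributed by the single largest filter in each trial.
share_of_max = hazards.max(axis=1) / hazards.sum(axis=1)

print(f"median share of hazard from the biggest filter: {np.median(share_of_max):.2f}")
```

Under these assumed parameters the largest filter typically accounts for most of the total hazard, which is the sense in which "most of the killing happens at the largest outlier" for a heavy-tailed prior; whether a log-normal is the right prior is a separate question.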

Donating to MIRI vs. FHI vs. CEA vs. CFAR

It sounds like CSER could use a loan. Would it be possible for me to donate to CSER and to get my money back if they get $500k+ in grants?

Why CFAR?

From the perspective of long-term, high-impact altruism, highly math-talented people are especially worth reaching, for a number of reasons. For one thing, if AI does turn out to pose significant risks over the coming century, there's a significant chance that at least one key figure in the eventual development of AI will have had exceptional math test scores in high school, judging from the history of past such achievements. An eventual scaled-up SPARC program, drawing math talent from all over the world, may be able to help that unknown future scientist build the competencies he or she will need to navigate that situation well.

More broadly, math talent may be relevant to other technological breakthroughs over the coming century; and tech shifts have historically impacted human well-being quite a lot relative to the political issues of any given day.

I'm extremely interested in this being spelled out in more detail. Can you point me to any evidence you have of this?
