holomanga

My other car is Tetraspace Grouping.

holomanga on Metaculus

Posts


Comments

Understanding Eliezer's "Any Fact Would Move Me in the Same Direction"

Any instance of the planning fallacy is an example of this: any setback, regardless of its specifics, would make one expect a project to take longer. Yet by default people predict timelines as if the project will go smoothly, even though, if asked, they'd say they expect some setback of one form or another.

TurnTrout's shortform feed

It was probably just regression to the mean, because lots of things are, but I started feeling RSI-like symptoms a few months ago, read this, did this, and now they're gone. In the possibilities where this did help, thank you! (And either way, this did make me feel less anxious about it 😀)

Book Launch: The Engines of Cognition

Would compiling the LessWrong posts as they appear on the website into an .epub and uploading it provide a significant amount of the value of an ebook, or does enough of the value come from things one can't do at home, like the pretty new diagrams and being on the Kindle store?

Book Launch: The Engines of Cognition

There is already a listing on Amazon UK, automatically generated by Amazon reselling from Amazon US, which as of the time of writing claims to deliver by January 5th. For me, ordering there was slightly cheaper than ordering from the US site with international delivery :).

Robin Hanson's Grabby Aliens model explained - part 1

Ah, yeah, good point, especially since the whole point of the grabby aliens model is that the durations of hard steps are influenced very strongly by survivorship bias.

Scratch my quantitative claims, though I'm still confused. The time to abiogenesis is an actual past hard step that has something to do with evolution (and looking at it gives you one data point for estimating the number of hard steps in general, since with N hard steps, each hard step takes on average 1/N of the time from planetary formation to civilisation), while the time for a planet to become uninhabitable due to its star's lifecycle is unrelated and just happens to be about the right order of magnitude for our civilisation and star in particular.

EDIT: Daniel Eth explains why: https://www.lesswrong.com/posts/JdjxcmwM84vqpGHhn/great-filter-hard-step-math-explained-intuitively
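The 1/N heuristic in the comment above can be sanity-checked with a quick Monte Carlo sketch (function name, rates, and window are my own illustrative choices, not anything from the grabby aliens paper): draw exponential "hard step" durations, keep only runs where all N steps finish within the window, and average each step's duration over the successes. By symmetry, each step should claim roughly an equal share of the elapsed time.

```python
import random

def mean_step_durations(n_steps=4, window=1.0, rate=1.0,
                        trials=200_000, seed=0):
    """Monte Carlo check of the 1/N heuristic: condition exponential
    'hard step' durations on all N steps completing within a window,
    and average each step's duration over the successful runs."""
    rng = random.Random(seed)
    totals = [0.0] * n_steps
    successes = 0
    for _ in range(trials):
        durations = [rng.expovariate(rate) for _ in range(n_steps)]
        if sum(durations) <= window:
            successes += 1
            for i, d in enumerate(durations):
                totals[i] += d
    return [t / successes for t in totals]

# Each of the N steps should get roughly an equal 1/N share of the
# total elapsed time, regardless of where it sits in the sequence.
means = mean_step_durations()
```

Note that rate=1.0 isn't a truly "hard" step (a hard step has rate much smaller than 1/window); the equal-shares result follows from exchangeability either way, but rejection sampling gets impractically slow as the steps get harder.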

How can one identify the absolute safest car in practice rather than theory?

Insurance premiums might be a good way to get this information, since the amount insurance companies expect to pay is (size of damages) × (probability of damage occurring), so they have a financial incentive to estimate the danger of driving each car correctly, taking into account estimates of the skill of the driver. Though I'm unsure whether insurance quotes are accessible enough to compare a huge list of potential cars.
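The expected-cost reasoning above can be sketched in a few lines (the function, the loading factor, and the example numbers are all hypothetical illustrations, not real actuarial practice): a roughly fair premium is the expected payout plus some markup, so comparing quotes for the same driver across cars is a rough proxy for the insurer's estimate of each car's expected damage.

```python
def fair_premium(damage_cost, p_damage_per_year, loading=0.2):
    """Roughly fair annual premium: expected payout
    (size of damages x probability of damage occurring),
    plus a loading factor for the insurer's overhead and profit."""
    return damage_cost * p_damage_per_year * (1 + loading)

# Two hypothetical cars with the same driver: the lower quoted premium
# suggests the insurer estimates a lower expected damage cost.
premium_a = fair_premium(damage_cost=30_000, p_damage_per_year=0.02)  # ~720.0
premium_b = fair_premium(damage_cost=30_000, p_damage_per_year=0.01)  # ~360.0
```

Real quotes fold in many other factors (driver history, location, theft rates), so this is only the skeleton of the incentive argument.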

Robin Hanson's Grabby Aliens model explained - part 1

First: the period between now and when Earth will first become uninhabitable, which is 1.1 billion years.

I'm not too clear on why this would be related to hard steps; it's just a fact about changes in the brightness of the Sun that on the face of it seems unrelated to evolution. If I were asked about a planet orbiting a red dwarf star at a distance that gives an Earth-like temperature, I'd expect the time between the formation of the planet and life emerging to still be about the same 0.4 billion years, but the time until the planet becomes uninhabitable would be many times longer.

Petrov Day 2021: Mutually Assured Destruction?

In the message sent to holders of launch codes that's repeated in this post, it says:

LessWrong and the EA Forum both have second-strike capability that will last one hour.
