[Short, Meta] Should open threads be more frequent?

The blue line stops suddenly because that's when the last comment was posted. I was careless with this graph: I did add labels and a legend, but apparently I was too out of it to realise they didn't show up in the PNG.

As gwillen said, the x-axis is in minutes.

[Short, Meta] Should open threads be more frequent?

Sorry about the missing units; I added code to set them, but apparently it was the wrong code and I wasn't paying enough attention.

The green line is total comments, the blue line top-level comments. The x-axis is minutes; the y-axis is the number of comments.

[Short, Meta] Should open threads be more frequent?

So I did what you suggested and plotted the number of top-level posts and total posts over time. The attached graph is averaged over the last 20 open threads. Code available here:

I don't trust myself to do any analysis, so I delegate that task to you lot.

EDIT: Changed GitHub repo to a gist
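Not the original code, but a minimal sketch of the kind of script that would produce such a graph, assuming each thread's comments come as (minutes-since-thread-opened, is-top-level) pairs (a hypothetical data layout, not the gist's):

```python
# Hypothetical input: one list per open thread; each entry is
# (minutes since the thread opened, whether the comment is top-level).
threads = [
    [(1, True), (3, False), (7, True), (12, False)],
    [(2, True), (5, True), (9, False), (15, False)],
]

MAX_MINUTE = 20

def averaged_cumulative(threads, top_only=False):
    """Average, across threads, the cumulative comment count at each minute."""
    series = []
    for minute in range(MAX_MINUTE + 1):
        counts = [
            sum(1 for t, is_top in thread
                if t <= minute and (is_top or not top_only))
            for thread in threads
        ]
        series.append(sum(counts) / len(threads))
    return series

total = averaged_cumulative(threads)            # the green line
top_level = averaged_cumulative(threads, True)  # the blue line

print(total[-1], top_level[-1])
```

With matplotlib, plotting `total` and `top_level` against `range(MAX_MINUTE + 1)` and calling `plt.legend()` after labelled `plt.plot(...)` calls would reproduce the graph, legend included.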

Open thread, September 16-22, 2013

That's not quite the law of the excluded middle. In your first example, leaving isn't the negation of buying the car; it's just another possibility. Tertium non datur would be "He will either buy the car or he will not buy the car." It applies outside formal systems too, but the possibilities outside a formal system are rarely negations of one another. If I'm wrong, can someone tell me?
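Written out formally (a minimal restatement, not from the original comment), the law covers only a proposition and its literal negation:

```latex
% Tertium non datur: a tautology for any proposition P
P \lor \lnot P
% "He will buy the car or he will leave" instead has the form
P \lor Q
% which is not an instance of excluded middle, since Q is not \lnot P
```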

Still, planting the "seed of destruction" definitely seems like a good idea, although I'd advise caution about specifying only one event where that would happen. The idea basically amounts to ensuring beliefs are falsifiable.

Open thread, September 16-22, 2013

Does the average LW user actually maintain a list of probabilities for their beliefs? Or is Bayesian probabilistic reasoning just a gold standard that no one here actually practises? If the former, what kinds of things are on your list?

Yet more "stupid" questions

Thanks. Just going to clarify my thoughts below.

Because doing so will lead to worse outcomes on average.

In specific instances, avoiding the negative outcome might be beneficial, but only for that instance. If you're constantly settling for less-than-optimal outcomes because they're less risky, it'll average out to less-than-optimal utility.

The term "non-linear valuation" seemed to me to imply an exponential or logarithmic valuation; I think "subjective valuation" or "subjective utility" might be better here.
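The averaging argument can be sketched with a quick simulation (illustrative numbers, not from the thread): a risky gamble with higher expected value beats a safe payout over many repetitions, even though any single play can come out worse.

```python
import random

random.seed(0)

SAFE = 40            # guaranteed payout per play
RISKY_WIN = 100      # risky gamble: 100 with probability 0.5, else 0
TRIALS = 100_000

# Play the risky gamble many times and total the winnings.
risky_total = sum(RISKY_WIN if random.random() < 0.5 else 0
                  for _ in range(TRIALS))
safe_total = SAFE * TRIALS

# Expected values: risky = 50, safe = 40, so over many plays the
# risky option accumulates more on average despite occasional zeros.
print(risky_total / TRIALS, safe_total / TRIALS)
```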

Yet more "stupid" questions

Is there any reason we don't include a risk aversion factor in expected utility calculations?

If there is an established way of considering risk aversion, where can I find posts/papers/articles/books regarding this?
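One standard answer is that risk aversion usually isn't a separate factor: it's modelled as curvature in the utility function itself. With a concave utility of wealth, a sure amount is worth more than a gamble with the same expected money. A minimal sketch with logarithmic utility (illustrative numbers):

```python
import math

def log_utility(wealth):
    """Concave utility of wealth; the concavity encodes risk aversion."""
    return math.log(wealth)

wealth = 1000

# Gamble: win or lose 500 with equal probability (expected money change: 0).
eu_gamble = 0.5 * log_utility(wealth + 500) + 0.5 * log_utility(wealth - 500)
eu_sure = log_utility(wealth)  # utility of declining the gamble

# A concave agent prefers the sure thing, with no separate
# "risk aversion factor" appearing in the expected-utility formula.
print(eu_sure > eu_gamble)
```

The standard literature keyword is the Arrow–Pratt measure of risk aversion, within von Neumann–Morgenstern expected utility theory.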

NEW TIME: Sydney Less Wrong meetup, 23/4, 3PM

Just found this in a search for "Brisbane". I'd show up, and maybe bring a friend who is a non-LW rationalist.

More "Stupid" Questions

It's likely that Eliezer isn't taking a side in the nature vs. nurture debate, and as such isn't claiming that either nature or nurture does the work of generating preferences.

Beautiful Math

Neither finite differences nor calculus is new to me, but I hadn't noticed the connection between the two until now, and it really is obvious.

This is why I love mathematics - there's always a trick hidden up the sleeve!
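The connection shows up in a few lines: repeated forward differences of n³ bottom out at the constant 3! = 6, just as the third derivative of x³ is 6 (a small illustrative sketch):

```python
def forward_differences(seq):
    """One application of the forward difference operator: Δa_n = a_{n+1} - a_n."""
    return [b - a for a, b in zip(seq, seq[1:])]

cubes = [n ** 3 for n in range(8)]   # 0, 1, 8, 27, 64, ...

d1 = forward_differences(cubes)
d2 = forward_differences(d1)
d3 = forward_differences(d2)

# The third difference of n^3 is constant and equals 3! = 6,
# mirroring the third derivative d^3/dx^3 x^3 = 6.
print(d3)
```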
