User Profile


Recent Posts


No posts to display.

Recent Comments

I think it would be interesting to weigh the benefits of human desire modification in all its forms (ranging from strategies like delayed gratification to brain pleasure centre stimulation, covered very well in [this fun theory sequence article](http://lesswrong.com/lw/xk/continuous_improvement/)...(read more)

There is no guarantee that there exists some way for them to understand.

Consider the possibility that it's only possible for people with a nontrivial level of understanding to work with 5TB+ amounts of data. It could be a practical boost in capability due to understanding storage technology principl...(read more)

So someone has mentioned it on LW after all. Lots of singularitarian ideas depend heavily on exponential growth.

Thanks :) Can you elaborate a bit? Are you saying that I overreached, and that in general there should be some transformed domain where the model turns out to be simple, but that such a domain is not guaranteed to exist for every model?

Sorry, hadn't seen this (note to self: mail alerts).

Is this really true, even if we pick a similarly restricted set of models? I mean, consider a set of equations which can only contain products of powers of variables, like (x_1)^a (x_2)^b = const1, (x_1)^d (x_2)^e = const2.

Is this nonlinea...(read more)
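(A minimal sketch of the log transform this comment appears to be pointing at, assuming x_1, x_2 > 0; the symbols c_1 and c_2 stand in for const1 and const2 above.)

```latex
% Minimal sketch: taking logarithms of the product-of-powers equations
% turns the system into a linear one in u_i = log x_i.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Starting from $x_1^a x_2^b = c_1$ and $x_1^d x_2^e = c_2$, substitute $u_i = \log x_i$:
\[
\begin{aligned}
a u_1 + b u_2 &= \log c_1 \\
d u_1 + e u_2 &= \log c_2
\end{aligned}
\]
% An ordinary linear system, uniquely solvable whenever ae - bd is nonzero,
% i.e. the model is simple in the transformed (log) domain.
\end{document}
```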

I argue that AGW is the _worst_ because it is the only one that hits at very deep-seated human assumptions that may well be genetic/inherent.

The first obstacle to AGW, even before coordination, is anchoring: we assume that _everything_ must only get better, and _nothing ever_ gets worse. Furt...(read more)

I assume you're talking about around 4 degrees of warming under business-as-usual conditions?

To pick the most important effect, it's going to impact agriculture severely. Even if irrigation can be managed, untimely heavy rains will still damage crops. And they can't be prevented from affecting crops, unl...(read more)

Of course, "leading to global warming" is a subset of "harmful for the environment". Agreed on all counts.

Computing can't harm the environment in any way - it's within a totally artificial human space.

The others ("good") can harm the environment in general, but are much better for AGW.

Longtime lurker, and I've managed to fight akrasia and a genuine shortage of time to put my thoughts down into a post. I think it does deserve a post, but I don't have the karma or the confidence to create a top-level post.

Comments and feedback _really_ welcome and desired: I've gotten tired of b...(read more)