All of shin_getter's Comments + Replies

I think the reason that organizations haven't gone 'FOOM' is the lack of a successful "goal-focused self-improvement method." There is no known way of building an organization that does not suffer from goal drift and progressive degradation of performance. Humans have not even managed to understand how to build "goals" into an organization's structure except in the crudest manner, which is nowhere near flexible enough to survive the assaults of modern environmental change, and I don't think the information in sparse inter-linkages of real... (read more)

I don't live in Vancouver at the moment, but I am quite curious about the background breakdown of the people who go to LW meetups there. Are they all UBC grad students or something? Any significant number of Chinese?

We had one Chinese guy show up once. LW (at least in Vancouver) seems to be largely a white-male-middle-class thing, though it would be nice to get a broader perspective than that. Join the mailing list and stalk us if you're interested.
I think one or two regulars are UBC grads. There are no regulars who are Chinese.

I'd reorganize the planet into a planetary transportation government and regional city-states: the planetary transportation government runs an intercontinental rail system that connects every city-state and enforces, with overwhelming military might (provided by feudal grants from the city-states), only one right, that of emigration.

Sounds like the logical extension of libertarian ideas that accepts the concept of a social contract. I think some sort of externality management needs to exist as well.

Throwing out a theory as powerful and successful as relativity would require very powerful evidence, and at this point the evidence doesn't fall that way at all.

On the other hand, the lower bound for GAI becoming a very serious problem is very low. Simply dropping the price of peak human intelligence down to the material and energy costs of a human (which breaks no laws unless one holds that the mind is immaterial) would result in massive social displacement that would require serious planning beforehand. I don't think it is very likely that we'd see an AI that can laugh at EXPSPACE problems, but all it needs is to be too smart to be easily controlled for everything to get messed up.

I remember a claim that the measurements of a number of physical constants are subject to anchoring, where a previous result leads researchers to adjust their level of scrutiny for errors, so that the correct value is only slowly converged upon. Perhaps this is something similar, where a high-profile result makes researchers look for that kind of result.
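The anchoring effect described above can be caricatured with a toy simulation (my own model, not from any paper): suppose each published value is a weighted average of the raw, unbiased measurement and the previously published value, standing in for the extra error-hunting that happens whenever a result disagrees with the anchor. The averaged raw data homes in on the truth quickly; the anchored series crawls.

```python
import random

def anchored_estimates(true_value, anchor_weight, n, noise=1.0, start=10.0, seed=1):
    """Simulate n measurements of a constant whose published values are
    pulled toward the previous published value by anchor_weight."""
    rng = random.Random(seed)
    published = start  # the initial (wrong) high-profile result
    raws, series = [], []
    for _ in range(n):
        raw = true_value + rng.gauss(0, noise)  # unbiased noisy measurement
        # Anchoring: the new published value only partially trusts raw data.
        published = anchor_weight * published + (1 - anchor_weight) * raw
        raws.append(raw)
        series.append(published)
    return raws, series

raws, anchored = anchored_estimates(true_value=0.0, anchor_weight=0.9, n=10)
independent = sum(raws) / len(raws)  # plain average of the same raw data

# The anchored estimate is still dragged toward the initial wrong value,
# while the straight average of the identical raw data sits near the truth.
print(abs(anchored[-1]), abs(independent))
```

With an anchor weight of 0.9, roughly 0.9^10 ≈ 35% of the original wrong value still survives after ten measurements, which is the "slow convergence" the claim describes.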

I suffer from the same problem, and this topic has been discussed quite a bit before, though I don't think there is an accepted solution to the problem posed yet.

As I understand it, this confusion is not at all due to the activity being chosen, but due to skeptical methods exposing the illusion of coherence of the mind. Thinking harder would point to the traditional sense of self being more a flawed map than some physical-level construct; however, it does not suggest a comfortable, stable alternative in which to organize mental processes.

In the short term, ... (read more)

Haven't read the book, so I will have to go on reviews...

It appears to me this can be viewed as a "utility function" memetic virus trying to spread by modifying its host without regard to the host's ultimate survival. In any case, the winning strategy is to build a better replicator, and "rebellion" doesn't sound like the right word for it.

You usually give your manifesto away if your main desire is to propagate its message. "Rebellion" does seem like a reasonable word for what Stanovich is talking about. Dawkins used the same word in the closing words of "The Selfish Gene": "rebel against the tyranny of the selfish replicators."

This sounds like a whole paragraph on how "talk is cheap" and thus has little value compared to costly signaling that actually demonstrates something.

If one thinks about it in that way, a generalized community symbol doesn't really do anything; what is needed instead is something that ties directly to the user and his abilities and contributions. What would work is a piece of code that provides information on the account used on LessWrong, along with other tracking tools and tests that demonstrate rationality. This may result in competition to "karma... (read more)

Hard stuff like this happens to me when getting a gym membership (expensive!) and in a number of other cases where the salesperson brings up a set of reasonable claims (though of course highly biased and selected) in a friendly manner to get me into an agreeing frame before pressing for a sale.

I find it helps to define the requirements before talking to any salesperson, if not to build up a reflexive response to salespeople and avoid attempting to update on likely highly incorrect, biased, and difficult-to-process information in time-sensitive communications. It also makes sense to pay more attention in non-routine purchases, since the sales tactics have not been inoculated against and may take more thought.