Wiki Contributions


I think the reason that organizations haven't gone 'FOOM' is the lack of a successful "goal-focused self-improvement method." There is no known way of building an organization that does not suffer from goal drift and progressive degradation of performance. Humans have not even figured out how to build "goals" into an organization's structure except in the crudest manner, which is nowhere near flexible enough to survive the assaults of modern environmental change. Nor do I think the sparse inter-linkages of real organizations can store or process such information without outsourcing a significant part to human-scale processing, so an organization could not even have stumbled upon such a method by chance.

In theory there is no reason why a computational device built out of humans can't go FOOM. In practice, building a system that runs on humans is simply harder: humans are extremely noisy, slow to change ('education' is slow), and subject to countless experimental constraints with no robust engineering solutions. Management isn't even a full science at this point. The selection power of existing theory still leaves open a vast space of unfocused exploration, and only a tiny and unknown subset of that space can go FOOM. Imagine the space of all valid training manuals, organizational structures, physical aid assets, recruitment policies, and so on, and how little we know about finding the FOOMing combination.

AGI running on electronic computers is a bigger threat than other recursive intelligence-improvement problems because the engineering barriers are lower and the rate of progress is higher. Most other recursive self-improvement strategies take place at "human" time scales and do not leave humans completely helpless.

I don't live in Vancouver at the moment, but I am quite curious about the background breakdown of the people who go to LW meetups there. Are they all UBC grad students or something? Any significant number of Chinese?

I'd reorganize the planet into a planetary transportation government and regional city-states. The planetary transportation government runs an intercontinental rail system that connects every city-state and enforces, with overwhelming military might (provided by feudal grants from the city-states), only one right: that of emigration.

Sounds like the logical extension of libertarian ideas that accepts the concept of a social contract. I think some sort of externality management needs to exist as well.

Throwing out a theory as powerful and successful as relativity would require very powerful evidence, and at this point the evidence doesn't fall that way at all.

On the other hand, the lower bound for GAI becoming a very serious problem is very low. Simply dropping the price of peak human intelligence down to the material and energy costs of a human (this breaks no physical laws, unless one holds that the mind is immaterial) would result in massive social displacement requiring serious planning beforehand. I don't think it is very likely that we'd see an AI that can laugh at EXPSPACE problems, but all it needs to be is too smart to be easily controlled in order to mess everything up.

I remember a claim that the measurements of a number of physical constants are subject to anchoring: a previous result leads researchers to calibrate their level of scrutiny when looking for errors, so the correct value is only slowly converged upon. Perhaps this is something similar, where a high-profile result makes researchers look for that kind of result.

I suffer from the same problem, and this topic has been talked about quite a bit before, though I don't think there is an accepted solution to the problem posed yet.

As I understand it, this confusion is not at all due to the activity being chosen, but due to skeptical methods exposing the illusory coherence of the mind. Thinking harder points to the traditional sense of self being more a flawed map than some physical-level construct; however, it does not suggest a comfortable, stable alternative around which to organize mental processes.

In the short term, one thing that has been on my mind is how to merge the very counterintuitive empirical, outside view of the self with the inside view without running into ineffective introspective loops.

Haven't read the book so will have to go on reviews....

It appears to me this can be viewed as a "utility function" memetic virus trying to spread by modifying its host without regard to the host's ultimate survival. In any case, the winning strategy is to build a better replicator and rebellion doesn't sound like the right word for it.

This sounds like a whole paragraph on how "talk is cheap" and thus has little value compared to costly signaling that actually demonstrates something.

If one thinks about it that way, a generalized community symbol doesn't really do anything; what is needed instead is something that ties directly to the user and his abilities and contributions. What would work is a piece of code that provides information on the user's LessWrong account, along with other tracking tools and tests that demonstrate rationality. This may result in competition to "karma up" on the site and perhaps some perverse behaviour, but it should be controllable with good moderation.

Some part of me feels like building a customized barcode format that allows for a stylish symbol for the general community while also providing customized information for each user, but that is likely overkill at the moment.

Hard stuff like this happens to me when getting a gym membership (expensive!) and in a number of other cases where the salesperson brings up a set of reasonable claims (of course highly biased and selected) in a friendly manner to get me into an agreeing frame before pressuring for a sale.

I find it helps to define my requirements before talking to any salesperson, or failing that, to build up a reflexive response to salespeople and not attempt to update on likely incorrect, biased, and difficult-to-process information during time-sensitive communications. It also makes sense to pay more attention during non-routine purchases, since their sales tactics have not been inoculated against and may take more thought.