I'm not sure I'm clear on the AI/AGI distinction. Wouldn't an AI need to be able to apply its intelligence to novel situations to be "intelligent" at all, therefore making its intelligence "general" by definition? Watson winning Jeopardy! was a testament to software engineering, but Watson was programmed specifically to play Jeopardy!. If, without modification, it could go on to dominate Settlers of Catan, then we might want to start worrying.

I guess it's natural that IQ tests would be chosen. They are objective and feature a logic a computer can, at least theoretically, recreate or approximate convincingly. Plus a lot of people conflate IQ with intelligence, which helps on the marketing side. (Aside: if there is one place the mind excels, it's getting more out than it started with--like miraculously remembering something otherwise forgotten (in some cases seemingly never learned) at just the right moment. Word-vector embeddings and other fancy relational strategies seem to need way more going in--data-wise--than they chuck back out, making them crude and brute-force by comparison.)

I just don't think there are many features of human social organization that can be usefully described by a one-dimensional array, the alleged left-right political divide perhaps being the canonical example. Take two books I have on my Kindle: Sirens of Titan and Influx. While one can truly say the latter is a vastly more terrible book than the former, it would be absurd to say they--and every other book I've read--should be placed in a stack that uniquely ranks them against one another. And it's not a matter of comparing apples and oranges--because you can compare apples and oranges--it's that the comparison is not scalar, perhaps not even mathematically representable at all.
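
To make that concrete, here's a minimal sketch (scores entirely invented) of why multi-criterion comparisons resist being flattened into a single stack: under a dominance rule, some pairs are simply incomparable.

```python
# Toy dominance comparison over made-up, two-axis "quality" scores.
def dominates(a, b):
    # a beats b only if it is at least as good on every axis
    # and strictly better on at least one.
    return all(x >= y for x, y in zip(a, b)) and \
           any(x > y for x, y in zip(a, b))

sirens_of_titan = (9, 4)  # (prose, plot) -- invented numbers
influx          = (2, 1)
some_third_book = (5, 8)

print(dominates(sirens_of_titan, influx))  # True: one book really can be worse
print(dominates(sirens_of_titan, some_third_book),
      dominates(some_third_book, sirens_of_titan))  # False False: incomparable
```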

In terms of status, no one knows what the word means. If we base it on influence, then some people who had the most lasting impacts were despised in their day. Additionally, people who wield power over others are generally resented if not loathed by subalterns. As with economics, in social science you can pretty much get the result you want by choosing the slice that yields the results closest to the answer you are looking for.

"He predicts that unconscious signals of a stable environment will increase self-control, which helps explains why high social-economic status correlates strongly with self-control."

What evidence is there that this is true? For what anecdotage is worth (which is probably the only evidence there is on the matter), some of the most out-of-control people I've met have been rich kids. Showing up to a 10-hour shift at a low-wage retail job every day with a smile on your face even though you have medical bills you can't pay--that's real self-control. Meanwhile it's the rich customer who's the first to go ballistic because their latte came out cold.

Obviously people are going to behave worse in a less stable environment. But I'd wager those who have had to deal with real hardship function better in a crisis than the socioeconomically well-off.

This is definitely incidental--

Wouldn't a superintelligent, resource-gathering agent simply figure out the futility of its prime directive and abort with some kind of error? Surely it would realize it exists in a universe of limited resources and that it had been given an absurd objective. I mean, maybe it's controlled by some sort of "while resources exist consume resources" loop that is beyond its free will to break out of--but if so, should it be considered an "agent"?
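
A toy sketch of that distinction, with everything about it invented for illustration: a hardwired loop just runs, while anything deserving the name "agent" might first check whether its objective is even satisfiable.

```python
# Purely illustrative: the "while resources exist consume resources"
# loop versus an agent that can reflect on its own directive.
def hardwired_consumer(resources):
    while resources > 0:   # no capacity to question the directive
        resources -= 1
    return resources       # halts only by exhausting the universe

def reflective_agent(resources, universe_is_finite=True):
    # A genuine agent might notice that an unbounded goal in a
    # bounded universe is absurd, and abort with an error instead.
    if universe_is_finite:
        raise RuntimeError("objective unsatisfiable: resources are finite")
    while resources > 0:
        resources -= 1
```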

Contra humans, who for the moment are electing to consume themselves to extinction, resource-consumer AIs would, if anything, be comparatively benign.

Isn't a "boolean" right/wrong answer exactly what utilitarianism promises in the marketing literature? Or, more precisely doesn't it promise to select for us the right choice among collection of alternatives? If the best outcomes can be ranked--by global goodness, or whatever standard--then logically there is a winner or set of winners which one may, without guilt, indifferently choose from.

I personally think there's not a lot of hope for animals as long as humans can't sort out their own mess. On the other hand, I don't think there is much hope for humanity as long as altruism stands in for actually taking responsibility. The very social system that puts $5 in our pockets to donate creates those who depend on our charity.

Probably the time wasted on the cost/benefit analysis was more costly--all told--than either branch of the flow chart. Having said that, I suspect the real objective of these exercises is quite different than the ostensible one.

It also takes no shortage of conceit to imagine one knows better than the majority of people. Lots of individuals flit between business and politics--GHW Bush is a major owner of a gold mine where I'm from.* But an honest person isn't going to go into politics, because they understand the fundamental lie doing so requires.

*'Fact, I'd wager the two are strongly correlated--though I'm not privy to the correlation data you are.

Probably has something to do with the American work morality--we apply it with a zealousness any religion can only weep in envy of. We believe/have been brainwashed into believing work is what we were born to do. As to how much we should do: I'm not sure this is a question for psychological studies so much as a question of how much (and of what kind of) work we actually want to do. It's like asking how many hours one should spend cleaning their house; one balances a cleanliness level one can live with against time one would rather spend doing something else.

Might the apparently weird alliance not be a failure to accurately separate the substantive from the superficial? It could be the New Ager and the biohacker are driven by the same psychological imperative; each just dresses it a little differently. By even classifying their alliance as "weird", we are jumping the gun on what we are entitled to take for granted. I.e., we lack even the understanding to say what is weird and what isn't.
