Wiki Contributions



"There are 729,500 single women my age in New York City. My picture and profile successfully filtered out 729,499 of them and left me with the one I was looking for."

I know this is sort of meant as a joke, but I feel like one of the more interesting questions that could be addressed in an analysis like this is what percentage of the women in the dating pool could you actually have had a successful relationship with. How strong is your filter and how strong does it need to be? There's a tension between trying to find/obtain the best of many possible good options, and trying to find the one of a handful of good options in a haystack of bad ones.

I'm somewhat amazed that you looked at 300 profiles, read 60 of them, and liked 20 of them enough to send them messages. Only 1 in 5 potential matches met your standards for appearance, but 1 in 3 met your standards based on what they wrote, and that's not even taking into account the difference in difficulty between reading a profile and composing a message.
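The funnel arithmetic can be checked directly; the stage counts below are the ones quoted above (300 viewed, 60 read, 20 messaged):

```python
# Filtering funnel from the quoted numbers.
viewed, read, messaged = 300, 60, 20

pass_appearance = read / viewed   # fraction whose photos passed the first filter
pass_writing = messaged / read    # fraction whose profile text passed the second

print(pass_appearance)  # 0.2 -> 1 in 5
print(pass_writing)     # ~0.333 -> 1 in 3
```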

You make a big deal about the number of people available online, but your previous article on soccer players implied that small differences in the average of a distribution have a much larger effect on the tails than on the middle. If you're really looking for mates in the tails of the distribution, and 1 in 729,500 is roughly a 4.7-sigma event, then being involved in organizations whose members are much more like your ideal mate on average may be a better strategy than online dating.
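The sigma figure can be checked with the standard library, under the simplifying assumption that the trait in question is standard-normally distributed:

```python
from statistics import NormalDist

# Treat "1 in 729,500" as an upper-tail probability of a standard
# normal distribution and find the corresponding z-score.
p = 1 / 729_500
z = NormalDist().inv_cdf(1 - p)
print(round(z, 1))  # roughly 4.7
```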

  • There is regular structure in human values that can be learned without requiring detailed knowledge of physics, anatomy, or AI programming. [pollid:1091]
  • Human values are so fragile that it would require a superintelligence to capture them with anything close to adequate fidelity. [pollid:1092]
  • Humans are capable of pre-digesting parts of the human values problem domain. [pollid:1093]
  • Successful techniques for value discovery of non-humans (e.g. artificial agents, non-human animals, human institutions) would meaningfully translate into tools for learning human values. [pollid:1094]
  • Value learning isn't adequately being researched by commercial interests who want to use it to sell you things. [pollid:1095]
  • Practice teaching non-superintelligent machines to respect human values will improve our ability to specify a Friendly utility function for any potential superintelligence. [pollid:1096]
  • Something other than AI will cause human extinction sometime in the next 100 years. [pollid:1097]
  • All other things being equal, an additional researcher working on value learning is more valuable than one working on corrigibility, Vingean reflection, or some other portion of the FAI problem. [pollid:1098]

Testing [pollid:1090]

[This comment is no longer endorsed by its author]

I'm working through the Udacity deep learning course right now, and I'm always trying to learn more things on the MIRI research guide. I'm in a fairly different timezone, but my schedule is pretty flexible. Maybe we can work something out?


This raises a really interesting point that I wanted to include in the top-level post, but couldn't find a place for. It seems plausible, even likely, that human savants are implementing arithmetic using different and much more efficient algorithms than those used by neurotypical humans. This was actually one of the examples I considered in support of the argument that neurons can't be the underlying reason humans struggle so much with math.


This is a really broad definition of math. There is regular structure in kinetic tasks like throwing a ball through a hoop. There's also regular structure in tasks like natural language processing. One way to describe that regular structure is through a mathematical representation of it, but I don't know that I consider basketball ability to be reliant on mathematical ability. Would you describe all forms of pattern matching as mathematical in nature? Is the fact that you can read and understand this sentence also evidence that you are good at math?


It's the average, (4 − 2)/2, rather than the sum, since the altruistic agent is interested in maximizing the average utility.
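A minimal arithmetic sketch of that distinction, using the 4 and −2 utilities from the expression above:

```python
# Two agents: one gains 4 utility, the other loses 2.
utilities = [4, -2]

total = sum(utilities)            # 2: what a sum-maximizer cares about
average = total / len(utilities)  # 1.0: what the average-maximizing altruist cares about

print(total, average)  # 2 1.0
```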

The tribal limitations on altruism that you allude to are definitely one of the tendencies that much of our cultural advice on altruism targets. In many ways the expanding circle of trust, from individuals, to families, to tribes, to cities, to nation states, etc. has been one of the fundamental enablers of human civilization.

I'm less sure about the hard trade-off that you describe. I have a lot of experience being a member of small groups that have altruism towards non-group members as an explicit goal. In that scenario, helping strangers also helps in-group members achieve their goals. I don't think large-group altruism precludes you from belonging to small in-groups, since very few in-groups demand any sort of absolute loyalty. While full-effort in-group altruism, including things like consciously developing new skills to better assist your other group members, would absolutely represent a hard trade-off with altruism on a larger scale, people appear to be very capable of belonging to a large number of different in-groups.

This implies that the actual level of commitment required to be a part of most in-groups is rather low, and the socially normative level of altruism is even lower. Belonging to a close-knit in-group with a particularly needy member (e.g. having a partially disabled parent, spouse, or child) may shift the calculus somewhat, but for most in-groups being a member in good standing has relatively undemanding requirements. Examining my own motivations, it seems that for many of the groups I participate in, most of the work I do to fulfill expectations and help others within those groups is driven more directly by my desire for social validation than by any selfless perception of the intrinsic value of the other group members.


Fiction is written from inside the heads of its characters. Fiction books are books about making choices, about taking actions and seeing how they play out, and the characters don't already know the answers when they make their decisions. Fiction books often seem to most closely resemble the problems that I face in my own life.

Books that have people succeed for the wrong reasons I can put down, but watching people make good choices over and over and over again seems like a really useful thing. Books are a really cheap way to get some of the intuitive advantages of additional life experience. You have to be a little careful to pick authors that don't teach you the wrong lessons, but in general I haven't found a lot of histories or biographies that really try to tackle the problem of what it's like to make choices from the inside in an adequate way. If you've read lots of historically accurate works that do manage to give easily digested advice on how to make good decisions, I'd love to see your reading list.


On a very basic level, I am an algorithm receiving a stream of sensory data.

So, do you trust that sensory data? You mention reality; presumably you allow that an objective reality, which generates the stream of your sensory data, exists. If you test your models against sensory data, then that sensory data is your "facts" -- something that serves as your criterion for whether a model is good or not.

I am also not sure how you deal with surprises. Does sensory data always win over models? Or would you sometimes be willing to say that you don't believe your own eyes?

I don't understand what you mean by trust. Trust has very little to do with it. I work within the model that the sensory data is meaningful, that life as I experience it is meaningful. It isn't obvious to me that either of those things are true any more than the parallel postulate is obvious to me. They are axioms.

If my eyes right now are saying something different from what they normally tell me, then I will tend to distrust my eyes right now in favor of what I remember them telling me. I don't think that's the same as saying I don't believe my eyes.

group selection

When you said "more closely linked to genetic self-interest than to personal self-interest" did you mean the genetic self-interest of the entire species or did you mean something along the lines of Dawkins' Selfish Gene? I read you as arguing for interests of the population gene pool. If you are talking about selfish genes then I don't see any difference between "genetic self-interest" and "personal self-interest".

The idea of the genetic self-interest of an entire species is more or less incoherent. Genetic self-interest involves genes making more copies of themselves. Personal self-interest involves persons making decisions that they think will bring them happiness, utility, what have you. To reiterate my earlier statement "the ability of individual members of that species to plan in such a way as to maximize their own well-being."

is a series of appeals to authority

Kinda, but the important thing is that you can go and check. In your worldview, how do you go and check yourself? Or are "streams of sensory data" sufficiently synchronised between everyone?

And I go look for review articles that support the claim that people care about social status. But if you don't consider expert opinion to be evidence, then you have to go back and reinvent human knowledge from the ground up every time you try to learn anything.

I can always go look for more related data if I have questions about a model. I can read more literature. I can make observations.


Fact just isn't an epistemological category that I have, and it's not one that I find useful. There are only models.

So how do you choose between different models, then? If there are no facts, what are your criteria? Why is the model of lizard overlords ruling the Earth any worse than any other model?

You use expressions like "because it's always been true in the past", but what do you mean by "true"?

My primary criterion is consistency. On a very basic level, I am an algorithm receiving a stream of sensory data. I make models to predict what I think that sensory data will look like in the future based on regularities I detect/have detected in the past. Models that capture consistent features of the data go on to correctly control anticipation and are good models, but they're all models. The only thing I have in my head is the map. I don't have access to the territory.
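As a toy illustration of that epistemology (the data stream and both candidate models are invented for the example), models can be compared purely on how well they control anticipation, with no appeal to "facts" outside the stream:

```python
# Score candidate models by one-step-ahead prediction error on the stream.
stream = [1, 2, 3, 4, 5, 6]  # hypothetical sensory data

def constant_model(history):
    return history[-1]        # predicts "same as last observation"

def successor_model(history):
    return history[-1] + 1    # predicts "last observation plus one"

def prediction_error(model, data):
    # Total squared error of one-step-ahead predictions.
    return sum((model(data[:i]) - data[i]) ** 2 for i in range(1, len(data)))

# The model that better controls anticipation is the better map.
print(prediction_error(constant_model, stream))   # 5
print(prediction_error(successor_model, stream))  # 0
```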

And yet I believe with perfect sincerity that, in general, my maps correspond to reality. I call that correspondence truth. I don't understand the separation you seem to be attempting to make between facts and models, or between models and reality.

aspect of the climate system that consistently and frequently changes between glacial and near-interglacial conditions in periods of less than a decade, and on occasion as rapidly as three years

I am not sure this interpretation of the data survived -- see e.g. this:

Neat. Thanks.

The article you link seems to go out of its way to not be seen as challenging my basic claim, e.g. "Having said this, it should be reemphasised that ice-core chemistry does show extremely rapid changes during climate transitions. The reduction in [Ca] between stadial to interstadial conditions during D-O 3 in the GRIP ice-core occurred in two discrete steps totalling just 5 years [Fuhrer et al., 1999]."

Indeed, it is the success of the human species that I would cite as evidence for my assertion that human behavior is more closely linked to genetic self-interest than to personal self-interest. Cultural and social success is a huge factor in genetic self-interest.

I haven't been following the subject closely, but didn't the idea of group selection run into significant difficulties? My impression is that nowadays it's not considered to be a major evolutionary mechanism, though I haven't looked carefully and will accept corrections.

I'm not sure how group selection is related to material you're quoting. Cultural success and social success refer to the success of an individual within a culture/society, not to the success of cultures and societies.

If you don't consider the opinions of experts evidence, what qualifies?

Opinions are not evidence, they are opinions. Argument from authority is, notably, a fallacy. I call the things which qualify "facts".

I mean, it's sort of a fallacy. At the same time, when I'm sick, I go to a doctor, get her medical opinion, and treat it as evidence. I'm not an expert on the things that humans value. I don't have the time, energy, or background to perform experiments and evaluate statistical and experimental methods. Even trusting peer review and relying on published literature is a series of appeals to authority.
