[Edited on 26-11-2014]
In a discussion opened by XiXiDu on what to call Pascal's Mugging, one of the top comments seems to successfully clarify to me that this particular situation should not be regarded as Pascal's Mugging (given the lack of entanglement between the improbability and the size of the impact).
Though the question of what to do (technically, not common-sensically) with 'high impact, low probability' claims is still not quite clear to me.
As I became a little less anonymous on a social network (by posting on a rationality-related public page), I received a message from a person who introduced himself as a 'self-taught AI researcher'.
By his own account, he doesn't know English and isn't acquainted with any of the Sequences (or other sources not translated into Russian, e.g. textbooks), but he has 'a firm understanding' of what he has to do in order to build a general AI from 'genetic algorithms'.
Mostly he wrote that the complexity of values article is bullshit, and that he 'knows the answer' to that question, and indeed to 'all questions' of the contemporary AI field.
The chance of his words being true is infinitesimal.
I probably had a vague feeling at the start that I was being 'Pascal-mugged' (the threat being the creation of uFAI), but flinched away from it. I definitely had the idea somewhere along the line, but I didn't really think it through and didn't consciously choose any particular mode of communication. I just tried to understand his position with guiding questions and to explain why uFAI is not a good thing.
As I write this, I am aware that it is probably not the best-quality material, but hopefully it will be good enough for Discussion. I am mostly seeking advice on how to handle this stuff emotionally and practically (ignore / engage).
And maybe some encouragement or critique, because I feel really bad about engaging in conversations with him: it brought about only insults from him (he is most likely a troll) and more closed-mindedness (in the rare chance that he is sincere). Also because, if I think about it, my motivation for talking to him was mostly comprised of the thrill of engaging in conversation with a hateful person (hello, Nineteen Eighty-Four), and then of worries about the chance that he is sincerely crazy.
[Edit: added on 20.11.2014] Overnight he wrote that he is not 'interested' in creating Friendly AI. As he understands it, that means 'not harming people', which conflicts with his wish to rule the world; besides, the AI would be 'more important' than people, since it is so smart.
In reality, an FAI would be one that can understand his commands and values as he understands them, and probably one that understands the notion of life at all.
He ignores the notion of evidence: to support his claims, he posted a screenshot of an 'AI interface' containing nothing but 10 lines of PHP code, which are barely functional as far as I can tell.
Again, the chance that he is really doing what he claims diminishes with his every word, while the chance that he is trolling or mentally ill grows. But now he is requesting material help with creating a full-blown uFAI, in exchange for a chance at safety when it arrives.
I feel like I should ignore him now and block him on the social network, and that I should have done so before things spiraled out of control. Such small probabilities should not be allowed to reside in my mind. It looks like I have unwillingly privileged the hypothesis, since despite everything I am a little unhinged by his writing. Really, I feel that I should ignore the urge to mess with him and just block all communication whatsoever.