All of Satoshi_Nakamoto's Comments + Replies

Ok. Thanks for letting me know. I have removed the first example. I was thinking that it would make it simpler if I started out with an example that didn't look at evidence, but I think it is better without it.

If anyone wants to know the difference between frequency and probability, see the quote below:

“A probability is something that we assign, in order to represent a state of knowledge, or that we calculate from previously assigned probabilities according to the rules of probability theory. A frequency is a factual property of the real world that we

... (read more)
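
To make that distinction concrete, here is a minimal sketch (a coin-flip example of my own, not from the quote): the frequency is counted from observed data, while the probability is assigned from a state of knowledge before any data arrive.

```python
import random

random.seed(0)

# Frequency: a factual property of the real world -- we measure it by counting.
flips = [random.random() < 0.5 for _ in range(1000)]
frequency = sum(flips) / len(flips)  # a fact about this particular sample

# Probability: a number we *assign* to represent a state of knowledge.
# Knowing only that the coin has two sides and having no reason to favour
# either, we assign P(heads) = 0.5 before seeing any data at all.
assigned_probability = 0.5

print(f"observed frequency  : {frequency:.3f}")
print(f"assigned probability: {assigned_probability:.3f}")
```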

Yes, you can. See this site for what I think is a good example of visualizing Bayes' theorem with Venn diagrams.
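
To make the Venn-diagram reading concrete, here is a small sketch (the numbers are invented for illustration) that treats events as overlapping regions, so that P(A|B) is just the fraction of B's region that lies inside A:

```python
# Bayes' theorem read off a Venn diagram: events are regions of a sample
# space, and P(A|B) is the fraction of B's area that lies inside A.
# All numbers below are invented for illustration.
p_a = 0.3          # area of circle A
p_b_given_a = 0.6  # fraction of A's area that overlaps B
p_b = 0.4          # area of circle B

# P(A and B): the overlap region, computed from A's side.
p_a_and_b = p_b_given_a * p_a

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B) = overlap / area of B.
p_a_given_b = p_a_and_b / p_b

print(f"P(A and B) = {p_a_and_b:.2f}")   # 0.18
print(f"P(A | B)   = {p_a_given_b:.2f}") # 0.45
```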

Good point. Would you say that this is the problem: when you are rational, you deem your conclusions more valuable than those of non-rational people. This can end up being a problem, as you are less likely to update your beliefs when they are opposed. This adds the risk that if you form one false belief and then rationally deduce a plethora of others from it, you will be less likely to update any erroneous conclusions.

I think that the predicament highlights the fact that going against what is reasonable is not something that you should do lightly. Maybe, I... (read more)

I agree that this is probably not the best example. The scrub one is better.

I think that "moral" is similar to "reasonable" in that it is based on intuition rather than argument and rationality. People have seen slavery as "moral" in the past. Some of the reasons for this are false beliefs: that it is natural for some people to be slaves, that slaves are inferior beings, and that slavery is good for slaves.

I guess I was thinking about it from two points of view:

  • Is it rational to have the moral belief that there shou
... (read more)
VoiceOfRa · 8y · 0 points
On the other hand, high levels of melanin were correlated with lower intelligence.

I agree that rationality and reasonableness can be similar, but they can also be different. See this post for what I mean by rationality. The idea of it simply being "choosing the best option" is too vague.

Some factors that may cause what others think is reasonable to differ from what is most rational are: the continued use of old paradigms that are known to be faulty, pushing your own views as what is reasonable as a method of control, and status quo bias.

Here are two more examples of the predicament:

  • Imagine that you are in family that is heavil
... (read more)

I don't think I was very clear. I meant for this case to be covered under "avoid the issue", since by avoiding the issue you just continue whatever course of action or behaviour you were previously undertaking. I have edited the post to make this a bit clearer.

I thought about this later and think you were right. I have updated the process in the picture.

Yes, they seem pretty close to me, though I think it is a bit different. I think the Bruce article was trying to convey the idea that Bruce was a kind of gaming masochist: that is, he wanted to lose.

An example quote is:

If he would hit a lucky streak and pile up some winnings he would continue to play until the odds kicked in as he knew they always would thus he was able to jump into the pit of despair and self-loathing head first. Because he needed to. And Bruce is just like that.

The difference as I see it is that Bruce loses through self-sabotage because of unresolved issues in his psyche, while the scrub loses through self-sabotage because they are too pedantic.

Luke_A_Somers · 8y · 1 point
I don't think pedantry is anything like the key element of scrubbitude. As I understand it, a scrub is interested in 'fairness', which tends to mean shallow learning curves, balance between options (even when it is not called for), and their winning.
Gunnar_Zarncke · 8y · 0 points
Which is also a psych problem. But I agree that it is different: they don't want to lose, but they also don't want to win. It seems like an in-between state.

Good idea. I replaced it with "Why can't you just conform to my belief of what is the best course of action for you here?". Thanks.

I wrote a post based on this; see The Just-Be-Reasonable Predicament. The just-be-reasonable predicament occurs when, in order to be seen as reasonable, you must do something irrational or non-optimal.

Is this a decent summary of what you mean by 'reasonable': noticeably rational in socially acceptable ways, i.e. you use reasons and arguments that are in accordance with group norms?

A reasonable person:

  • can explain their reasoning
  • is seen as someone who will update their beliefs based on socially acceptable evidence
  • is seen to act in accordance with social norms even when the norms are irrational. This means that their behaviour and reasoning are seen as socially acceptable and/or praiseworthy
abramdemski · 8y · 1 point
Yes, I think that's an accurate, succinct definition. (Note: I spent a few minutes writing this comment thinking that there was a small difference between your statement and my intention, and ultimately decided that there wasn't.) We could make many fine distinctions in this cluster. To list several notions in this close region:

  • A person who is guided by the goal of being rational vs a person who is guided by the goal of seeming rational
  • Trying to seem rational by any means vs trying to seem rational in socially acceptable ways
  • Trying to be rational by any means vs trying to be rational only in socially acceptable ways

The last of these is similar to the concept of a person who is playing to win vs a scrub [http://www.sirlin.net/articles/playing-to-win]. A scrub is (roughly) a person who sees overly clever strategies as a kind of cheating, but other than that, plays to win. Another important concept is negotiability: that the decision-making process is open to scrutiny and adjustment by outsiders. This is similar to corrigibility [https://intelligence.org/2014/10/18/new-report-corrigibility/] as well.

Don’t worry about the money; just like the comments if they are useful. In Technological precognition, does this cover time travel in both directions, i.e. looking into the future and taking actions to change it, and also sending messages into the past? Also, what about making people more compliant and less aggressive by either dulling or eliminating emotions in humans, or by making people more like a hive mind?

turchin · 8y · 0 points
I uploaded a new version of the map with changes marked in blue: http://immortality-roadmap.com/globriskeng.pdf

Technological precognition does not cover time travel, because it is too fantastical. We may include the scientific study of claims about precognitive dreams, as such study will soon become possible with live brain scans of sleeping people and dream recording. Time travel could have its own x-risks, like the well-known grandfather problem.

Lowering human intelligence is in the bad plans. I have been thinking about hive minds... It may be a way to create safe AI, which would be based on humans and use their brains as free and cheap supercomputers via some kind of neuro-interface. In fact, contemporary science as a whole is an example of such a distributed AI. If a hive mind is enforced, it is like the worst totalitarian state... And if it does not include all humans, the rest will fight against it, and may use very powerful weapons to save their identity. This is already happening in the fight between globalists and anti-globalists.

Bitcoin is an electronic payment system based on cryptographic proof instead of trust. I think the big difference between it and the risk control system is the need for enforcement, i.e. changing what other people can and can't do. There seem to be two components to the risk control system: prediction of what should be researched, and enforcement of this. The prediction component doesn't need to come from a centralised power; it could just come from the scientific community. I would think that the enforcement would need to come from a centralised power. I g... (read more)
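
As a rough illustration of what "cryptographic proof instead of trust" means, here is a toy proof-of-work sketch (hugely simplified compared to real Bitcoin): finding the nonce is expensive, but anyone can verify it without trusting whoever found it.

```python
import hashlib

def proof_of_work(data: str, difficulty: int = 4) -> int:
    """Find a nonce whose SHA-256 hash of data+nonce starts with
    `difficulty` zero hex digits. Expensive: requires brute-force search."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(data: str, nonce: int, difficulty: int = 4) -> bool:
    # Verification is one hash -- cheap, and requires no trust in
    # whoever did the work.
    digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = proof_of_work("pay Alice 1 BTC")
print(nonce, verify("pay Alice 1 BTC", nonce))  # hard to find, easy to check
```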

In plans: 1. Is not "voluntary or forced devolution" the same as "ludism" and "relinquishment of dangerous science" which is already in the plan?

I was thinking more along the lines of restricting the chance of divergence in the human species. I guess I am not really sure what it is that you are trying to preserve. What do you take to be humanness? Technological advances may allow us to alter ourselves so substantially that we become post-human or no longer human. This could come, for example, from cybernetics or genetic engin... (read more)

hairyfigment · 8y · 0 points
Technically, I wouldn't say we'd lost it if the price of sperm donation rose (from its current negative level) until it stopped being an efficient means of reproduction. But I think you underestimate the threat of regular evolution making a lot of similar changes, if you somehow froze some environment for a long time. Not only does going back to our main ancestral environment seem unworkable - at least without a superhuman AI to manage it! - we should also consider the possibility that our moral urges are a mixed bag derived from many environments, not optimized for any.
turchin · 8y · 0 points
A question: is it possible to create a risk control system which is not based on centralized power, just as Bitcoin is not based on central banking? For example: local police could handle local crime and terrorists; local health authorities could find and prevent disease spread. If we have many x-risk peers, they could control their neighbourhood in their professional space. Counter-example: how could it help in situations like ISIS or another rogue state, which is (maybe) going to create a doomsday machine or a virus which will be used to blackmail or exterminate other countries?
turchin · 8y · 0 points
Sent 150 USD to the Against Malaria Foundation.

The idea of dumbing people down is also present in the Bad plans section, under "limitation of human or collective intelligence"... But the main idea of preventing human extinction is, by definition, to ensure that at least several examples of Homo sapiens are still alive at any given point in time. It is not the best possible definition. It should also include posthumans, if they are based on humans and share a lot of their properties (and, as Bostrom said, could realise the full human potential).

In fact, we can't say what is really good before we solve the Friendly AI problem. And if we knew what is good, we could also say what the worst outcome is, and so what constitutes an existential catastrophe. But a real catastrophe that could happen in the 21st century is far from such sophisticated problems of determining the ultimate good, human nature and full human potential. It is a clearly visible physical process of destruction.

There are some ideas for solving the problem of control from the bottom up, like the idea of the transparent society by David Brin, where vigilantes would scan the web and video sensors searching for terrorists. So it would be not hierarchical control but net-based, or peer-to-peer.

I like the two extra boxes, but for now I have already spent my prize budget twice over, which unexpectedly puts me in an awkward situation: as the author of the map I want to make the best and most inclusive map, but as the owner of the prize fund (which I pay from personal money earned by selling art) I feel somewhat conflicted :)

I would use the word resilient rather than robust.

  • Robust: A system is robust when it can continue functioning in the presence of internal and external challenges without fundamental changes to the original system.

  • Resilient: A system is resilient when it can adapt to internal and external challenges by changing its method of operations while continuing to function. While elements of the original system are present, there is a fundamental shift in core activities that reflects adaptation to the new environment.

I think that it is a better idea to think ab... (read more)
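
As a toy illustration of the two definitions above (all names and thresholds here are hypothetical), the robust service absorbs extra load without changing its method, while the resilient one keeps functioning by switching to a different method:

```python
# Toy contrast between the two definitions (all names hypothetical).

class RobustService:
    """Keeps functioning under challenges *without* changing its method."""
    def handle(self, load: int) -> str:
        # Same algorithm regardless of conditions; built with enough
        # spare capacity to absorb the challenge unchanged.
        return f"served {load} requests with the original pipeline"

class ResilientService:
    """Keeps functioning by *changing* its method of operation."""
    def handle(self, load: int) -> str:
        if load > 100:
            # Fundamental shift in core activity: degrade to a cheaper mode
            # to adapt to the new environment.
            return f"served {load} requests from a static cache"
        return f"served {load} requests with the original pipeline"

for load in (50, 500):
    print(RobustService().handle(load))
    print(ResilientService().handle(load))
```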

turchin · 8y · 1 point
I accepted your idea about replacing the word "robust" and will award the prize for it.

The main idea of this roadmap is to escape availability bias by listing all known ideas for x-risk prevention. This map will be accompanied by a map of all known x-risks, which is ready and will be published soon. More than 100 x-risks have been identified and evaluated. The idea that some plans create their own risks is represented in this map by the red boxes below plan A1. But it may be possible to create a completely different future risks-and-prevention map using a systems approach, or something like a scenario tree.

Yes, each plan is better at containing specific risks: A1 is better at containing biotech and nanotech risks, A2 is better for UFAI, A3 for nuclear war and biotech, and so on. So another map may be useful to match risks with prevention methods.

The timeline was already partly replaced with "steps", as suggested by "elo", and he was awarded for it.

Phil Torres shows that Bostrom's classification of x-risks is not as good as it seems to be, in: http://ieet.org/index.php/IEET/more/torres20150121 So I prefer the notion of "human extinction risks" as more clear.

I still don't know how we could fix all the world-system problems listed in your link without having control of most of the world, which returns us to plan A1.

In plans:

1. Is not "voluntary or forced devolution" the same as "ludism" and "relinquishment of dangerous science", which is already in the plan?
2. The idea of uploading was already suggested here in the form of "migrating into simulation" and was awarded.
3. I think that "some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware" is basically the same idea as "smaller catastrophe could help unite humanity (pandemic, small asteroid, local nuclear war)", but your wording is excellent. I think I should accept...
[anonymous] · 8y · 1 point
This is useful. Mr. Turchin, please redirect my award to Satoshi.