All of Randolf's Comments + Replies

I think the main reason for this is that these people have simply spent more time thinking about cryonics than other people. By spending time on this forum they have had a good chance of running into a discussion which inspired them to read about it and sign up. Or perhaps people who are interested in cryonics are also interested in other topics LW has to offer, and hence stay here. In either case, it follows that they are probably also more knowledgeable about cryonics, and hence understand what cryotechnology can realistically offer now or in the near future. In addition, these long-time members might also be more ethically open to things such as cryonics.

I don't think this is obvious at all. If you had asked me in advance which of the following four sign-pairs would hold with increasing time spent thinking about cryonics:

1. less credence, fewer sign-ups
2. less credence, more sign-ups
3. more credence, more sign-ups
4. more credence, fewer sign-ups

I would have said "obviously #3, since everyone starts from 'that won't ever work' and moves up from there, and is then that much more likely to sign up". The actual outcome, #2, is the one I would have expected least. (Hence, I am strongly suspicious of anyone claiming to expect or predict it as suffering from hindsight bias.)

Then maybe, instead of just downvoting, these people should have asked him to clarify and rephrase his post. This would have actually led to an interesting discussion, while downvoting gave nobody anything. Maybe it should be possible to downvote a post only if you also reply to it.

That would kill the main idea of downvoting, which is to improve the signal/noise ratio by ensuring that comments made by "trolls" just aren't noticed anymore unless people really want to see them. Downvoting does lead to abuses, and I do consider that downvoting deeb's comment was not really needed, but forcing people to comment would defeat that purpose without really preventing the abuses.

Personally I think that this kind of voting is indeed useless and belongs in places such as YouTube or other sites where you can't expect a meaningful discussion in the first place. Here, if a person disagrees with you, I believe he or she should post a counterargument instead of yelling "you are wrong!", that is, giving a negative vote.

The problem with downvotes is that those who are downvoted rarely know that they are wrong; otherwise they would have deliberately submitted something they knew would be downvoted, in which case the downvotes would be expected and have little or no effect on the person's future behavior. In some cases downvotes might cause a person to reflect on what they have written. But that will only happen if the person believes the downvotes are evidence that their submission is actually faulty, rather than a signal that the downvoter acted for reasons other than being right. Even if all the requirements for a successful downvote are met, the person might very well not be able to figure out how exactly they are wrong from a change in a number associated with their submission. The information is simply not sufficient. This will cause the person either to continue expressing their opinion, or to avoid further discussion and continue to hold wrong beliefs.

With respect to the reputation system employed on lesswrong, it is often argued that little information is better than no information. Yet humans can easily be overwhelmed by too much information, especially information that is easily misjudged and provides little feedback. Such information might only add to the overall noise.

And even if the above-mentioned problems didn't exist, reputation systems might easily reinforce groupthink, if only by discouraging those who disagree and rewarding those who agree. If everyone were perfectly rational, a reputation system would be a valuable tool. But lesswrong is open to everyone. Even if most of the voting behavior is currently free of bias and motivated cognition, it might not stay that way for very long. Take for example the voting pattern when it comes to plain-English, easily digestible submissions versus highly technical posts including math: many in the latter category receive far fewer upvotes.

The first picture is a dark image of a planet with a slightly threatening atmosphere. It looks like the upper half of a mushroom cloud, but it could also be seen as the Earth violently torn apart. This is why I think, given the context, that it symbolises the threat of nuclear war and, more universally, the threat of a dystopia.

The last picture shows a beautiful utopia. I thought it was there to give a message of the type: "If everything goes well, we can still achieve a very good future." That is, while the first picture symbolises the threat ...

I'm afraid you are making a very strong statement with hardly any evidence to support it. You merely claim that people who pursue gratitude-free goals are often religious (source?) and that such goals are a myth and absurd. (Why?) I, for one, don't understand why such a goal would necessarily be absurd.

Also, I can imagine that even if I was the only person in the world, I would still pursue some goals.

It's absurd from an ethical point of view, as a finality. I was implicitly talking in the context of pursuing "important goals", that is, goals valued on an ethical basis. Abnegation at some level is an important part of most religious doctrines.

Strangely enough, while I am a transhumanist to some degree and also enjoy sci-fi, I am far from being a genius. Still, the message of the pictures was immediately obvious. This would suggest what you said: they may be appealing to people in general, while not necessarily as appealing to those already very familiar with sci-fi and transhumanism.

I would count myself among "general people". I didn't get it at all. In fact, having read the comments, I'm still not sure I get it. It's a pretty picture and all, but why is it there?

I could indeed simply lie and play the role of an obedient soldier to get the position I was looking for. However, it is of course true that if I had been born and lived in a country where people are continuously fed nationalist propaganda, I would be less likely to disobey the rules or to think it's wrong to retaliate.

If I had been one of those people with the missile warning and red button, I wouldn't have pressed it even if I had known the warning was real. What use would it be to launch a barrage of nuclear weapons against ordinary citizens simply because their foolish leaders did so to you? It would only make things worse, and certainly wouldn't save anyone. The primitive need for revenge can be extremely dangerous with today's technology.

From a game-theoretic perspective, if the other side knew you thought that way then they should launch on your watch. MAD only works if both sides believe the other is willing to retaliate. If one side is willing to push the button and the other is not willing to retaliate, then the side willing to push the button nukes the other and takes over the world. If you can be absolutely certain the other side never finds out you aren't willing to retaliate, then yours is the optimal policy.
This is why you would not have been hired to sit in front of the button, even given the Soviets' dubious hiring techniques. Also, if you had been raised in Soviet Russia, your thoughts on the topic might have been different.
Primitive need for revenge can be even more vital with today's technology. It is the only thing holding the most powerful players in check.
Followup question: if someone was about to be placed in front of that red button, would you rather it be someone who had previously expressed the same opinion, or someone who had credibly committed to retaliate in case of a nuclear strike (however useless or foolish such retaliation might be)? Conversely, if someone were to be placed in front of the corresponding red button of a country your leaders were about to launch a barrage of nuclear weapons against, which category would you prefer they be in?
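The game-theoretic point above can be sketched as a toy payoff model. The numbers below are illustrative assumptions chosen for the example, not anything from the discussion:

```python
# A toy sketch of MAD as a precommitment strategy. The payoff numbers
# are illustrative assumptions, nothing more.

PEACE = (0, 0)              # nobody launches: (attacker, defender)
CONQUEST = (10, -100)       # first strike, defender does not retaliate
MUTUAL_RUIN = (-100, -100)  # first strike answered by retaliation

def attacker_payoff(defender_retaliates):
    """Payoff to the side contemplating a first strike."""
    return MUTUAL_RUIN[0] if defender_retaliates else CONQUEST[0]

def will_strike(defender_retaliates):
    """A rational attacker strikes only if it beats the peaceful payoff."""
    return attacker_payoff(defender_retaliates) > PEACE[0]

print(will_strike(True))   # False: credible retaliation deters the strike
print(will_strike(False))  # True: without retaliation, striking pays
```

On these assumed payoffs, the deterrent works only while the other side believes retaliation is certain, which is exactly the point being made above.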

Mutually assured destruction is essentially a precommitment strategy: if you use nuclear weapons on me I commit to destroying you and your allies, a larger downside than any gains achievable from first use of nuclear weapons.

With this in mind, it's not clear to me that it'd be wrong (in the decision-theoretic sense, not the moral one) to launch on a known-good missile warning. TDT states that we shouldn't differentiate between actions in an actual and a simulated or abstracted world: if we don't make this distinction, following through with a launch on warn...

Not that I disagree with your conclusion, but there was significant selection pressure in the process of qualifying to get into the chair in front of the button. Political leaders don't like to give power to subordinates who are unlikely to implement leadership's desires. Having gone through the process and its accompanying ideological training makes Petrov's refusal to risk nuclear armageddon even more impressive. Even though moral courage was [ETA: not] a criterion in selecting him, Petrov showed more than anyone could reasonably have expected.

as interesting as picking up rocks and observing insects crawling under them, IMHO

What, insects are fascinating!

Rationality can be useful when drawing. It allows you to avoid simple mistakes which you might otherwise make. I think this is especially true when you are, for example, inking your work, or doing some other task which is mostly mechanical. However, sometimes following mere feelings can produce very interesting results. I am not good at drawing, nor do I actually know anything about it, but I draw a little bit every now and then. I find drawing most enjoyable when I draw guided by intuition, just letting the pen draw curve after curve the way it fee...

This is another sort of mistake. The fact that a hypothesis can't be tested by me does not mean that it is meaningless. Verificationists would agree with this, because they think verification works everywhere, even on the other side of the universe. If some alien race over there could have seen the spaceship, or seen something which made the probability of there being a spaceship there high, or could have failed to see it, then the claim is not meaningless.

I don't think I understand. If it isn't possible to ever verify the existence of these aliens, what does it matter that the...

Ronny Fernandez:
It doesn't help you at all. It just means that verificationists would not, and should not, call it meaningless. It is unverifiable for you, but not for science as a whole.

I left that field blank because I don't think the question is well defined. It has very little meaning to assign probabilities to the existence of something as vague as a god. Maybe there is a god, maybe there isn't. It's entirely beyond my scope.

Yes, I think you managed to put my thoughts into words very well here. Probably a lot more clearly than I.

That's a bit different from what I'm trying to say. My word choice of "intuition" was clearly bad; I should have talked about mental experiences. My point is that when I do mathematics, when I, for example, use the axioms and theorems of the natural numbers to prove that 1+1 is 2, I have to rely on my memories and feelings at some point. If I use a theorem proven before, I must rely on my memory that I have proven that theorem before, and correctly, but remembering is just another type of vague mental experience. I could also remember axioms of natural nu...

FWIW: I agree with you that:

* my beliefs are always the outputs of real-world embodied algorithms (for example, those associated with remembering previously proven axioms) and therefore not completely reliable.
* there exists a non-empty set S1 of assertions that merit a sufficiently high degree of confidence that it is safe to call them "true" (while keeping in mind, when it's relevant, that we mean "with probability 1-epsilon" rather than "with probability 1").

I would also say that:

* there exists a non-empty set S2 of assertions that don't merit a high degree of confidence, and that it is not safe to call them true.
* the embodied algorithms we use to determine our confidence in assertions are sufficiently unreliable that we sometimes possess a high degree of confidence in S2 assertions. This confidence is not merited, but we sometimes possess it nevertheless.

Would you agree with both of those statements? Assuming you do, then it seems to follow that by "what I truly believe" you mean to exclude statements in S2. (Since otherwise, I could have a statement in S2 that I truly believe, and is therefore definitionally true, which is at the same time not safe to call true, which seems paradoxical.) Assuming you do, then sure: if I accept that "what I truly believe" refers to S1 and not S2, then I agree that truth is what I truly believe, although that doesn't seem like a terribly useful thing to know.

Whether it is a weaker statement or not isn't the point. I only brought it up because it made me change the way I think about mathematics and the world. While I don't know what you mean by "any story is as good as any other", I do not believe that it is possible to give truth an honest definition which would leave no open questions about the very nature of truth while still being entirely objective.

Well, let's say I believe that I can fly by will alone. You, on the other hand, believe that I cannot fly by will alone. Which one of us is right? If truth is entirely subjective, then we're both "right", in the sense that we both have some sort of a story in our heads regarding flight, and in our respective worldviews this story makes perfect sense, and since there's no objective standard for truth (at least, none that we can access in any way), the stories are all that matters. Thus, all stories are equally true, just by virtue of being stories.

According to a weaker interpretation of your statements, however, one of us is probably closer to the truth than the other. More specifically, it is very likely that my belief about my ability to fly by will alone is false. It's still not 100% likely, of course -- there's always the chance that we live in the Matrix, or that I'm a superhero, or that by "flight" I really mean "pretending to fly without physically moving", etc. -- but such chances are quite small. Thus, for all practical purposes, we can say, "Bugmaster's belief about flight is false", with the understanding that we can never be 100% sure.

There could be other interpretations of your claims, of course; these are just the two I could come up with. I could support the second interpretation, though whether it applies to math or not is highly debatable. However, if you support the first interpretation, or if you don't place any significant value on reason, then any further discussion on the topic is pointless -- because, by definition, there's nothing I can say that will make any difference to you.

Yes, I agree, it doesn't work in this case. It was an interesting talk though, thank you for that. Now I must sleep on this.

Yes, that's pretty much what I would say. Also, a simple answer to the question would also be:

At least the part where you use feelings to verify you didn't make an error. After writing the proof, you remember that you carefully checked every part for errors. But this remembering is a mere feeling.

My world view used to be different until I read the following phrase somewhere. At that moment I realised I can only be as sure as my feelings let me.

Not even mathematical facts necessarily hold, since there could always be a magical demon blu...

That's a much weaker statement than the one you originally stated. This new statement says, basically, "you can never be 100% sure of anything", whereas before you seemed to be saying, "there exist no objective standards of truth at all, any story is as good as any other".

No, I think you understood pretty well what I meant. However, even though I may not be a rationalist myself, I think I can still take part in rational debate by embracing the definition of rational truth during that debate. Same way a true Christian can take part in a scientific debate about evolution, even if he doesn't actually believe that evolution is true. Rational talk, just like any talking, can also change my feelings and intuitions and hence persuade me to change my subjective beliefs.

However, I now realise this wasn't exactly the right place to tell about my idea of subjective truth. Sorry about that.

I don't think it will work in this case, because we're debating the very notion of rational truth. I personally didn't mean to give you that impression at all, I apologize if I did. Just because I happen to think that using reason to debate with someone who does not value reason is futile, doesn't mean that I want to actively discourage such debate. After all, I could be wrong !

Hmm, well, if you truly believe that truth is subjective, then there's nothing I can do to dissuade you, by definition -- since my subjective opinion is as good as yours. Now if you'll excuse me, I've got to go build some hula-hoops, and then maybe take to the skies by will alone

Oh, you probably could. I'm not so fond of this definition. It's just something I have found most satisfying so far, but it's still subject to change (how ironic!).

I think he will have a strong feeling that pi is about 3.141... Like I said, in my definition truth is subjective and may change, since it's tied to the person's beliefs and feelings. This may not seem beautiful to everyone, but I can live with that.

That's the key issue. Reality is doing something here. And you know, in advance, what his model will move to. You don't think he will succeed at this. At the end of the day, you are pretty sure that there's something objective going on. More starkly, I can give you mathematical examples where your intuition will be wildly at odds with the correct math. Some of those make fun games to play for money. I suspect that you won't be willing to play them with me, even if your intuition says that you should win and I shouldn't.
Why?

Maybe he is able to construct some sort of abstract hula-hoop in his mind, which he believes to have those properties, but of course he isn't able to do it in physical reality. Strong intuition suggests that it isn't possible.

However, we should not forget that mathematical models of physical reality and mathematics itself are separate things. We can use mathematics to understand nature, but nature cares very little about anyone's mathematical truths. Well, I think it's safe to say so, anyway.

Ok, so consider what happens when this person does indeed attempt to construct a physical hula-hoop. After failing a few times, assuming he doesn't give up altogether, he'll be forced to accept (however provisionally) that pi is not 3, but approximately 3.14159265 (in our current physical reality, at least). He now has two conflicting models in his mind: one of an abstract hula-hoop made with pi == 3, and another made with pi ~= 3.14159265. Which one will he "have a strong feeling / intuition / belief" about, do you think?
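For what it's worth, the arithmetic behind the hula-hoop example is easy to check directly; the shortfall figure below simply follows from the 5-foot diameter in the example:

```python
import math

# A quick check of the hula-hoop arithmetic: a 5-foot-wide hoop needs
# pi * diameter feet of tubing, not 3 * diameter.
diameter_ft = 5.0
needed_ft = math.pi * diameter_ft   # actual circumference
believed_ft = 3 * diameter_ft       # the "pi == 3" estimate

print(round(needed_ft, 3))                # 15.708
print(round(needed_ft - believed_ft, 2))  # 0.71 feet of tubing short
```

So the builder's 15 feet of tubing comes up about 8.5 inches short every time, no matter how strongly he believes otherwise.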

Yes, I believe that a proof is just a well-formed finite string, but I take that a little bit further, because one can always ask "what is a well-formed finite string?". Basically, I tell that person to use his honest intuition to check which things are "well-formed finite strings".

These questions have simple answers. Please explain what part of carrying out a proof-checking procedure, which can be done by hand if need be, requires intuition.

Someone doing that still puts faith in the computer, and in the person who wrote the program that checks the rules. Essentially, he has a strong feeling that A holds because the computer program said so. He still has to rely on his "intuition" or "belief" that the computer program outputs true statements.

Some people (mostly young children, though some adults as well) believe that the ratio of a circle's circumference to its diameter should be an integer, or at worst a rational fraction. Most other people, however, do not believe this to be the case. If mathematical truths are subjective as you claim, then a person who believes that pi == 3 should be able to build himself a 5-foot-wide hula-hoop using exactly 15 feet of plastic tubing. Do you think this is actually the case?
May I recommend "Godel, Escher, Bach" to you? It discusses the issue of what proof is at a rigorous but accessible level, including that a proof is just a well-formed finite string.
Checking whether mathematical rules are satisfied does not require intuition; it can be done by a computer program (and often is).
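As a small illustration of both points, here is a sketch of a mechanical derivation checker for the MIU system discussed in "Godel, Escher, Bach". Each step is validated by string manipulation alone, with no intuition consulted; the rule encoding is my own rendering of Hofstadter's four rules:

```python
# A purely mechanical derivation checker for Hofstadter's MIU system.
# A "proof" here is just a finite list of strings, each obtained from
# the previous one by a rewrite rule.

def successors(s):
    """All strings reachable from s by one rule application."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                      # rule 1: xI  -> xIU
    if s.startswith("M"):
        out.add(s + s[1:])                    # rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])  # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])        # rule 4: UU  -> (deleted)
    return out

def check_derivation(steps):
    """Valid iff it starts at the axiom MI and every line follows
    from the previous one by some rule."""
    if not steps or steps[0] != "MI":
        return False
    return all(b in successors(a) for a, b in zip(steps, steps[1:]))

print(check_derivation(["MI", "MII", "MIIII", "MIIIIU", "MUIU"]))  # True
print(check_derivation(["MI", "MU"]))                              # False
```

The checker never needs to "understand" a derivation; it only confirms that each string follows from the last, which is the sense in which proof-checking is mechanical.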

Yes, indeed. The ratio of open to closed may be higher in sci-fi books than in fantasy books, but there are still many open fantasy books and closed sci-fi books. In the end it depends only on the individual book. This is why I don't think it's really safe to label fantasy as a closed genre.

If the ratio is low enough in fantasy, I'm fine calling it "closed" as a genre. I'm just not sure that it's even commonly closed. (I'm also not that clear on what it means to be open vs closed)

What I'm pointing out is that all of these drug ideas are bound to be something that evolution has at some point tried out, and thrown away. And they are really unsophisticated ideas compared with those the brain has actually adopted.

Well, there could be many reasons why evolution has "thrown them out". Maybe they are harmful in the long term, maybe their use consumes precious energy, or maybe they just aren't "good enough" for evolution to have kept them. That is, maybe they just don't give any significant evolutionary advantage.

Evolution doesn't create perfect beings, it creates beings which are good enough to survive.

Eagles are lonely hunters who don't spend much time with other birds, are quite rare, and live only in the wilderness. Robins, however, are often seen near other birds, live practically everywhere, and are also large in numbers. So perhaps people choose the robin as the better disease spreader simply because the robin probably is the better disease spreader.

There are very many factors that may affect this kind of test. What do you think about the following?

If you were told that plankton had caught a disease, how likely would you think it would be to spread amo...

I think that the saying "What can be destroyed by truth, should be" is a little bit too black and white to work well in all aspects of life. For example, a clumsy and fat person who thinks he is actually rather agile might be a lot happier with this false belief than if he were aware of the truth*. Of course it could be said that if he knew the truth, he would start to exercise and eventually become healthier, but that's not necessarily the case. Another example would be that if a not-so-good-looking person thinks he looks good, he might be enc...

I may be somewhat more radical than a lot of people here, but I don't think the fat man should be deluded. It will hurt him more in the long run because, believing himself to be agile, he'll sign up for physically strenuous jobs and may injure himself, or try to compete in sports and be let down hard instead of lightly, as a controlled reveal would allow.

I don't think you can call such a world good or perfect, but I don't think it's all bad either. I guess you could call it neutral.

I mean, I don't see that world as a big failure, if a failure at all. No civilization will be there forever*, but the one I mentioned had at least achieved something in its time: it had once been glorious. While it left its statues behind, it still managed to keep the world habitable for life and other species (note how I mentioned trees and plants growing on the ruins). To put it simply, it was a beautiful civilization that left a beautiful world. It isn't fair to call it a failure only because it wasn't eternal.

*Who am I to say that?

I'll only speak for myself, but 'everybody dead' gives an output nowhere near zero on my utility function. Everybody dead is awful. It's not the worst imaginable outcome, but it is really really really low in my preference ordering. I can see why you would think it's neutral - there's nobody to be happy but there's nobody to suffer either. However, if you think that people dying is a bad thing in itself, this outcome really is horrifying.

I don't know, or maybe I don't understand your point. I would find a quiet and silent post-human world very beautiful in a way. A world where the only reminders of the great yet long-gone civilisation would be ancient ruins. Superstructures which were once the statues of human prosperity and glory, now standing along with nothing but trees and plants, forever forgotten. Simply sleeping in a never-ending serenity and silence...

Don't you, too, find such a future very beautiful in an eerie way? Even if there is no sentient being to perceive it at that time, the fact that such a future may exist one day, and that it can now be perceived through art and imagination, is where its beauty truly lies.

I suspect that you are imagining this world as good because you can't actually separate your imagined observer from the world. The world you are talking about is not just a failure of humanity; it is a world where we have failed so completely that nothing is alive to witness our failure.