You could in principle very easily ignore the dice and eat the chocolate regardless. You would need to take it upon yourself to follow through with the scheme and forfeit the chocolate three times out of four. If you start with the understanding that chocolate would be available four times out of four under a more permissive scheme, then you are effectively punishing yourself three quarters of the time, which I expect would work as negative reinforcement for the task, or for the reward scheme in general. It would also require a degree of willpower that some people won't have.

The only factor under your control may be to realize that the only factor under your control is to obtain and use better methods and processes to think, gather information, act in the real world, generate feedback and adjust yourself.

Illustratively, no matter how innately intelligent a native English speaker might be, if he never had any experience with Japanese, he won't be able to read and understand kanji. Is that a failure of intelligence, or a failure of knowledge and method? If you've never had any experience in any science, and don't know the specialized vocabulary, then it is likely you won't be able to understand a technical paper. Again, is that a failure of intelligence, or is it just that you'll need some time to grow familiar with the field? A lot of intelligence and rationality is like that. Including understanding your own intelligence and capabilities to better yourself.

You'll need to assess where you stand now, then iteratively improve yourself. You'll need to look outside, for information and help, to get better at it. Depending on your starting point, your incremental improvements may be slow at first, until you learn how to get better at improving yourself. You may have more terrain to cover, too.

"I vow to always do my best to make my best become even better."

Your end point may still be determined by your IQ or working memory, but the starting realization that you can improve yourself can be as simple as a few words. It's still an external factor, but one that, depending on your sensitivity to such ideas, you could encounter regularly enough that it eventually sinks in and starts changing you. Frequenting places where such ideas are more prevalent (like here) may help bootstrap this process earlier.

This is consistent with my experience with European life-extension movements. Generally speaking, we just don't have a clear idea of where we should be going. We don't even always agree on what research or project is relevant. So we have a collection of people sharing a vaguely defined goal of life extension, each pushing for their pet projects and hypotheses. No one is really willing to abandon what they came up with, because no clear evidence-based project under which they could assemble exists (or is perceptible) — and this of course includes all such personal pet projects and ideas. Additionally, few if any really seem to believe strongly in life extension (as a way of life, or as something important enough to take precedence over other projects in their lives), and turnover among newly interested people is very high, with little retention beyond a few months.

Hm. This was eye-opening enough that I felt like commenting for the first time in a year. I've known for a while about people being in too much despair to desire living on, but this puts it in a new perspective.

Most importantly, it helps explain the huge discrepancy between how instrumentally important staying alive and able is for anyone who has any goal at all (barring some fringe cases), and how little most people do to plan and organize themselves in order to avoid aging and dying, even though both are reasonably expected to be unavoidable with our current means.

What you said suggests another set of strategies, little explored by life-extensionists to my knowledge, for sustaining effective life-extension projects, since generalized public acceptance and backing is still nowhere near where it should be.

Interesting opinion. I rarely browse open threads, mainly because I find them a mess, and it takes longer to find whether there's anything in there that would interest me. Discussion posts have their own page with neatly ordered titles: you get an idea at a glance, and on a first pass you can filter through around 20 topics in a couple of seconds.

Please do note the delicious irony here:

I don't see much good in associating rationality with extreme caution.

I don't think that teaching people to expect worst-case scenarios increases rational thinking.

Which in essence looks suspiciously like cautiously assuming a bad-case scenario in which this story won't help the rationality cause, or even a worst-case scenario in which it will do more harm than good.

If you want to go forth and create a story about rationality, then do it. Humans are complex creatures; not everyone will react the same way to your story, and anybody who thinks they can accurately predict the reactions of all the different kinds of people who'll read it (especially when it hasn't even been written yet) is either severely deluded as to their ability, or already secretly running the world from behind the curtain.

When you are older, you will learn that the first and foremost thing which any ordinary person does is nothing.

I think this misses the point of the OP, which wasn't that IQ or intelligence can be guessed accurately in a casual conversation, but rather that intelligence can be guessed more accurately than other important parameters such as "conscientiousness, benevolence, and loyalty", for which we don't have tools nearly as good as those we have for measuring IQ. The consequence is that, since we can't assess these traits as methodically, people can fake them more easily, and this has negative social consequences.

Especially to mess with one of those people intolerant of our beliefs in the supernatural, who always go on about how this or that can easily be dismissed if only you were rational. How ironic would it be, then, to get one of them to believe in a haunted house because it was the rational thing to do given the "evidence"?

It's the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.

Still, I wonder: what could I do to improve my probability of being resurrected if worst comes to worst and I can't manage to stay alive to protect and ensure the posterity of my own current self, given that I am not one of those better minds (according to which values, though)?

I know I prefer to exist now. I'd also like to survive for a very long time, indefinitely. I'm not even sure the person I'll be 10 or 20 years from now will still be significantly "me". Nor am I sure that the closest projection of my self onto a system incapable of suffering would still be me. Sure, I'd prefer not to suffer, but beyond that, there's a certain amount of suffering I'm ready to endure if I have to in order to stay alive.

Then, on the other side of this question, you could consider creating new sentiences who couldn't suffer at all. But why would these have priority over those who already exist? Also, what if we created people who could suffer, but who'd be happy with it? Would such a life be worth living? Is the badness of suffering something universal, or a quirk of terrestrial animal neurology? Pain is both sensory information and the way this information is used by our brain. Maybe we should distinguish between the information and the unpleasant sensation it brings us. Eliminating the second may make sense, so long as you still know that chopping your leg off is most often not a good idea.
