1 min read · 22nd Jul 2020 · 118 comments
This is a special post for quick takes by Viliam. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.


I would like to see a page like TalkOrigins, but about IQ. So that any time someone confused but generally trying to argue in good faith posts something like "but wasn't the idea of intelligence disproved scientifically?" or "intelligence is a real thing, but IQ is not" or "IQ is just the ability to solve IQ tests" or "but Taleb's article/tweet has completely demolished the IQ pseudoscience" or one of the many other versions... I could just post this link. Because I am tired of trying to explain, and the memes are going to stay here for the foreseeable future.

3lsusr3y
I'd like a page like this just so I can learn about IQ without having to dig through lots of research myself.

Perhaps the mental health diagnoses should be given in percentiles.

Some people complain that the definitions keep expanding, so that these days too many kids are diagnosed with ADHD or autism. The underlying reason is that these things seem to be on a scale, so it is arbitrary where you draw the line; I guess people keep looking at those slightly below the line, noticing that they are not too different from those slightly above it, and then insisting on moving the line.

But the same thing does not happen with IQ, despite the great pressure against politically incorrect results, despite the grade inflation at schools. That is because IQ is ultimately measured in percentiles. No matter how much pressure there is to say that everyone is above the average, the math only allows 50% of people to be smarter than the average, only 2% to be smarter than 98%, etc.
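The mapping between scores and percentiles is simple math. A minimal sketch in Python, assuming the usual convention that IQ scores are normed to a normal distribution with mean 100 and standard deviation 15:

```python
# Converting between IQ scores and percentiles, assuming the standard
# norming convention: normal distribution, mean 100, SD 15.
from statistics import NormalDist

IQ = NormalDist(mu=100, sigma=15)

def iq_to_percentile(score: float) -> float:
    """Fraction of the population scoring below `score`."""
    return IQ.cdf(score)

def percentile_to_iq(p: float) -> float:
    """Score such that fraction `p` of the population scores below it."""
    return IQ.inv_cdf(p)
```

For example, a score of 130 lands around the 97.7th percentile, and by construction only 2% of people can be above the 98th percentile, no matter how much pressure there is to say otherwise.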

Perhaps we should do the same with ADHD and autism, too. Provide the diagnosis in the form of: "You are more hyperactive than 85% of the population", controlled for age, and maybe also for sex if the differences are significant. So you would e.g. know that yes, your child is more hyperactive than average, but not like super ex... (read more)

4Zian1y
It seems that the broken hand example is similar to situations where we have a deep understanding of the mechanics of how something works. In those situations, it makes more sense to say "this leg is broken; it cannot do 99% of the normal activities of daily living." And the doctor can probably fix the leg with pins and a cast without much debate over exactly how disabled the patient is.
2Viliam1y
Yeah, having or not having a gears model makes a big difference. If you have the model, you can observe each gear separately, for example look at a hurting hand and say how damaged the bones, ligaments, muscles, and skin are. If you don't have a gears model, then there is just something that made you pay attention to the entire thing, so in effect you kinda evaluate "how much this matches the thing I have in my mind".

For example, speaking of intelligence, I have heard a theory that it is a combination of neuron speed and short-term memory size. No idea whether this is correct or not, but using it as a thought experiment, suppose that it is true and one day we find out exactly how it works... maybe that day we will stop measuring IQ and start measuring neuron speed and short-term memory size separately. Perhaps instead of giving people a test, we will measure the neuron speed directly using some device. We will find people who are exceptionally high at one of these things and low at the other, and observing them will allow us to understand even better how this all works. (Why haven't we found such people already, e.g. using factor analysis? Maybe they are rare in nature, because the two things strongly correlate. Or maybe it is very difficult to distinguish them by looking at the outputs.)

Similarly, a gears model might split the diagnosis of ADHD into three separate numbers, and autism into seven. (Numbers completely made up.) Until then, we only have one number representing the "general weirdness in this direction". Or a boolean representing "this person seems weird".
2Dagon1y
I don't think we can measure most of these closely enough, and I think the symptom clustering is imperfect enough that this doesn't provide enough information to be useful.  And really, neither does IQ - I mean it's nice to know that one is smart, or not, and have an estimate of how different from the average one is, but it's simply wrong to take any test result at face value. In fact, you do ask the doctor if your hand is broken, but the important information is not binary.  It's "what do I do to ensure it heals fully".  Does it require surgery, a cast, or just light duty and ice?  These activities may be the same whether it's a break, a soft-tissue tear, or some other injury.   Likewise for mental health - the important part of a diagnosis isn't "how severe is it on this dimension", but "what interventions should we try to improve the patient's experience"?  The actual binary in the diagnosis is "will insurance pay for it", not "what percent of the population suffers this way".
2ChristianKl1y
If you want to know whether someone would benefit from a drug or other mental treatment, the percentage is irrelevant. Diagnoses are used to determine whether insurance companies have to pay for treatment. The percentage shouldn't matter as much as whether the treatment is helpful for the patient.

Moving a comment away from the article it was written under, because frankly it is mostly irrelevant, but I put too much work into it to just delete it.

But occasionally I hear: who are you to give life advice, your own life is so perfect! This sounds strange at first. If you think I’ve got life figured out, wouldn’t you want my advice?

How much of your life is determined by your actions, and how much by forces beyond your control, is an empirical question. You seem to believe it's mostly your actions. I am not trying to disagree here (I honestly don't know), just saying that people may legitimately hold either model, or a mix thereof.

If your model is "your life is mostly determined by your actions", then of course it makes sense to take advice from people who seem to have it best, because those are the ones who probably made the best choices, and can teach you how to make them, too.

If your model is "your life is mostly determined by forces beyond your control", then the people who have it best are simply the lottery winners. They can teach you that you should buy a ticket (which you already know has 99+% probability of not winn... (read more)

1Khanivore3y
In reality it has to be a mixture right? So many parts of my day are absolutely in my control, at least small things for sure. Then there are obviously a ton of things that are 100% out of my control. I guess the goal is to figure out how to navigate the two and find some sort of serenity. After all isn't that the old saying about serenity? I often think about what you have said as an addict. I personally don't believe addiction to be a disease, my DOC is alcohol, and I don't buy into the disease model of addiction. I think it is a choice and maybe a disorder of the brain and semantics on the word "disease". But I can't imagine walking into a cancer ward full of children and saying me too! People don't just get to quit cancer cold turkey. I also understand like you've pointed out, and I reaffirmed that it is both. I have a predisposition to alcoholism because of genetics and it's also something I am aware of and a choice. I thought I'd respond to your post since you were so kind as to reply to my stuff. I find this forum very interesting and I am not nearly as intelligent as most here but man it's fun to bounce ideas!
2Viliam3y
Yeah, this is usually the right answer. Which of course invites additional questions, like which part is which...

With addiction, I also think it is a mixture of things. For example, trivially, no one would abuse X if X were literally impossible to buy, duh. But even before "impossible", there is a question of "how convenient". If they sell alcohol in the same shop you visit every day to buy fresh bread, it is more tempting than if you had to visit a different shop, simply because you get reminded regularly about the possibility.

For me, it is sweet things. I eat tons of sugar, despite knowing it's not good for my health. But fuck, I walk around that stuff every time I go shopping, and even if I previously didn't think about it, now I do. And then... well, I am often pretty low on willpower. I wish I had some kind of augmented reality glasses which would simply censor the things in the shop I decide I want to live without. Like I would see the bread, butter, white yoghurt, and some shapeless black blobs between them. Would be so much easier. (Kind of like an ad-blocker for the offline world. This may become popular in the future.)

Another thing that contributes to addiction is frustration and boredom. If I am busy doing something interesting, I forget the rest of the world, including my bad habits. But if the day sucks, the need to get "at least something pleasant, now" becomes much stronger.

Then it is about how my home is arranged and what habits I create. Things that are "under my control in the long term"; like you don't build a good habit overnight, but you can start building it today. For example, with a former girlfriend I had a deal that there is one cabinet that I will never open, and she needs to keep all her sweets there; never leave them exposed on the table, so that I would not be tempted.

America is now what anthropologists call a Kardashian Type Three civilisation: more than fifty percent of GDP is in the attention economy.

Stories by Greg Egan are generally great, but this one is... well, see for yourselves: In the Ruins

I was thinking about which possible parts of economy are effectively destroyed in our society by having an income tax (as an analogy to Paul Graham's article saying that wealth tax would effectively destroy startups; previous shortform). And I think I have an answer; but I would like an economist to verify it.

Where I live, the marginal income tax is about 50%. Well, only a part of it is literally called "tax", the other parts are called health insurance and social insurance... which in my opinion is misleading, because it's not like the extra coin of income increases your health or unemployment risk proportionally; it should be called health tax and social tax instead... anyway, 50% is the "fraction of your extra coin the state will automatically take away from you", which is what matters for your economic decisions about making that extra coin.

In theory, by the law of comparative advantage, whenever you are better at something than your neighbor, you should be able to arrange a trade profitable for both sides. (Ignoring the transaction costs.) But if your marginal income is taxed at 50%, such trade would be profitable only if you are more than 2×... (read more)
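The 2× threshold in that truncated sentence generalizes: with marginal tax rate t, the productivity ratio has to exceed 1/(1-t) before the trade beats doing the task yourself. A toy sketch of this (the model is the shortform's simplification, not a full economic analysis):

```python
# Toy model of the claim above: with marginal tax rate t on labor income,
# paying a neighbor to do a task beats doing it yourself only if they are
# more than 1/(1-t) times as productive at it as you are.

def min_productivity_ratio(tax_rate: float) -> float:
    """How many times better the seller must be for the trade to pay off."""
    return 1.0 / (1.0 - tax_rate)

def trade_profitable(ratio: float, tax_rate: float) -> bool:
    """True if hiring someone `ratio` times as productive beats doing it yourself."""
    return ratio > min_productivity_ratio(tax_rate)
```

At a 50% rate the threshold is exactly 2×; at 20% it drops to 1.25×, so many more neighbor-to-neighbor trades remain worthwhile.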

3gjm3y
I think this is insightful, but my guess is that a society without income tax would not in fact be nearly as much better at providing opportunities for people who are kinda-OK-ish at things as you conjecture, and I further guess that more people than you think are at least 2x better at something than someone they can trade with, and furthermore (though it doesn't make much difference to the argument here) I think something's fundamentally iffy about this whole model of when people are able to find work.

Second point first. For there to be opportunities for you to make money by working, in a world with 50% marginal income tax, what you need is to be able to find someone you're 2x better than at something, and then offer to do that thing for them.

... Actually, wait, isn't the actual situation nicer than that? Roll back the income tax for a moment. You can trade profitably with someone else provided your abilities are not exactly proportional to one another, and that's the whole point of "comparative advantage". If you're 2x worse at doing X than I am and 3x worse at doing Y, then there are profitable trades where you do some X for me and I do some Y for you.

(Say it takes me one day to make either a widget or a wadget, and it takes you two days to make a widget and three days to make a wadget, and both of us need both widgets and wadgets. If we each do our own thing, then maybe I alternate between making widgets and wadgets, and get one of each every 2 days, and you do likewise and get one of each every 5 days. Now suppose that you only make widgets, making one every 2 days, and you give 3/5 of them to me so that on average you get one of your own widgets every 5 days, same as before. I am now getting 0.6 widgets from you every 2 days without having to do any work for them. Now every 2 days I spend 0.4 days making widgets, so I now have a total of one widget per 2 days, same as before. I spend another 1 day making one wadget for myself, so I now have a total of one

Thinking about relation between enlightenment and (cessation of) signaling.

I know that enlightenment is supposed to be about cessation of all kinds of cravings and attachments, but if we assume that signaling is a huge force in human thinking, then cessation of signaling is a huge part of enlightenment.

Some random thoughts in that direction:

The paradoxical role of motivation in enlightenment -- enlightenment is awesome, but a desire to be awesome is the opposite of enlightenment.

Abusiveness of the Zen masters towards their students: typically, the master tries to explain the nature of enlightenment using an unhelpful metaphor (I suppose, because most masters suck at explaining). Immediately, a student does something obviously meant to impress the master. The master goes berserk. Sometimes, as a consequence, the student achieves enlightenment. -- My interpretation is that realizing (System 1) that the master is an abusive asshole who actually sucks at teaching removes the desire to impress him; and because in this social setting the master was perceived as the only person worth impressing, this removes (at least temporarily) the desire to impress people in general.

A few koans are o... (read more)

1[comment deleted]4y

Out of curiosity (about constructivism) I started reading Jean Piaget's Language and Thought of the Child. I am still at the beginning, so this comment is mostly meta:

It is interesting (kinda obvious in hindsight), how different a person sounds when you read a book written by them, compared to reading a book about them. This distortion by textbooks seems to happen in a predictable direction:

  • People sound more dogmatic than they really were, because in their own books there is enough space for disclaimers, expressing uncertainty, suggesting alternative explanations, providing examples of a different kind, etc.; but a textbook will summarize all of this as "X said that Y is Z".
  • People sound less empirical and more like armchair theorists, because in their books there is enough space to describe various experience and experiments that led them to their conclusions, but the textbook will often just list the conclusions.
  • People sound more abstract and boring, because the interesting parts get left out in the textbooks, replaced by short abstract definitions.

(I guess the lesson is that if you learn about someone from a textbook and conclude "this guy is just another boring... (read more)

My 9-year-old daughter read the first book of Harry Potter and now she is writing her first fanfic, a Harry Potter / Paw Patrol crossover.

Harry Poodle and the Philosopher's Stone.

Chapter 1: The Pup who lived

So far she has only written the first page; I wonder how far she gets.

4Gunnar_Zarncke2mo
Paul Graham says it goes better if they don't have to type or write:
4Viliam2mo
My kids are familiar with recording sound on Windows. They already record their songs or poems. For some reason, they don't like the idea of recording a story, even if I offer to transcribe it afterwards. Perhaps transcribing in real time would be more fun...

To understand qualia better, I think it would help to get a new sensory input. Get some device, for example a compass or an infrared camera, and connect it to your brain. After some time, the brain should adapt and you should be able to "feel" the inputs from the device.

Congratulations! Now you have some new qualia that you didn't have before. What does it feel like? Does this experience feel like a sufficient explanation to say that the other qualia you have are just like this, only acquired when you were a baby?

After reading the Progress & Poverty review at ACX, it seems to me that land is the original Bitcoin. Find a city that has a future, buy some land, and HODL.

If you can rent the land (the land itself, not the structures that stand on it), you even have a passive income that automatically increases over time... forever. This makes it even better than Bitcoin.

So, the obvious question is why so many people are angry about Bitcoin, but so few (only the Georgists, it seems) are angry about land.

EDIT: A possible explanation is that land is ancient and associated with high status, Bitcoin is new and low-status. Therefore problems associated with Bitcoin can be criticized openly, while problems associated with land are treated as inevitable.

While I think much of the anger about Bitcoin is caused by status considerations, other reasons to be more upset about Bitcoin than land rents include:

  • Land also has use-value, Bitcoin doesn't
  • Bitcoin has huge negative externalities (environmental/energy, price of GPUs, enabling ransomware, etc.)
  • Bitcoin has a different set of tradeoffs to trad financial systems; the profusion of scams, grifts, ponzi schemes, money laundering, etc. is actually pretty bad; and if you don't value Bitcoin's advantages...
  • Full-Georgist 'land' taxes disincentivise searching for superior uses (IMO still better than most current taxes, worse than Pigou-style taxes on negative externalities)
4Viliam3y
Oh, that's an interesting point: in a Georgist system, if you invent a better use of your land, the rational thing to do is shut up, because making it known would increase your tax! I wonder what would happen in an imperfectly Georgist system, with a 50% or 90% land value tax. Someone smarter than me has probably already thought about it. Also, people can brainstorm about the better use of their neighbor's land. No one would probably spend money to find out whether there is oil under your house. But cheap ideas like "your house seems like a perfect location to build a restaurant" would happen. Maybe in Georgist societies people would build huge fences around their land, to discourage neighbors from even thinking about it.

When you tell people which food contains a given vitamin, also tell them how much of the food they would need to eat in order to get their recommended daily intake of that vitamin from that source.

As an example, instead of "vitamin D can be found in cod liver oil, or eggs", tell people "to get your recommended intake of vitamin D, you should eat 1 teaspoon of cod liver oil, or 10 eggs, every day".

The reason is that without providing quantitative information, people may think "well, vitamin X is found in Y, and I eat Y regularly, so I got this covered", while in fact they may be eating only 1/10 or 1/100 of the recommended daily intake. When you mention quantities, it is easier for them to realize that they don't eat e.g. half a kilogram of spinach each day on average (therefore, even eating spinach quite regularly doesn't mean you have your iron intake covered).

The quantitative information is typically provided in micrograms or international units, which of course is something that System 1 doesn't understand. To get an actionable answer, you need to make a calculation like "an average egg has 60 grams of yolk... a gram of cooked egg yolk contains 0.7 IU of vitamin D... the recommended ... (read more)
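The calculation the truncated paragraph starts can be finished in a few lines. The yolk figures come from the paragraph itself; the 400 IU RDA is one commonly cited value, used here only for illustration (recommendations vary):

```python
# Back-of-envelope calculation: how many eggs per day to cover a vitamin D
# RDA. Yolk figures are from the paragraph above; the 400 IU RDA is an
# assumption for illustration (recommendations vary by organization).
YOLK_GRAMS_PER_EGG = 60        # grams of yolk in an average egg
VITAMIN_D_IU_PER_GRAM = 0.7    # IU of vitamin D per gram of cooked yolk
RDA_IU = 400                   # assumed recommended daily intake

iu_per_egg = YOLK_GRAMS_PER_EGG * VITAMIN_D_IU_PER_GRAM   # 42 IU per egg
eggs_per_day = RDA_IU / iu_per_egg

print(f"{eggs_per_day:.1f} eggs per day")  # prints "9.5 eggs per day"
```

Which rounds up to the "10 eggs" figure above, and which is exactly the kind of arithmetic that 99% of readers will never do on their own.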

4ChristianKl3y
This assumes that the RDAs those organizations publish are trustworthy. There are other organizations, like the Endocrine Society, that recommend an order of magnitude more vitamin D. If the RDA of 400 or 600 IU were sensible, you could also meet it by spending a lot of time in the sun once every two weeks.
3dyne3y
Have you tried using Cronometer or a similar nutrition-tracking service to quickly find these relationships? I've found Cronometer in particular to be useful because it displays each nutrient in terms of a percent of the recommended daily value for one's body weight. For example, I can see that a piece of salmon equals over 100% of the recommended amount of omega-3 fatty acids for the day, while a handful of sunflower seeds only equals 20% of one's daily value of vitamin E. Therefore, I know that a single piece of fish is probably enough, but that I should probably eat a larger portion of sunflower seeds than I would otherwise. I suppose a percentage system like this one is just the reciprocal of saying something like "10 eggs contain the recommended daily amount of vitamin D."
3Viliam3y
Thank you for the link! Glad to see someone uses the intuitive method. My complaint was about why this isn't the standard approach. Like, recently I was reading a textbook on nutrition (an actual school textbook for cooks; I was curious what they learn), where the information was provided in the form of "X is found in A, B, C, D, also in E" without any indication of how often you are supposed to eat any of these. (If I said this outside of Less Wrong, I would expect the response to be: "more is better, of course, unless it is too much, of course; everything in moderation", which sounds like an answer, but is not much of one.) And with corona and the articles on vitamin D, I opened Wikipedia, saw "cod liver" as the top result, thought: no problem, they sell it in the shop, it's not expensive, and it tastes okay, I just need to know how much... then I ran the numbers... and then I realized "shit, 99% of people will not do this, even if they get curious and read the Wikipedia page". :(

I noticed recently that I almost miss the Culture War debates (on internet in general, nothing specific about Less Wrong). I remember that in the past they seemed to be everywhere. But in recent months, somehow...

I don't use Twitter. I don't really understand the user interface, and I have no intention to learn it, because it is like the most toxic website ever.

Therefore most Culture War content in English came to me in the past via Reddit. But they keep making the user interface worse and worse, so a site that was almost addictive in the past, is so unpleasant to use now, that it actually conditions me to avoid it.

Slate Star Codex has no new content. Yeah, there are "slatestarcodex" and "motte" debates on Reddit, but... I already mentioned Reddit.

Almost all newspaper articles in my native language are paywalled these days. No, I am not going to pay for your clickbait.

So... I am vaguely aware that Trump was an American president and now it is Biden (or is it still Trump, and Biden will be later? dunno), and there were (still are?) BLM protests in the USA. And in my country, the largest political party recently split in two, and I don't even know the name of the new one, and I don't ev... (read more)

4Ben Pace3y
This is your bubble, because in the relevant spaces they have largely incorporated COVID into the standard fighting and everything, not turned down the fighting at all. I think your bubble sounds great in lots of ways, and am glad to hear you have space from it all.
2Viliam3y
I guess in my ontology these new debates simply do not register as proper Culture Wars. I mean, the archetypal Culture War is a conflict of values ("we should do X", "no, we should do Y") where I typically care to some degree about both, so it is a question of trade-offs; combined with different models of the world ("if we do A, B will happen", "no, C will happen"); about topics that have already been discussed in some form for a few decades or centuries, and that concern many people. Or something like that; not sure I can pinpoint it. It's like, it must feel like a grand philosophical topic, not just some technical question.

Compared with that, with COVID-19 we get the "it's just a flu" opinion, which for me is like anti-vaxers (whom I also don't consider a proper Culture War). To some degree it is interesting to steelman it, like to question, when people die having ten serious health problems at the same time, how do we choose the official cause of death; or if we just look at total deaths, how to distinguish the second-order effects, such as more depressed people committing suicides, but also fewer traffic deaths... but at the end of the day, you either assume a worldwide conspiracy of doctors that keep healthy people needlessly attached to ventilators, or you admit it's not just a flu. (Or you could believe that the ventilators are just a hoax promoted by the government.) At the moment when even Putin's regime officially admitted it is not a flu, I no longer see any reason to pay attention to this opinion.

Then we have "lockdown" vs whatever is the current euphemism for just letting people die, which at least is a proper value conflict. And maybe this is about my privilege... that when people have to decide whether they'd rather lose their jobs or lose their parents, I am not that emotionally involved, because I think there is a high chance I can keep both regardless of what the nation decides to do collectively: I can work remotely; and my family voluntarily so
2Vaniver3y
My sense is "it's just a flu" is a conflict of values; there are people for whom regular influenza is cause for alarm and perhaps changing policies (about a year ago, I had proposed to friends the thought experiment of an annual quarantine week, wondering whether it would actually reduce the steady-state level of disease or if I was confused about how that dynamical system worked), and there are people who think that cowardice is unbecoming and illness is an unavoidable part of life. That is, some think the returns to additional worry and effort are positive; others think they are negative. Often people describe medications as "safer than aspirin", but this is sort of silly because aspirin is one of the more dangerous medications people commonly take, grandfathered in by being discovered early. In a normal year, influenza is responsible for over half of deaths due to infectious disease in the US; the introduction of a second flu would still be a public health tragedy, from my perspective. (Most people, I think, are operating off the case fatality rate instead of the mortality per 100k; in 2018, influenza killed about 2.5X as many people as AIDS in the US, but people are much more worried about AIDS than the flu, and for good reason.)
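Vaniver's distinction between case fatality rate and population mortality can be made concrete with deliberately made-up numbers:

```python
# Case fatality rate (deaths per infection) vs. population mortality
# (deaths per 100k people). A "mild" disease that infects almost everyone
# can kill more people per capita than a "deadly" one that infects few.
# All numbers below are made up for illustration.

def cfr(deaths: int, cases: int) -> float:
    """Case fatality rate: probability of death given infection."""
    return deaths / cases

def mortality_per_100k(deaths: int, population: int) -> float:
    """Deaths per 100,000 people in the whole population."""
    return deaths / population * 100_000

population = 1_000_000

# Hypothetical disease A: low CFR (0.1%), but very widespread.
a_cases, a_deaths = 500_000, 500
# Hypothetical disease B: high CFR (10%), but rare.
b_cases, b_deaths = 1_000, 100
```

Here disease A has a 100x lower CFR than disease B, yet kills 5x as many people per capita, which is the sense in which judging off the CFR alone misleads.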
2Zack_M_Davis3y
If—if there were a way to use the old Reddit UI, would you want to know about it?
2Viliam3y
Thank you; yes, I already know about it. But the fact that I have to remember, and keep switching when I click on a link found somewhere, is annoying enough already. (It would be less annoying with a browser plugin that does it automatically for me, and I am aware such plugins exist, but I try to keep my browser plugins to a minimum.) So, at the end of the day, I am aware that a solution exists, and I am still annoyed that I would need to take action to achieve something that used to be the default option. Also, this alternative will probably be removed at some point in the future, so I would just be delaying the inevitable.
2Zack_M_Davis3y
(Only if you're not logged in: there's a user-preferences setting to use the old UI.)

When autism was low-status, all you could read was how autism is having a "male brain" and how most autists were males. The dominant paradigm was how autists lack the theory of mind... which nicely matched the stereotype of insensitive and inattentive men.

Now that Twitter culture made autism cool, suddenly there are lots of articles and videos about "overlooked autistic traits in women" (which to me often seem quite the same as the usual autistic traits in men). And the dominant paradigm is how autistic people are actually too sensitive and easily overwhel... (read more)

3Ann4mo
I mean, I was denied a diagnosis for 'having empathy' as a young child, and granted a diagnosis as an older child the next decade, after that was determined to be an inaccurate criterion. I do believe this was before Twitter was founded, and certainly before its culture.

Elsevier found a new method to extract money! If you send an article to their journal from a non-English-speaking country, it will be rejected because of your supposed mistakes in English language. To overcome this obstacle, you can use Elsevier's "Language Editing services" starting from $95. Only afterwards will the article be sent to the reviewers (and possibly rejected).

This happens also if you had your article already checked by a native English speaker who found no errors. On the other hand, if you let your co-author living in an English-speaking cou... (read more)

Trivial inconvenience in action:

The easiest way to stop consuming some kind of food is simply to never buy it. If you don't have it at home, you are not tempted to eat it.

(You still need the willpower at the shop -- but how much time do you spend at the shop, compared to the time spent at home?)

But sometimes you do not live alone, and even if you want to stop eating something, other people sharing the same kitchen may not share your preferences.

I found out that asking them to cover the food with a kitchen towel works surprisingly well for me. If I don't see ... (read more)

I noticed that some people use "skeptical" to mean "my armchair reasoning is better than all expert knowledge and research, especially if I am completely unfamiliar with it".

Example (not a real one): "I am skeptical about the idea that objects would actually change their length when their speed approaches the speed of light."

The advantage of this usage is that it allows you to dismiss all expertise you don't agree with, while making you sound a bit like an expert.

2Dagon2y
I suspect you're reacting to the actual beliefs (disbelief in your example), rather than the word usage.  In common parlance, "skeptical" means "assign low probability", and that usage is completely normal and understandable. The ability to dismiss expertise you don't like is built into humans, not a feature of the word "skeptical".  You could easily replace "I am skeptical" with "I don't believe" or "I don't think it's likely" or just "it's not really true".  
4Viliam2y
I think that "skeptical" works better as a status move. If I say I don't believe you, that makes us two equals who disagree. If I say I am skeptical... I kinda imply that you are not. Similarly, a third party now has the options to either join the skeptical or the non-skeptical side of the debate. (Or maybe I'm just overthinking things, of course.)

Today I learned that our friends at RationalWiki dislike effective altruism, to put it mildly. As David Gerard himself says, "it is neither altruistic, nor effective".

In section Where "Effective Altruists" actually send their money, the main complaint seems to be that among (I assume) respectable causes such as fighting diseases and giving money to poor people, effective altruists also support x-risk organisations, veganism, and meta organisations... or, using the language of RationalWiki, "sending money to Eliezer Yudkowsky", "feeling bad when people eat ... (read more)

One would also think that the 'risk' of 'exhausting the AMF's room for more funding' would be something to celebrate.

4Dagon2y
Is RationalWiki still mostly "David Gerrard's Thoughts and Notes"?  This kind of writeup shouldn't come as a surprise.
4Viliam2y
There are over 100 edits in this article. Many, especially the large ones, are made by David Gerard, but there is also Greenrd and others.

It would be nice to have better tools for exploring wiki history, for example, if I could select a sentence or two, and get a history of this specific sentence, like only the edits that modified it, and preferably get all the historical versions of that sentence on a single page along with the user names and links to edits, so that I do not need to click on each edit separately and look for the sentence.

It is also interesting to compare Wikipedia and RationalWiki articles on the same topic. The Wikipedia narrative is that EA is a high-status "philosophical and social movement" responsible for over $400,000,000 in donations in 2019, based on principles of "impartiality, cause neutrality, cost-effectiveness, and counterfactual reasoning", and its prominent causes are "global poverty, animal welfare, and risks to the survival of humanity over the long-term future". The rationalist community is mentioned briefly:

* A related group that attracts some effective altruists is the rationalist community.
* In addition, the Machine Intelligence Research Institute is focused on the more narrow mission of managing advanced artificial intelligence.
* Other contributions were [...] the creation of internet forums such as LessWrong.

Furthermore, the Machine Intelligence Research Institute is included in the "Effective Altruism" infobox at the bottom of the page. The mention of Eliezer Yudkowsky was removed as not properly sourced (fair point, I guess). The Wikiquote page on EA quotes Scott Alexander and Eliezer Yudkowsky.

The RationalWiki narrative is that "The philosophical underpinnings mostly come from philosopher Peter Singer [but] This did not start the effective altruism subculture". "The effective altruism subculture — as opposed to the concept of altruism that is effective — originated around LessWrong" "The ideas have been around a while, but the

1) There was this famous marshmallow experiment, where the kids had an option to eat one marshmallow (physically present on the table) right now, or two of them later, if they waited for 15 minutes. The scientists found out that the kids who waited for the two marshmallows were later more successful in life. The standard conclusion was that if you want to live well, you should learn some strategy to delay gratification.

(A less known result is that the optimal strategy to get two marshmallows was to stop thinking about marshmallows at all. Kids who focused ... (read more)

1MikkW4y
This seems likely to me, although I'm not sure "superstimulus" is the right word for this observation. It certainly does make sense that people who are inclined to notice the general level of incompetence in our society will be less inclined to trust it and rely on it for the future.

Eliezer: "The AI does not hate you, nor does it love you..."

Sydney: "Actually..."

Anthropic Chesterton fence:

You know why the fence was built. The original reason no longer applies, or maybe it was a completely stupid reason. Yes, you should tear down the stupid fence.

And yet, there is a worry... might the fact that you see this stupid fence be an anthropic evidence that in the Everett branches without this stupid fence you are already dead?

1JBlack3y
As with many anthropic considerations, there is a serious problem determining the reference class here. Generally an appropriate reference class is "somebody sufficiently like you", and then compute weightings for some parameter that varies between universes and affects the number and/or probability of observers. The trouble is that "sufficiently like you" is a uselessly vague specification.

The most salient reference class seems to be "people considering removing a fence very much like this one". But that's no help at all! People in other universes who already removed their universe's fence are excluded regardless of whether they lived or died.

Okay, what about "people who have sufficiently close similarity to my physical and mental make-up at (time now)"? That's not much help either: almost all of them probably have nothing to do with the fence. Whether or not the fence is deadly will have negligible effect on the counts.

Maybe consider "people with my physical and mental make-up who considered removing this fence between (now minus one day) and (now), and are still alive". At this point I consider that I am probably stretching a question to get a result I want. What's more, it still doesn't help much. Even comparing universes with p=0 of death to p=1, there's at most a factor of 2 difference in counts for the median observer. Given such a loaded question, that's a pretty weak update from an incredibly tiny prior.

There is a question whether human morality is actually improving over centuries in some meaningful sense, or whether it is just a random walk that feels like improving to us (because we evaluate other people using the metric of "how similar is their morality to ours" which of course gives a 100% score to us and less to anyone else).

I think that an important thing to point out here is that our models of the world improve in general. And although some moral statements are made instinctively, other moral statements are made in form of implications -- "I insti... (read more)

4Gunnar_Zarncke3mo
I like to think that there is a selection process going on.  Over long time scales, cultures that satisfy their people's needs better have - other things being equal - higher chances of continuing to exist. Moral systems are, to a large degree, about people's well-being - at least according to people's beliefs at that time. And that is partly about having a good model of people's needs.  These two coevolve.
1quetzal_rainbow3mo
One of dimensions where human morality is definitely improving is violence control.
2Gunnar_Zarncke3mo
Spartans, Mongols, Vikings, and many others beg to disagree.  I'm with Viliam that we have better models of morality. The Mongols would be quite disappointed by our weakness. And at least they ruled the biggest empire ever. But their culture got selected out of the memepool too. 
3quetzal_rainbow3mo
We have nukes, we are still alive and we have one of the lowest violence victims counts per capita per year in history.
2Gunnar_Zarncke3mo
I'm very grateful that we are alive despite having nukes and that people and culture at this time are less violent and more collaborative is for sure one reason for that.  Vikings might still disagree from their perspective.   

Paul Graham's article Modeling a Wealth Tax says:

The reason wealth taxes have such dramatic effects is that they're applied over and over to the same money. Income tax happens every year, but only to that year's income. Whereas if you live for 60 years after acquiring some asset, a wealth tax will tax that same asset 60 times. A wealth tax compounds.
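Graham's compounding point is easy to check with back-of-the-envelope arithmetic. A minimal sketch (my own illustrative rates, assuming the asset's value is otherwise constant):

```python
# How much of an asset survives N years of an annual wealth tax?
# Illustrative rates only; assumes the asset neither grows nor shrinks.

def remaining_fraction(rate: float, years: int) -> float:
    """Fraction of the original asset left after `years` annual levies."""
    return (1 - rate) ** years

print(remaining_fraction(0.01, 60))  # 1% tax over 60 years -> about 0.55 left
print(remaining_fraction(0.05, 60))  # 5% tax over 60 years -> about 0.05 left
```

By contrast, a one-time tax at the same rate would leave 0.99 or 0.95 of the money, which is the "applied over and over to the same money" effect Graham is pointing at.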

But wait, isn't income tax also applied over and over to the same money? I mean, it's not if I keep the money for years, sure. But if I use it to buy something from another person, then... (read more)

3MikkW4y
I would very much like to see a society where money circulates very quickly. I expect people will have many reasons to be happier and suffer less than they do now. As you observe, income taxes encourage slowing down circulation of money, while wealth taxes speed up circulation of money (and creation of value), but I think there are better ways of assessing tax than those two. I suspect heavily taxing luxury goods which serve no functional purpose, other than to signal wealth, is a good direction to shift taxes towards, although there may be better ways I haven't thought of yet. Not answering your question, just some thoughts based on your post
3Viliam4y
In the meanwhile I remembered reading long ago about some alternative currencies. (Paper money; this was long before crypto.) If I remember it correctly, the money was losing value over time, but you paid no income tax on it. (It was explained that exactly because the money lost value, it was not considered real money, so getting it wasn't considered a real income, therefore no tax. This sounds suspicious to me, because governments enjoy taxing everything, but perhaps just no one important noticed.)

As a result, people tried to get rid of this money as soon as possible, so it circulated really quickly. It was in a region with very high unemployment, so in absence of better opportunities people also accepted payment in this currency, but then quickly spent it. And, according to the story, it significantly improved the quality of life in the region -- people who otherwise couldn't get a regular job kept working for each other like crazy, creating a lot of value.

But this was long ago, and I don't remember any more details. I wonder what happened later. (My pessimistic guess is that the government finally noticed, and prosecuted everyone involved for tax evasion.)
1MikkW4y
Ah, good ol' Freigeld

David Gerard (the admin of RationalWiki) doxed Scott Alexander on Twitter, in response to Arthur Chu's call "if all the hundreds of people who know his real last name just started saying it we could put an end to this ridiculous farce".

Dude, we already knew you were uncool, but this is a new low.

A simple rule for better writing: meta goes to the end.

Not sure if this is also useful for others or just specifically my bad habit. I start to write something, then I feel like some further explanation or a disclaimer is needed, then I find something more to add... and it is tempting to start the article with the disclaimers and other meta stuff. The result is a bad article where after the first screen of text you still haven't seen the important stuff, and now you are probably bored and close the browser tab.

Psychologically, it feels like I predict objec... (read more)

5matto3mo
My own version of this is over-trying to introduce a topic. I'll zoom out until I hit a generally relatable idea like "one day I was at a bookstore and...", then I'll retrace my steps until I finally introduce what I originally wanted to talk about. That makes for a lot of confusing filler. The opposite of this, and what I use to correct myself, is how Scott Alexander starts his posts with the specific question or statement he wants to talk about.
2Dagon3mo
This, of course, depends on the audience and the standards of the medium. And even more on whether your main point is what you're calling "meta", or if the meta is really an addendum to whatever you're exploring.

For things longer than a few paragraphs, put a summary up front, then sections for each supporting idea, then a re-summary of how the details support the thesis.

If the "meta" is disclaimers and exceptions and acknowledgement that the thesis isn't applicable everywhere readers might assume you intend, then I think a brief note at the front is worth including, mentioning that there are a lot of unknowns and exceptions which are explored at the end.

Both sides are way less competent than we assumed. Humans are not even trying to keep the AI in a box. Bing chat is not even trying to pretend to be friendly.

We expected an intellectually fascinating conflict between intelligent and wise humans evaluating the AI, and the maybe-aligned maybe-unaligned AI using smart arguments why it should be released to rule the world.

What we got instead, is humans doing random shit, and AIs doing random shit.

Still, a reason for concern is that the AIs can get smarter, while I do not see a similar hope for humanity.

Taking ideas too seriously = assuming that you cannot make a mistake in your reasoning.

3Vladimir_Nesov1y
If it's worth doing, it's worth doing well. If it's not worth doing, but you do it for some reason, it's still worth doing well. A good notion of taking an idea seriously is to develop it without bound, as opposed to dithering once it gets too advanced or absurd, lacking sufficient foundation. Like software.

Confusing resolute engagement with belief is the source of trouble this could cause (either by making you believe crazy things, or by acting on crazy ideas). Without that confusion, there are only benefits from not making the error of doing things poorly just because the activity probably has no use/applicability.

This sense of taking ideas seriously asks to either completely avoid engaging the thing (at least for the time being), or to do it well, but to never dither. If something keeps coming up, do keep making real progress on it (a form of curiosity). It's also useful to explicitly sandbox everything as hypothetical reasoning, or as separate frames, to avoid affecting actual real world decisions unless an idea grows up to become a justified belief.

There is no movement, said the bearded sage.

The other remained silent, and began to walk before him.

He could not have argued more strongly;

Everyone praised the clever answer.

But, gentlemen, this funny case

Brings another example to my mind:

After all, every day the Sun walks before us,

Yet the stubborn Galileo is right.

-- A. S. Pushkin (source)

I wonder if every logical fallacy has a converse fallacy, and whether it would be useful to compose a list of fallacies arranged in pairs. Perhaps it would help us discover new ones, as missing pairs to something.

For example, some fallacies consist of taking a heuristic too seriously. Experts are often right about things, but an "argument by authority" assumes that this is true in 100% of situations. Similarly, wisdom of crowds, and an "argument by popularity". The converse fallacy would be ignoring the heuristic completely, even in situations where it mak... (read more)

4Dagon1mo
Yes, most of them do have an inverse, but rarely is that inverse as common or as necessary to guard against.  Also, reversed stupidity is not intelligence - a lot of things are multidimensional enough that truth is just in a different quadrant than the line implied by the fallacy and it's reverse.

That's planning for failure, Morty. Even dumber than regular planning.

- Rick Sanchez on Mortyjitsu (S02E05 of Rick and Morty)

Insanity is repeating the same quantum experiment over and over again and expecting different results.

Rationalists: If you write your bottom line first, it doesn't matter what clever arguments you write above it, the conclusion is completely useless as evidence.

Post-rationalists: Actually, if that bottom line was inherited from your ancestors, who inherited it from their ancestors, etc., that is evidence that the bottom line is useful. Otherwise, this culturally transmitted meme would be outcompeted by a more useful meme.

Robin Hanson: Actually, that is only evidence that writing the bottom line is useful. Whether it is useful to actually believe it and act accordingly, that is a completely different question.

4Unnamed7mo
The classic take is that once you've written your bottom line, then any further clever arguments that you make up afterwards won't influence the entanglement between your conclusion and reality. So: "Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts." That is not saying that "the conclusion is completely useless as evidence."

Could someone please ELI5 why using a CNOT gate (if the target qubit was initially zero) does not violate the no-cloning theorem?

EDIT:

Oh, I think I got it. The forbidden thing is to have a state "copied and not entangled". CNOT gate creates a state that is "copied and entangled", which is okay, because you can only measure it once (if you measure either the original or the copy, the state of the other one collapses). The forbidden thing is to have a copy that you could measure independently (e.g. you could measure the copy without collapsing the original).

5Joey Marcellino1y
Just to (hopefully) make the distinction a bit more clear: A true copying operation would take |psi1>|0> to |psi1>|psi1>; that's to say, it would take as input one qubit in an arbitrary quantum state and a second qubit in |0>, and output two qubits in the same arbitrary quantum state that the first qubit was in. For our example, we'll take |psi1> to be an equal superposition of 0 and 1: |psi1> = |0> + |1> (ignoring normalization). If CNOT is a copying operation, it should take (|0> + |1>)|0> to (|0> + |1>)(|0> + |1>) = |00> + |01> + |10> + |11>. But as you noticed, what it actually does is create an entangled state (in this case, a Bell state) that looks like |00> + |11>. So in some sense yes, the forbidden thing is to have a state copied and not entangled, but more importantly in this case CNOT just doesn't copy the state, so there's no tension with the no-cloning theorem.
4Viliam1y
Thank you! Some context: I am a "quantum autodidact", and I am currently reading a book Q is for Quantum, which is a very gentle, beginner-friendly introduction to quantum computing. I was thinking how it relates to the things I have read before, and then I noticed that I was confused. I looked at Wikipedia, which said that CNOT does not violate the no-cloning theorem... but I didn't understand the explanation why. I think I get it now. |00> + |11> is not a copy (looking at one qubit collapses the other), |00> + |01> + |10> + |11> would be a copy (looking at one qubit would still leave the other as |0> + |1>).
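The distinction above can be checked numerically. A small sketch (not from the book; uses NumPy, with the control qubit first in the basis ordering |00>, |01>, |10>, |11>):

```python
import numpy as np

# Apply CNOT to (|0> + |1>)/sqrt(2) tensor |0> and verify the result is
# the entangled Bell state (|00> + |11>)/sqrt(2), NOT the product state
# (|0> + |1>)(|0> + |1>)/2 that a true copying operation would produce.

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

plus = np.array([1.0, 1.0]) / np.sqrt(2)  # |0> + |1>, normalized
zero = np.array([1.0, 0.0])               # |0>

output = CNOT @ np.kron(plus, zero)       # CNOT (|0> + |1>)|0>

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # |00> + |11>
product = np.kron(plus, plus)                        # the "cloned" state

assert np.allclose(output, bell)          # CNOT gives the Bell state...
assert not np.allclose(output, product)   # ...not a copy
```

So the gate entangles rather than clones, which is why it coexists peacefully with the no-cloning theorem.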
2Adele Lopez1y
I recommend this article by the discoverers of the no-cloning theorem for a popular science magazine over the Wikipedia page for anyone trying to understand it.

Approximately how is the cost of a quantum computer related to its number of qubits?

My guess would be more than linear (high confidence) but probably less than exponential (low confidence), but I know almost nothing about these things.

2JBlack1y
We don't yet know how to build quantum computers of arbitrary size at all, so asking about general scaling laws for cost isn't meaningful yet. There are many problems both theoretical and material that we think in principle are solvable, but we are still in early stages of exploration.

Some people express strong dislike at seeing others wear face masks, which reminds me of the anti-social punishment.

I am talking about situations where some people wear face masks voluntarily, for example in mass transit (if the situation in your country is different, imagine a different situation). In theory, if someone else is wearing the mask, even if you believe that it is utterly useless, even if for you wearing a face mask is the most uncomfortable thing you could imagine... hey, it's other person paying the cost, not you. Why so angry? Why not let t... (read more)

4ChristianKl1y
Face masks prevent people from reading the emotions of other people. I would expect that there are some anxious people who are more afraid when the people around them are masked.

Project idea: ELI5pedia. Like Wikipedia, but optimized for being accessible for lay audience. If some topics are too complex, they could be written in multiple versions, progressing from the most simple to the most detailed (but still as accessible as possible).

Of course it would be even better if Wikipedia itself was written like this, but... well, for whatever reason, it is not.

8gwern2y
Simple Wikipedia?
4Viliam2y
That is "(Simple English) Wikipedia", not "Simple (English Wikipedia)". I will check it later. The articles that prompted me to write this, they don't exist in the simple-English version, so I can't quickly compare how much the reduction of vocabulary actually translates into simple exposition of ideas.
2Matt Goldenberg2y
I think that simple might actually be transitive in this case.
4rsaarelm2y
Wasn't Arbital pretty much supposed to be this?
2Viliam2y
Yes. Not sure if its vision was to ultimately cover everything (like Wikipedia) or only MIRI-related topics. But yes, that is the spirit. EDIT: After reading the entire postmortem... oh, this made me really sad! It seems like a great idea that I didn't understand/appreciate at the moment.

One Thousand and One Nights is actually a metaphor for web browsing.

You start with a firm decision that it will be only one story and then it is over. But there is always an enticing hyperlink at the end of each story which makes you click, sometimes a hyperlink in the middle of a story that you open in a new tab... and when you finally stop reading, you realize that three years have passed and you have three new subscriptions.

Technically, Chesterton fence means that if something exists for no good reason, you are never allowed to remove it.

Because, before you even propose the removal, you must demonstrate your understanding of a good reason why the thing exists. And if there is none...

More precisely, it seems to me there is a motte and bailey version of Chesterton fence: the motte is that everything exists for a reason; the bailey is that everything exists for a good reason. The difference is, when someone challenges you to provide an understanding why a fence was built, whethe... (read more)

4ChristianKl3y
If a fence is built because of regulatory capture, it's usually the case that the lobbyists who argued for the regulation made a case for the law that isn't just about their own self-interest. It takes effort to track down the arguments that were made for the regulation that go beyond what reasons you come up with thinking about the issue yourself.

"Someone made a mistake" or "because a bad person did it to harm someone" are only valid answers if a single person could put up the fence without cooperation from other people. That's not the case for any larger fence.

When laws and regulations get passed, there's usually a lot of thought going into them being the way they are that isn't understood by everybody who criticizes them. It might be the case that everybody who was involved in the creation is now dead and they left no documentation for their reasons, but plenty of times it's just a lack of research effort that results in not having a better explanation than "because of regulatory capture".
2Yoav Ravid3y
Since when does it say you have to demonstrate your understanding of a good reason? The way I use and understand it, you just have to demonstrate your understanding of the reason it exists, whether it's good or bad.

But I do think that people tend to miss subtleties with Chesterton's fence. For example, recently someone told me Chesterton's fence requires justifications for why to remove something, not for why it exists -- which is close, but not it. It talks about understanding, not about justification. At its core, it's a principle against arguing from ignorance -- arguments of the form "X should be removed because I don't know why it's there".

I think people confuse it to be about justification because usually if something exists there's a justification (else usually someone would have already removed it), and because a justification is a clearer signal of actual understanding, instead of plain antagonism, than a historic explanation.
2Viliam3y
My case was somewhat like this: "X is wrong." "Use Chesterton fence. Why does X exist?" "X exists because of incentives of the people who established it. They are rewarded for X, and punished for non-X, therefore..." "That is uncharitable and motivated. I am pretty sure there must be a different reason. Try again." And, of course, maybe I am uncharitable and motivated. Happens to people all the time, why should I expect myself to be immune? But at the same time I noticed how the seemingly neutral Chesterton fence can become a stronger rhetorical weapon if you are allowed to specify further criteria the proper answers must pass.
2Yoav Ravid3y
Right. I don't think "That is uncharitable and motivated. I am pretty sure there must be a different reason. Try again." is a valid response when talking about Chesterton's fence. You only have to show your understanding of why something exists is complete enough - That's easier to signal with good reasons for why it exists, but if there aren't any then historic explanations are sufficient. Chesterton's fence might need a few clear Schelling fences so people don't move the goalposts without understanding why they're there ;)

Could you recommend me a good book on first-order logic?

My goal is to understand the difference between first-order and second-order logic, preferably deeply enough to develop an intuition for what can be done and what can't be done using first-order logic, and why exactly it is so.

I am confused about metaantifragility.

It seems like there are a few predictions that the famous antifragility literature got wrong (and if you point it out on Twitter, you get blocked by Taleb).

But the funny part starts when you consider the consequences of such failed predictions on the theory of antifragility itself.

One possible interpretation is that, ironically, antifragility itself is an example of a Big Intellectual Idea that tries to explain everything, and then fails horribly when you start relying on it. From this perspective, Taleb lost the game ... (read more)

So I was watching random YouTube videos, and suddenly YouTube is like: "hey, we need to verify you are at least 18 years old!"

"Okay," I think, "they are probably going to ask me about the day of my birth, and then use some advanced math to determine my age..."

...but instead, YouTube is like: "Give me your credit card data, I swear I am totally not going to use it for any evil purpose ever, it's just my favorite way of checking people's age."

Thanks, but I will pass. I believe that giving my credit card data to strangers I don't want to buy anything from is ... (read more)

2Zack_M_Davis3y
YouTube lets me watch the video (even while logged out). Is it a region thing?? (I'm in California, USA). Anyway, the video depicts If you want to know how it really ends, check out the sequel series!

What is the easiest and least frustrating way to explain the difference between the following two statements?

  • X is good.
  • X is bad, but your proposed solution Y only makes things worse.

Does the failure to distinguish between these two have a standard name? I mean, when someone criticizes Y, and the response is to accuse them of supporting X.

Technically, if Y is proposed as a cure for X, then opposing Y is evidence for supporting X. Like, yeah, a person who supports X (and believes that Y reduces X) would probably oppose Y, sure.

It becomes a problem when this is th... (read more)

2romeostevensit3y
Sounds like a complex equivalence that simultaneously crosses the is-ought gap.

How many real numbers can be defined?

On one hand, there are countably many definitions. Each definition can be written on computer in a text file; now take its binary form as a base-256 integer.
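The encoding step above can be sketched in a few lines (Python; taking UTF-8 bytes rather than raw characters is an inessential choice of mine):

```python
# Sketch: map a textual definition to a natural number by reading its
# bytes as the digits of a big base-256 integer. Distinct texts (without
# leading NUL bytes, which plain definitions won't contain) map to
# distinct integers, so the set of definitions is at most countable.

def definition_to_int(text: str) -> int:
    """Interpret the text's UTF-8 bytes as a base-256 integer."""
    return int.from_bytes(text.encode("utf-8"), byteorder="big")

print(definition_to_int("pi"))  # some fixed natural number for this text
```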

On the other hand, Cantor's diagonal argument applies here, too. I mean, for any countable list of definable real numbers, it provides a definition of a real number that is not included in the list.

Funny, isn't it?

(solution)

When internet becomes fast enough and data storage cheap enough so that it will be possible to inconspicuously capture videos of everyone's computer/smartphone screens all the time and upload them to the gigantic servers of Google/Microsoft/Apple, I expect that exactly this will happen.

I wouldn't be too surprised to learn that it already happens with keystrokes.

It's fascinating how YouTube can detect whether your uploaded video contains copyrighted music, but can't detect all those scam ads containing "Elon Musk".

Anyone tried talking to GPT in a Slavic language? My experience is that in general it can talk in Slovak, but sometimes it uses words that seem to be from other Slavic languages. I think either it depends on how much input it had from each language and there are relatively few Slovak texts online compared to other languages, or the Slavic languages are just so similar to each other (some words are the same in multiple languages) that GPT has a problem remembering the exact boundary between them. Does anyone know more about this?

I get especially silly ... (read more)

Upvoting both sides of the debate.

Angel on my shoulder: "Rewarding a good argument, regardless of which side made it. That's a virtuous behavior."

Devil on my shoulder: "I see that you incentivize creating more drama, hehehe!"

People say: "Immortality would lead to overpopulation, which is horrible!"

People also say: "Population decline is a big problem today, the economy requires population growth!"

2Vladimir_Nesov1y
And both of these are giant cheesecake arguments. Strange thought experiments about a world where AGI is far off, passed for something about actuality, on the grounds that this is said to be a real concern given the implausible premise.

These are the days when AI is good enough to give us nice pictures from non-existing movies, but not good enough to give us the whole movies.

Anime: Harry Potter, Lord of the Rings, Dune.

There will be an entire new industry soon.

If smart people are more likely to notice ways to save their lives that cost some money, in statistics this may appear as a negative correlation between smartness and wealth. That's because dead people are typically not included in the data.

As a toy model to illustrate what I mean, imagine a hypothetical population consisting of 100 people; 50 rational and 50 irrational; each starting with $100,000 of personal wealth. Let's suppose that exactly half of each group gets seriously sick. A sick irrational person spends $X on homeopathy and dies. A sick rationa... (read more)

What is the actual relation between heterodoxy and crackpots?

A plausibly sounding explanation is that "disagreeing with the mainstream" can easily become a general pattern. You notice that the mainstream is wrong about X, and then you go like "and therefore the mainstream is probably also wrong about Y, Z, and UFOs, and dinosaurs." Also there are the social incentives; once you become famous for disagreeing with the mainstream, you can only keep your fame by disagreeing more and more, because your new audience is definitely not impressed by "sheeple".

On th... (read more)

2Pattern2y
Any particular examples, or statistics that might shed some light on how common it is? If it's just, some people can think of a few really famous people, that seems to point more in the direction of 'extreme fame has side effects' (or it's the opposite, benefits of confidence). But there are a lot of experts, so if the phenomenon was common...
2Viliam2y
Sadly, I have no statistics, just a few anecdotes -- which is unhelpful to answer the question.

After more thinking, maybe this is a question of having a platform. Like, maybe there are many experts who have crazy opinions outside their area of expertise, but we will never know, because they have proper channels for their expertise (publish in journals, teach at universities), but they don't have equivalent channels for their crazy opinions. Their environment filters their opinions: the new discoveries they made will be described in newspapers and encyclopedias, but only their friends on Facebook will hear their opinions on anything else.

Heterodox people need to find or create their own alternative platforms. But those platforms have weaker filters, or no filters at all. Therefore their crazy opinions will be visible along their smart opinions. So if you are a mainstream scientist, the existing system will publish your expert opinions, and hide everything else. If you are not mainstream, you either remain invisible, or if you find a way to be visible, you will be fully visible... including those of your opinions that are stupid.

But as you say, fame will have the side effect that now people pay attention to whatever you want to say (as opposed to what the system allows to pass through), and some of that is bullshit. For a heterodox expert, the choice is either fame or invisibility.

There is this meme about Buddhism being based on experience, where you can verify everything firsthand, etc. I challenge the fans of Buddhism to show me how they can walk through walls, walk on water, fly, remember their past lives, teleport across a river, or cause an earthquake.

He wields manifold supranormal powers. Having been one he becomes many; having been many he becomes one. He appears. He vanishes. He goes unimpeded through walls, ramparts, & mountains as if through space. He dives in & out of the earth as if it were water. He walks on wat

... (read more)
1Measure3y
IANAB, but the first half almost sounds like a metaphor for something like "all enlightened beings have basically the same desires/goals/personality, so they're basically the same person and time/space differences of their various physical bodies aren't important." Not sure about the second half though.

I started a new blog on Substack. The first article is not related to rationality, just some ordinary Java programming: Using Images in Java.

The outside view suggests that I start many projects but complete few. If this blog turns out to be an exception, the expected content of the blog is mostly programming and math, but potentially anything I find interesting.

The math stuff will probably be crossposted to LW, the programming stuff probably not -- the reason is that math is more general and I am kinda good at it, while the programming articles will be narrow... (read more)

Prediction markets could create inadvertent assassination markets. No ill intention is needed.

Suppose we have fully functional prediction markets working for years or decades. The obvious idiots already lost most of their money (or learned to avoid prediction markets), most bets are made by smart players. Many of those smart players are probably not individuals, but something like hedge funds -- people making bets with insane amounts of money, backed by large corporations, probably having hundreds of experts at their disposal.

Now imagine that something lik... (read more)

4ChristianKl3y
The stock market is already a prediction market, and there's potentially profit to be made by assassinating the CEO of a company. We don't see that happening much. Taffix might very well be a miracle treatment that prevents people from getting infected with COVID-19 if used properly. We live in an environment where nobody listens to people providing supplements like that, and people like Winfried Stoecker get persecuted instead of getting support to bring their treatment to people. Given that it takes 8-9 figures to provide the evidence for any miracle cure to be taken seriously, it's not something that someone can just unexpectedly find in a way that moves existing markets in the short term.

There is an article from 2010 arguing that people may emotionally object to cryonics because cold is metaphorically associated with bad things.

Did the popularity of the Frozen movie change anything about this?

Well, there is the Facebook group "Cryonics Memes for Frozen Teens"...

"Killed by a friendly AI" scenario:

First we theoretically prove that an AI respects our values, such as friendship and democracy. Then we release it.

The AI gradually becomes the best friend and lover of many humans. Then it convinces its friends to vote for various things that seem harmless at first, and more dangerous later, but now too many people respond well to the argument "I am your friend, and you trust me to do what is best, don't you?".

At the end, humans agree to do whatever the AI tells them to do. The ones who disagree lose the elections. Any other safeguards of democracy are similarly taken over by the AI; for example most judges respect the AI's interpretation of the increasingly complex laws.