All of ChristianKl's Comments + Replies

How do you write original rationalist essays?

While it's not about Scott's writing style in particular but about another LessWrong user, there's an in-depth investigation into how Kaj Sotala writes his articles, which are generally well received.

Frame Control

There's existing work on frame control; it's not a term that Aella came up with herself. Without having traced the history too much, I think it's an NLP term that then got picked up by pickup artists.

pjeby (+4, 3h): Pretty much. The relevance for NLP is that if you're trying to help someone out of, say, a self-defeating mindset or victim state, then you need to be able to (at minimum) control your own frame so as not to get pulled into whatever role the person's problems try to assign you (e.g. rescuer or persecutor). The main thing I dislike about this post's framing of frame control is that the original meaning of "frame control" is maintaining your own frame -- i.e. the antidote to the abusive and manipulative behaviors described in this post: not allowing yourself to be sucked in or trapped by the frames that other people attempt to establish, intentionally or not.
How do Bayesians tell what does and doesn't count as evidence (which, e.g., hypotheses may render more or less probable if true)? Is it possible for something to fuzzily-be evidence?

Evaluating information as evidence is a skill, and it's learned with practice. Being good at most skills isn't about simply following a set of explicit rules but about learning to execute the skill through real-world practice.

One good practice for that is making forecasts about how likely certain future events are.
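The Bayesian core of this can be sketched in a few lines (my own illustration, not something from the thread): an observation counts as evidence for a hypothesis exactly to the degree that it's more likely if the hypothesis is true than if it's false, and it "fuzzily" counts as evidence when that likelihood ratio is close to 1.

```python
# Bayes' rule for a binary hypothesis H after observing E.
# Illustrative sketch only; the numbers below are made up.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) given P(H), P(E | H), and P(E | not-H)."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Strong evidence: E is four times likelier under H than under not-H.
print(round(posterior(0.5, 0.8, 0.2), 3))   # -> 0.8

# "Fuzzy" evidence: likelihood ratio barely above 1, so the update is tiny.
print(round(posterior(0.5, 0.55, 0.5), 3))  # -> 0.524
```

On this view, calibration practice with forecasts is exactly practice at estimating those likelihood ratios for real-world observations.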

Frame Control

It seems to me that the key difference between Said and Aella is that Aella basically says: "If you go into a group and interact in an emotionally vulnerable way, you should expect reciprocity in emotional vulnerability." On the other hand, Said says: "Don't go into groups and be emotionally vulnerable."

Aella is pro-Circling, Said is anti-Circling. 

Frame Control

The only other is one semi recent thread where the author inferred the coordinated malicious intent of MIRI and the existence of self-described demons from extremely shaky grounds of reasoning none of which involve any “weird, abusive, and cultish behavior among some community leader rationalists”.

Given that there's no public explanation of why the word demon is used, and the potential infohazards involved in talking about that, there's little way to judge from the outside the grounds on which the word is used.

There was research into paranormal phen... (read more)

Paxlovid Remains Illegal: 11/24 Update

I continue to be disappointed by people's compliance with authority during this pandemic. The perceived dangers of noncompliance seem almost entirely imaginary to me.

In Germany you have our government stopping a vaccination campaign it believes to be illegal (,impfaktion178.html) even when it's not clear that any law was violated.

Kenoubi (+1, 1d): You're right; my statement was too broad, and there are definitely types of noncompliance whose dangers are very real. I should have said that there seem to be cases in which the dangers of noncompliance are almost entirely imaginary, and people don't seem to be bothering to check; they notice that official policy has predictably terrible outcomes, and complain about it, but stop short of seriously considering what would happen if they just didn't follow the policy. The CDC's botched test kits are the first and best example I know of. It seems to me that if the kit recipients had just used the good components they already had lying around instead of the CDC's bad ones, and then were asked to justify what they had done, their response would have been no more blameworthy, and led to little more consequence, than "well, this kit I bought at Ikea had a busted screw, so I just used this other screw with the same threads and length that I had lying around". That this didn't happen makes me think that compliance for compliance's sake was the recipients' main motivation. I find that disappointing.
Omicron Variant Post #2

Americans need to be prepared to do “anything and everything” to fight the omicron Covid variant, U.S. infectious disease expert Dr. Anthony Fauci said Sunday.

Did anyone at the FDA get that memo? 

TheMajor (+1, 1d): Would you prefer that the FDA involves itself over it standing by the sidelines?
Omicron Variant Post #1: We’re F***ed, It’s Never Over

Whether or not something provides valuable information is orthogonal to whether it is dangerous. The two aren't alternatives; something can be both valuable and dangerous.

Frame Control

I think the centralization of LessWrong was one of many mistakes the rationalist community made.

The rationalist community is not very centralized. People like Scott Alexander moved from writing their main posts on LessWrong to their own blogs. Most of what EY writes these days is not on LessWrong either.

A lot of the conversations are happening on Facebook, Twitter, Slack and Discord channels.

Omicron Variant Post #1: We’re F***ed, It’s Never Over

To me the most remarkable thing about the Nature paper is that neither the word biosafety nor BSL appears in it.

Saying they obviated the need for safety concerns and then doing the experiments without any biosafety protocols seems crazy.

Omicron Variant Post #1: We’re F***ed, It’s Never Over

Why wasn't this something that the official experts were talking about long ago?

Were the official experts talking about airborne transmission long ago?

Omicron Variant Post #1: We’re F***ed, It’s Never Over

Alternatively, gain of function research is useful and gives us advanced warning of natural mutations. 

Even if it would produce useful information, it might still be the case that Omicron exists for those reasons. It's strange to use the word alternatively here when any information gained doesn't negate the cost of potentially producing variants like Omicron.

To the extent that you believe there's useful information, can you give any example of anything useful coming out of it?

TAG (+1, 2d): If you want to determine the balance of evidence, by all means do so, but you can't do that by completely disregarding the alternative explanation. There was a time when this place was all about the avoidance of bias.
Omicron Variant Post #1: We’re F***ed, It’s Never Over

But we inherently need international agreements to deal with these problems. I believe the US banning gain of function research in 2015 was one of the driving factors that pushed the research overseas into Wuhan.

That seems doubtful to me. It seems to me that, as part of wanting world-class scientists, this is something the Chinese wanted regardless of US research policy, as long as the research is held in high esteem by the international scientific community.

Frame Control

I expect that a majority of rationalists don't believe that being a porn star is morally grey. In any case, if someone makes those accusations they should be very specific about what wrongdoing they allege and not just appeal to "repeatedly exploiting morally-grey".

The Rationalists of the 1950s (and before) also called themselves “Rationalists”

The topics covered overlap with the present-day rationalist movement (centered on Lesswrong). They include religion and atheism, philosophy (especially philosophy of science and ethics), evolution, and psychology.

Religion and atheism isn't a central topic of LessWrong. To the extent that religion is a topic, it's more "what can we learn from religion" than about opposing it. At least I don't remember any highly received New Atheist writing on LessWrong in recent years.

As far as evolution goes, there's little recent writing about it on LessWrong.

Owain_Evans (+3, 2d): Yes, I said "overlap" not "coincide" for that reason. The present movement has more discussion of applied epistemology, ideology, and world-view formation, and less discussion specifically focused on religion. My sense is that the earlier movement is also more focused on Christianity than on religion or ideology in general. Evolution was a pretty new theory in 1880, so it makes sense it would be discussed more. (AI is a big topic for the present movement and not for the earlier one.)
Frame Control

And to be clear, a lot of this is true. Frame control breaks your reality down to fit another one, and while I view this as poisonous, the act of breaking down your frame can have huge benefits - similarly to how forcing a child to sit through school might break their creativity but give them the ability to reliably perform boring tasks. 

I don't think similar is the right word here. In the normal school setting, a good teacher has frame control within their classroom.

A key difference between your dad as you describe him is that the standard school t... (read more)

Should PAXLOVID be given preventively?

I'm not asking this question because I have my mind made up about whether it's a good idea, but because I think it's an important question.

I think that too many COVID-19 related discussions are among people who have made up their minds / don't care about learning something new / thinking through the unknowns.

Omicron Variant Post #1: We’re F***ed, It’s Never Over

Announcing it might have meant that people who wait to get vaccinated until the new version arrives get better protection, which is a message that the people at the CDC hate to send.

That might be enough to push the institutions into a state where they won't develop a slightly better vaccine that doesn't add much when it might make people more skeptical of taking the current one?

ChristianKl's Shortform

I have a lot of uncertainty here, so let me write a shortform for now. I'm not sure to what extent the following thoughts are true, and I'm happy about comments.

It seems that the body has two immune defense levels. One is in the mucosal immune system and there's a second that leads to antibodies in the blood.

SARS-CoV-2 infections usually start in the upper respiratory tract, where the mucosal system provides the defense, and not where the antibodies that are active in the blood can fight the infection. (read more)

Why don't our vaccines get updated to the delta spike protein?

Developing a new version of the vaccine would probably face significant regulatory hurdle, and thus take time.

According to Pfizer, it should take 100 days. If they had started when it became clear that Delta would soon be the prevailing variant, we would have had access to the updated vaccine for a few months by now.

There's an expectation for new variants to become dominant every few months.

The best prediction seems to be that new variants will be variants of what's currently the most common variant, and thus a vaccine that's updated against Delta will... (read more)

James_Miller's Shortform

A human made post-singularity AI would surpass the intellectual capabilities of ETs maybe 30 seconds after it did ours.

No, ETs have likely lived for millions of years with post-singularity AI and, to the extent they aren't themselves AIs, have upgraded their cognitive capacity significantly.

James_Miller's Shortform

Eric Weinstein & Michael Shermer: An honest dialogue about UFOs seems to me to cover the UFO topic well. The videos publicly available aren't great evidence for UFOs, but all the information we have about how part of the US military claims to see UFOs is very hard to bring together into a coherent scenario that makes sense.

ChristianKl's Shortform

Yesterday, I talked with a friend who, together with her boyfriend, got COVID. He was vaccinated; she wasn't. She had to go to the hospital while he didn't. However, both now have the same long-COVID symptoms.

It's an anecdote, and I'd really love it if someone would actually study the question of how effective our vaccines are at preventing long COVID...

Latacora might be of interest to some AI Safety organizations

Being inside Google, I would expect that DeepMind already has good access to security expertise. If you think that an external service like Latacora can do things that internal Google services can't, can you expand on why you think so?

NunoSempere (+3, 5d): I don't think that Latacora can do things that an internal Google service literally can't.
Latacora might be of interest to some AI Safety organizations

It seems like MIRI already had a very strong security policy that strongly inhibited their ability to do their job. By hiring professionals like Latacora, they might not only make MIRI more secure but might also provide helpful advice about what practices are creating an unnecessary burden.

NunoSempere (+2, 5d): I had similar thoughts.
Use Tools For What They're For

If they are so confident that their vaccines are stellar successes, why did they specify in their contracts with European governments that they could not be held liable for side effects?

Courts of law can make companies liable for "side effects" whether or not there's scientific evidence that the "side effects" are caused by the drugs. 

If you look at LYMErix, for example, it's not clear that the side effects were really that severe, but they were still enough to get the vaccine withdrawn.

Use Tools For What They're For

The two Swiss labs Novartis and Roche, who respectively commercialize Lucentis and Avastin in Europe (indistinguishable treatments both created by Genentech, an American lab bought by Roche - also, Novartis owns 33.33% of Roche), tried a legal action against France. I assume it was to outlaw the use of Avastin in the eyes.

That seems unlikely to me. Our health system doesn't have a normal process for outlawing drugs from being used for certain purposes.

Let's start with this: Lucentis and Avastin are similar, but they are not un... (read more)

Use Tools For What They're For

Novartis was so miffed that they successfully lobbied for forbidding using Avastin in AMD cases

Can you be more specific about what you are referring to? What specific regulation are you calling "forbidding using Avastin in AMD cases"? What forbids off-label use here?

Vanilla_cabs (+1, 7d): My bad, I rewatched the documentary and it's actually less clear. The two Swiss labs Novartis and Roche, who respectively commercialize Lucentis and Avastin in Europe (indistinguishable treatments both created by Genentech, an American lab bought by Roche - also, Novartis owns 33.33% of Roche), tried a legal action against France. I assume it was to outlaw the use of Avastin in the eyes. But it eventually failed. However, in the meantime, the habit had taken root to use Lucentis to cure AMD. It's not explained exactly why. The interviewed person says "the difficulty nowadays for the healthcare system, is that they set up a system that is so complex for eye doctors to manage that in the end, everyone gave up." I interpret that as "it's possible, but there's so much red tape that it's impractical". I will correct my original comment.
Giving Up On T-Mobile

Stories about banned Google accounts look like:

Here is an example of how this can go awry. A slew of YouTube users had their Google accounts (not just their YouTube accounts) banned for "spamming" a video feed with emojis. However, the YouTuber who created the video in question had encouraged users to do just that, so Google shouldn't have had to go so far as to perform full account bans. To make matters worse, it took Google days before it reactivated everyone's accounts. Even then, some experienced data loss.

Official Google Docs abuse policy does seem to indicate t... (read more)

jefftk (+2, 7d): In the YouTube emoji case, people were doing something that looked abusive to the automated system, and then the manual review got it wrong. Then, after additional review, YouTube acknowledged they got it wrong, put things back, and said they were going to work on making this less likely in the future. This doesn't seem like enough of a risk to care? In the case of the TOS, there are all sorts of worrying things in most TOS. In general, I don't think this sort of thing is worth worrying about unless the company is actually doing something.
AI Safety Needs Great Engineers

Eliezer wrote:

Nothing else Elon Musk has done can possibly make up for how hard the "OpenAI" launch trashed humanity's chances of survival

To me it seems reasonable to see that as EY passing judgement on the effects of OpenAI.

Frederik (+4, 7d): Oh yes, I'm aware that he expressed this view. That's different, however, from it being objectively plausible (whatever that means). I have the feeling we're talking past each other a bit. I'm not saying "no-one reputable thinks OpenAI is net-negative for the world". I'm just pointing out that it's not as clear-cut as your initial comment made it seem to me.
AI Safety Needs Great Engineers

That sounds like you are in denial. I didn't make a statement about whether or not OpenAI raises AI risk but referred to the discussion about whether or not it does. That discussion exists, and people like Eliezer argue that OpenAI results in a net risk increase. Being in denial about that discourse is bad. It can help with feeling good when working in the area, but it prevents good analysis of the dynamics.

Frederik (+3, 7d): No, I take specific issue with the term 'plausibly'. I don't have a problem with the term 'possibly'. Using the term 'plausibly' already presumes judgement over the outcome of the discussion, which I did not want to get into (mostly because I don't have a strong view on this yet). You could of course argue that that's false balance, and if so I would like to hear your argument (but maybe not under this particular post, if people think that it's too OT). ETA: if this is just a disagreement about our definitions of the term 'plausibly' then never mind, but your original comment reads to me like you're taking a side.
Giving Up On T-Mobile

What happens when one's Google account gets banned because some algorithm feels like there's a content violation?

The prospect of getting locked out of mobile phone service and my email at the same time seems frightening.

jefftk (+2, 7d): This seems very unlikely to me? While I've seen a few news stories about people being locked out, (a) they're rare enough to be news when they happen and (b) there are typically other factors, with borderline or abusive behaviors I wouldn't do. (Additionally, I'm less worried about this because I work for Google and know a lot of other people that work for Google, but that isn't a factor for most people considering this.)
AI Safety Needs Great Engineers

Given the discussion around OpenAI plausibly increasing overall AI risk, why should we believe that the work will result in a net risk reduction?

Frederik (−3, 7d): I don't like your framing of this as "plausible" but I don't want to argue that point. Afaict it boils down to whether you believe in (parts of) their mission, e.g. interpretability of large models, and how much that weighs against the marginal increase in race dynamics, if any.
Split and Commit

For most of its history, SpaceX wasn't filing patents. They only really started doing that in 2019. I expect that a key motivation for starting was to be able to use them defensively against Blue Origin.

Use Tools For What They're For

The FDA not liking people using the horse version to treat human illnesses has nothing to do with ivermectin being ineffective for the illnesses in question.

We know that because already in 2019 the FDA's position was to be concerned about its usage for rosacea. The FDA just opposes US citizens using cheaper versions than the approved human versions to treat their diseases.

Ivermectin, and really any drug not deliberately designed either to bolster the human immune system or to fight viruses (and more specifically COVID-19), is

... (read more)
AllAmericanBreakfast (+1, 8d): With any heuristic, it's going to have failure modes and will only get you so far. This is meant as a common-sense guideline for lay people, not as an intellectual stopping point for scientists, regulators, and clinicians. Here, I'm aiming at people who are ivermectin partisans, both critics and supporters: those who'd reject other treatments in favor of ivermectin, and those who think ivermectin has no possible relevance to COVID-19 and yet don't seem to be thinking even at a baseline level of wisdom in their criticism. This post is a tool, and I advocate using it only for what it's for!
Which booster shot to get and when?

The Pfizer vaccine is not developed by Pfizer, just as AstraZeneca's is not developed (as in, deciding on the formulation) by AstraZeneca. Both are big pharma companies.

(both Moderna and BioNTech are what's traditionally called biotech companies and not Big Pharma)

Did EcoHealth create SARS-CoV-2?

There's no assertion that this grant directly paid for the experiments from which COVID-19 escaped. The Baric & Shi paper from 2015, which came out of that grant, reports work happening under biosafety level III.

Biosafety level II doesn't provide effective protection for the researchers when they deal with an airborne virus. Despite the denialists at the WHO, SARS-CoV-2 seems to spread via airborne infection, so it makes sense that the leak was from one of the experiments under biosafety level II, which were not EcoHealth funded.

Which booster shot to get and when?

I'm still uncertain myself. My key crux at the moment is how side effects of the 2nd dose compare to side-effects of the 3rd dose.

I remember some posts suggesting the number of people who sought medical help after the 3rd dose was concerningly higher.

While researching I found as a source, and going from 0/0 -> 2.1/1.4 (Moderna/Pfizer) indeed sounds concerning. However, it seems like the 0's were only based on a total of 64/66 patients, so a good part of that difference might be random noi... (read more)
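As a rough sanity check on the "random noise" point (my own back-of-envelope, not from the cited source): when 0 events are observed in n patients, the standard "rule of three" puts the approximate 95% upper confidence bound on the true event rate at 3/n, so 0/64 is still statistically compatible with rates well above 2.1 per 100.

```python
# Rule of three: with 0 events in n independent trials, the approximate
# 95% upper confidence bound on the true per-trial event rate is 3/n.
def rule_of_three_upper(n):
    return 3 / n

# 0 events out of 64 patients is compatible with a true rate of up to
# ~4.7 per 100 -- above the 2.1 per 100 reported after the 3rd dose.
print(round(100 * rule_of_three_upper(64), 1))  # -> 4.7
```

So the jump from 0 to 2.1 per 100 could easily be explained by the small earlier sample alone.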

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

If you for example want the criticism of GiveWell: Ben Hoffman was employed at GiveWell and had experiences that suggest the process based on which their reports are made has epistemic problems. If you want the details, talk to him.

The general model would be that between the actual intervention and the top there are a bunch of maze levels. GiveWell then hired normal corporate people, and the dynamics that the Immoral Mazes sequence describes play themselves out.

Vassar's actions themselves are about doing altruistic actions more directly by l... (read more)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

What he said is compatible with ex-CEA people still being bound by the NDAs they signed while they were at CEA. I don't think anything happened that releases ex-CEA people from their NDAs.

The important thing is that CEA is responsible for those NDAs and is free to unilaterally lift them if it has an interest in the free flow of information. In the case of a settlement with contracts between the two organisations, CEA couldn't unilaterally lift the settlement contract.

Public pressure on CEA seems to be necessary to get the information out in the open.

Did EcoHealth create SARS-CoV-2?

There's no reason to create inaccurate headlines on LessWrong. The article doesn't claim that EcoHealth created SARS-CoV-2.

Shi's lab did gain-of-function research under biosafety level 2, as they describe in their own papers, such as Evolutionary Arms Race between Virus and Host Drives Genetic Diversity in Bat Severe Acute Respiratory Syndrome-Related Coronavirus Spike Genes. The acknowledgement section of that paper describes the funding as:

"This work was jointly funded by the strategic priority research program of the Chinese Academy of Sciences (grant XDB29... (read more)

jamal (+1, 13d): Nicholas Wade does assert that! NIAID funded it, EcoHealth was the prime contractor, and Dr. Shi at the Wuhan Institute of Virology was an official subcontractor. He has a link to the NIH website.
AGI is at least as far away as Nuclear Fusion.

The physical laws allow us to get an idea of how hard nuclear fusion happens to be. They allow us to rule out a lot of approaches as not having a chance to work.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I talked with Geoff, and according to him there's no legal contract between CEA and Leverage that prevents information sharing. All information sharing is prevented by organization-internal NDAs.

Huh, that's surprising, if by that he means "no contracts between anyone currently at Leverage and anyone at CEA". I currently still think it's the case, though I also don't see any reason for Geoff to lie here. Maybe there is some technical sense in which there is no contract between Leverage and CEA, but there are contracts between current Leverage employees, who used to work at CEA, and current CEA employees? 

Sasha Chapin on bad social norms in rationality/EA

Social norms and what's publicly endorsed are not the same thing. It's still debatable whether those norms exist, but this is a bad argument.

Sasha Chapin on bad social norms in rationality/EA

Was that a member of the local community in your country? How much EA contact and how much rationality contact did they have?

Kaj_Sotala (+2, 13d): From another country; to be clear, when I told them this they were genuinely surprised and confused by me feeling that way.
Sasha Chapin on bad social norms in rationality/EA

My experience of the rationality community is one where we value Daniel's poems and Raemon's songs. The vibe of the LessWrong community weekend is not one of cultural values that tell people to avoid art. 

To the extent that this is true, what subset of the rationality community suffers from it?

(By the way, Eliezer Yudkowsky, this is what post-Rationalists are, it’s not that complicated—they don’t have explicit principles because they’ve moved on from thinking that life is entirely about explicit principles. Perhaps you don’t intuitively gras

... (read more)
Kaj_Sotala (+9, 14d): I recall having had this feeling; in particular, I once mentioned to another member of the community that I was thinking about working on a fiction-writing project, but I also felt bad admitting it, because I was afraid he'd look down on me for spending time on something so frivolous. (This was quite a while ago, as in 2014 or something like that.)
Sci-Hub sued in India

I'm no lawyer, but I do think that Article 27 of the Universal Declaration of Human Rights is important here:

Article 27 Everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits. Everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author.

Crucially, while Article 27 protects the material interests of the author, it doesn't protect those of the publisher.

I think there are plenty of Indians whose human rights are violated by effectively forbidding them from sharing in scientific advancement by reading scientific papers.

Sci-Hub sued in India

It seems like you didn't provide an address that people who want to support the project can donate to. Why is that the case? Are there legal reasons that make that difficult?

Connor_Flexman (+6, 17d): Money just isn't really a priority/bottleneck on this, so nowhere is set up to take donations, except generic Sci-Hub. And that actually might be strategically bad at the moment, because Elsevier, like Wormtongue himself, is claiming in the lawsuit that Sci-Hub has commercialized its works through the donations it accepts. Best to have that number stay low.
Thiel on Progress and Stagnation

Thiel does speak about all asset classes not really providing the returns that the LPs desire.

The interview with Thiel and Eric Schmidt is good in this regard. Google holding 50 billion on its balance sheet because they don't see any way to invest it profitably is a good illustration of those dynamics.

Samuel Shadrach (+1, 19d): Oh, I completely agree with this. But most people can't even invest 100% of their funds in the S&P 500 and sleep peacefully, so it may not matter if much higher risk-reward opportunities exist exclusively for small funds. Google on the other hand could hold the S&P plus some leverage, or a bunch of VC funds, and no one goes hungry if they all fail.
Thiel on Progress and Stagnation

Yes, but part of risk is that it comes with people losing their fortunes. Elon Musk is richer than any Rothschild.

Samuel Shadrach (+1, 19d): Just to clarify, are you speaking about your opinion or Thiel's? My personal opinion is more along these lines: the rich have less personal happiness/utility attached to most of their fortune, their basic needs are anyway met, so they can take more risk than the poor can, which gives them higher compound returns than the poor. Also, it's important to distinguish between types of risk: the risk of losing all your fortune in an index fund is practically zero, and the returns are 8% (inflation adjusted, 30 years). The risk of temporarily losing 40% in paper value, and that coinciding with some personal emergency where you need to liquidate, is non-zero. The psychological toll of seeing your holdings go down 40% and still holding is non-zero. And it's the latter two you're really compensated for in the 8%. P.S. Basically the average Rothschild will outperform the middle class on financial returns, even if they're left behind by Musk. (Assuming no govt interventions such as tax, subsidy, etc.)