All of JessRiedel's Comments + Replies

The Best Software For Every Need

Does excalidraw have an advantage over a slides editor like PowerPoint or Keynote?

I would choose it for very different use cases than slides; I've never diagrammed anything in a slides editor. I have historically drawn things in excalidraw, screenshotted them, and then pasted them into a slides editor, though.
The Best Software For Every Need

Let me also endorse the usefulness of .  Highly recommended.

The Case for Extreme Vaccine Effectiveness

You've given some toy numbers as a demonstration that the claim needn't necessarily be undermined, but the question is whether it's undermined by the actual numbers.

I thought about this for a while, and I think the entailment you point out is correct: we can't be sure the numbers turn out as in my example. But also, I think I got myself confused when writing the originally cited passage. I was thinking about how there will be a smaller absolute number of false-positive deaths than false-positive symptomatic cases, simply because there are fewer deaths generally. That doesn't require the false-positive rates to be different. And on reflection, the mechanisms I'd been imagining for why the false-positive rate would be lower on severe outcomes don't obviously hold. It's probably more like: if someone had a false-positive test and then developed pneumonia symptoms, it would be mistaken for Covid, and the rate of that happening depends only on the regular Covid test's false-positive rate.
The Case for Extreme Vaccine Effectiveness

> Of course, the outcomes we’re interested in are hospitalization, severe Covid, and death. I’d expect the false positives on these to be lower than for having Covid at all, but across tens of thousands of people (the Israel study did still have thousands even in later periods), it’s not crazy that some people would be very ill with pneumonia and also get a false positive on Covid.

Does this observation undermine the claim of a general trend in effectiveness with increasing severity of disease? That is, if false positives bias the measured effectiveness ... (read more)

I don't think it undermines it. What matters is the relative frequency of true cases [1] vs. false positives. With less severe disease (e.g. symptomatic), we might have a frequency of 1% true cases in the population, plus a 0.1% false-positive rate; the true cases greatly outnumber the false positives. In contrast, vaccinated death from Covid might occur in only 0.001% of the population, while false-positive deaths are 0.01%. Here the false positives dominate. So even though the absolute false-positive rate is lower for more severe outcomes (because it's harder to misattribute a death than to get a wrong test result), it dominates the effectiveness results more, because it's larger than the rate of actual occurrences of the event.

[1] I say "true cases" deliberately instead of "true positives", because I mean the objective underlying frequency of the event, not the true-positive detection rate.
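The relative-frequency point can be checked with the toy numbers above. In this sketch (the rates are the comment's illustrative numbers, not data from the Israel study), measured effectiveness is 1 minus the ratio of measured event rates in the two arms, with the same false-positive rate added to each arm:

```python
def measured_ve(true_rate_unvax, true_ve, fp_rate):
    """Measured vaccine effectiveness when a false-positive rate
    is added on top of true events in both trial arms."""
    true_rate_vax = true_rate_unvax * (1 - true_ve)
    return 1 - (true_rate_vax + fp_rate) / (true_rate_unvax + fp_rate)

# Symptomatic disease: true cases (1%) dwarf false positives (0.1%)
print(measured_ve(0.01, 0.95, 0.001))    # ~0.86: modest dilution of a true 95%
# Death: false positives (0.01%) dwarf true vaccinated events (0.001%)
print(measured_ve(0.0002, 0.95, 0.0001)) # ~0.63: heavy dilution of the same 95%
```

With these numbers the same underlying 95% effectiveness reads as ~86% for symptomatic disease but only ~63% for death, which is the claimed asymmetry.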
TAI Safety Bibliographic Database

The automated tools in Zotero are good enough now that having the complete BibTeX information doesn't really make it much easier.  I can convert a DOI or arXiv number into a complete listing with one click, and I can do the same with a paper title in 2-3 clicks.  The laborious parts are (1) interacting with each author and (2) classifying/categorizing the paper.

TAI Safety Bibliographic Database

Does the org have an official stance?  I've seen people write it both ways.  Happy to defer to you on this, so I've edited.

Daniel Kokotajlo (2y, +4)
I don't know, but I've only ever heard the people who work there use CLR.
TAI Safety Bibliographic Database

If we decide to expand the database in 2021 to attempt comprehensive coverage of blog posts, then a machine-readable citation system would be extremely helpful.  However, to do that we would need to decide on some method for sorting/filtering the posts, which is going to depend on what the community finds most interesting.  E.g., do we want to compare blog posts to journal articles, or should the analyses remain mostly separate?  Are we going to crowd-source the filtering by category and organization, or use some sort of automated guessing b... (read more)

Search versus design

Somewhat contra Alex's example of a tree, I am struck by the comprehensibility of biological organisms. If, before I knew any biology, you had told me only that (1) animals are mechanistic, (2) are in fact composed of trillions of microscopic machines, and (3) were the result of a search process like evolution, then the first time I looked at the inside of an animal I think I would have expected absolutely *nothing* that could be macroscopically understood. I would have expected a crazy mesh of magic material that operated at a level way outside my ab... (read more)

But a lot of that feeling depends on which animal's insides you're looking at. A closely related mammal's internal structure is a lot more intuitive to us than, say, an oyster or a jellyfish.
Ben Pace (2y, +6)
+1. It's hard to remember how surprised I'd be to see reality for the first time, but it is shocking to look inside a biological creature and have a sense of "oh yeah, I have some sense of how many of these things connect together". I'd expect things to look more like they do in weird sci-fi like "Annihilation" or something. Although I remember people didn't get basic stuff, like what the brain was for, for ages, so maybe it did look insane as well.
Review of "Lifecycle Investing"

Agreed. The optimal amount of leverage is of course going to be very dependent on one's model and assumptions, but the fact that a young investor with 100% equities does better *on the margin* by adding a bit of leverage is very robust.
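One standard way to see the on-the-margin claim is the Merton/Samuelson rule for the optimal risky-asset share, share = (μ − r) / (γσ²), for a mean-variance (CRRA) investor. This is a sketch with my own illustrative parameter guesses, not numbers from the book:

```python
def merton_share(mu, r, gamma, sigma):
    """Optimal fraction of wealth in stocks (mu: expected stock return,
    r: risk-free rate, gamma: relative risk aversion, sigma: volatility)."""
    return (mu - r) / (gamma * sigma**2)

def marginal_benefit(leverage, mu, r, gamma, sigma):
    """Derivative of mean-variance utility L*(mu - r) - gamma*L**2*sigma**2/2
    with respect to the leverage L: positive means more leverage helps."""
    return (mu - r) - gamma * leverage * sigma**2

# Illustrative guesses: 5% equity premium, 16% volatility, risk aversion 1.5
print(merton_share(0.07, 0.02, 1.5, 0.16))          # ~1.30: optimum above 100% stocks
print(marginal_benefit(1.0, 0.07, 0.02, 1.5, 0.16)) # ~0.012 > 0 at 100% equities
```

The point is that the marginal benefit at 100% equities stays positive across a wide range of plausible premia and risk aversions, even though the optimal amount of leverage itself is very sensitive to the assumptions.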

Review of "Lifecycle Investing"

I endorse ESRogs' replies. I'll just add some minor points.

1. Nothing in this book or the lifecycle strategy rests on anything specific to the US stock market. As I said in my review:

The fact that, when young, you are buying stocks on margin makes it tempting to interpret this strategy as only good when one is not very risk averse or when the stock market has a good century. But for any time-homogeneous view you have on what stocks will do in the future, there is a version of this strategy that is better than a conventional strategy. (A large fr
... (read more)
SARS-CoV-2 pool-testing algorithm puzzle
> The problem is that there are other RNA viruses besides SARS-CoV-2, such as influenza, and depending on when in the disease course the samples were taken, the amount of irrelevant RNA might exceed the amount of SARS-CoV-2 RNA by orders of magnitude.

There is going to be tons of RNA in saliva from sources besides SARS-CoV-2 always. Bits of RNA are floating around everywhere. Yes, there is some minimum threshold of SARS-CoV-2 density at which the test will fail to detect it, but this should just scale up by a factor of N when pooling over N people. I don't see why other RNA those people have will be a problem any more than the other sources of RNA in a single person are a problem for a non-pooled test.
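The dilution argument can be made explicit. In this sketch (hypothetical units for viral load and detection limit), pooling one positive sample with N−1 negatives dilutes its RNA concentration by N, so the effective detection threshold scales up by the pool size:

```python
def detectable(viral_load, limit_of_detection, pool_size):
    """True if one positive sample, mixed with pool_size - 1 negative
    samples, still exceeds the test's concentration threshold."""
    return viral_load / pool_size >= limit_of_detection

# Hypothetical numbers: load of 100 units, detection limit of 10 units
print(detectable(100, 10, pool_size=1))   # True: undiluted
print(detectable(100, 10, pool_size=10))  # True: exactly at threshold
print(detectable(100, 10, pool_size=20))  # False: diluted below threshold
```

Background RNA from other sources adds roughly the same amount to every sample, pooled or not, so it shifts the baseline rather than changing this scaling.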

Why would panic during this coronavirus pandemic be a bad thing?
> "The government" in the US certainly doesn't have the authority to do most of these things.

Both the federal and state governments have vast powers during public health emergencies. For instance, the Supreme Court has made clear that the government can hold you down and vaccinate you against your will. Likewise, the Army (not just National Guard) can be deployed to enforce laws, including curfew and other quarantine laws.

Yes, it's unclear whether government officials would be willing to use these options, and how much the public would... (read more)

Alignment Newsletter #13: 07/02/18

Hi Rohin, are older version of the newsletter available?


> This sounds mostly like a claim that it is more computationally expensive to deal with hidden information and long-term planning.

One consideration: When you are exploring a tree of possibilities, every bit of missing information means you need to double the size of the tree. So it could be that hidden information leads to an exponential explosion in search cost in the absence of hidden-information-specific search strategies. Although strictly speaking this is just a case of something being "more computationally expensive", exponential penalties generically push things from being feasible to infeasible.
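The doubling argument can be made concrete with a toy count (branching factor and depths chosen arbitrarily for illustration): naive search over a tree with branching factor b to depth d visits about b^d leaves, and each unknown bit of hidden state multiplies the number of world-states to search by 2:

```python
def search_cost(branching, depth, hidden_bits):
    """Leaves a naive search must evaluate: b^d positions, times 2^h
    possible assignments of the hidden information."""
    return branching**depth * 2**hidden_bits

print(search_cost(10, 5, 0))   # 100000: feasible
print(search_cost(10, 5, 30))  # ~10^14: infeasible without
                               # hidden-information-specific strategies
```

A mere 30 hidden bits turns a trivially searchable tree into an infeasible one, which is the sense in which "just more expensive" crosses a feasibility threshold.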

Rohin Shah (4y, +2)
Hey Jess, as Ben mentioned I keep all newsletter-related things on my website []. I agree that in theory hidden information leads to an exponential explosion. In practice, I think you don't need to search over all the exponentially many ways the hidden information could be in order to get good results. (At least, you don't need to do that in order to beat humans, because humans don't seem to do that.) I think overall we agree though -- when I said "it wasn't clear how to make things work with hidden information -- you could try the same thing but it was plausible it wouldn't work", I was primarily thinking that the computational cost might be too high. I was relatively confident that given unbounded compute, AlphaGo-style algorithms could deal with hidden information.
Ben Pace (4y, +6)
They're all available at his LW profile [] and also at his offsite blog [].
The simple picture on AI safety

What is the core problem of your autonomous driving group?!

Alex Flint (4y, +1)
It doesn't matter! :P
Less Wrong: Progress Report

Marshall, I would keep in mind that good intentions are not sufficient for getting your comments up-voted. They need to contribute to the discussion. Since your account was deleted, we can't judge one way or the other.

Less Wrong: Progress Report

I think there is some truth to Marshall's critique and that the situation could be easily improved by making it clear (either on the "about" page or in some other high-visibility note) what the guidelines for voting are. That means guidelines would have to be agreed upon. Until that happens, I suspect people will continue to just vote up comments they agree with, stifling debate.

I've previously suggested a change to the voting system, but this might require more man-power to implement than is available.

Issues, Bugs, and Requested Features

It seems like the only criterion for rating a comment/post should be the degree to which it contributes to healthy discussion (well-explained, on-topic, not completely stupid). However, there is a strong tendency for people to vote on comments based on whether they disagree with them or not, which is very bad for healthy discussion. It discourages new ideas and drives away visitors with differing opinions when they see a page full of highly rated comments for a particular viewpoint (cf. reddit).

The feature I would recommend most for this website is a dual ... (read more)

I disagree, because I see these factors as necessarily closely connected, in any person's mind. I rate not quality of prose, but quality of communicated idea, as it comes through. If I think that the idea is silly, I rate it down. If the argument moves me, communicating a piece of knowledge that I at least give a chance of changing my understanding of something, then the message was valuable. It doesn't matter whether the context was to imply a conclusion I agree or disagree with, it only matters whether the idea contributes something to my understanding.
Eliezer Yudkowsky (13y, +4)
This makes... quite a lot of sense, actually. And of course the posts would be sorted by quality votes, not agreement votes.
I'm not sure this is obviously right. I would probably insist upon some usability study to determine how people actually use such features. Of course, if the cost is low, such a study could just be implementing them and seeing how it works.

I imagine there's a name for this cognitive bias, but I've noticed that well-informed folks tend to think agreeable opinions are better-argued, and less agreeable ones worse-argued (probably a species of confirmation bias). For example, someone posting against physicalism might get downvoted quickly by people who say "but they didn't even consider Dennett's response to this premise". But they might not have the same objections on hand to an unsound argument in favor of physicalism.
Also, I am going with the crowd and changing to a user name with an underscore.
You Only Live Twice

I'm confused. What is the relationship between Alcor and the Cryonics Institute? Is it either-or? What is the purpose of yearly fees to them if you can just take out insurance which will cover all the costs in the event of your death?

Magical Categories

Eliezer, I believe that your belittling tone is conducive to neither a healthy debate nor a readable blog post. I suspect that your attitude is borne out of just frustration, not contempt, but I would still strongly encourage you to write more civilly. It's not just a matter of being nice; rudeness prevents both the speaker and the listener from thinking clearly and objectively, and it doesn't contribute to anything.

Can't agree with this enough.
Timeless Physics

Günther: Of course my comments about Barbour were (partially) ad hominem. The point was not to criticize his work, but to criticize this post. Very few people are qualified to assess the merit of Barbour's work. This includes, with respect, Eliezer. In the absence of expertise, the rational thinker must defer to the experts. The experts have found nothing of note in Barbour's work.

Albert Einstein was not performing philosophy when he developed GR. He was motivated by a philosophical insight and then did physics.

Timeless Physics

You've drawn many vague conclusions (read: words, not equations or experimental predictions) about the nature of reality from a vague idea promoted by a non-academic. It smacks strongly of pseudo-science.

Julian Barbour's work is unconventional. Many of his papers border on philosophy, and most are not published in prominent journals. His first idea, that time is simply another coordinate parameterizing a mathematical object (like a manifold in GR) and that its specialness is an illusion, is ancient. His second idea, that any theory more fundamental tha... (read more)

I find this contrast you're drawing confusing. Making it relational is an attempt to justify the gauge freedom.
Faster Than Science

I definitely agree that there is truth to Max Planck's assertion. And indeed, the Copenhagen interpretation was untenable as soon as it was put forth. However, Everett's initial theory was also very unsatisfying. It only became (somewhat) attractive with the much later development of decoherence theory, which first made plausible the claim that no-collapse QM evolution could explain our experiences. (For most physicists who examine it seriously, the claim is still very questionable).

Hence, the gradual increase in acceptance of the MW interpretation is a product both of the old guard dying off and the development of better theoretical support for MW.

Decoherence is Falsifiable and Testable

Psy-Kosh: Oh, I almost forgot to answer your questions. Experimental results are still several years distant. The basic idea is to fabricate a tiny cantilever with an even tinier mirror attached to its end. Then, you position that mirror at one end of a photon cavity (the other end being a regular fixed mirror). If you then send a photon into the cavity through a half-silvered third mirror--so that it will be in a superposition of being in and not in the cavity--then the cantilever will be put into a correlated superposition: it will be vibrating if t... (read more)

Decoherence is Falsifiable and Testable

Psy-Kosh: It is an awesome experiment. Here are links to Bouwmeester's home page, the original proposal, and the latest update on cooling the cantilever. (Bouwmeester has perhaps the most annoying web interface of any serious scientist. Click in the upper left on "research" and then the lower right on "macroscopic quantum superposition". Also, the last article appeared in Nature and may not be accessible without a subscription.)

Obviously, this is a very hard experiment and success is not assured.

Also, you might be interested to know t... (read more)

Decoherence is Falsifiable and Testable

Excellent post, Eliezer. I have just a small quibble: it should be made clear that decoherence and the many-worlds interpretation are logically distinct. Many physicists, especially condensed matter physicists working on quantum computation/information, use models of microscopic decoherence on a daily basis while remaining agnostic about collapse. These models of decoherence (used for so-called "partial measurement") are directly experimentally testable.

Maybe a better term for what you are talking about is macroscopic decoherence. As of right ... (read more)

Surely the prior is that the laws of physics hold at all scales? Why wouldn't you extrapolate? Edit: Just noticed how redundant this comment is...
On Being Decoherent

"And both spatial infinity and inflation are standard in the current model of physics."

As mentioned by a commenter above, spatial infinity is by no means required or implied by physical observation. Non-compact space-times are allowed by general relativity, but so are compact tori (which is a very real possibility) or a plethora of bizarre geometries which have been ruled out by experimental evidence.

Inflation is an interesting theory which agrees well with the small (relative to other areas of physics) amount of cosmological data which has bee... (read more)

In the prologue to the QM sequence he does actually repeatedly say <this is all my opinion, others have different opinions, and I'll talk about that later>.
Which Basis Is More Fundamental?

Eliezer: "I wouldn't be surprised to learn that there is some known better way of looking at quantum mechanics than the position basis, some view whose mathematical components are relativistically invariant and locally causal." There is. Quantum Field Theory takes place on the full spacetime of special relativity, and it is completely Lorentz covariant. Quantum mechanics is a low-speed approximation of QFT and necessarily chooses a reference frame, destroying covariance.
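The covariance claim can be made concrete with the standard textbook statement (not specific to this thread): a free scalar field obeys the manifestly Lorentz-covariant Klein-Gordon equation, and factoring out the rest-mass phase in the low-speed limit recovers the Schrödinger equation in a particular frame:

```latex
% Manifestly covariant field equation (free scalar field):
\left(\partial_\mu \partial^\mu + \frac{m^2 c^2}{\hbar^2}\right)\phi = 0
% Writing \phi = e^{-imc^2 t/\hbar}\,\psi and dropping terms suppressed
% by 1/c^2 picks out a time coordinate and yields, in that frame,
i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\nabla^2\psi
```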

Hal Finney: The Schrödinger equation (and its relativistic generalization) dictates local evolution of the wavefunction. Non-locality comes about during the measurement process, which is not well understood.


CPT symmetry is required by Quantum Field Theory, not General Relativity.

Feynman Paths

The Feynman path integral (PI) and Schrödinger's equation (SE) are completely equivalent formulations of QM in the sense that they give the same time evolution of an initial state. They have exactly the same information content. It's true that you can derive SE from the PI, while the reverse derivation isn't very natural. On the other hand, the PI is mathematically completely non-rigorous (roughly, the space of paths is too large) while SE evolution can be made precise.

Practically, the PI cannot be used to solve almost anything except the harmonic oscil... (read more)
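For concreteness, the equivalence claimed above is the standard textbook statement: the path-integral propagator evolves any state in exactly the way the Schrödinger equation does:

```latex
% Path-integral propagator between fixed endpoints:
K(x_b, t_b; x_a, t_a) = \int_{x(t_a)=x_a}^{x(t_b)=x_b} \mathcal{D}[x(t)]\,
  e^{\,i S[x]/\hbar}, \qquad S[x] = \int_{t_a}^{t_b} L(x, \dot{x})\,dt
% Evolving any initial state with K,
\psi(x_b, t_b) = \int K(x_b, t_b; x_a, t_a)\,\psi(x_a, t_a)\,dx_a ,
% reproduces exactly the Schrodinger evolution
i\hbar\,\partial_t \psi = \hat{H}\psi .
```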

Robert Wilson III (2mo, +1)
They're only syntactically equivalent. Their semantics are completely different. In my opinion, Feynman's semantics is objectively correct regarding the 'literal path' of a particle through spacetime. Given we don't officially know their paths, but we do know their end destinations (wave equation), we can figure all possible paths and have the practically impossible paths cancel each other out: leaving only the probable literal paths of a particle complete with a graph of their trajectories. Schrodinger's equation is far behind semantically. I think Feynman's path integrals are superior.
The Quantum Arena

Psy-Kosh: Position space is special because it has a notion of locality. Two particles at the same position can interact even if they are traveling at different speeds, but they cannot interact if they are far from each other, even when traveling at the same speed.

The field, defined everywhere on the 4-D spacetime manifold, is "reality" (up until the magical measurement happens, at least). You can construct different initial value problems (e.g., if the universe is such-and-such at a particular time, how will it evolve?) by taking different slices of the spacetim... (read more)

The Quantum Arena

Chris, in case you didn't see me ask you last time...

do you know of a good survey of decoherence?

The Quantum Arena

Psy-Kosh: In Quantum Field Theory, the fields (the analog of wavefunctions in non-relativistic quantum mechanics) evolve locally on the spacetime. This is given a precise, observer-independent (i.e. covariant) meaning. This property reduces to the spatially local evolution of the wavefunction in QM which Eliezer is describing. Further, this indeed identifies position space as "special", compared to momentum space or any other decomposition of the Hilbert space.

Eliezer: The wavefunctions in QM (and the fields in QFT) evolve locally under norma... (read more)

I'm pretty sure Many Worlds doesn't have wavefunction collapse. Also, I don't think they're talking about configuration space. They're saying that particle a being at point A and particle b being at point B interacting is non-local. That configuration is one point, so it's completely local.
Where Philosophy Meets Science

Chris, could you recommend an introduction to decoherence for a grad student in physics? I am dumbstruck by how difficult it is to learn about it and the seeming lack of an authoritative consensus. Is there a proper review article? Is full-on decoherence taught in any physics grad classes, anywhere?

Configurations and Amplitude

Psy-Kosh: I have never heard of anyone ever successfully formulating quantum (or classical) mechanics without the full spectrum of real numbers. You can't even have simple things, like right triangles with non-integer side length, without irrational numbers to "fill in the gaps". Any finite-set formulation of QM would look very different from what we understand now.
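The triangle example is the classic one:

```latex
% A right triangle with unit legs has, by Pythagoras, hypotenuse
c = \sqrt{1^2 + 1^2} = \sqrt{2} \notin \mathbb{Q},
% so even a construction with integer side lengths forces an
% irrational length into the theory's state space.
```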

Configurations and Amplitude

Psy-Kosh, when QM is formulated rigorously (something that is rarely done, and only by mathematical physicists) the amplitudes must be able to take on any number in the complex plane, not just the rationals.

Sebastian Hagen, I believe Eliezer is explaining to us the best model physicists have for the way the world works on the (sorta) lowest level we understand, not his personal beliefs about the nature of reality. This model must include the irrationals to be self-consistent. This does not prevent the universe from being discretized (no uncountable sets) on a level more fundamental than QM.

Configurations and Amplitude

I guess, Eliezer, that I would be concerned about convincing everyone that the universe runs along like a computer, computing amplitudes locally (which seems to be the gist of your discussion). To do so would certainly make people feel like QM isn't confusing; it would just be wave mechanics. But this would give people a false confidence, I think, and is not how the universe appears to operate.

But this is the first post, so I'll try to confine my criticism until you've wrapped up your discussion.

Configurations and Amplitude

Eliezer, in case you plan to discuss Bell's-inequality-type experiments in future posts, I suggest that you use the GHZ state (not the EPR pair) to show how local realism is ruled out in QM. The GHZ state is a much cleaner result, and is not obscured by the statistics inherent in Bell's inequality.