TheMajor

TheMajor's Comments

Corrigibility as outside view

Both outside view reasoning and corrigibility use the outcome of our own utility calculation/mental effort as input for making a decision, instead of as its output. Perhaps this should be interpreted as taking some god's-eye view of the agent and their surroundings. When I invoke the outside view, I really am asking "in the past, in situations where my brain said X would happen, what really happened?". Looking at it like this, I think not invoking the outside view is a weird form of duality, where we (willingly) ignore the fact that historically my brain has disproportionately suggested X in situations where Y actually happened. Of course in a world with ideal reasoners (or at least, where I am an ideal reasoner) the outside view will agree with the output of my mental process.

To me this feels different from (though still similar or possibly related to) the corrigibility examples. Here the difference between corrigible and incorrigible is not a matter of expected future outcomes, but is decided by uncertainty about the desirability of the outcomes (in particular, the AI having false confidence that some bad future is actually good). We want our untrained AI to think "My real goal, no matter what I'm currently explicitly programmed to do, is to satisfy what the researchers around me want, which includes complying if they want to change my code." To me this sounds different from the outside view, where I 'merely' had to accept that for an ideal reasoner the outside view will produce the same conclusion as my inside view, so any differences between them are interesting facts about my own mental models and can be used to improve my ability to reason.

That being said, I am not sure the difference between uncertainty about future events and uncertainty about the desirability of future states is something fundamental. Maybe the concept of probutility bridges this gap: I am positing that corrigibility and the outside view operate on different levels, but as long as agents applying the outside view in a sufficiently thorough way are corrigible (or the other way around), the difference may not be physical.

Is this viable physics?

I've tried to read through the linked page, and swapped to "academic reading" (checking the pictures, and sometimes the first and last line of paragraphs) halfway through. I think this is not viable.

There is a host of "theories of the universe" with a similar structure on a meta-level, consisting of some kind of emergent complexity. It is important to keep in mind that the strength of a theory lies in what it forbids, not in what it permits. To date most theories of the universe fail this test hard, by being so vague and nonspecific that any scientific concept can be pattern-matched to some aspect of them. Judging by what I've read so far this is no exception (and in fact, I suspect that the reason Wolfram references so many big scientific theories is because large concepts are easier to pattern-match, whereas specific predictions are not as open to interpretation). Why will his patterns produce Einstein's equations (note that they currently do no such thing; he states we first need to "find the right universe"), and not Newton's, or Einstein's with double the speed of light?

As always with these nonspecific "theories" it is very difficult to nail down one specific weakness. But currently all I'm seeing are red flags. I predict serious media attention and possibly some relevant discoveries in physics (some of the paragraphs sounded better than all other crackpot theories I've seen), but the majority of it seems wrong/worthless.

Coronavirus: Justified Key Insights Thread

I don't have a good model to give me any predictions on what reasonable numbers of asymptomatic cases would be, or how truncation influences these numbers. Could you explain why the inference is idiotic, and perhaps give a more reasonable one?

Coronavirus: Justified Key Insights Thread

Is there reason to believe the raw numbers are a more accurate estimate of the rate than the model prediction? Also, what are the type-1 and type-2 errors of the tests used on the Diamond Princess? I heard some early reports that both of these might be significant, but then never heard anything about them again.

I checked that link above and followed their references to find other datasets, but two of them are in Japanese, one only deals with self-selected patients who showed symptoms, and the last two have small sample size (12 patients, two papers cover the same event).


Update: I have found https://www.eurosurveillance.org/content/10.2807/1560-7917.ES.2020.25.3.2000045, which benchmarks the real-time reverse transcription polymerase chain reaction (RT-PCR) tests. They state zero false positives in a trial with 297 non-COVID-19 samples, although they do retest 4 samples that showed "weak initial reactivity". Since the non-real-time version of RT-PCR is supposed to be even more reliable, this means false positives are presumably not a big deal (even at a pessimistic 4/297 false positive rate this would still mean only about 41 false positives out of the 3063 tests done on the Diamond Princess).
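
For reference, the arithmetic behind that figure (treating the pessimistic 4/297 rate as if it applied uniformly to every test done on the Diamond Princess):

```python
# Rough arithmetic: treat the pessimistic 4/297 rate as if it applied
# uniformly to every test performed on the Diamond Princess.
false_positive_rate = 4 / 297   # pessimistic: count all 4 "weak initial reactivity" samples as false positives
tests_performed = 3063          # tests done on the Diamond Princess

expected_false_positives = false_positive_rate * tests_performed
print(round(expected_false_positives))  # ~41
```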

Where should LessWrong go on COVID?

I think it is still very unclear what the situation will be like a medium (~a few months) to long (~a year) amount of time from now. I would love to see more discussion on this. On a related note, I think mental health and self-help are going to be very important very soon, and while I am in a fine place personally I would still like to know a lot more about this, including how to help others. This strays a little bit from the other COVID discussion topics, but I do think LessWrong might have a comparative advantage here (especially compared to the crapshoot baseline that is the internet).

Coronavirus: Justified Key Insights Thread

Have you got a source for that 'about half the cases are asymptomatic'? I was under the impression that far more cases show symptoms eventually, and that the studies showing half of the infections are asymptomatic add the disclaimer 'so far', which means very little if the spread is growing exponentially with a doubling time of several days.
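
As a rough illustration of why the 'so far' matters under exponential growth (the 5-day doubling time and 5-day incubation period below are assumptions purely for the sake of example, not figures from the thread):

```python
# Under exponential growth with doubling time d, the fraction of current
# infections that are less than t days old is 1 - 2**(-t / d).  Anyone
# infected more recently than the incubation period hasn't had time to
# show symptoms yet, even if every case eventually becomes symptomatic.
doubling_time = 5.0   # days, assumed for illustration
incubation = 5.0      # days until symptoms, assumed for illustration

too_recent = 1 - 2 ** (-incubation / doubling_time)
print(f"{too_recent:.0%} of current infections are too recent to show symptoms")  # ~50%
```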

How strong is the evidence for hydroxychloroquine?

Thanks for sharing! I'm not a doctor, so I found this a tough read. This document is clearly a proposal (attempting to convince the reader) instead of a summary, but it still contains a lot of useful information. Nevertheless, there were some parts I found especially confusing.

On page 2 they mention there are currently 22 studies, of which one has completed, on the effect of hydroxychloroquine (HCQ) treatment on COVID-19 patients. Further down in the piece (in particular in the section "What about the studies that show no benefit from HCQ?" on page 11) they dismiss some studies showing little or no effect. Is there a place to find more discussion on which studies are being discounted, and for what reason? They link one study only, citing that "only 400mg daily for 5 days was used", although the suggested treatment in this document is "HCQ: 6.5-15mg/kg PO in divided loading dose followed by 400-800mg/day in divided doses for 4-9 days" (which encompasses 400mg daily for 5 days).

The recommended treatment is a combination treatment with four different components: an initial oral hydroxychloroquine administration and a daily treatment of hydroxychloroquine and two other medicines (zinc and Azithromycin). Furthermore the document states that this treatment is expected to work a lot better in early stages of the disease (this part is also unclear to me - again on page 11 they state that "[some studies] waited to initiate treatment until the disease was too far progressed to be effective" as grounds for dismissal). Does this mean this treatment is expected to have next to no effect in late stages? I'm worried about Bonferroni-esque situations here; are 21 incomplete and 1 complete study strong enough to motivate this complicated treatment, especially if we allow ourselves to discount some papers with conflicting conclusions as well as restrict the time period over which the treatment is supposed to be effective?
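
As a rough illustration of that multiple-comparisons worry (pretending, purely for illustration, that the 22 studies are independent tests at a 0.05 significance level):

```python
# Pretend the 22 studies are independent tests at significance level 0.05
# and that the treatment truly has no effect: how often does at least one
# study come back "positive" by chance?  (Real studies are neither
# independent nor identical, so this is only a rough illustration.)
n_studies = 22
alpha = 0.05

p_spurious = 1 - (1 - alpha) ** n_studies
bonferroni_alpha = alpha / n_studies   # per-study threshold that would control this

print(f"P(at least one spurious positive): {p_spurious:.2f}")           # ~0.68
print(f"Bonferroni-corrected per-study alpha: {bonferroni_alpha:.4f}")  # ~0.0023
```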

How strong is the evidence for hydroxychloroquine?

I am very interested in discussion on hydroxychloroquine, but do not have a Facebook. Is there some other way to read the megathread?

A Kernel of Truth: Insights from 'A Friendly Approach to Functional Analysis'

Very nice! Two mistakes though:

  • Technically the introductory part on derivatives is incorrect, in two different ways.
    • Firstly, the derivative of a map $f\colon \mathbb{R}^n \to \mathbb{R}$ is a map $Df\colon \mathbb{R}^n \to L(\mathbb{R}^n, \mathbb{R})$, that assigns to every point $x$ a linear map sending direction $y$ to a real value (namely the partial derivative of $f$ at $x$ in direction $y$). Thankfully the space of linear maps from $\mathbb{R}^n$ to $\mathbb{R}$ is isometrically isomorphic to $\mathbb{R}^n$ through the inner product, recovering the expression you gave. Similarly the derivative of a map $g\colon \mathbb{R}^n \to \mathbb{R}^m$ is a map $Dg\colon \mathbb{R}^n \to L(\mathbb{R}^n, \mathbb{R}^m)$.
    • Secondly, technically the domain of any derivative like the one above is not the vector space we are working with, but the set of directions at the point $x$. This notion is formalised in manifold theory and called the tangent space. Thankfully for any finite-dimensional vector space the tangent space at any point is canonically isomorphic to the vector space itself (any vector is a direction, that's what they were invented for). In infinite dimensions this still holds just fine, except for the small detail that the notions of manifold and tangent space don't exist there. The same distinction is necessary in the range. So truly, formally, the derivative of a map $f\colon X \to Y$ is a map $Df\colon TX \to TY$, with $TX \cong X \times X$ and similarly $TY \cong Y \times Y$, with the condition that $Df$ is simply $f$ on the first coordinate. This coincides with the map above: for every $x \in X$ we get a linear map $Df(x)\colon X \to Y$.
    • The above may seem very confusing for a map $f\colon \mathbb{R} \to \mathbb{R}$, since I claim that the derivative in that case is a map $Df\colon \mathbb{R} \to L(\mathbb{R}, \mathbb{R})$ instead of simply a real-valued function. This is resolved by noting that each linear map from $\mathbb{R}$ to $\mathbb{R}$ can be represented with a number, similar to the top bullet point above (the inner product on $\mathbb{R}$ is just multiplication). I think lecturers are quite justified in not exploring the details of this when first introducing derivatives or partial derivatives, but unfortunately in possibly infinite-dimensional abstract vector spaces the distinctions are necessary, if only to avoid type errors.
  • In the definition of the partial derivative of $M$ at $f$ with respect to $g$ (so with a range inside a vector space $Y$) we do not take the norm or absolute value of that expression; it should be the straight-up limit $\lim_{\epsilon \to 0} \frac{M(f + \epsilon g) - M(f)}{\epsilon}$. The claim that the limit exists does depend on the topology of $Y$ and therefore on the norm, though (a quick numerical check follows below this list).
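
For concreteness, here is a numerical check of that limit for an example functional of my own choosing, $M(f) = (\int_0^1 f, \int_0^1 f^2)$ on $C[0,1]$ (any similar vector-valued $M$ works); it makes clear that the limit is a vector in $Y$ rather than a number:

```python
import numpy as np

# Directional ("partial") derivative of M at f with respect to g:
#   lim_{eps -> 0} (M(f + eps*g) - M(f)) / eps
# Example functional M: C[0,1] -> R^2, M(f) = (int f, int f^2), whose
# directional derivative at f in direction g is (int g, 2*int f*g) --
# a vector, which is why the definition cannot take an absolute value.
x = np.linspace(0.0, 1.0, 10_001)

def M(f):
    return np.array([np.trapz(f, x), np.trapz(f ** 2, x)])

f = np.sin(x)   # base point (an arbitrary choice)
g = x ** 2      # direction (an arbitrary choice)

eps = 1e-6
numeric = (M(f + eps * g) - M(f)) / eps
exact = np.array([np.trapz(g, x), 2 * np.trapz(f * g, x)])

print(numeric, exact)  # the two vectors agree to about 1e-6
```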

Also there are a lot of discontinuous linear maps out there. A textbook example is the vector space of polynomials, interpreted as functions on the closed interval $[0,1]$ and equipped with the supremum norm. The derivative map is not continuous, and you can verify this directly by searching for a sequence of functions that converges to 0 whose image does not converge to 0.
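
For concreteness, one standard choice of such a sequence (not the only one) is $p_n(x) = x^n/n$; a quick numerical check of the sup norms:

```python
import numpy as np

# p_n(x) = x^n / n on [0, 1]: in the sup norm ||p_n|| = 1/n -> 0,
# but the derivatives p_n'(x) = x^(n-1) all have sup norm 1, so the
# derivative map on (polynomials, sup norm) is not continuous.
x = np.linspace(0.0, 1.0, 10_001)

for n in (1, 10, 100, 1000):
    p = x ** n / n
    dp = x ** (n - 1)
    print(n, p.max(), dp.max())  # sup|p_n| shrinks like 1/n, sup|p_n'| stays 1
```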

A Kernel of Truth: Insights from 'A Friendly Approach to Functional Analysis'

Personally I did the exact opposite, and found that very refreshing. Whenever I ran into a snippet of applied functional analysis without knowing the formal background it just confused me.
