How much harder is it to revive a neuro-only cryonics patient?

I would point out that it is in fact (a lot) easier to heal a rabbit kidney of the toxins introduced to vitrify it than it is to recreate a rabbit kidney from scratch.

A Healthy News Diet

That's surprisingly low for LibGen. Are you perhaps looking for fiction and forgetting to search the fiction database? It's separate from the nonfiction one (fragmented databases are one of a number of really bad usability issues LibGen has).

What trade should we make if we're all getting the new COVID strain?

Markets can remain irrational, and inflated by monetary/fiscal policy or consumer repression, longer than your options can remain unexpired. Among other things, the real world is far more imaginative than any trader. You might as well ask, 'how did markets go up with the first coronavirus?' Better to bet on large changes through the VIX or something similar than on their exact direction & timing...

What trade should we make if we're all getting the new COVID strain?

Perhaps I'm missing something, but it seems like the obvious strategy is to just buy volatility-related instruments: you don't know whether the market is going to go up or down, but if the new strains behave anything like their infectiousness would predict, and the vaccine rollouts continue their shambolic slow-walk, there will be extreme movements (even if the Fed or USG then steps in) in financial instruments of all sorts.

If you can't find anyone willing to sell you bets on stability at reasonable prices, then it would seem that the general import of the strains - 'the future just got very uncertain and potentially even worse than everyone feared' - is already priced in.

2020 AI Alignment Literature Review and Charity Comparison

OpenAI was initially funded with money from Elon Musk as a not-for-profit.

This is commonly said on the basis of his $1b pledge, but AFAICT Musk wound up contributing little or nothing before he resigned ~2018. If you look at the OA Form 990s, Musk is never listed as a donor, only a board member; the only entities that are listed as contributing money or loans are Sam Altman, Y Combinator Research, and OpenAI LP.

Extrapolating GPT-N performance

Finally, the scramble task is about shuffling around letters in the right way, and arithmetic is about adding, subtracting, dividing, and multiplying numbers. The main interesting thing about these tasks is that performance doesn’t improve at all in the beginning, and then starts improving very fast. This is some evidence that we might expect non-linear improvements on particular tasks, though I mostly interpret it as these tasks being quite narrow, such that when a model starts getting the trick, it’s quite easy to systematically get right.

To beat my usual drum: I think the Arithmetic/Scramble task curves are just due to BPEs. The 'trick' here is not that scrambling or arithmetic are actually all that difficult, but that the model needs to memorize enough of the encrypted number/word representations to finally crack the BPE code; once it has done that, the task itself is straightforward. The 'breakthrough', so to speak, is seeing through the scrambled BPE representations. I predict that tricks like rewriting numbers as individual digits, or BPE-dropout to expose all possible tokenizations, or better yet, character-level representations, would show much smoother learning curves, and that much smaller models would achieve GPT-3-175b performance.
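To make the BPE point concrete, here is a toy sketch (the vocabulary and the greedy longest-match rule are invented for illustration, not GPT-3's actual tokenizer): the same digit can be absorbed into different multi-digit tokens depending on its neighbors, so the model never sees a stable per-digit representation, whereas character-level tokenization is trivially consistent.

```python
def greedy_bpe(text, vocab):
    """Greedy longest-match tokenization over a fixed vocabulary,
    a toy stand-in for a trained BPE merge table."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible substring first, as BPE merges would.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # Fall back to a single character if no merge applies.
            tokens.append(text[i])
            i += 1
    return tokens

# Hypothetical vocabulary: some frequent digit pairs/triples got merged.
vocab = {"231", "23", "17", "0", "1", "2", "3", "7", "8"}

print(greedy_bpe("2317", vocab))  # -> ['231', '7']
print(greedy_bpe("1723", vocab))  # -> ['17', '23']
print(list("2317"))               # character-level: ['2', '3', '1', '7']
```

Note that the digits 2, 3, 1, 7 appear in both inputs, yet the tokenizations share no structure; arithmetic over such tokens requires first memorizing how numbers decompose into merges, which is the "code-cracking" step above.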
