France had a military coup in 1958 followed by six months of dictatorship. What threshold had France failed to pass in 1958 that would disqualify it as a full democracy? Does the Dictator's Handbook actually say this?
Did you click through from Paul's LW post to his blog? He gives a proof that a reversible computer can implement a PSPACE algorithm with only polynomially many erasures, and thus only polynomially much energy consumption, at the cost of running a little longer, which is hardly noticeable compared to the exponential time already required. But he also provides context that I suspect you need.
Right, the point is that a reversible PSPACE machine appears physically realizable, while currently existing computers could not actually run for the exponential time necessary to compute PSPACE problems, because they would also require exponentially much free energy.
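To make the energy point concrete, here is a back-of-envelope sketch (my own illustration, not from Paul's post) using the Landauer limit: if erasing one bit costs at least kT·ln 2, then an irreversible machine erasing a bit per step over exponentially many steps needs exponential energy, while a reversible simulation with only polynomially many erasures does not. The choice of n² as the "polynomial" is an arbitrary stand-in.

```python
# Back-of-envelope Landauer-limit comparison (illustrative assumptions:
# room temperature, one bit erased per irreversible step, n^2 erasures
# standing in for "polynomially many").
import math

k_B = 1.380649e-23                 # Boltzmann constant, J/K
T = 300.0                          # room temperature, K
landauer = k_B * T * math.log(2)   # minimum energy to erase one bit, ~2.9e-21 J

n = 100  # illustrative problem size

# Irreversible machine: one erasure per step over 2^n steps.
irreversible_joules = (2 ** n) * landauer

# Reversible simulation: erasures polynomial in n.
reversible_joules = (n ** 2) * landauer

print(f"irreversible: {irreversible_joules:.3e} J")
print(f"reversible:   {reversible_joules:.3e} J")
```

Even at n = 100 the irreversible run needs gigajoules at the thermodynamic floor, while the reversible one needs a negligible fraction of a joule; the gap only widens with n.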
It took 10 years for mass residential refrigeration to lead to the use of CFCs. It took another half-century to detect atmospheric CFCs and the damage they were causing.
This makes it sound like an important point in the timeline is that substantial use of CFCs can be dated to c. 1930. That seems fundamentally wrong to me.
Mass introduction of modern residential refrigeration took place from 1914 to 1922.
What do you mean? Cooling food? I think that is a rounding error: a single wall AC unit has 10x as much freon as a refrigerator. So I think the bulk of the freon came later, and there was not so long a delay from deployment to discovery. But it should be possible to look up actual freon production figures.
I think the growth of air conditioning was constrained by the cost of electricity, not freon. It's hard for me to imagine electricity cheap and widespread enough to allow refrigerators without, within a few decades, becoming cheap enough to cool houses. But maybe I can imagine a 19th century with Einstein refrigerators yet without electricity. I don't think that would have destroyed the ozone layer.
I think talking about Google/DeepMind as a unitary entity is a mistake. I'm gonna guess that Peter agrees, and that's why he specified DeepMind. Google's publications identify at least two internal language models superior to LaMDA, so their release of Bard based on LaMDA doesn't tell us much. They are certainly behind in commercializing chatbots, but that is a weak claim. How DeepMind compares to OpenAI is difficult to judge. Four people going to OpenAI is damning, though.
I assume you know this, but to be clear, OpenAI has already used pirated books. GPT-3 was trained on "books2", which appears to be all the text on libgen (and pretty much all the books on libgen have been through OCR). It was weighted the same as the Common Crawl, lower than Gutenberg or Reddit links. This seems to answer your second question: they will likely treat pdfs on libgen the same as pdfs on the open web. If you're asking whether they will train the model on the pixels in these pdfs, which might make up for losses from OCR, I have no idea.
Since you have to manually activate plugins, they don't take any context until you do so. In particular, multiple plugins don't compete for context and the machine doesn't decide which one to use.
Please read the documentation and the blog post you cited.
Someone just told me that the solution to conflicting experiments is more experiments. Taken literally, this is wrong: more experiments just means more conflict. What we need are fewer experiments. We need to get rid of the bad experiments.
Why expect that future experiments will be better? Maybe if future experimenters read the past experiments, they could learn from them. Well, maybe, but if you read the experiments today, maybe you could figure out which ones are bad today. If you don't read the experiments today and don't bother to judge which ones are better, what incentive is there for future experimenters to design better experiments, rather than just accumulating conflict?