The main problem with crawlers is that their usage patterns don't match those of regular users. Most optimization effort is focused on the usage patterns of real users, so bots sometimes wind up using the site in ways that consume orders of magnitude more compute per request than a regular user would.
And Twitter has recently destroyed its API, I think? Which perhaps has the effect of de-optimizing the usage patterns of bots.
Right. From what I've seen, the people who support censoring misinformation are almost never doing it out of worry that they themselves will be misinformed.
I'm assuming dsj's hypothetical scenario is not one where GPT-6 was prompted to simulate an actor playing a villain.
It's a nice analogy, but it all rests on whether infinite evidence is a thing or not, and there aren't arguments one way or the other here. (Sure, infinite evidence would mean "whatever log odds you come up with, this is even stronger", but that doesn't rule out its being a thing.)
Like, how much evidence for the hypothesis "I'll perceive the die to come up a 4" does the event "OK, the die was thrown and I am perceiving it to come up a 3" provide? Or how much evidence do I have of being conscious right now, when there is something it feels like to be me? I think any answer other than infinity is just playing a word game.
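To make the die example explicit, here is the standard Bayesian update in log-odds form (nothing beyond the claim above):

```latex
\log \frac{P(H \mid E)}{P(\neg H \mid E)}
  = \log \frac{P(H)}{P(\neg H)} + \log \frac{P(E \mid H)}{P(E \mid \neg H)}
```

With $H$ = "I'll perceive a 4" and $E$ = "I perceive a 3", $P(E \mid H) = 0$, so the evidence term is $\log 0 = -\infty$: infinitely strong evidence against $H$, no matter what prior log odds you started from.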
Aiming for convergence on truth. I guess it's true this might lead to a failure mode where one seeks convergence more than anything else. But taken literally, it should not discourage exploring wild new hypotheses: if you are both equally wrong, then by growing your uncertainty you both get nearer to converging on the truth.
True. Still, using 1960s prices with current production assumes a flat 1960 demand curve, right? It's like using off-season avocado prices, when no one buys them, to compute real GDP during avocado season.
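A toy version of the avocado point (all numbers made up, just to show the direction of the error):

```python
# Valuing in-season output at off-season prices.
# All numbers are hypothetical, for illustration only.
offseason_price = 0.50         # price per avocado when almost nobody buys
inseason_price = 2.00          # price per avocado during avocado season
inseason_quantity = 1_000_000  # avocados actually sold in season

actual_value = inseason_price * inseason_quantity        # 2,000,000
mismeasured_value = offseason_price * inseason_quantity  #   500,000

# Valuing current production at prices from a period with very different
# demand understates (here, by 4x) the value of what was actually produced.
print(actual_value / mismeasured_value)  # 4.0
```

The same mismatch shows up when today's production is valued at 1960 prices: the prices came from a world with a different demand curve.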
Maybe the UK's case curve has flattened after the end of the spike because of asymptomatic people who get tested for whatever reason and turn out positive, for the reason you state? It doesn't feel likely (perhaps it's just the other Omicron subvariant giving it a push, or just the "control system" of people relaxing?). Hospital admissions continued to go down, as one would expect if this were the case, though the data at ourworldindata is a few days behind.
I know it's unlikely, but if it was indeed Omicron, its faster generation time would also make its numbers drop faster once R was pushed under 1.
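A quick sketch of why generation time matters, using the simple exponential approximation r = ln(R) / Tg for the epidemic growth rate (the generation-time values below are hypothetical, chosen only to show the direction of the effect):

```python
import math

def weekly_decline_factor(R, generation_time_days):
    """Factor by which cases shrink per week, assuming simple
    exponential dynamics with growth rate r = ln(R) / Tg."""
    r = math.log(R) / generation_time_days
    return math.exp(r * 7)

# Same R < 1, two hypothetical generation times:
slow = weekly_decline_factor(0.8, 5.0)  # longer generation time
fast = weekly_decline_factor(0.8, 3.0)  # shorter generation time

# The shorter the generation time, the faster the drop for the same R.
print(f"{slow:.2f} vs {fast:.2f}")  # → "0.73 vs 0.59"
```

So at the same R = 0.8, a variant with a shorter generation time runs through more transmission cycles per week, and its case counts fall noticeably faster.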
The largest models should be expected to compress less than smaller ones, though, right?