In our use case, however, we're not talking about sending one large recording up to the server. Instead, we'd be sending batches of samples off perhaps every 200 ms. Many compression systems do better if you give them a lot to work with; how efficient is Opus if we give it such short windows?
One way to test is to break the input file up into 200 ms files, encode each one with Opus, and then measure the total size. The default Opus file format includes what I measure as ~850 bytes of header, however, and since we control both the client and the server we don't have to send any header. So I count for my test file...
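For a sense of scale, here's the arithmetic on why that header matters at 200 ms batches. The ~850-byte figure is the measurement above; the bitrate is my assumption (a typical voice-quality Opus target), not something from the test:

```python
# Rough arithmetic on per-batch container overhead (not Opus compression
# itself). Assumption: a voice-quality bitrate of around 32 kbps.
bitrate_bps = 32_000   # assumed Opus target bitrate
batch_ms = 200         # batch length from the discussion above
header_bytes = 850     # measured Ogg/Opus header size

# Payload bytes produced in one 200 ms batch at the assumed bitrate.
payload_bytes = bitrate_bps / 8 * (batch_ms / 1000)   # 800.0 bytes

# Fraction of each shipped batch that would be header if we kept it.
overhead = header_bytes / (header_bytes + payload_bytes)

print(payload_bytes)        # 800.0
print(round(overhead, 2))   # 0.52
```

Under these assumptions the header would be more than half of every batch, which is why stripping it (as the test does) is the right call.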
Based on my understanding of what you are building, this splitting is not a good model for how you would actually implement it. If you have a sender that is generating uncompressed audio, you can feed it into the compressor as you produce it and get a stream of compressed output frames that you can send and decode on the other end, without resetting the compressor in between.
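To make the streaming shape concrete, here's a minimal sketch. `FrameEncoder` is a stand-in of my own invention (real code would wrap libopus or a similar binding); the point it illustrates is that one encoder instance persists across batches, so there's no per-batch header and no state reset:

```python
class FrameEncoder:
    """Toy stateful encoder: real code would hold codec state here."""

    def __init__(self):
        self.frames_encoded = 0   # persistent state across batches

    def encode(self, pcm_frame):
        # A real encoder would return a compressed packet; we just tag
        # the frame index to show that state carries over between batches.
        self.frames_encoded += 1
        return b"frame-%d" % self.frames_encoded


def stream(batches):
    enc = FrameEncoder()          # created once, never reset
    for batch in batches:
        for frame in batch:       # e.g. 20 ms frames within a 200 ms batch
            yield enc.encode(frame)


# Two 200 ms batches of ten frames each share one encoder's state.
packets = list(stream([[b"pcm"] * 10, [b"pcm"] * 10]))
print(len(packets))   # 20
```

The receiver does the mirror image: one decoder instance, fed packets as they arrive.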
It seems to me that ordinary people are usually obsessed with scandals and drama, so I think Occam's Razor says that journalists are also obsessed with them and report on them because that's what they want to do. That is, I don't think all journalists are secretly wishing they could ignore Donald Trump but regrettably recognize that their incentives point in another direction -- I think they enjoy reporting on Donald Trump, so they do it.
Small correction -- AltspaceVR can be used without a VR headset using normal FPS controls, just like Hubs.
Hey, I'm not sure what is wrong with your WiFi, but something fixable is probably wrong. I can run speedtest.net (on my roughly 70/10 connection) while pinging my router (an Archer C7 running OpenWRT which is roughly across the living room -- I have a home network with one AP at each end of the apartment) over WiFi and the ping time never exceeds 4ms. I live in an apartment with dozens of other visible networks.
You might want to check that it's using 802.11ac with 5 GHz on an uncrowded channel, check that the signal strength looks good, make sure your client network drivers are up to date, check whether your router hardware or software is known for poor performance under load or for bufferbloat, etc.
(edit: I notice you say that the router is three rooms away -- my guess is that the signal strength is just really bad and if you put an access point in between you could make it much better.)
Mea culpa, that's more of a condemnation than I thought.
It seems skeptical only of cloth masks as compared to surgical masks, which isn't really very interesting to me in the current circumstances, since most people don't have access to surgical masks.
The data here doesn't give a clear idea of how the transitions from 1 to 2, or from 2 to 3, are proceeding. Nonetheless, it may offer some clues. So first, let's backtrack and think: let's say California going to level 2 or level 3 did in fact effectively stop coronavirus in its tracks. What should we see?

Ideally, we should see the number of people with coronavirus getting the test drop a lot. However, that doesn't necessarily mean that the total number of people getting the test drops, because many people who don't have the disease may also start getting tested, causing the total number of people getting tested to increase. So, more accurately, we should see one of these:

- A drop in the incremental number of tests each day.
- A drop in the confirmed positive rate on tests (but this metric is available at a further lag of 5 to 7 days).
I think it could take longer before either of these reflects the change in true cases. Here's an argument. Suppose:

- Current testing policy is declining to test many symptomatic people due to lack of capacity. I believe this is true (high-risk people, essential workers, and known contacts of existing cases are being prioritized).
- As test availability improves, testing policy will change to test broader categories of symptomatic people, up to the testing capacity.
- The number of true cases is substantially higher than the number of other ailments that look basically the same. As a result, P(positive test | symptomatic) remains high and doesn't change much even if you halve the number of true cases. Variance in testing policy and test accuracy will probably drown out the change.

If this is right, then as long as the number of true cases remains above the threshold of testing capacity, the metrics you mentioned will show roughly the same numbers, no matter whether we're 10 times above capacity or 1000 times above capacity. So if we're way above capacity right now, we won't see a decrease in true cases show up in those metrics for a while.
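A toy model of the argument above, with made-up numbers chosen only to illustrate the mechanism: when true cases far exceed testing capacity, both daily test counts and the positive rate barely move even if the true case count halves.

```python
# Toy model: tests go to symptomatic people up to capacity, and positives
# fall out in proportion to the share of true cases among the symptomatic
# pool. All numbers are illustrative, not real data.

def daily_metrics(true_cases, lookalike_cases, capacity):
    symptomatic = true_cases + lookalike_cases
    tests_run = min(symptomatic, capacity)
    positive_rate = true_cases / symptomatic
    return tests_run, positive_rate

# Assume 10k tests/day of capacity and 5k/day of lookalike illnesses.
before = daily_metrics(true_cases=1_000_000, lookalike_cases=5_000, capacity=10_000)
after = daily_metrics(true_cases=500_000, lookalike_cases=5_000, capacity=10_000)

print(before)   # (10000, ~0.995)
print(after)    # (10000, ~0.990)
```

Halving true cases leaves the daily test count unchanged and shifts the positive rate by well under a percentage point, so neither metric registers the improvement until cases fall near capacity.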
Here's a preprint published on March 10th testing how long coronavirus can last on a variety of surfaces.
What a wonderful page to come across -- I had misunderstood this for a decade.
Wow, this post really got me thinking. Dealing with this kind of pervasive filtering seems super important and also difficult. Thanks for writing it.
Your proposal that you can clue other people into your filtering mechanism seems hard in practice. Any effort in this vein amounts to saying that you suspect the consensus is misleading, which implies that non-consensus beliefs are more likely to be true, and people can pick up on this. I tried to come up with ways to express "this is cherry-picked, but" without triggering that, and couldn't find any that seemed plausible.
Approaches like Raemon's, where you say "I'm just never going to talk about controversial thing X" seem mentally hard to update on -- in a world where there are a million people who say "I'm never going to talk about X" and a thousand who are constantly presenting cherry-picked evidence about X, it's very difficult for my mind to interpret the filtering disclaimers as object-level evidence about X that can fight with the evidence provided by the cherry-pickers.