Google has had a competing product in Google Hangouts for a while, but it's not as high quality.
Koko the gorilla had partial language competency.
AFAICT this is highly disputed. Many people think that her handlers had an agenda, and that the purported examples of her combining words were her randomly spamming sign language to get treats. Raw data was never released, and no one was allowed to interact with Koko, or observe the handlers interacting with her, except the handlers themselves.
It seems plausible that the purported examples are a case of selective reporting, wishful thinking, and the Clever Hans effect.
I'm not super familiar with the competitive math circuit, but my understanding is that this is part of it: people are given a hard problem and, either individually or as a team, solve it as quickly as possible.
An increased sense of relatedness seems like a big one missed here.
Something else in the vein of "things EAs and rationalists should be paying attention to with regard to Corona."
There's a common failure mode in large human systems where one outlier causes us to create a rule that produces a worse equilibrium. In The Personal MBA, Josh Kaufman talks about someone taking advantage of a company's "buy any book you want" perk - so the company responds by making it impossible for anyone to get free books.
This same pattern has happened before in the US, after 9/11: we created a whole bunch of security theater that caused more suffering for everyone, and gave the government far more power and far less oversight than is safe, because we over-reacted to prevent one bad event without considering the counterfactual, invisible things we would be losing.
This will happen again with Corona: things will be put in place that are maybe good at preventing pandemics (or worse, at making people think they're safe from pandemics), but that create a million trivial inconveniences every day, which add up to more strife than they're worth.
These types of rules are very hard to repeal after the fact because of absence blindness - someone needs to do the work of calculating the cost/benefit ratio BEFORE they get implemented, and then build a narrative convincing enough to counter what seem like obvious, common-sense measures given the climate and devastation.
Reality actually exists and has properties you can determine through study and experimentation.
Conclusions follow from their premises and it’s unreasonable to expect a plurality of truths.
Our universe is consistent
your understanding of the pieces should fit together.
Why? This seems like a classic mistake of naive rationalism. Models exist for reasons, and brains have limits. Depending on my use case, it may make sense for me to have a set of heuristics I know don't fit together, because they're useful.
To use an example from elsewhere in the post, tensile strength is a leaky abstraction compared to something like the universal wave function, but the former is much more useful for building bridges. Meanwhile, a firefighter is going to be using heuristics like "close to breaking" and "how much weight it can bear" or even a vague feeling of "danger" that is only a leaky abstraction of things like tensile strength.
Now, 'should' my understanding of all those pieces fit together? Not really; it depends on what I want to use those models for.
Like Cicero, he was doing philosophy at a time when philosophy meant living a better life. He found suggestions drawn from different systems of thought - ones that practically helped one live a better life - and helped form a foundation for Stoicism: a set of tools and heuristics for living better. "Should" he have created a set of principles that was completely logically consistent? Well, it might have helped him generate more. But in terms of a set of practical tools and thoughts that helped one live better, he did quite well curating in an eclectic style.
I did update from this quite significantly.
It depends on the size of the window. If schizophrenia shows up between 20 and 25 years later, then the effects of one year of quarantine get distributed over that 5-year window, and are much harder to detect above other fluctuations.
It was brought to my attention on LessWrong that depressions actually save lives.
Which would make it much harder to build a simple "two curves to flatten" narrative out of.
It's interesting because you would intuitively think this, but there is actually not-terrible evidence linking periods of economic growth to increased mortality.
Wow, that is fascinating. It does make the case harder to make, because you have to start quantifying happiness, depression, etc., and trading them off against lives. Much, much harder to simplify enough to make it viral. Updates towards capitalism being horrible.
Is non-profit funding really that inelastic in a depression?
It probably varies quite a bit by sector, and by where the funding comes from for different non-profits. In the case of AI safety, I think it's likely more inelastic than AI capabilities funding.