Pandemic Prediction Checklist: H5N1
Pandemic Prediction Checklist: Monkeypox
Correlation may imply some sort of causal link.
For guessing its direction, simple models help you think.
Controlled experiments, if they are well beyond the brink
Of .05 significance will make your unknowns shrink.
Replications show there's something new under the sun.
Did one cause the other? Did the other cause the one?
Are they both controlled by what has already begun?
Or was it their coincidence that caused it to be done?
Hm. It seems like the increase in insolvency risk from a popped AI bubble would partly depend on the extent to which these institutions had sold other assets or used leverage to buy equity in, or lend to, AI companies and the suppliers most dependent on AI company business.
My understanding is that the great financial crisis resulted from extremely leveraged investments in mortgages due to lenient rules and a perception that American mortgages were extremely reliably paid. I don't know to what extent important institutions may be overleveraged or overweighted in their investments in AI.
But my modal prediction is that an AI bubble popping would cause hedged AI investors to become less valuable without becoming insolvent, a bunch of distressed assets to be purchased for low, low prices by those who kept their powder dry, and a bunch of cancelled orders and perhaps layoffs and restructuring by suppliers who expanded to meet the temporary surge in demand from AI companies. That could cause turmoil, but I really don't have a sense of the extent to which the American or global economy has reshaped itself to build out AI. It's particularly hard to know because, with Trump's tariffs, there has been so much coincident market turmoil that it's difficult to tell how much is AI and how much is tariffs/the end of ZIRP (as others have pointed out before).
What's your reasoning or assumptions for why it would tell you ~nothing to witness name recognition increasing like that? I'm assuming it's not just because "visible confusion" isn't a perfect proxy for lack of name recognition (and vice versa).
I guess I'm more bullish than you seem to be on vibes as a way to gauge name recognition. So here is a toy model to explain why. ChatGPT was released in Nov. 2022, so prior to that we can say it had approximately zero name recognition.
I've never read a survey on ChatGPT name recognition. I have only my anecdotal evidence to go on. But I am extremely confident based on what I think can fairly be called "vibes" that ChatGPT has massive name recognition, at least in America, compared to November 2022. If there was a reliable way to test this, I'd be willing to wager big money on it, provided I could feel confident in my ability to truly pin a number on what I mean by "massive name recognition." Do you agree with this model but feel bearish on vibes more generally? Are you skeptical of my perception that ChatGPT's name recognition has exploded since its release?
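To put rough numbers on that toy model, here is a minimal sketch (the recognition rates and sample size are illustrative assumptions, not survey data) of why casual observation is more than enough evidence to detect a jump from near-zero to massive name recognition, even though it could never resolve a small shift:

```python
# Toy model sketch -- the recognition rates and sample size below are
# illustrative assumptions, not survey data.
from math import comb

def prob_at_least_k_recognize(p, n, k):
    """P(at least k of n people recognize the name, given true recognition rate p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 30   # casual conversations / observations over a couple of years (assumed)
k = 15   # at least half of them clearly know what ChatGPT is

print(prob_at_least_k_recognize(0.01, n, k))  # ~1e-22 under pre-2022 recognition
print(prob_at_least_k_recognize(0.60, n, k))  # ~0.9 under "massive" recognition
```

Under the near-zero rate, seeing half of a casual sample recognize the name is essentially impossible; under a "massive" rate it's the expected outcome. The same arithmetic also shows why vibes are hopeless for detecting something like a 10% relative increase.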
I'm actually uncertain about whether an AI bubble would trigger a recession (a period of broad decline in economic activity). What I have seen reported is that the economy is generally stagnant now, except for AI, and that American jobs may have been declining for months already, contrary to published statistics, motivating a recent rate cut by the Fed. If true, would an AI bubble really have substantial broader ripple effects outside the AI sector, and would those ripples even necessarily be negative?
In particular, I have been suspecting that the biotech winter that's been going on for a few years is partly due to routing of investor money into the AI craze. Maybe that money just ends up funding other economic activity instead? I'm really unsure of how to think about this and am quite curious.
Why does this make you more bearish on vibes? The reason I ask is that I think of "vibes" as aggregating over a much wider (but siloed) social network and a lot more sources of information. It would be interesting to know to what extent rigorous high-n survey methods would reveal discrepancies between assumed and actual public perceptions on this and other pressing social issues.
Do you think these procrastinators are making a conscious choice, or more that they're driven by a panic that only sets in at the last minute?
Literal unfinished bridges provide negative value to all users, and stand as a monument to government incompetence, degrading the will to invest in future infrastructure.
Short bike lanes provide positive value to at least some users. They stand as a monument to the promise of a substantial, interconnected bike grid. They incrementally increase people's propensity to bike. They push the city toward a new, bike-friendly equilibrium. The same is true for mass transit generally when the components that have been built work well. Portland ought to be thinking about its population-level utility over the long run, not your personal utility now.
As such, I think that "unfinished bridges" is a misleading metaphor for the S-curve phenomenon you are talking about, and "beware" is probably not the right take when it comes to incrementally moving along S-curves. In fact, when the S-curve is the payoff for incremental steps toward a desired goal, "take heart" seems more correct. Incremental moves along an S-curve bring you closer to saturation faster than it would seem judging by linear extrapolation of the marginal gains of each incremental step. I added labels to Tiuto's plot that reflect this change in interpretation.
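To illustrate with an assumed logistic curve (the midpoint, steepness, and step count are arbitrary choices for the sketch, not anything taken from Tiuto's plot): in the early, convex region of an S-curve, the payoff of several incremental steps is far larger than a linear extrapolation of the first step's marginal gain would suggest.

```python
# Sketch with an assumed logistic curve -- midpoint, steepness, and step
# count are arbitrary illustrative choices, not taken from Tiuto's plot.
import math

def s_curve(x, midpoint=10.0, steepness=1.0):
    """Logistic saturation curve with values in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

steps = 5                                  # e.g. five short bike-lane segments
first_gain = s_curve(1) - s_curve(0)       # marginal gain of the first step
linear_guess = steps * first_gain          # naive linear extrapolation
actual_gain = s_curve(steps) - s_curve(0)  # what the S-curve actually delivers

print(f"linear extrapolation: {linear_guess:.5f}")
print(f"actual gain:          {actual_gain:.5f}")  # roughly 17x larger here
```

Past the inflection point the comparison reverses, so the sketch only speaks to the early part of a buildout, which is where a short bike lane sits.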
Edit: There are multiple reasons why all the incremental steps toward saturation of the S-curve aren't always taken at once. In the case of bike lanes, the availability of funds, land, and political will is sporadic, so incremental opportunities must be seized where possible over the course of years or decades. In other cases, there may be uncertainty about how to design the buildout to maximize the eventual level of saturation, uncertainty that gets resolved incrementally over time: think of a startup trying to find product/market fit, or a politician gauging the public's reaction to a sequence of appearances to determine which platform would maximize their vote share.
Very, very few people care at all about scientific accomplishments unless they directly affect somebody they personally care about, particularly themselves or their kids. The technical accomplishments you list are fundamental bio or medical innovations that have so far affected only a small number of individuals, so for the time being, virtually nobody will care about them. The reality is that turning the technical accomplishments you list into safe and effective medicines ready for doctors to give to patients has been extremely time-consuming, expensive, and limited in scope, or has not yet resulted in bona fide medical breakthroughs in humans.
In biology, innovations in earlier times (vaccines and antibiotics) were cheap, saved a lot more young people, and prevented more common and deadly illnesses. mRNA vaccines and gene editing are applicable to much smaller numbers of people, often in more distant countries, and often primarily benefit older people with comorbidities. You can see this in the tapering off of lifespan gains, the growing gap between lifespan and healthspan, and the rising cost of medical care.
You can also see it in the exceptions -- how GLP-1 agonists like semaglutide (Ozempic), which benefitted numerous young people in the first world in a very tangible way, provoked much more news coverage, popular awareness, and grassroots optimism about progress in this domain, as well as populist anger at high prices, limited availability, worries about side effects, and so on.
I think we will see similar excitement if substantial strides are made in bringing down IVF costs; improving fertility for women in their late 30s and beyond; the ability to predict and prevent, or terminate, pregnancies when the fetus is expected to show profound autism or other serious cognitive impairments; widespread deployment of xenotransplantation (pig kidneys transplanted into humans); drastically improved antipsychotics or treatments for substance abuse; and continued improvements to GLP-1 agonists and potentially other drugs performing related functions based on insights gained from the initial success.
We will also see improvements in the US health situation if Congress increases the number of funded residency positions so that we can expand the doctor workforce, as well as through the expansion of telehealth. But given everything that's going on, I think this is unlikely in the near term.
The big question about working memory (WM) training is whether it results in transfer -- better performance on tasks other than the trained WM task itself. Near transfer refers to tasks that are similar but not identical to the WM training task; far transfer refers to tasks that are quite different from it. Typically, studies find that WM training strongly boosts performance on the WM task and near-transfer tasks, but produces only weak far transfer.
I am curious about whether any gains in far transfer might be masked by test insensitivity, noise, overshadowing by learning effects, or interactions between the impacts of WM and learning rate. A good test needs to have its difficulty calibrated so that there is substantial variability in scores, with that variability influenced primarily by fluid intelligence. Test-retest reliability should be high after controlling for learning. Insofar as these conditions don't hold (the test is noisy or insensitive), it will be hard to detect real effects if they exist: there will be little signal, and what there is will be swamped by noise. If learning effects aren't controlled for adequately and are more important than the impact of transfer due to WM exercise, then this too could swamp any signal.
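To make the "little signal, swamped by noise" worry concrete, here is a minimal simulation sketch (the true effect size, sample size, and reliability values are assumptions for illustration, not estimates from the literature): a genuine but modest far-transfer effect shrinks toward noise as the outcome test's reliability falls, because measurement error inflates the variance of observed scores without touching the true effect.

```python
# Simulation sketch -- the true effect size, sample size, and reliability
# values are assumptions for illustration, not estimates from the literature.
import numpy as np

rng = np.random.default_rng(0)

def observed_effect(true_d=0.3, reliability=0.9, n=50, sims=2000):
    """Average observed Cohen's d for treated vs. control groups when the
    outcome test has the given reliability (true-score SD fixed at 1)."""
    error_sd = np.sqrt((1 - reliability) / reliability)
    ds = []
    for _ in range(sims):
        control = rng.normal(0.0, 1.0, n) + rng.normal(0.0, error_sd, n)
        treated = rng.normal(true_d, 1.0, n) + rng.normal(0.0, error_sd, n)
        pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
        ds.append((treated.mean() - control.mean()) / pooled_sd)
    return float(np.mean(ds))

for rel in (0.9, 0.7, 0.5, 0.3):
    print(f"reliability {rel:.1f}: observed d ~ {observed_effect(reliability=rel):.2f}")
```

With reliability around 0.9 the observed effect stays close to the true one; at 0.3 it has roughly halved, and with 50 subjects per group a study would usually fail to distinguish it from zero.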
One point in favor of the idea that these tests are adequately measuring signal is that they do find substantial improvements to WM itself as well as near-transfer tasks. This makes me think that psych tests of cognition are capable of picking up differences, and we're just not finding strong ones for far-transfer tasks.
Another point I'm uncertain of is the interaction between learning and task nearness. I would expect that WM and fluid intelligence are most influential in a) one's learning rate and b) one's maximum cognitive performance capacity. But insofar as learning effects are minimal -- which is a primary goal of psych studies attempting to measure pre- vs. post-training effects -- I'd expect that for subjects closer to the beginner end of the spectrum, even boosted WM and fluid intelligence might play only a limited role in far-transfer performance. By contrast, near transfer might be enhanced, because WM training is directly portable to the near-transfer task. Perhaps if WM training improves fluid intelligence, the effect could best be observed by measuring performance on tasks where the subject already has a high level of expertise, or where the subject is engaged in continuous study over time. Maybe this is already taken into account in these studies, but most of the ones I've seen look like they give people unfamiliar tests of cognitive function before and after the WM training and assess improvements while controlling for learning. Maybe we're going about measuring far transfer all wrong?
Given that this post was written post-election and US government actions are a key component of the prediction, it's odd to see no consideration of the character and priorities of the particular President and administration who will be presiding over this scenario should it come to pass. It's very hard for me to see a reason to expect any predictable response from this government.
Reasonable. I also don't expect that I could pick up on a 1.5x increase in name recognition over a year based on vibes. I didn't read closely enough to notice you were talking about a 10% increase, so sorry about the time waste.