Any ideas on what may have caused companies to trend towards short-termism?
Hi there, hope this is the right place to post this. I've written an article related to Progress Studies and want to post it on LW, but before doing so I'd like to have someone read it and provide feedback. Would anyone here be interested in doing so?
On the question of subsidization and liquidity, has anyone considered investing the money locked into a prediction market so that outcomes aren't zero-sum? I imagine prediction markets like PredictIt already invest the funds sunk into their markets in some manner, but those returns/losses aren't accessible to bettors.
Currently, the money placed into a prediction market just idles there from the perspective of bettors.
Suppose that instead of distributing earnings/losses based solely on the market outcome, the money was placed into high-yield savings in the meantime. At the resolution of the market, the payout would include not only the money put into the market but also any interest earned. On the flip side, investment losses could erode winnings.
Of course, this raises the possibility that markets with major real-world repercussions could be mispriced, reflecting concerns over investment losses. But if anything, that teases out more useful information from the market. It also erases the opportunity cost of storing money in a betting market rather than in other financial markets.
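The settlement mechanic described above can be sketched in a few lines. This is a toy model with hypothetical bettors and numbers, assuming winners split the entire escrowed pool (principal plus accrued interest, or minus losses) pro rata by stake:

```python
def settle_market(stakes, outcome, annual_rate, years):
    """Settle a two-outcome prediction market whose escrowed funds were
    invested while the market was open.

    stakes: {"yes": {bettor: amount, ...}, "no": {...}}
    annual_rate: return earned on the pooled stakes; a negative rate
                 models investment losses eroding winnings.
    """
    principal = sum(amt for side in stakes.values() for amt in side.values())
    pool = principal * (1 + annual_rate) ** years  # principal + interest
    winning_side = stakes[outcome]
    winning_total = sum(winning_side.values())
    # Winners split the whole (grown or shrunken) pool pro rata by stake.
    return {bettor: pool * amt / winning_total
            for bettor, amt in winning_side.items()}

payouts = settle_market(
    {"yes": {"alice": 60.0, "bob": 40.0}, "no": {"carol": 100.0}},
    outcome="yes", annual_rate=0.04, years=1,
)
# With $200 escrowed at 4% for a year, the "yes" side splits $208.
```

Note that in this sketch the investment return is socialized across the whole pool, so a bettor's exposure to the investment is proportional to their stake, not to their odds.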
It may be worth mentioning that increasing national savings rates can have some unintended consequences.
Can't speak for Jason, but maybe I can change your mind. IMO, a case for progress can be made pretty simply to anyone who cares about the welfare of people living today and in the future. Assuming that, I'll make two observations.
"Try not to make things worse than they were before" has been a classic argument made against nuclear power for decades. Maybe with fewer regulatory constraints, nuclear power could provide more energy for a lower cost than fossil fuels, but is it really worth the tail risk of nuclear proliferation or catastrophic meltdowns? Society decided it wasn't and now we find ourselves in a slow motion global catastrophe while "good enough" living standards remain fundamentally out of reach for most of the world. Perhaps renewables will save us from this mess, but surely not if technological progress were to end today.
That's not to say that all technological progress is good. Asbestos having some really cool insulating properties doesn't mean it was a net benefit. But technological progress in general is desirable if you wish to keep present "good enough" living conditions from deteriorating and want to make such standards attainable for the whole world.
Technological progress isn't just a chance to do more, it's also often a chance to pivot from one resource to another so as to avoid depletion. Ultimately, freezing it won't insulate present society from the risk of things getting worse. On the contrary, halting progress condemns our society to a slow rot.
No, it wouldn't. TFP is, in a sense, a lagging indicator. It captures the economic benefits of technological progress but does not evaluate emerging technologies which have yet to make an economic imprint. That said, no AI I'm aware of that presently exists is remotely comparable to human-level AI. Level 5 self-driving doesn't even exist yet, and once the growth in computational power devoted to AI falls back in line with Moore's Law, the field seems due for a slowdown.
If AlphaFold 2 is as accurate as its creators have claimed, it undoubtedly represents an enormous technical leap. However, it remains to be seen how regulatory constraints and IP laws will erode its value. mRNA vaccines also represent a huge advancement, but were it not for COVID-19 lowering normal regulatory barriers (and providing a deluge of capital), they would still be a decade away, despite most of the fundamental technology already being ready.
With or without advanced protein folding simulations, so long as we remain in an environment where it costs billions to get a novel medical treatment to mass market, there's little doubt the full potential of this breakthrough will not be realized anytime soon. The question is how much progress will remain possible working within these constraints. I still expect it will aid in numerous future medical breakthroughs, but I dunno about it unilaterally ushering in a new era of progress.
TFP doesn't mean productivity per worker. It's designed to identify economic growth which can't be attributed to increases in labor or to capital intensification, i.e., technological progress applied to make an economy more efficient. Advances in automation should be captured under such a measurement.
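Concretely, TFP is typically measured as the Solow residual: assuming a Cobb-Douglas production function Y = A·K^α·L^(1−α), whatever output growth isn't explained by capital K or labor L is attributed to the residual A. A minimal sketch with hypothetical figures:

```python
def tfp(output, capital, labor, alpha=0.3):
    # Solow residual under Cobb-Douglas: Y = A * K^alpha * L^(1-alpha),
    # so A = Y / (K^alpha * L^(1-alpha)).
    return output / (capital ** alpha * labor ** (1 - alpha))

# Same capital and labor inputs, higher output: the residual A rises,
# attributing the gain to technology/efficiency rather than to inputs.
a0 = tfp(output=100.0, capital=300.0, labor=50.0)
a1 = tfp(output=110.0, capital=300.0, labor=50.0)
```

Since inputs are held fixed, the residual rises exactly in proportion to output here, which is why automation that raises output without raising measured labor or capital shows up in TFP.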
The same trend can be found in every country that was developed by the '70s. Britain is simply a particularly good example because of the amount of record keeping it performed in the 18th and 19th centuries compared to other countries.
However, just looking at data from the 20th century onward, accessible for any developed country, growth at the technological frontier has slowed tremendously since the 70s. Jason Crawford already aggregated a lot of data pertaining to the slowdown here. Long story short, not much technological progress has been made outside of computing in decades.
No doubt very significant advances in AI have occurred within the past decade or so. AlphaFold practically "solving" the problem of protein folding, for example, is a hopeful glimmer of technological progress and the promise of artificial intelligence.
However, it remains an open question how far AI will advance before it runs out of track, because it does appear to be approaching a wall. OpenAI observes that the rate at which added computational power is being supplied to create ever more advanced AI models is far outstripping Moore's Law: it doubles every 3.4 months. This can't be sustained for much longer.
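To see how extreme that gap is, a bit of rough arithmetic, assuming Moore's Law corresponds to a doubling roughly every 24 months:

```python
def growth_factor(months, doubling_period_months):
    # Total multiplication over `months` given a fixed doubling period.
    return 2 ** (months / doubling_period_months)

# Over two years:
ai_compute = growth_factor(24, 3.4)  # OpenAI's observed 3.4-month doubling
moore = growth_factor(24, 24.0)      # Moore's Law, ~2-year doubling
# AI training compute grows by a factor of ~130x while transistor
# density merely doubles, so the trend must be riding on spending
# growth, not hardware improvement alone.
```

That gap is the crux of the sustainability argument: budgets can't keep multiplying by two orders of magnitude every couple of years.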
Meanwhile, many of the advancements in the quality of the actual algorithms utilized seem to be ephemeral. Numerous studies have discovered that many of the actual models used by AI today aren't objectively better than those which already existed years ago.
Given that the quality of models seems to be improving relatively slowly and the brute-force method of adding more computational power isn't sustainable, another AI winter is well within the cards.
To drag civilization out of technological stagnation, AI doesn't need to reach human level, but it does need to be able to do much more than it can today. Enabling Level 5 autonomous vehicles would probably be a feat on the same scale as the triumphs of the 20th century, but so far AI has continued to fail to deliver full self-driving, and it isn't guaranteed to manage that before a winter hits.