Personally, I think there is a major problem with how productivity is measured.
productivity = production/time
But here is the major flaw: how is production currently measured?
It is measured by how much money you sell that production for!
So basically as it stands:
productivity = (money made)/time
Imho that way of measuring productivity is deeply flawed and drastically undervalues actual production.
To take a simple example, imagine you create (with thousands of other people) an OS like Linux that powers billions and billions of computing devices throughout the entire world (and even in space), and you give that OS away for free:
Your productivity for this Linux production is measured as zero (0) because you didn't make any money from selling it directly. That is completely absurd, because you actually produced something extremely useful and transformative on a huge scale. There are many other examples like this: free or very cheap things are measured as having very low productivity not because they are useless but because they are (or have become) free or very cheap.
To take another example, let's say you have speculated on the markets, got lucky, and made a huge amount of money very quickly: you haven't really produced anything, but your productivity is measured as huge!
So basically, imho, any argument based on how productivity is currently measured is completely flawed.
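To make the flaw concrete, here is a minimal arithmetic sketch (in Python, with purely illustrative made-up numbers) contrasting the money-based metric with a hypothetical use-based one for the free-OS example:

```python
# Hypothetical numbers, for illustration only.
hours_worked = 10_000             # time invested in building the free OS
revenue = 0                       # it is given away for free
devices_powered = 3_000_000_000   # rough scale of deployment (assumed)

# Money-based metric: productivity = (money made) / time
money_productivity = revenue / hours_worked
# -> 0.0: "zero productivity" despite the huge real-world impact

# Use-based metric: productivity = (units of value delivered) / time
use_productivity = devices_powered / hours_worked
# -> 300,000 devices powered per hour worked
```

The point is only that the two metrics diverge wildly for the same production; what the "right" unit of value would be is of course the hard open question.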
Please correct me if I am wrong, so that I am less wrong, thank you :)
I tend to intuitively strongly agree with James Miller's point (hence me upvoting it).
There is a strong case to be made that a TAI would tend to spook economic agents who create products/services that could easily be done by a TAI.
For an analogy, think about a student deciding what xe (I prefer the neopronoun "xe" to "singular they", as it is less confusing) wants to study for xir future job prospects. If that student thinks a TAI might do something much faster/better than xem in the future (translating one language into another, accounting, even coding, etc...), that student might be spooked into thinking "oh wait, maybe I should think twice before investing my time/energy/money into studying these." So basically a TAI could create a lot of uncertainty/doubt for economic actors, and in most cases uncertainty and doubt have an inhibiting effect on investment decisions and hence on interest rates, don't they?
I am very willing to be convinced of the opposite, and I see a lot of downvotes for James Miller's hypothesis but not many people so far arguing against it.
Could someone who downvoted/disagrees with that argument kindly make the case against James Miller's hypothesis? I would very much appreciate that, and I might change my mind as a result, but as it stands I tend to strongly agree with James Miller's well-stated point.
You make good/interesting points:
1) About AGI being different from ASI: basically this is the question of how fast we go from AGI to ASI, i.e. how fast the takeoff is. This is debated, and no one can exactly predict how much time it will take, i.e. whether it will be a slow/soft takeoff or a fast/hard takeoff. What happens economically during the AGI-to-ASI takeoff is also difficult to predict. It will depend on what (the entity controlling) the self-improving AGI decides to do, how market actors are impacted, whether they can adapt, government intervention (if the AGI/ASI makes it possible), etc...
2) With regard to the impact of an ASI on the economic world and society, I would distinguish between:
2a) The digital/knowledge economy, basically everything that can be thought of as "data processing" done by computing devices:
an ASI could take over all of that very quickly.
2b) The "physical" economy, i.e. basically everything that can be thought of as "matter processing" done by human bodies, machines, robots, ...:
An ASI could take over all of that too, but it would of course take more time than the digital world, as the ASI will need to produce many machines/robots/etc..., and there could indeed be bottlenecks in terms of resources and the laws of physics. But if you imagine that the ASI quickly masters fast space travel, fast asteroid mining, fast nuclear fusion, fast robot production, etc..., it might not take that long either. What happens economically while this unfolds is also difficult to predict. Traditional/existing economic actors could, for example, basically just stop as soon as the ASI starts providing any imaginable amount of high-quality goods and services to any living entities, if the ASI is benevolent/utilitarian (within the constraints of the laws of physics, assuming the ASI doesn't find ways to overcome them in the real/physical world): basically what is called "post-scarcity".
But there could be other scenarios including economic scenarios as well basically essentially depending on what (the entity controlling) the ASI decides to do, it could decide that people still need to be forced to work to keep some meaning of life so it could artificially maintain a working capitalist economy, etc...
When the digital and physical worlds are essentially mastered at will, it basically "just" becomes a question of how things are then organized/allocated. Money, interest rates, etc... become unnecessary for that (but could still be used if that is the choice).
Thank you for your interesting answer :)
I agree that in all likelihood a TS/ASI would be very disruptive for the economy.
Under some possible scenarios it would benefit most economic actors (existing and new) and lead to a general market boom.
But under some other possible scenarios (like for example as you mentioned a monopolistic single corporation swallowing up all the economic activity under the command of a single ASI) it would lead to an economic and market crash for all the other economic actors.
Note that a permanent economic and market crash would not necessarily mean that standards of living would not drastically improve: in this scenario (the monopolistic ASI), standards of living would depend not on the economic and market crash but on how benevolent/utilitarian the entity controlling the ASI is.
In economic/market terms there are plenty of possible scenarios depending mostly on what the entity controlling the ASI decides to do with regard to economic trade which is indeed the key word here as you rightly mentioned.
Given that it is imho impossible (or at least very speculative) to predict which economic trade configuration and economic scenario would be the most likely to emerge, it is also impossible (or at least very speculative) to predict what the interest rates would become (if they still exist at all).
So to come back to the original question about the EMH and AGI/ASI/TS: since it is imho impossible (or at least very speculative) to predict which economic scenario would emerge if an AGI/ASI appears, the EMH is kept safe by the markets currently not pricing in what impact an AGI/ASI will have on interest rates.
Note that, as mentioned, imho, in the case of an AGI/ASI/TS, standards of living would depend not on an economic and market boom or crash but on how benevolent/utilitarian the (entity controlling the) AGI/ASI is.
Thank you for your answer :)
Imho there will definitely be a flood of already existing products and services produced at rock-bottom prices, and a flood of new products and services at cheap prices, etc..., coming from the entity having created / in control of the ASI, but will that make the economy as a whole boom? I am not sure.
To take an analogy, imagine a whole new country appears from under the ocean (or an extraterrestrial alien spaceship arrives, etc...) and floods the rest of the world with very cheap existing products and services, as well as new ones at very cheap prices, completely outcompeting each and every company in the rest of the world. What would that mean for the economy of the rest of the world? It would be absolutely trashed, wouldn't it?
All the companies, workers, and means of production of the rest of the world would very quickly become valueless, even commodities, as the SI could, if it wanted to, get them in almost unlimited quantities from outer space, nuclear fusion, etc...
Maybe Earth land would still have some value for people who enjoy living / traveling there rather than on giant artificial Earth satellites, etc...
The company having created / in control of the ASI will be economically booming (in the short term at least) for sure, but as for the rest of the economy and markets, completely outcompeted by it, I am not sure. It would depend on whether the company having created / in control of the ASI is willing to share some of its economic value/activity with the rest of the world or would just quickly absorb all economic activity into an economic singularity.
What do you think?
Thank you for your kind words and for clarifying the scope of your question,
and sorry for having slightly deviated from it.
When I have the time I will try to find or create a relevant thread and move my post in there if that is possible.
In any case very glad to see long Covid and ME/CFS being discussed/addressed on LW, many thanks for that.
If ever at some point I feel I can contribute to answering your question within its scope, and I have the energy & time to do it, I will gladly do so. In the meantime I will read with interest any answers/comments from the participants in this (from a personal point of view at least) interesting and useful thread.
Interesting discussion here :)
Just my 2 cents:
I am wondering about 2 things:
What happens just before, during and after the TS is ultra-speculative in nature.
But we have already seen that entities which work explicitly/implicitly on AGI/TS and obtain tangible results tend to fare well in terms of valuation growth (Google, OpenAI, Amazon, etc...) and tend, at least at first, to have a deflationary effect on the cost of many products/services, though they might impede that deflationary effect if/when they start behaving in monopolistic and rent-seeking ways (at which point government intervention comes into play).
Imho the effect of a TS on interest rates depends on so many hard-to-predict variables, just before, during, and after a TS, that the EMH is kept safe by the markets not taking into account what impact a TS will have on interest rates.
Loved your essay, so interesting, exciting, fun & educational, thank you for it :)
Here are a few points I would like to make, as they come "as is" from my limited brain:
1) I totally subscribe to it, as well as to the many "longevicists" (or whatever it is called, is there a name for it?) before you, like Aubrey, who argue for addressing human aging as a disease/condition/self-damage/... that has to be tackled directly. As any simple "root cause analysis" would determine: why indeed not spend much more resources on directly addressing the root cause rather than all its numerous individual symptoms (i.e. all the diseases/damage which result from the aging process)?
2) The obvious dangers of experimenting with aging therapy could be solved using simulation: there is already ongoing work trying to digitally simulate, at the molecular level, organs like the liver, the brain, etc... Once we have a fully functioning human simulation at the atomic/molecular level, it will be much safer to unleash/test any kind of aging therapy on it. But then some people will argue that if we have a working atom-by-atom biological simulation of the human body (including its brain), then we have reached the TS and we can simply accelerate that simulation as much as we can to make that simulated human body and brain work on solving its own aging process :) I love these conundrums about human body/brain simulations :)
3) This leads me to the 2 separate approaches to solving aging that have been mentioned here:
3a) Trying to solve aging using the technology we currently have.
3b) Just work on SI (superintelligence) and wait until we have SI, then ask the SI to solve human aging.
I would say: why not combine 3a) and 3b) together into
3c) Using "centaur intelligence" (as in Kasparov's "centaur chess") to try to solve human aging with the combined force of human research and AI/AGI working together, which would have the following benefits:
• Working efficiently & effectively on human aging right now.
• Applying AI/AGI to a hard problem like human aging would also likely lead to further advances in AI/AGI research.
(bullet point character shortcut tip: on Windows: [Alt]-[Numpad 7] for those who do not yet know!)
4) If we have SI and we ask it to solve human aging I guess one possible and quite rational answer that it would give is: "OMG the human body is so lame, why not digitalize/store/upload the human brain/mind/consciousness (and the old human body if you really feel like it) data in the cloud and then embody that brain/mind/consciousness into whatever much better body/bodies than a lame natural human body full of design flaws and limitations? And if you really want me to solve that lame natural human body aging, yes of course I can do that, here are 1000 different solutions from "least invasive/transformative" to "most invasive/transformative" to implement (...)" :)
5) Just one small detail I have spotted: at some point you mention that the Covid-19 IFR is 2%. May I ask where you got that number from? From what I've read throughout the pandemic, this number started at around 1% (especially since it is the IFR, not the CFR) and has decreased ever since, with probably an average of 0.5% over the pandemic so far. But if I am wrong, please let me know where you got your 2% number from, so you make me less wrong :)
Newbie here, first post on LW.
Sorry in advance if I make some mistakes using this website/forum; please let me know, when/where relevant, what I should or shouldn't do when using the LW website/forum, thank you.
Jumping in on this thread as I happen to have had a strong interest in AI since around 1992, and in AGI, AC & TS since around 2004, in big part because of Eliezer's writings at that time (early/mid 2000s), then Ray's book and Ben's book in 2005, etc... In 2009 I also made a bet with myself that the TS would happen in 2027, and I still stand by that timeframe, especially when I see quick & major AI/AGI breakthroughs like ChatGPT, which are beginning to have the potential/ability to work on and improve their own cognitive/metacognitive processes/algorithms/architecture/data/...
And I also happen to have ME/CFS since around December 2019 possibly as a result of long Covid (was very ill with an illness that exactly looked like Covid-19 in December 2019).
I also happen to have tinnitus (since December 2020), which is also a very complex & pernicious health condition.
I would love in the coming months/years to see AI/AGI being applied to these health conditions which are extremely complex etiologically and in terms of the very numerous, convoluted and (so far) murky biological and cerebral mechanisms involved.
Are any of you aware of any efforts using AI/AGI to try to help solve ME/CFS and/or tinnitus?
And/or do you know of any way to advocate for the use of AI/AGI to help solve these very complex health conditions?
Many thanks in advance for any clues with regard to these questions.
Edited to mention Ben's important book in 2005 and my own 2009 bet that the TS would happen in 2027.
I started to enter a state that could be described as "meta analysis paralysis" ("meta-[analysis paralysis]", not "[meta-analysis] paralysis") when I wanted to formulate my comment on your very interesting take on EA Burnout!
Your post struck me as a great example of analysis paralysis and bounded rationality.
Then I started to get paralyzed trying to analyse analysis paralysis and bounded rationality in the context of EA burnout, and I quickly burnt out, solutionless, while writing this comment.
Oh the irony!
Even burnt out I was still stuck in analysis paralysis so in the end I told myself:
"Tomorrow I will ask Google and ChatGPT: 'how to solve analysis paralysis?'".
And then submitted that above comment which does not really help you... or maybe it does?!
Damned still paralyzed!
Anyway, pushing the submit button now; not sure if it is the right thing to do, but my bounded rationality tells me that at least it is one thing done, even if I could have spent much more time on a more thorough and thoughtful answer that would have allowed me to formulate a better (less wrong / more helpful) comment, though maybe also hitting diminishing returns!