If everything goes according to the plans being discussed on LW today, with UBI from AI profits for US citizens and with international treaties limiting the spread of AI to other countries, then 90% of humanity will be prevented by the US from having AI and also won't get UBI from AI money. What's your take on this?
My take is that even within the US, redistributing the profits will leave the masses at the mercy of elites, and we've all seen how elites can easily cut benefits to the masses. So a better path would be distributing intelligence itself, not the profits from it, and preferably worldwide.
All the international treaty talk I've seen is to prevent the development of frontier AI by anyone, not to restrict mundane AI use to particular countries. It's possible I'm missing some talk though.
Excited for this!
Just a week ago I wrote a shortform about the political economy problems of automating labor, and an okay solution that I'm not happy about but that beats the other alternatives I've seen proposed. Would you be interested in somebody fleshing it out? I don't know how exciting/original/workable the idea is; in some sense it feels like "the most obvious possible way to try to solve this while not being obviously dumb."
The proposal with windfall shares as written finds it hard to prevent, say, the heirs of Africa from being forever locked in poverty (or does that only last until the population sharing the USA's heirloom becomes ~100 times bigger than that of Africa?). I would add something like windfall shares with benefits that taper off, plus a potential worldwide UBI, plus some zero-sum game with clear rules where the prize is either awarded by a foundation or not awarded at all if no one is worthy. In that case, perverse incentives to cause wealth overconcentration by having fewer kids could be weakened by other paths to becoming rich.
Labor Market Restructuring and Workforce Development: New or reformed initiatives to retrain workers displaced by AI, or to restructure/re-regulate the labor market so people can still access paid employment, perhaps with reduced hours.
Could you explain that point? How do we prevent the AIs and robots from commiting genocide of human jobs, destroying all of them by being more capable and having lower sustainment costs than humans in any cognitive or physical task? By introducing severe protectionism (e.g. if a country, North Korea-style, banned AIs, robots, and the import of anything involving the two wholesale), or by requiring token hiring?
As for points 1, 2, and 4, I would like to understand how people who aren't US citizens are covered. If Safer-4 removes jobs from Europeans or Russians and fails to provide compensation, then they are screwed.
commiting genocide of human jobs
What... does it mean to "commit genocide of jobs"? Do you just mean "displace jobs"?
Please, words mean things! You can genocide humans, but you cannot genocide "jobs", that's not how that word works!
It might have given you a thrill of satisfaction to write, "commiting genocide of human jobs," but please consider the effect on readers who don't yet agree with you: namely, they are tempted to dismiss you as hopelessly irrational because they immediately notice the vast gulf between any central example of a genocide and AI's out-competing people in the labor market.
Don't make it so easy for your opponents and the undecided to dismiss you as shrill, extremist or too emotional to be able to reason properly.
Both good questions. On the first one: obviously, if we all die, or if labor and capital become perfectly substitutable such that the economy collapses to Y = AK, then there's no amount of workforce restructuring you can do to preserve a real labor market.
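To spell out what "collapses to Y = AK" means, here's a minimal sketch. The functional forms below (Cobb-Douglas for the complements case, a linear aggregator with a conversion factor b for the perfect-substitutes case) are illustrative assumptions on my part, not a claim about which production function actually obtains.

```latex
% Status quo: capital and labor are complements (Cobb--Douglas here),
% so the wage w -- labor's marginal product -- is strictly positive
% and rises as capital accumulates.
\[
  Y = K^{\alpha} L^{1-\alpha},
  \qquad
  w = \frac{\partial Y}{\partial L}
    = (1-\alpha)\left(\frac{K}{L}\right)^{\alpha} > 0.
\]

% Perfect substitutes: b units of capital do one worker's job,
% so the wage is pinned at Ab, the output of the replacing machines,
% no matter how scarce human labor is.
\[
  Y = A\,(K + bL),
  \qquad
  w = \frac{\partial Y}{\partial L} = Ab.
\]

% As cheap AI/robot capital accumulates, K grows without bound while bL
% is bounded by the human population, so labor's share of income
% vanishes and output is effectively "AK":
\[
  \frac{wL}{Y} = \frac{bL}{K + bL} \to 0
  \ \text{as } K \to \infty,
  \qquad
  Y \approx AK.
\]
```

In that branch the wage is set by the price of machines rather than by anything about workers, which is why no amount of labor-market restructuring helps there.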
But if, say, robotics remains bottlenecked by a lack of data even after we get a software-only singularity; or if demand persists for purely relational human services that AIs definitionally can't provide; or if, for whatever other reason, the returns to labor do not collapse, then we'll still have a labor market, and it'll look very different from the one today. We want to solicit ideas for how to structure that labor market in the event that one of those scenarios ensues.
In terms of non-Americans — speaking only for myself, my first-best version of a response would include a safety net for people in all countries. Because of where the main labs are based, the country capable of taxing them or taking equity stakes is the United States, and after last year the politics of foreign aid from the US are unbelievably bleak. If there appears to be a path to a system of globally shared AI growth, I will be the first to support it. As it stands I'm very pessimistic.
I wanted to share the launch of a project I've been working on with pollster David Shor, Obama/Biden veteran Stef Feldman, political strategist Morris Katz, Harvard historian Marc Aidinoff, and a few other folks*.
The Center for Shared AI Prosperity is an attempt to force DC policy elites, particularly (given our team's backgrounds) liberals/progressives, to take the impending economic impacts of advanced AI more seriously. We do not think this is a normal economic shock. We are deeply uncertain about what kind of economic shock it will be, but even if humans manage to survive the advent of superintelligence, we'll be left with a world of extreme power and wealth concentration, increasing political instability arising from that growing inequality, and deep questions about how to fund governments that have for a century-plus relied on income and payroll taxes.
Our main purpose as an organization is to surface tractable ideas across four main areas.
We have two tracks for idea proposals. Track 1 involves submitting a 500–1,000-word writeup of an idea; this track does not offer compensation, but it can involve Blue Rose Research, a leading political polling firm, running public opinion research to see whether US voters are broadly receptive to the idea. We may also elevate and share the best ideas submitted under Track 1.
Track 2 allows submitters to potentially receive compensation for promising ideas.** We will select our favorite Track 2 submissions and commission longer policy briefs from their authors, offering payment for the additional writing and policy development.
Our hope, at the end of this, is to have a suite of viable policy ideas that we can demonstrate have broad public buy-in and that we and allies can lobby Congress, the administration, and others to adopt.
We are trying to solicit submissions from a wide pool, and purposefully don't want to ask just the usual think tanks, economists, academics, etc. LW as a community was taking these issues seriously long before I or anyone else on this team was, and I think its members likely have excellent ideas for dealing with economic impacts (alongside proposals to prevent takeover, catastrophic misuse, etc.).
Please do not hesitate to apply if you think you have a workable idea, or several. Feel free to reach out to me if you have any questions about the program.
* The rest of the board/founding team is Jason Goldman, Josh Hendler, Morris Katz, Lindsay Lamont, and Jesse Stinebring.
** This seems like a good place to give the disclaimer that I am working on CSAIP in my personal capacity, not as an employee of Coefficient Giving. As of this writing CG has not funded CSAIP and the two groups have no affiliation beyond my service on the CSAIP board.