I agree this distinction is very important, thank you for highlighting it. I'm in camp B and just signed the statement.
It seems to me that such "unhealthiness" is pretty normal for labor and property markets: when I read books from different countries and time periods, the fear of losing one's job and home is a very common theme. Things were easier in some times and places, but these were rare.
So it might make more sense to focus on reasons for "unhealthiness" that apply generally. Overregulation can be the culprit in today's US, but I don't see it applying equally to India in the 1980s, Turkey in the 1920s, or England in the early 1800s (these are the settings of some books on my shelf whose protagonists had very precarious jobs and housing). And even if you defeat overregulation, the more general underlying reasons might still remain.
What are these general reasons? In the previous comment I said "exploitation", but a more neutral way of putting it is that markets don't always protect one particular side. Markets are two-sided: there's no law of economics saying a healthy labor market must be a seller's market, while housing must be a buyer's market. Things could just as easily go the other way. So if we want to make the masses less threatened, it's not enough to make markets more healthy overall; we need to empower the masses' side of the market in particular.
I think questions of power differences between the "elites" and the "masses" are very relevant to the AI transition, both as a model for intuitions and as a way to choose policy directions now, because AI will tend to amplify and lock in these power differences, and at some point it'll be too late. For more context, see these comment threads of mine: 1, 2, 3, or this book review.
Yeah, I wouldn't have predicted this response either. Maybe it's a case of something we talked about long ago - that if a person's "true values" are partly defined by how the person themselves would choose to extrapolate them, then different people can end up on very diverging trajectories. Like, it seems I'm slightly more attached to some aspects of human experience that you don't care much about, and that affects the endpoint a lot.
Despite our superior technology, there are many things that Western countries could do in the past that we can’t today—e.g. rapidly build large-scale infrastructure, maintain low-crime cities, and run competent bureaucracies.
Why do you focus on these problems? I mean, sure, the average person in the West can feel threatened by crime, infrastructure decay, or incompetent bureaucracy. But they live every day under much bigger threats, like the threat of losing their job, getting evicted, getting denied healthcare, or getting billed or fee-d into poverty. These seem to be the biggest societal (non-health, non-family) threats for our hypothetical average person. And the common pattern in these threats isn't decay or incompetence, it's exploitation by elites.
That tweet doesn't sound right to me. Or at least, to me there's a simpler and more direct explanation of bubbles in terms of real resources, without having to mention money supply or central banks at all.
During a bubble, people are having fun because resources are being misallocated: misallocated to their fun. Some rich chumps are throwing their resources at something useless, like buying tulips. That bankrolls the good times for everyone else: the tulip-growers, the hairdressers who serve the tulip-growers, and so on. But at some point the rich chumps realize that tulips aren't that great, and that they burned their resources just to build a big bonfire and keep everyone warm for a while. When they realize that, the tulip-growers will lose their jobs, and then the hairdressers who served them, and so on. That's the pain of the bubble ending, and it's unavoidable, central bank or no.
(This thread is getting a bit long, and we might not be convincing each other very much, so I hope it's ok if I only reply with points I consider interesting - not just push-pull.)
With the concert pianist thing I think there's a bit of a type error going on. The important skill for a musician isn't having fast fingers, it's having something to say. Same as: "I'd like to be able to write like a professional writer" - does that mean anything? You either have things you want to write in the way that you want to write them, or there's no point being a writer at all, much less asking an AI to make you one. With music or painting it's the same. There's some amount of technique required, but you need to have something to say, otherwise there's no point doing it.
So with that in mind, maybe music isn't the best example in your case. Let's take an area where you have something to say, like philosophy. Would you be willing to outsource that?
Well, there's no point in asking the AI to make me good at things if I'm the kind of person who will just keep asking the AI to do more things for me! That path just leads to the consumer blob again. The only alternative is if I like doing things myself, and in that case why not start now. After all, Leonardo himself wasn't motivated by the wish to become a polymath, he just liked doing things and did them. Even when they're a bit difficult ("chores").
Anyway that was the theoretical argument, but the practical argument is that it's not what's being offered now. We started talking about outsourcing the task of understanding people to AI, right? That doesn't seem like a step toward Leonardo to me! It would make me stop using a pretty important part of my mind. Moreover, it's being offered by corporations that would love to make me dependent, and that have a bit of history getting people addicted to stuff.
There's no "line" per se. The intuition goes something like this. If my value system is only about receiving stuff from the universe, then the logical endpoint is a kind of blob that just receives stuff and doesn't even need a brain. But if my value system is about doing stuff myself, then the logical endpoint is Leonardo da Vinci. To me that's obviously better. So there are quite a lot of skills - like doing math, playing musical instruments, navigating without a map, or understanding people as in your example - that I want to exercise myself even if there are machines that could do them for me cheaper and better.
Hi Felix! I've been thinking about the same topics for a while, and came to pretty much the opposite conclusions.
No nononono. So many people making this argument and it's so wrong to me.
The thing is: altruistic urges aren't the only "nonzero urges" that people have. People also have an urge to power, an urge to lord it over others. And for a lot of people it's much stronger than the altruistic urge. So a world where most people are at the whim of "nonzero urges" of a handful of superpowerful people will be a world of power abuse, with maybe a little altruism here and there. And if you think people will have exit rights from the whims of the powerful, unfortunately history shows that it won't necessarily be so.
I think we'll never be at a point where a handful of people can defeat the strongest entities. Bioweapons are slow; drone swarms can be stopped by other drone swarms. I can't imagine any weapon at all that would allow a terrorist cell to defeat an army of equal tech level. Well, maybe if you have a nanotech-ASI in a test tube, but we're dead before then.
It is however possible that a handful of people can harm the strongest entities. And that state of affairs is desirable. When the powerful could exploit the masses with impunity in the past, they did so. But when firearms got invented, and a peasant could learn to shoot a knight dead, the masses became politically relevant. That's basically why we have democracy now: the political power of the masses comes from their threat-value. (Not economic value! The masses were always economically valuable to the powerful. Without threat-value, that just leads to exploitation. You can be mining for diamonds and still be a slave.) So the only way the masses can avoid a world of total subjugation to the powerful in the future is by keeping threat-value. And for that, cheap offense-dominant weapons are a good thing.
Making an analogy with altruism here is strange. North Korea is a horrifying oppressive regime. The fact that they can use the nuke threat to protect themselves, and their citizens have no analogous "gun" to hold to the head of their own government, is a perfect example of the power abuse that I described above. A world with big actors holding all threat-power will be a world of NKs.
There's a standard response to this argument: namely, inequality of money always tries to convert itself into inequality of power, through lobbying, media ownership, and the like. Those at the bottom may have comfort, but that comfort will be short-lived if they don't have the power to ensure it. The "Gini coefficient of power" is the most important variable.
So yeah, to me these all converge on a pretty clear answer to your question. Concentration of power, specifically of threat-power, offense-power, would be very bad. Spreading it out would be good. That's how the world looks to me.