I think this is a real phenomenon, although I don't think the best point of comparison is the Baumol effect. The Baumol effect is all about the differential impact on different sectors, whereas this would be a kind of universal effect where it's harder to use money to motivate people to work once they already have a lot of money.
I think a closer point of comparison is simply the high labor costs in rich first-world nations, compared to low labor costs in third-world nations. You can get a haircut or eat a nice meal in India for a tiny fraction of what it costs to buy a similar service in the USA. Partly you could say this is due to a Baumol effect of a sort, where the people in the USA have more productive alternative jobs they could be working, because they're living in a rich country with lots of capital, educated workers, well-run firms, etc. But maybe another part of the equation is that even barbers and cooks in the USA are pretty rich by global standards?
As a person becomes richer, it's perfectly sensible IMO for them to become less willing to do various menial tasks for low pay. But of course there are still some menial tasks that must get done! Imagine a society much richer than ours -- everyone is the equivalent of today's multimillionaires (in the sense that they can easily afford lots of high-quality mass-manufactured goods -- they own a big home, plus a few vacation homes, a couple of cars, they can afford to fly all over the world by jet, etc), and many people are the equivalent of billionaires / trillionaires. This society would be awesome, but it wouldn't really be quite as rich as it seems at first glance, because people would still have to perform a bunch of service tasks; we couldn't ALL be retired all the time. I suppose you could just go full-Baumol and pay people exorbitant CEO wages just to flip burgers at McDonald's. But in real life society would probably settle on a mix of strategies:
I think strategies like these are already at work when you look at the difference between poor vs rich nations -- jobs in rich countries not only pay more but are also generally more automated, have better working conditions, etc. It's funny to imagine how the future might be WAY further in the rich-world direction than even today's rich world, since it would seem so unbalanced to us (just like how paying 30% of GDP for healthcare would've seemed absurd to preindustrial / pre-Baumol-effect societies). But it'll probably happen!
Agreed that the ideas are kind of obvious (from a certain rationalist perspective); nonetheless they are:
1. not widely known outside of rationalist circles, where most people might consider "utopia" to just mean some really mundane thing like "tax billionaires enough to provide subsidized Medicaid for all" rather than defeating death and achieving other assorted transhumanist treasures
2. potentially EXTREMELY important for the long-term future of civilization
In this regard they seem similar to the idea of existential risk, or the idea that AI might be a really important and pivotal technology -- really really obvious in retrospect, yet underrated in broader societal discourse and potentially extremely important.
Unlike AI & x-risk, I think people who talk about CEV and viatopia have so far done an unimpressive job of exploring how those philosophical ideas about the far future should be translated into relevant action today. (AI safety has so many orgs, billion-dollar companies getting founded, government initiatives launched, lots of useful research and lobbying getting done -- there is no similar game plan for promoting "viatopia" as far as I know!)
"The religious undertones that there is some sort of convergent nirvana once you think hard enough is not true." -- can you argue for this in a convincing and detailed way? If so, that would be exciting -- you would be contributing a very important step towards making concrete progress in thinking about CEV / etc, the exact tractability problem I was just complaining about!! But if you are just asserting a personal vibe without actual evidence or detailed arguments to back it up, then I'd not baldly assert "...is not true".
Bostrom uses "existential security" to refer to this intermediate goal state IIRC -- a state where civilization is no longer facing significant risk of extinction or things like stable totalitarianism. This phrase connotes sort of a chill, minimum-viable utopia (just stop people from engineering super-smallpox and everything else stays the same, m'kay?), but I wonder if actual "existential security" might be essentially equivalent to locking in a very specific and as-yet-undiscovered form of governance conducive to suppressing certain dangerous technologies without falling into broader anti-tech stagnation, avoiding various dangers of totalitarianism and fanaticism, etc... https://forum.effectivealtruism.org/posts/NpYjajbCeLmjMRGvZ/human-empowerment-versus-the-longtermist-imperium
Yudkowsky might have had a term (perhaps in his fun-theory sequence?) referring to a kind of intermediate utopia where humanity has covered "the basics" of things like existential security, plus also some obvious moral goods (individual people no longer dying + extreme suffering abolished + some basic level of intelligence enhancement for everybody + etc)
Some people talk about the "long reflection", which is similar to the concept of viatopia, albeit with more of a "pause everything" vibe that seems less practical for a bunch of reasons
It seems like it would be pretty useful for somebody to be thinking ahead about the detailed mechanics of different idealization processes (since maybe such processes do not "converge", and doing things in a slightly different way / slightly different order might send you to very different ultimate destinations: https://joecarlsmith.com/2021/06/21/on-the-limits-of-idealized-values), even though this is probably not super tractable until it becomes clearer what kinds of "idealization technologies" will actually exist when, and what their possible uses will be (brain-computer interfaces, nootropic drugs or genetic enhancement procedures, AI advisors, "Jhourney"-esque spiritual-attainment-assistance technologies, improved collective decision-making technologies / institutions, etc)
Okay, yup, that makes sense!
I guess personally:
I enjoyed reading this post. But I feel like you are making a mistake by being too Manichaean about this. You talk as if your soul is split in two, with an evil "edgelord" half battling a good "raised by tumblr SJW" half. You think of yourself as fighting a doomed rearguard battle to defend the tumblr SJW values of "equality and social justice" against an encroaching army of elitist, misanthropic sentiment.
To me this feels bizarre -- you're writing your "bottom line" first (i.e. that tumblr SJW ethics, and the tumblr SJW, like... tone of how it's acceptable to talk about people... are correct) (https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line), then putting yourself into contortions (imagining two inner personalities, using "arguments as soldiers", etc) to maintain your belief in this bottom line.
It feels kind of like a socialist learning more about economics and being like "no!! if I start believing that markets and price signals are often the best way to distribute scarce resources, I'll become the same kind of callous, selfish evildoer I've sworn to destroy!!". Whereas instead they should probably just keep learning about economics, and remain a good person by combining their new economics knowledge with their preexisting moral ideas about making the world a better and fairer place for everyone (perhaps by becoming a Georgist, an Abundance Dem, a Pigouvian-taxation guy, or whatever).
If I were you, I would simply accept that it's possible to be very elitist (believing that some people are smarter than others, better than others, even more morally valuable than others) without necessarily transforming into an evil "edgelord" misanthrope. I myself am pretty elitist in various ways, am sort of introverted and arrogant similarly to how you describe yourself, etc -- but I still consider myself to really love humanity, I work for effective altruist organizations, I often enjoy hanging out with normies, etc. In fact, one of the things I find inspiring about EA is its emphasis that being a good person isn't about having your heartstrings pulled all the time and being really emotionally empathetic (I'm just not a very emotional kind of guy, and previously I thought this somehow made me a bad person!); rather, it's about working hard to improve the world, taking ideas seriously, actually acting on your moral beliefs, etc.
Then, instead of fighting a cartoony battle to stop yourself from believing in elitism and thereby becoming an elitist "edgelord" (which, you imagine, would turn you evil and be a betrayal of all that is good), you could just neutrally explore what's actually true about the external world (how much do people vary in their abilities? are you just being self-servingly arrogant, or mistakenly shy and insular, to think there's no value in hanging out with normies, or is this actually correct? is "elite persuasion" generally a better way of influencing politics than mass activism? etc etc) without weirdly tying the outcome to a sense of whether you yourself are good or evil.
For some examples of people who are elitist in various ways but who still seem to have much empathy and goodness -- if you want further examples beyond "practically the entire EA & rationalist community" -- you can consider the philosophies of Richard Hanania and Matthew Yglesias as described here: https://www.astralcodexten.com/p/matt-yglesias-considered-as-the-nietzschean
Sorry if some of this comment was harsh, it kind of paints an exaggerated picture for dramatic/pedagogical effect and for brevity. The theme of the post is grumpy misanthropy so I figured this would be acceptable! :P
lilkim isn't speculating about the cause of anti-immigrant politics; he's saying that there's less desire to automate truck driving, because truck-driver wages have decreased in recent years (because lots of people have recently decided to go into truck driving, apparently).
For anyone considering niplav's offer, the most obvious tax-deductible-in-Germany donation option for EAs / rationalists is probably Effektiv Spenden's "giving funds":
Lots of good options! (Personally, I won't be itemizing my US taxes this year, so I won't benefit from charitable deductions even to the US-based Manifund. So, in the name of maximum tax-efficiency, ideally somebody who does itemize their US donations should take niplav up on their offer!)
The 100-150 ton numbers that SpaceX has offered over the years always refer to the fully-reusable version launching to LEO. I believe even Falcon 9 (though not Falcon Heavy) has essentially stopped offering expendable flights; the vision for Starship is for it to fly fully reusable all the time.
That said:
Agreed with you that the heat shield (and the reusable upper stage in general) seems like it could easily just never work (or work only with expensive refurbishment, or only when returning from LEO rather than anything higher-energy, etc.), perhaps forcing them to give up and have Starship become essentially a big scaled-up Falcon 9. This would still be cheaper per-kg than Falcon 9 (economies of scale, the Raptor engines are better than Merlin, etc.), but not as transformative. I think many people are just kind of assuming "eh, SpaceX is full of geniuses, they've done so many astounding things, they'll figure out the heat shield", but this is an infamously hard problem (see Shuttle, Orion, X-33...), so possibly they'll fail!
Some other tidbits:
Personally I'm doubtful that they ever hit the crazy-ambitious $20/kg mark, which (per Thomas Kwa) would require not just a reusable upper stage (very hard!) but also hyper-low-cost, airline-like turnaround on every part of the operation. But $200/kg (1 OOM cheaper than where Falcon 9 is today, using the rumored internal cost of $30m/launch and 17.5 ton capacity) seems pretty doable -- upper stage reuse (even if somewhat arduous to refurbish) probably cuts your costs by like 4x, and the much greater physical size of Starship might give you another almost 2x. Cheap materials (steel and methane vs aluminum and RP1) + economies of scale in Raptor manufacturing might take you the rest of the way.
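To spell out that back-of-envelope arithmetic as a minimal sketch (the $30m internal cost and 17.5-ton capacity are the rumored figures above; the ~4x and ~2x multipliers are my own rough guesses, not established data):

```python
# Back-of-envelope Starship cost-per-kg estimate.
# All inputs are rumored or guessed, not official SpaceX figures.
falcon9_cost_usd = 30e6      # rumored internal cost per Falcon 9 launch
falcon9_payload_kg = 17_500  # reusable-mode payload to LEO

baseline = falcon9_cost_usd / falcon9_payload_kg  # ~$1,700/kg today

upper_stage_reuse_factor = 4  # guess: reuse cuts costs ~4x even with refurbishment
scale_factor = 2              # guess: Starship's larger size gives almost another 2x

projected = baseline / (upper_stage_reuse_factor * scale_factor)  # ~$215/kg
print(f"Falcon 9 baseline: ~${baseline:,.0f}/kg; Starship projection: ~${projected:,.0f}/kg")
```

Those two multipliers alone get you from ~$1,700/kg down to ~$215/kg, so cheap materials and Raptor economies of scale only need to close a small remaining gap to reach $200/kg.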
Ex-aerospace engineer here! (I used to work at Xona Space Systems, who are working on a satellite constellation to provide a kind of next-gen GPS positioning. I'm also a longtime follower of SpaceX, fan of Kerbal Space Program, etc) Here is a rambling bunch of increasingly off-topic thoughts:
Oh, I think they probably try to adapt in a variety of ways to be more hospitable & compatible with me when I'm around. (Although to a certain extent, maybe I'm more weird (less "normie") than they are, plus I'm from a younger generation, so the onus is more on me socially to adapt myself to their ways?) But the focus of my comment was about the ways that I personally try to relate to people who are quite different from me. So I didn't want to dive into how they might find it difficult or annoying being around me and how they might deal with this (though I'm sure they do find me annoying in some ways -- another reason to be grateful, have humility, etc!).