Actually, they are extremely well known outside of rationalist circles. Many subgroups of the Jewish and Buddhist faiths are pretty much built on these principles. My parents told me "don't put all your chips on the table" and to keep optionality open. Some might even argue this is the core principle that led to "democracy". And yes, as you rightly mentioned, these are clearly foundational principles behind LW and EA. That's why I use the strong language of "circlejerk". This is really unnecessarily reinventing common English phrases. Viatopia perhaps gives the idea a bit of an action-relevant flavor, so I guess it extends a bit beyond the others, but it's still not particularly new or insightful.
"can you argue for this in a convincing and detailed way?"
I mean, the argument is so underpowered it's hard to even know where to start. I actually don't even think the concept is coherent, tbf, but I'll try.
Assuming you are coming from the view that:
you can take some sentient (or intelligent) being, keep the "essence" of that being while making it smarter and giving it more inference time, and that all sentient beings will then start whipping, dabbing, and hitting the nae nae in synchronicity.
(I would say there is no coherent concept of self-modification/enhancement that preserves the original essence, so the premise is already meaningless, but I'll cast that aside.)
Then sure, take a sentient being whose value function is completely determined. It can never change its mind, tautologically. So it will never hit this convergent nirvana; its values are already fixed.
I must be confused because I don't see how this could be any other way. And the funny thing is, even if I'm wrong about this, and somehow jacking up the IQ and inference to the wazoo makes the atoms start vibing out, this still wouldn't make their goals correct. You still haven't solved the is-ought problem.
I have used tons of personal photos w/ Kelsey's prompt, and it has been extremely successful (>75%, and it never gets it wrong if one of my friends can guess it too). I'm confident none of these photos are on the internet, and most aren't even that similar to existing photos. Creepily enough, it's not half bad at figuring out where people are indoors as well (not as good, but it got the neighborhood in Budapest I was in from a photo of a single room with some items on a table).
I've seen this take a few times about land values and I would bet against it. If society gets mega rich based on capital (and thus more or similar inequality), I think the cultural capitals of the US (LA, NY, Bay, Chicago, Austin, etc.) and the most beautiful places (Marin/Sonoma, Jackson Hole, Park City, Aspen, Vail, Scottsdale, the Florida Keys, Miami, Charleston, etc.) will continue to outpace everywhere else.
Also, the idea that New York is expensive because that's where the jobs are doesn't seem particularly true to me. Companies move to these places as much to attract talent as the other way around. I know lots of students who went to my T20 university and got remote jobs. Approximately zero of them want to move to ugly bumfuck even if it's basically free. The suburbs/exurbs maybe, but not rural Missouri.
Now if there is a large wealth redistribution, which seems extremely unlikely given the timelines and current politics, I would agree. Also, thinking construction will get cheaper is pretty questionable. The cost of construction in the US has skyrocketed largely because of regulations, and new tech won't necessarily fix that.
It's a public externality; you don't need a government division to run bathrooms, you just need to do 1. + provide a subsidy.
Yea, the Cochrane meta-study aggregates a bunch of heterogeneous studies, so the aggregated results are confusing to analyze. The unfortunate reality is that it's complicated to get a complete picture - one may have to look at the individual studies one by one to truly come to a complete understanding of the lit.
Betting against Republicans and third parties on Poly is a sound strategy; it's pretty clear they are marketing heavily toward Republicans and the site has a crypto/Republican bias. For anything controversial/political, if there is enough liq on Manifold I generally trust it more (which sounds insane because fake money and all).
That being said, I don't like the way Polymarket is run (posting the word r*tard over and over on Twitter, allowing racism in comments + discord, rugging one side on disputed outcomes, fake decentralization), so I would strongly consider not putting your money on PM and instead supporting other prediction markets, despite the possible high EV.
As a trust fund baby who likes to think I care about the future of humanity, I can confidently say that I would at least consider it, though I'd probably take the money.
I also want to separately add that part of my frustration here (and the "can-kicking" part I mention) is that I worry this is just going to be weaponized as a reason to keep EA and LW glued together, even as obvious cracks develop. That would be fine - if we had a democracy - but we don't. So at some point the glue is a weapon for those in the community with de facto control to keep trudging forward without having to account for the increasing differences in moral views of those within.