All of Meena Kumar's Comments + Replies

The main thing seems to be money. How do we get more money into alignment in a way that is not detrimental or repulsive to people we want to attract to working there?

Yudkowsky is not the right person to start this stuff in China.

I'd like to see more work done on this myself. It is difficult to take blog posts seriously, especially for people who are already sceptics. "Oh, most of the writing is some books by people I've never heard of and a bunch of blogposts? Sure sounds like bs"

"Sure, but in general, given enough adequate tools, most people contribute to keeping those things in check." What are you basing this on? And the issue is whether the contributions will be enough. And it's not even just making and providing adequate tools, but figuring out what tools will be adequate in the first place, and then, after that, getting people to actually use those tools. I still don't see what this has to do with UBI.

Society not just spontaneously collapsing under the pressure of the few antisocial types trying to destroy it? True, that's all part of the difficulty of this whole endeavour. I explained why I think this is relevant to UBI in the response to your other comment.
It does, because UBI presumes there's simply no work for people to do, while what I'm saying is that we shouldn't abolish work entirely, because of what you say: the lack of power (and thus leverage). Instead we should distribute as widely as possible the work of governance and decision-making, the only work that would be left and that definitely needs to stay in human hands, because automating it doesn't relieve us of toil; it pretty much makes us spectators to our own history.

There are already people trying to make an AI that destroys humanity "for the lols". Many people will try to use AI against each other. As they are now. E.g. militaries, scammers, revenge porn makers, etc.

Sure, but in general, given enough adequate tools, most people contribute to keeping those things in check. This doesn't hold if AIs turn out to be "glass cannons" - tremendously powerful attackers, but poor at setting up defences against other AIs - but in that case, well, there's pretty much no way out. Full self-sufficiency would require everyone to have a whole productive chain at their fingertips; it just doesn't seem realistic as something to be achieved before AIs are already incredibly advanced, and there's a lot of danger from here to there. Besides, I'm not sure it'd be realistic with Earth's resources either (nor that everyone would want it; but I guess small self-sufficient tribe-sized communities might actually work a lot better for many people). The worries now concern much more realistic, close at hand scenarios.
Meena Kumar (10mo):
And I don't see how this has anything to do with UBI.

One of the biggest problems in a world with UBI: the complete lack of power of the average person.

One of the worst things about the abuses of people on welfare by those in power (intentional or not) is the utter disparity in power, and the deep psychological effects of that. Having been in such a position at one point, I felt humiliated, ashamed and really worthless. This is not how all people felt or feel, but by and large, it is not good.

I think a much better alternative to UBI, but one that might be even harder to achieve, would be to work to make everyone...

If the powerlessness of those relying on UBI comes from those dispensing UBI being able to withdraw it, then the solution is a robust set of rights... and rights are also a necessary component of any alternative solution, like self-sufficiency.
I think with lots of AIs, the work everyone would best do is keeping the AIs aimed at their goals: direct them, orient them, keep them in line with our overall interests. Everyone a foreman for a few instances of AIs doing something. This doubles as a possible alignment solution: give as little agency as possible to the AIs and delegate the decisions to the human overseers. Humans' contribution to the economy will be "their values".