Comments

xiann · 5mo

I agree with the central point of this, and the anti-humanism is where the e/acc crowd turns entirely repugnant. But the generative-AI example doesn't really land for me, because I think the issue at its core pits two human groups against each other: the artists who would like to make a stable living off their craft, and the consumers of art who would like less scarcity of art, particularly the marginally creative stock variety that nonetheless forms the majority of most artists' paycheck (as opposed to entirely original works sold at auction or published novels).

The AI aspect is incidental. If there were a service that Mechanical Turk'd art commissions for $2 a pop, without your having to finagle a model, you'd have the same conflict.

xiann · 5mo

Assuming Sam was an abuser, what would hacking wifi signals accomplish that the level of shadowbanning described would not? It strikes me as unlikely because it doesn't seem to have much reward even in the world where Sam is the abuser.

xiann · 5mo

I know this post will seem very insensitive, so I understand if it gets downvoted (though I would also say that is the very reason sympathy-exploitation tactics work), but I would like to posit a third fork to the "How to Interpret This" section: that Annie suffers from a combination of narcissistic personality disorder and false-memory creation in service of the envy that disorder spawns. If someone attempted to fabricate a story that was both maximally sympathy-inducing and reputation-threatening for the target, I don't think you could do much better than the story laid out here within the confines of the factual public events of Annie and Sam's life.

If Annie's story turns out to be true and is proven so, the general public would perceive Sam as:

A) Greedy to the extent of moral perversity in making a diamond out of his father against his consent.

B) A rapist for what he did to Annie.

C) An implied pedophile, even if Sam was himself a minor.

In addition, the public would also perceive Sam's brother as at least B and C, and Annie would likely win some sort of legal settlement for the abuse. All of these objectives are met both for someone suffering genuine abuse and for someone who did not, but who suffers from narcissism and feels wronged by more successful siblings.

Besides this, the sheer number of disorders Annie has is itself a red flag to me: having a litany of less physiologically visible disorders is not only statistically unlikely in the general population, it is also a more common trait among people who exploit social charity and sympathy for gain, such as those running low-level welfare-benefit scams or falsely posing as homeless for charity, many of whom suffer from what has become known as "vulnerable" narcissism as opposed to the classic grandiose variety. I wish it were the case that every ADHD sufferer with nerve pinching and chronic anxiety/depression were really someone trying their best to become whole, but anyone with experience in the system (such as civil servants) knows that's not the case.

The other red flags to me are the Zoloft prescription (weak) and the claims of shadowbanning (stronger), though these suggest more that someone has abnormal (and possibly exploitative) psychology or is prone to false-memory creation than that they are directly exploitative. I find it difficult to believe even a Valley insider like Sam could get Annie shadowbanned on that many platforms simultaneously, while somehow not touching her sex-work accounts, which are legally greyer and whose platforms are more skittish.

That being said, all of this is rather weak evidence on its own. I figured I'd offer my perspective as someone more working-class than most LWers, who has met their share of narcissistic, crab-in-a-bucket people who have falsely gone after their own more successful (though not nearly as much as Sam) siblings.

xiann · 6mo

I agree, and I'm reminded of the quote about history being the search for better problems. The search for meaning in such a utopian world (from our perspective) thrills me, especially when I think about all the suffering that exists in the world today. The change may be chaotic and uncomfortable, but when I consider my personal emotions about the topic, it would be more frightening for the world to remain the same.

xiann · 8mo

I should have been more precise. I'm talking about the kind of organizational capabilities required to physically ensure that no AI unauthorized by a central authority can be created. Whether aligned AGI exists (and presumably, in this case, is loyal to said authority over other factions of society that may become dissatisfied) doesn't need to factor into the conversation much.

That may well be the price of survival; nonetheless, I felt I needed to point out the very likely price of going down that route. Whether that price is worth paying to reduce x-risk from p(x) to p(x-y) is up to each person reading this. Again, I'm not trying to be flippant; it's an honest question of how we trade off between these two risks. But we should recognize that there are multiple risks.

I'm not so much implying you are negative as not sufficiently negative about the prospects for liberalism/democracy/non-lock-in in a world where a regulatory apparatus strong enough to do what you propose exists. Most democratic systems are designed, to varying degrees, not to concentrate power in one actor or group of actors, hence the concept of checks and balances as well as separate branches of government; these governments are engineered to rely as little as possible on the goodwill and altruism of the people in those positions. When this breaks down because of unforeseen avenues for corruption, we see corruption (à la stock-portfolio returns for sitting senators).

The assumption that we cannot rely on societal decision-makers not to immediately use any power given to them in selfish or despotic ways is what people mean when they talk about humility in democratic governance. I can't see how this humility survives the surveillance power alone that would be required to prevent rebellion over centuries to millennia, much less the global/extraglobal enforcement capabilities a regulatory regime would need.

Maybe you have an idea for an enforcement mechanism that could prevent unaligned AGI indefinitely that is nonetheless incapable of being utilized for non-AI regulation purposes (say, stifling dissidents or redistributing resources to oneself), but I don't understand what that institutional design would look like.

xiann · 8mo

This might sound either flippant or incendiary, but I mean it sincerely: wouldn't creating an enforcement regime powerful enough to permanently and reliably guarantee no AGI development require the society implementing that regime to be far more stable over future history than any state has been thus far, and, more importantly, introduce an incredible risk of creating societies that most liberal democracies would find sub-optimal (to put it mildly), which are then locked in even without AGI due to the aforementioned hyper-stability?

This plan seems likely to sacrifice most future value itself, unless the decision-making humans in charge of the enforcement regime's power act purely altruistically.

xiann · 8mo

"Normally when Cruise cars get stuck, they ask for help from HQ, and operators there give the vehicles advice or escape routes. Sometimes that fails or they can’t resolve a problem, and they send a human driver to rescue the vehicle. According to data released by Cruise last week, that happens about an average of once/day though they claim it has been getting better."

From the Forbes write-up of GM Cruise's debacle this weekend. I think this should update people downward somewhat on what percentage of FSD is complete. I think commenters here are being too optimistic about current AI, particularly in the physical world. We will likely need to get closer to AGI for economically useful physical automation to emerge, given how pre-trained humans seem to be for physical locomotion and how difficult the problem is for minds without that pre-training.

So in my opinion, there actually is some significant probability that we get AGI prior to, or very soon after, robotaxis, at least the without-a-hitch, no-weird-errors, slide-deck-presentation form of them that your average person thinks of when one says "robotaxis".

xiann · 10mo

That is one example, but wouldn't we typically assume there is some worst example of judicial malpractice at any given time, even in a healthy democracy? If we begin to see a wave of openly partisan right- or left-wing judgements, that would be a cause for concern, particularly if they overwhelm the Supreme Court's ability to overrule them. The recent dueling rulings over mifepristone were an example of this (both the original ruling and the reactive ruling), but that is again a single example so far.

I actually think the more likely scenario than a fascistic backslide is a civil conflict or split between red and blue America, which would significantly destabilize global geopolitics by weakening American hegemony. The military leans conservative but not overwhelmingly so, so if put under pressure, individual battalions might pledge loyalty to either side in a conflict.

However, even this I would say is low-probability because of America's partisan geography; red and blue areas intermingle and do not form a coherent front the way the North and South did in the Civil War.

xiann · 10mo

When you say "force demand to spread out more", what policies do you propose, and how confident are you that this is both easier to accomplish than the YIMBY solution and leads to better outcomes?

My default (weak) assumption is that a policy requiring more explicit force is more likely to produce unintended negative consequences, as well as greater harm if it proves unpopular. So a ban on A has a higher bar to clear for me to be on board than a subsidy of B over A. My initial reaction to the phrase "force demand to spread out more" is both worry at how heavy-handed that sounds at first blush and confusion as to what the upside is over the YIMBY solution, besides preserving the utility of some currently exurban houses/land that would otherwise go unused. That's good, but I don't see why it's so good that it justifies not pursuing greater density instead, since, as you say, the money is already in the city.

xiann · 1y

Feeling unsafe is probably not a free action, though; as far as we can tell, cortisol has a deleterious effect on both physical health and mental ability over time, and the effect becomes more pronounced with continuous exposure. So the cost of feeling unsafe all the time, particularly if one feels less safe (and maintains more readiness) than the situation warrants, is hurting your prospects in the situations where the threat doesn't come to pass (the majority outcome).

The most extreme examples of this are preppers; if society collapses they do well for themselves, but in most worlds they simply have an expensive, presumably unfun hobby and inordinate amounts of stress about an event that doesn't come to pass.
