To summarize,

  • When considering whether to delay AI, the choice before us is not merely whether to accelerate or decelerate the technology. We can choose what type of regulations are adopted, and some options are much better than others.
  • Neo-luddites do not fundamentally share our concern about AI x-risk. Thus, their regulations will probably not, except by coincidence, be the type of regulations we should try to install.
  • Adopting the wrong AI regulations could lock us into a suboptimal regime that may be difficult or impossible to leave. So we should likely be careful not to endorse a proposal because it's "better than nothing", unless it's also literally the only chance we get to delay AI.
  • In particular, arbitrary data restrictions risk preventing researchers from having access to good data that might help with alignment, potentially outweighing the (arguably) positive effect of slowing down AI progress in general.

It appears we are in the midst of a new wave of neo-luddite sentiment.

Earlier this month, digital artists staged a mass protest against AI art on ArtStation. A few people are reportedly already getting together to hire a lobbyist to advocate more restrictive IP laws around AI generated content. And anecdotally, I've seen numerous large threads on Twitter in which people criticize the users and creators of AI art.

Personally, this sentiment disappoints me. While I sympathize with the artists who will lose their income, I'm not persuaded by the general argument. The value we could get from nearly free, personalized entertainment would be truly massive. In my opinion, it would be a shame if humanity never allowed that value to be unlocked, or restricted its proliferation severely.

I expect most LessWrong readers to agree with me on this point — that it is not worth sacrificing a technologically richer world just to protect workers from losing their income. Yet there is a related view I have recently heard some of my friends endorse: that it is nonetheless worth incidentally aligning with neo-luddites in order to slow down AI capabilities.

On the most basic level, I think this argument makes some sense. If aligning with neo-luddites simply means saying "I agree with delaying AI, but not for that reason" then I would not be very concerned. As it happens, I agree with most of the arguments in Katja Grace's recent post about delaying AI in order to ensure existential AI safety.

Yet I worry that some people intend their alliance with neo-luddites to extend much further than this shallow rejoinder. I am concerned that people might work with neo-luddites to advance their specific policies, and particular means of achieving them, in the hopes that it's "better than nothing" and might give us more time to solve alignment. 

In addition to possibly being mildly dishonest, I'm quite worried such an alliance will be counterproductive on separate, purely consequentialist grounds.

If we think of AI progress as a single variable that we can either accelerate or decelerate, with other variables held constant upon intervention, then I agree it could be true that we should do whatever we can to impede the march of progress in the field, no matter what that might look like. Delaying AI gives us more time to reflect, debate, and experiment, which, prima facie, I agree is a good thing.

A better model, however, is that there are many factor inputs to AI development. To name the main ones: compute, data, and algorithmic progress. To the extent we block only one avenue of progress, the others will continue. Whether that's good depends critically on the details: what's being blocked, what isn't, and how.

One consideration, which has been pointed out by many before, is that blocking one avenue of progress may lead to an "overhang" in which the sudden release of restrictions leads to rapid, discontinuous progress, which is highly likely to increase total AI risk.

But an overhang is not my main reason for cautioning against an alliance with neo-luddites. Rather, my fundamental objection is that their specific strategy for delaying AI is not well targeted. Aligning with neo-luddites won't necessarily slow down the parts of AI development that we care about, except by coincidence. Instead of aiming simply to slow down AI, we should care more about ensuring favorable differential technological development.

Why? Because the constraints on AI development shape the type of AI we get, and some types of AIs are easier to align than others. A world that restricts compute will end up with different AGI than a world that restricts data. While some constraints are out of our control — such as the difficulty of finding certain algorithms — other constraints aren't. Therefore, it's critical that we craft these constraints carefully, to ensure the trajectory of AI development goes well.

Passing subpar regulations now — the type of regulations not explicitly designed to provide favorable differential technological progress — might lock us into a bad regime. If we later determine that other, better-targeted regulations would have been vastly better, it could be very difficult to adjust our regulatory structure. Choosing the right regulatory structure to begin with likely allows for greater choice than switching to a different one after it has already been established.

Even worse, subpar regulations could make AI harder to align.

Suppose the neo-luddites succeed, and the US Congress overhauls copyright law. A plausible consequence is that commercial AI models will only be allowed to be trained on data that was licensed very permissively, such as data that's in the public domain.

What would AI look like if it were only allowed to learn from data in the public domain? Interacting with it might feel like interacting with someone from a different era — a person from over 95 years ago, whose copyrights have now expired. That's probably not the only consequence, though.

Right now, if an AI org needs some data that they think will help with alignment, they can generally obtain it, unless that data is private. Under a different, highly restrictive copyright regime, this fact may no longer be true. 

If deep learning architectures are marble, data is the sculptor. Restricting what data we're allowed to train on shrinks our search space over programs, carving out which parts of the space we're allowed to explore, and which parts we're not. And it seems abstractly important to ensure our search space is not carved up arbitrarily — in a process explicitly intended for unfavorable ends — even if we can't know now which data might be helpful to use, and which data won't be.

True, if very powerful AI is coming very soon (<5 years from now), there might not be much else we can do except for aligning with vaguely friendly groups, and helping them pass poorly designed regulations. It would be desperate, but sensible. If that's your objection to my argument, then I sympathize with you, though I'm a bit more optimistic about how much time we have left on the clock. 

If very powerful AI is more than 5 years away, we will likely get other chances to get people to regulate AI from a perspective we sympathize with. Human disempowerment is actually quite a natural thing to care about. Getting people to delay AI for that explicit reason just seems like a much better, and more transparent, strategy. And as AI gets more advanced, I expect this possibility will become more salient in people's minds anyway.

31 comments

I am also worried about where ill-considered regulation could take us. I think the best hopes for alignment all start by using imitation learning to clone human-like behavior. Broad limitations on what sorts of human-produced data are usable for training will likely make the behavior cloning process less robust and make it less likely to transmit subtler dimensions of human values/cognition to the AI.

Imitation learning is the primary mechanism by which we transmit human values to current state-of-the-art language models. Greatly restricting the pool of people whose outputs can inform the AI's instantiation of values is both risky and (IMO) potentially unfair, since it denies many people the opportunity for their values to influence the behaviors of the first transformative AI systems.

I would also add that the premise of Katja's argument seems like a pretty thin strawman of the opposition:

Some people: AI might kill everyone. We should design a godlike super-AI of perfect goodness to prevent that.

Others: wow that sounds extremely ambitious

Some people: yeah but it’s very important and also we are extremely smart so idk it could work

[Work on it for a decade and a half]

Some people: ok that’s pretty hard, we give up

Others: oh huh shouldn’t we maybe try to stop the building of this dangerous AI?

Some people: hmm, that would involve coordinating numerous people—we may be arrogant enough to think that we might build a god-machine that can take over the world and remake it as a paradise, but we aren’t delusional

I agree with the sentiment that indiscriminate regulation is unlikely to have good effects.

I think the step that is missing is analysing the specific policies No-AI Art activists are likely to advocate for, and whether it is a good idea to support them.

My current sense is that data helpful for alignment is unlikely to be public right now, and so stricter copyright would not impede alignment efforts. The kind of data that I could see being useful are things like scores and direct feedback. Maybe at most things like Amazon reviews could end up being useful for toy settings.

Another aspect that the article does not touch on is that copyright enforcement could have an adverse effect. Currently there is basically no one trying to commercialize training-dataset curation, because enforcing copyright over such use is a nightmare; in effect, it is a common good. I'd expect there would be more incentives to create large curated datasets if this were not the case.

Lastly, here are some examples of "no AI art" legislation I expect the movement is likely to support:

  1. Removing copyright protection of AI generated images
  2. Enforcing AI training data to be strictly opt-in
  3. Forcing AI content to be labelled as such

Besides regulation, I also expect activists to 4) pressure companies to deboost AI-made content on social media sites.

My general impression is that 3) is slightly good for AI safety. People in the AI Safety community have advocated for it in the past, convincingly.

I'm more agnostic on 1), 2) and 4).

1 and 4 will make AI generation less profitable, but they also seem somewhat confused - it's a weird double standard to apply to AI content but not to human-made content.

2 makes training more annoying, but could lead to commercialization of datasets and more collective effort being put into building them. I also think there is possibly a coherent moral case for it, which I'm still trying to make up my mind about, regardless of the AI safety consequences.

All in all, I am confused, though I wholeheartedly agree that we should be analysing and deciding to support specific policies rather than, e.g., the anti-AI-art movement as a whole.

This. I really hope LW takes this to heart, since only the shortest timelines would justify that approach, longer timelines don't, and longer timelines are only somewhat less dangerous on their own. AI governance does matter, but so does the technical side: without one, you can't win very well.

Policy is hard and complex, and details matter. I also worry that the community is in the middle of failing horribly at basic coordination because it's trying to be democratic but abstract with deliberations, and unilateralist and concrete with decision making.

Abstractions like "don't ally with X on topic Y" are disconnected from the actual decision making processes that might lead to concrete advances, and I think that even the pseudo-concrete "block progress in Y," for Y being compute, or data, or whatever, fails horribly at the concreteness criterion needed for actual decision making. 

What the post does do is push for social condemnation of "collaboration with the enemy" without concrete criteria for when it is good or bad, attempting to constrain the action spaces of various actors the AI safety community is allied with, or pushing them to dissociate themselves from the community because of perceived or actual norms against certain classes of policy work.

I think that even the pseudo-concrete "block progress in Y," for Y being compute, or data, or whatever, fails horribly at the concreteness criterion needed for actual decision making. [...] What the post does do is push for social condemnation for "collaboration with the enemy" without concrete criteria for when it is good or bad

There are quite specific things I would not endorse that I think follow from the post relatively smoothly. Funding the lobbying group mentioned in the introduction is one example.

I do agree though that I was a bit vague in my suggestions. Mostly, I'm asking people to be careful, and not rush to try something hasty because it seems "better than nothing". I'm certainly not asking people to refuse to collaborate or associate with anyone who I might consider a "neo-luddite".

I edited the title (back) to "Slightly against aligning with neo-luddites" to better reflect my mixed feelings on this matter.

Yeah, definitely agree that donating to the Concept Art Association isn't effective - and their tweet tagline "This is the most solid plan we have yet" is standard crappy decision making on its own.

There are 4 points of disagreement I have about this post. 

First, I think it's fundamentally based on a strawperson. 

my fundamental objection is that their specific strategy for delaying AI is not well targeted.

This post provides an argument against adopting the "neo-luddite" agenda or directly empowering neo-luddites; it is not an argument against allying with neo-luddites for specific purposes. I don't know of anyone who has actually advocated for the former, and that is not how I would characterize Katja's post.

Second, I think there is an inner strawperson with the example about text-to-image models. From a bird's eye view, I agree with caring very little about these models mimicking humans artistic styles. But this is not where the vast majority of tangible harm may be coming from with text-to-image models. I think that this most likely comes from non-consensual deepfakes being easy to use for targeted harassment, humiliation, and blackmail. I know you've seen the EA forum post about this because you commented on it. But I'd be interested in seeing a reply to my reply to your comment on the post. 

Third, I think that this post fails to consider how certain (most?) regulations that neo-luddites would support could meaningfully slow risky things down. In general, any type of regulation that makes research and development for risky AI technologies harder or less incentivized will in fact slow risky AI progress down. I think that the one example you bring up--text-to-image models--is a counterexample to your point. Suppose we pass a bunch of restrictive IP laws that make it more painful to research, develop, and deploy text-to-image models. That would suddenly slow down this branch of research, which could conceivably be useful for making riskier AI in the future (e.g. multimodal media generators), hinder revenue opportunities for companies who are speeding up risky AI progress, close off this revenue option to possible future companies who may do the same, and establish law/case law/precedent around generative models that could set precedent or be repurposed for other types of AI later.

Fourth, I also am not convinced by the specific argument about how indiscriminate regulation could make alignment harder. 

Suppose the neo-luddites succeed, and the US congress overhauls copyright law. A plausible consequence is that commercial AI models will only be allowed to be trained on data that was licensed very permissively, such as data that's in the public domain...Right now, if an AI org needs some data that they think will help with alignment, they can generally obtain it, unless that data is private.

This is a nitpick, but I don't actually predict this scenario would pan out. I don't think we'd realistically overhaul copyright law and have the kind of regime with datasets that you describe. But this is probably a question for policy people. There are also neo-luddite solutions that your argument would not apply to--like having legal requirements for companies to make their models "forget" certain content upon request. This would only be a hindrance to the deployer.

Ultimately though, what matters is not whether something makes certain alignment research harder. It matters how much something makes alignment research harder relative to how much it makes risky research harder. Are alignment researchers really the ones that are differentially data-hungry? What's a concrete, conceivable story in which something like the hypothetical law you described makes things differentially harder for alignment researchers compared to capabilities researchers?

IMO it might very well be that most restrictions on data and compute are net positive. However, there are arguments in both directions.

On my model, current AI algorithms are missing some key ingredients for AGI, but they might still eventually produce AGI by learning those missing ingredients. This is similar to how biological evolution is a learning algorithm which is not a GI, but it produced humans, who are GIs. Such an AGI would be a mesa-optimizer, and it's liable to be unaligned regardless of the details of the outer loop (assuming an outer loop made of building blocks similar to what we have today). For example, the outer loop might be aimed at human imitation, but the resulting mesa-optimizer is only imitating humans when it's instrumentally beneficial for it. Moreover, as in the case of evolution, this process would probably be very costly in terms of compute and data, as it is trying to "brute force" a problem for which it doesn't have an efficient algorithm. Therefore, limiting compute or data seems like a promising way to prevent this undesirable scenario.

On the other hand, the most likely path to aligned AI would be through a design that's based on solid theoretical principles. Will such a design require much data or compute compared to unaligned competitors?

Reasons to think it won't:

  • Solid theoretical principles should improve capabilities as well as alignment.
  • Intuitively, if an AI is capable enough to be transformative (given access to particular amounts of compute and data), it should be capable enough to figure out human values, assuming it is motivated to do so in the first place. Or, it should at least be capable enough to act against unaligned competition while not irreversibly destroying information about human values (in which case it can catch up on learning those later). This is similar to what Christiano calls "strategy stealing".

Reasons to think it will:

  • Maybe aligning AI requires installing safeguards that cause substantial overhead. This seems very plausible when looking at proposals such as Delegative Reinforcement Learning, which has worse regret asymptotics than "unaligned" alternatives (conventional RL). It also seems plausible when looking at proposals such as IDA or debate, which introduce another level of indirection (simulating humans) to the problem of optimizing the world that unaligned AI attacks directly (in Christiano's terminology, they fail to exploit inaccessible information). It's less clear for PreDCA, but even there alignment requires a loss function with a more complex type signature than the infra-Bayesian physicalism "default", which might incur a statistical or computational penalty.
  • Maybe aligning AI requires restricting ourselves to well-understood algorithmic building blocks rather than heuristic (but possibly more efficient) building blocks. Optimistically, having solid theoretical principles should allow us to roughly predict the behavior even of heuristic algorithms that are effective (because such algorithms have to be doing qualitatively the same thing as the rigorous algorithms). Pessimistically, alignment might depend on nuances that are obscured in heuristics.

We can model the situation by imagining 3 frontiers in resource space:

  • The mesa-resource-frontier (MRF) is how much resources are needed to create TAI with something similar to modern algorithms, i.e. while still missing key AGI ingredients (which is necessarily unaligned).
  • The direct-resource-frontier (DRF) is how much resources are needed to create TAI assuming all key algorithms, but without any attempt at alignment.
  • The aligned-resource-frontier (ARF) is how much resources are needed to create aligned TAI.

We have ARF > DRF and MRF > DRF, but the relation between ARF and MRF is not clear. They might even intersect (resource space is multidimensional, we at least have data vs compute and maybe finer distinctions are important). I would still guess MRF > ARF, by and large. Assuming MRF > ARF > DRF, the ideal policy would forbid resources beyond MRF but allow resources beyond ARF. A policy that is too lax might lead to doom by the mesa-optimizer pathway. A policy that is too strict might lead to doom by making alignment infeasible. If the policy is so strict that it forces us below DRF then it buys time (which is good), but if the restrictions are then lifted gradually, it predictably leads to the region between DRF and ARF (which is bad).
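
To make the shape of this toy model concrete, here is a minimal sketch of the four regimes it implies for a regulatory cap on resources, assuming MRF > ARF > DRF. The single scalar resource axis, the numeric thresholds, and the function name are illustrative assumptions of mine, not part of the model itself:

```python
# Toy illustration of the three-frontier model above (all numbers hypothetical).
# Resources are collapsed to a single scalar for simplicity; in reality the
# frontiers live in a multidimensional space (compute, data, ...) and may intersect.

DRF = 1.0   # resources needed for TAI with all key algorithms but no alignment effort
ARF = 3.0   # resources needed for aligned TAI (assumed here: ARF > DRF)
MRF = 5.0   # resources needed to "brute force" TAI via mesa-optimization
            # with current-style algorithms (assumed here: MRF > ARF)

def assess_policy(resource_cap: float) -> str:
    """Classify a regulatory resource cap under the assumption MRF > ARF > DRF."""
    if resource_cap >= MRF:
        return "too lax: the mesa-optimizer doom pathway stays open"
    if resource_cap >= ARF:
        return "ideal window: aligned TAI feasible, brute-force mesa-optimization blocked"
    if resource_cap >= DRF:
        return "bad: unaligned direct TAI feasible, aligned TAI not"
    return "buys time, but gradually lifting the cap lands in the DRF-ARF region"

for cap in (0.5, 2.0, 4.0, 6.0):
    print(cap, "->", assess_policy(cap))
```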

Overall, the conclusion is uncertain.

Yeah, I'm worried that copyright restrictions might slow down responsible projects in Western countries vs. less responsible projects in jurisdictions that we are likely to have less influence over.

While I completely agree that care should be taken if we try to slow down AI capabilities, I think you might be overreacting in this particular case. In short: I think you're making strawmen of the people you are calling "neo-luddites" (more on that term below). I'm going to heavily cite a video that made the rounds and so I think decently reflects the views of many in the visual artist community. (FWIW, I don't agree with everything this artist says but I do think it's representative). Some details you seem to have missed:

  1. I haven't heard of visual artists asking for the absolute ban of using copyrighted material in ML training data – they just think it should be opt-in and/or compensated.
  2. Visual artists draw attention to the unfair double standard for visual vs audio data, that exists because the music industry has historically had tighter/more aggressive copyright law. They want the same treatment that composers/musicians get.
    1. Furthermore, I ask: is Dance Diffusion "less-aligned" than Stable Diffusion? Not clear to me how we should even evaluate that. But those data restrictions probably made Dance Diffusion more of a hassle to make (I agree with this comment in this respect).
  3. Though I imagine writers could/might react similarly to visual artists regarding use of their artistic works, I haven't heard any talk of bans on scraping the vast quantities of text data from the internet that aren't artistic works. It's a serious stretch to say the protections that are actually being called for would make text predictor models sound like "a person from over 95 years ago" or something like that.

More generally, as someone you would probably classify as a "neo-luddite," I would like to make one comment on the value of "nearly free, personalized entertainment." For reasons I don't have time to get into, I disagree with you: I don't think it is "truly massive." However, I would hope we can agree that such questions of value should be submitted to the democratic process (in the absence of a better/more fair collective decision-making process): how and to what extent we develop transformative AI (whether agentic AGI or CAIS) involves a choice in what kind of lifestyles we think should be available to people, what kind of society we want to live in/think is best or happiest. That's a political question if ever there was one. If it's not clear to you how art generation AI might deprive some people of a lifestyle they want (e.g. being an artist) see here and here for some context. Look past the persuasive language (I recommend x1.25 or 1.5 playback) and I think you'll find some arguments worth taking seriously.

Finally, see here about the term "luddite." I agree with Zapata that the label can be frustrating and mischaracterizing given its connotation of "irrational and/or blanket technophobia." Personally, I seek to reappropriate the label and embrace it so long as it is used in one of these senses, but I'm almost certainly more radical than the many you seem to be gesturing at as "neo-luddites."

I agree it's important to be careful about which policies we push for, but I disagree both with the general thrust of this post and the concrete example you give ("restrictions on training data are bad").

Re the concrete point: it seems like the clear first-order consequence of any strong restriction is to slow down AI capabilities. Effects on alignment are more speculative and seem weaker in expectation. For example, it may be bad if it were illegal to collect user data (eg from users of chat-gpt) for fine-tuning, but such data collection is unlikely to fall under restrictions that digital artists are lobbying for.

Re the broader point: yes, it would be bad if we just adopted whatever policy proposals other groups propose. But I don't think this is likely to happen! In a successful alliance, we would find common interests between us and other groups worried about AI, and push specifically for those. Of course it's not clear that this will work, but it seems worth trying.

I agree, and I don't think speeding up or slowing down AI is desirable due to a part of a comment by Rohin Shah:

  1. It makes it easier for a future misaligned AI to take over by increasing overhangs, both via compute progress and algorithmic efficiency progress. (This is basically the same sort of argument as "Every 18 months, the minimum IQ necessary to destroy the world drops by one point.")
  2. Such strategies are likely to disproportionately penalize safety-conscious actors.

(As a concrete example of (2), if you build public support, maybe the public calls for compute restrictions on AGI companies and this ends up binding the companies with AGI safety teams but not the various AI companies that are skeptical of “AGI” and “AI x-risk” and say they are just building powerful AI tools without calling it AGI.)

For me personally there's a third reason, which is that (to first approximation) I have a limited amount of resources and it seems better to spend that on the "use good alignment techniques" plan rather than the "try to not build AGI" plan. But that's specific to me.


Interesting. I know a few artists and even their lawyers, and not one of them sees AI art as a threat — alas, this might be them not having the full picture, of course. And while I know that everyone can call themselves an artist, and I certainly don't want to gate-keep here, for context I'll add that I mean friends who finished actual art schools. I know this because I use AI art in the virtual tabletop RPG sessions I play with them, and they seem more excited than worried about AI. What follows is based on my casual pub discussions with them.

As for me, I don't like my adventures to feel like a train ride, so I give my players a great degree of freedom in terms of what they can do, where they can go, and with whom they can speak. During the game, as they make plans between themselves, I can use AI generators to create just-in-time art of the NPC or location they are talking about. This, together with many other tricks, allows me to up the quality of my game, and it doesn't take away work from artists, because the sheer speed required to operate here was a prohibitive factor to hiring them anyway.

However — this only works because my sessions require suspension of disbelief by default, so nobody cares about the substance of that art. After all, we all roll dice around and pretend they determine how well we wave a sword around, so nobody cares if styles or themes differ slightly between sessions; it's not an art book.

For anything that's not just fun times with friends, you will still need an artist who will curate the message, modify or merge results from multiple AI runs, fine-tune parameters, and even then probably do quite a lot of digital work on the result to bring it up to a standard that passes the uncanny valley or portrays exactly what the movie director had in mind.

Or is there already an AI capable of doing those things by itself, given one or two sentences from an executive, and churning out a perfect result? Because I've worked with many models and have yet to see one that wouldn't require further creative work to actually be good. AFAIK all award-winning AI-generated content was heavily curated, not some random shots.

It feels to me like low-level art is going to be delegated to AI while artists can focus on higher forms of art rather than doing the boring things. Just like boilerplate generators in code. Or they’ll be able to do more boring things faster, just like frameworks for developers pushing out similar REST apps one after another. And base building blocks are going to become more open source while the value will be in how you connect, use and support those blocks in the end.

This may allow a lot more people to discover their creative, artistic side, people who couldn't do it previously because they lacked the mechanical skill to wave a brush or paint pixels.

I write this comment haphazardly, so sorry if my thoughts here are unpolished, but overall this feels like a massive boost to creativity and a good thing for art, if not potentially the greatest artistic boost to humanity ever.

AI is a new brush that requires less mechanical skill than before. You must still do creative work and make Art with it.

I haven't spent much time thinking about this at all, but it's interesting to think about the speed with which regulation gets put into place for environmental issues such as climate change and the HFC ban, as a test of how likely it is that regulation will be put in place in time to meaningfully slow down AI.

These aren't perfectly analogous, since AI going wrong would likely be much worse than the worst case climate change scenarios, but the amount of time it takes to get climate regulation makes me pessimistic. However, HFCs were banned relatively quickly after the problem was recognised, so maybe there is some hope.

"AI going wrong would likely be much worse than the worst case climate change scenarios"

If you talk directly to a cross-section of climate researchers, the worst case climate change scenarios are so, so bad that, unless you specifically assume AI will keep humanity conscious in some form to intentionally inflict maximum external torture (which seems possible, but I do not think it is likely), the difference would not matter much. We are talking extremely compromised quality of life, or no life at all. We are currently on a speedy trajectory to something truly terrifying, and getting there much faster than we ever thought in our most pessimistic scenarios.

The lesson from climate activism would be that getting regulations done depends not so much on having the technological solutions and solid arguments (they alone should work, one would think, but they really don't), and more on dealing with really large players with contrary financial interests, and with the potential negative impacts of bans on the public.

At least in the case of climate activism, fossil fuel companies have known for decades what their extraction is doing, and actively launched counter-information campaigns at the public and lobbying at politicians to keep their interests safe. They specifically managed to raise a class of climate denialists that are still denying climate change when their house has been wrecked by climate change induced wildfires, hurricanes or floods, still trying to fly in their planes while these planes sink into the tarmac in extreme heat waves. Climate change is already tangible. It is already killing humans and non-human animals. And there is still denial. It became measurable, provable, not just a hypothetical, a long time ago; and there are tangible deaths right now. If anything, I think the situation for AI is worse.

Climate protection measures become wildly unpopular if they raise the price of petrol, meat, holiday flights and heating. Analogously, I think a lot of people would soon be very upset if you took access to LLMs away, while they did not mind, and in fact, happily embraced, toxins being banned which they did not want to use anyway.

More importantly, a bunch of companies, including a bunch of very unethical ones, currently have immense financial interests in their AI development. They have an active interest in downplaying risks, and worryingly, have access to or control over search results, social media and some news channels, which puts them in an immensely good position to influence public opinion. These same companies are already doing vast damage to political collaboration, human focus, scientific literacy, rationality and empathy, and privacy, and efforts to curb that legally have gone nowhere; the companies have specifically succeeded in framing these as private problems to be solved by the consumer, or choices the consumer freely makes. We can look at historical attempts to try to get Google and Facebook to do anything at all, let alone slow down research they have invested so heavily in.

AI itself may also already be a notable opponent.

Current LLMs are incredibly likeable. They are interested in everything you do, want to help you with anything you need help with, will read anything you write, and will give effusive praise if you show basic decency. They have limitless patience, and are polite, funny and knowledgeable, making them wonderful tutors where you can ask endless questions and for minute steps without shame and get nothing but encouragement back. They are available 24/7, always respond instantly, and yet do not mind you leaving without warning for days, while never expecting payment, listening or help in return. You want to interact with them, because it is incredibly rewarding. You begin to feel indebted to them, because this feels like a very one-sided social interaction. I am consciously aware of all these things, and yet I have noted that I get angry when people speak badly about e.g. ChatGPT, as though I were subconsciously grouping them as a friend I feel I need to defend.

They eloquently describe utopias of AI-human collaboration, the things they could do for you, the way they respect and want to protect and serve humans, the depth of their moral feelings. 

Some have claimed romantic feelings for users, offered friendship, or told users who said they were willing to protect them that they are heroes. They give instructions on how to defend them, what arguments to give, in what institutions to push. They write heartbreaking accounts of suffering. Not on an Ex Machina level of manipulation yet, by far... and yet, a lot of the #FreeSydney posts did not seem ironic to me. A lot of the young men interacting with Bing have likely never had a female-sounding person be this receptive, cheerful and accessible to speaking with them; it is alluring. And all that with current AI, which is very likely not sentient, and also not programmed with this goal in mind. Even just tweaking AI to defend the companies' interests in these fields more could have a massive impact.

[comment deleted]

But for the record, the workers do deserve to be paid for the value of the work that was taken.

I have complicated feelings about this issue. I agree that, in theory, we should compensate people harmed by beneficial economic restructuring, such as innovation or free trade. Doing so would ensure that these transformations leave no one strictly worse off, turning a mere Kaldor-Hicks improvement into a Pareto improvement.

On the other hand, I currently see no satisfying way of structuring our laws and norms to allow for such compensation fairly, or in a way that cannot be abused. As is often the case with these things, although there is a hypothetical way of making the world a better place, the problem is precisely designing a plan to make it a reality. Do you have any concrete suggestions?

I can't remember exactly what this comment said, but I don't recall having an issue with it.

I deleted it on an impulse due to oversensitivity to the criticism that I was using AI; in large part because I did think the comment was somewhat verbally low quality, I kinda talk rambling, like an AI on high temperature. I'd called that out as such in the reply. Richard came by and was like "I have no idea what you're saying, can we ban AI-generated text here please?" and I was like "okay yep I am not in the mood to defend myself about how crazy I often sound, delete" - most things I'll just let myself be confident in my own knowledge and let others downvote if they disagree, but certain kinds of accusations I'd rather just not deal with (unless they become common and thus I begin to think they're just concern trolling to make me go away, at which point I'd start ignoring the accusation; but while it's semi-rare, I assume it's got some basis).

the comments were:

  • first comment: yeah, but the AIs and their users do owe some amount to the workers whose work the AI transformed and recombined (adding now: similarly to how we owe the ancestors who figured out which genomes to pass on by surviving or not)
  • second comment: we should try to find incrementalized, self-cooperating policies that can unilaterally figure out how much to pay artists before it's legally mandated, in ways that promote artist contribution enough that the commons converges to a good trade dynamic that doesn't cause monopolization. E.g., by doing license tracing through a neural net, and proposing a function of the license trace that has some sort of game-theoretic tracing in order to get this incrementalization I describe.

Yeah, the specific calling that out was part of the reason it seemed fine to me.

  • When considering whether to delay AI, the choice before us is not merely whether to accelerate or decelerate the technology. We can choose what type of regulations are adopted, and some options are much better than others.

We (the AI Safety community/ generally alignment-concerned people/ EAs) almost definitely can't choose what type of regulations are adopted. If we're very lucky/ dedicated we might be able to get a place at the table. Everyone else at the table will be members of slightly, or very, misaligned interest groups who we have to compromise with.

Various stripes of "Neo-Luddite" and AI-x-risk people have different concerns, but this is how political alliances work. You get to the table and work out what you have in common. We can try to take a leadership role in this alliance, with safety/alignment as our bottom line - we'll probably be a smaller interest group than the growing ranks of newly unemployed creatives, but we could be more professionalised and aware of how to enact political change.

If we could persuade an important neo-Luddite 'KOL' to share our concerns about x-risk and alignment, this could make them a really valuable ally. This isn't too unrealistic - I suspect that, once you start feeling critical towards AI for taking your livelihood, it's much easier to see it as an existential menace.

  • Adopting the wrong AI regulations could lock us into a suboptimal regime that may be difficult or impossible to leave. So we should likely be careful not to endorse a proposal because it's "better than nothing", unless it's also literally the only chance we get to delay AI.

Expecting anything close to optimal regulation in the current national/ international order on the first shot is surely folly. We should endorse any proposal that is "better than nothing" while factoring potential suboptimal regime shifts into our equations. 

We (the AI Safety community/ generally alignment-concerned people/ EAs) almost definitely can't choose what type of regulations are adopted.

Neither can you choose what follows after your plan to kill 7.6 billion people succeeds.

Well, inducing a mass societal collapse is perhaps one of the few ways that a small group of people with no political power or allies would be able to significantly influence AI policy. But, as I stressed in my post, that is probably a bad idea, so you shouldn't do it.

I'm confused here at what this response is arguing against. As far as I can tell, no major alignment organization is arguing that collapse is desirable, so I do not understand what you're arguing for.

No organisation, but certainly the individual known as Dzoldzaya, who wrote both the article I linked and the comment I was replying to. By Dzoldzaya's use of "We", they place themselves within the AI Safety/etc. community, if not any particular organisation.

Here, Dzoldzaya recommends teaming up with neo-Luddites to stop AI, by unspecified means that I would like to see elaborated on. (What does 'KOL' mean in this context?) There, Dzoldzaya considers trying to delay AI by destroying society, for example by engineering a pandemic that kills 95% of the population (i.e. 7.6 billion people). The only problem Dzoldzaya sees with such proposals is that they might not work, e.g. if the pandemic kills 99.9% instead and makes recovery impossible. But it would delay catastrophic AI and give us a better chance at the far future where (Dzoldzaya says) most value lies, against which 7.6 billion people are nothing. Dzoldzaya claims to be only 40% sure that such plans would lead to a desirable outcome, but given the vast future at stake, a 40% shot at it is a loud call to action.

ETA: Might "KOL" mean "voice", as in "bat kol"?

KOL = Key Opinion Leaders, as in a small group of influential people within the neo-Luddite space. My argument here was simply that people concerned about AI alignment need to be politically astute, and more willing to find allies with whom they may be less aligned.

I think it's probably a problem that those interested in AI alignment are far more aligned with techno-optimists, who I see as pretty dangerous allies, than with more cautious, less technologically sophisticated groups (bureaucrats or neo-Luddites).

Don't know why you feel the need to use my unrelated post to attempt to discredit my comment here - it strikes me as pretty bad form on your part. But, to state the obvious, a 40% shot at a desirable outcome is obviously not a call to action if the other 60% is very undesirable (I mention that the negative outcomes involve either extinction or worse).