I second GeneSmith’s suggestion to ask readers for feedback. Be aware that this is something of an imposition and that you’re asking people to spend time and energy critiquing what is currently not great writing. If possible, offer to trade - find some other people with similar problems and offer to critique their writing. For fiction, you can do this on CritiqueCircle but I don’t know of an organised equivalent for non-fiction.

The other thing you can do is to iterate. When you write something, say to yourself that you are writing the first draft of X. Then go away and do something else, come back to your writing later, and ask how you can edit it to make it better. You already described problems like using too many long sentences. So edit your work to remove them. If possible, aim to edit the day after writing - it helps if you can sleep on it. If you have time constraints, at least go away and get a cup of coffee or something in order to separate writing time from editing time.

First, I just wanted to say that this is an important question and thank you for getting people to produce concrete suggestions.

Disclaimer: I'm not a computer scientist, so I'm approaching the question from the point of view of an economist. As such, I found it easier to come up with examples of bad regulation than of good regulation.

Some possible categories of bad regulation:

1. It misses the point

  • Example: a regulation only focused on making sure that the AI can’t be made to say racist things, without doing anything to address extinction risk.
  • Example: a regulation that requires AI developers to employ ethics officers, risk managers, or similar, without any requirement that they be effective. (Something similar to cyber-security today: the demand is that companies devote legible resources to addressing the problem, so they can’t be sued for negligence. The demand is not that the resources are used effectively to reduce societal risk.)

NB: I am implicitly assuming that a government which misses the point will pass bad regulation and then stop because they feel that they have now addressed ‘AI safety’. That is, passing bad legislation makes it less likely that good legislation is passed.

2. It creates bad incentives

  • Example: from 2027, the government will cap maximum permissible compute for training at whatever the maximum used by that date was. Companies are incentivised to race to do the biggest training runs they can before that date.
  • Example: restrictions or taxes on compute apply to all AI companies unless they’re working on military or national security projects. Companies are incentivised to classify as much of their research as possible as military, meaning the research still goes ahead, but it’s now much harder for independent researchers to assess safety, because now it’s a military system with a security classification.
  • Example: the regulation makes AI developers liable for harms caused by AI but makes an exception for open-source projects. There is now a financial incentive to make models open-source.

3. It is intentionally accelerationist, without addressing safety

  • A government that wants to encourage a Silicon Valley-type cluster in its territory offers tax breaks for AI research over and above existing tax credits. Result: they are now paying people to go into capabilities research, so there is a lot more capabilities research.
  • Industrial policy, or supply-chain friendshoring, that results in a lot of new semiconductor fabs being built (this is an explicit aim of America’s CHIPS Act). The result is a global glut of chip capacity, and training AI ends up a lot cheaper than in a free-market situation.

Although clown attacks may seem mundane on their own, they are a case study proving that powerful human thought steering technologies have probably already been invented, deployed, and tested at scale by AI companies, and are reasonably likely to end up being weaponized against the entire AI safety community at some point in the next 10 years.

I agree that clown attacks seem to be possible. I accept a reasonably high probability (c. 70%) that someone has already done this deliberately - the wilful denigration of the Covid lab-leak theory seems like a good candidate, as you describe. But I don't see evidence that deliberate clown attacks are widespread. And specifically, I don't see evidence that these are being used by AI companies. (I suspect that most current uses are by governments.)

I think it's fair to warn against the risk that clown attacks might be used against the AI-not-kill-everyone community, and that this might have already happened, but you need a lot more evidence before asserting that it has already happened. If anything, the opposite has occurred, as the CEOs of all major AI companies signed onto the declaration stating that AGI is a potential existential risk. I don't have quantitative proof, but from reading a wide range of media across the last couple of years, I get the impression that the media and general public are increasingly persuaded that AGI is a real risk, and are mostly no longer deriding the AGI-concerned as being low-status crazy sci-fi people.

Do you still think your communication was better than the people who thought the line was being towed, and if so then what's your evidence for that?

We are way off topic, but I am actually going to say yes. If someone understands that English uses standing-on-the-right-side-of-a-line as a standard image for obeying rules, then they are also going to understand variants of the same idea. For example, "crossing a line" means breaking rules/norms to a degree that will not be tolerated, as does "stepping out of line". A person who doesn't grok that these are all referring to the same basic metaphor of do-not-cross-line=rule is either not going to understand the other expressions or is going to have to rote-learn them all separately. (And even after rote-learning, they will get confused by less common variants, like "setting foot over the line".) And a person who writes "tow the line" rather than "toe the line" has obviously not grokked the basic metaphor.

To recap:

  1. original poster johnswentworth wrote a piece about people LARPing their jobs rather than attempting to build deeper understanding or models-with-gears.
  2. aysja added some discussion about people failing to notice that words have referents, as a further psychological exploration of the LARPing idea, and added tow/toe the line as a related phenomenon. They say "LARPing jobs is a bit eerie to me, too, in a similar way. It's like people are towing the line instead of toeing it. Like they're modeling what they're "supposed" to be doing, or something, rather than doing it for reasons."
  3. You asked for further clarification.
  4. I tried using null pointers as an alternative metaphor to get at the same concept.

No one is debating the question of whether learning the etymology of words is important, and I'm not sure how you got hung up on that idea. And toe/tow the line is just an example of the problem of people failing to load the intended image/concept, while LARPing (and believing?) that they are in fact communicating in the same way as people who do.

Does that help?

Not sure I understand what you're saying with the "toe the line" thing.

The initial metaphor was ‘toe the line’, meaning to obey the rules, often reluctantly. Imagine a do-not-cross line drawn on the ground and a person coming so close to the line that their toe touches it, but without actually crossing it. Substituting “tow the line”, which has a completely different literal meaning, shows that the person has failed to comprehend the metaphor and has simply adopted the view that this random phrase has this specific meaning.

I don’t think aysja adopts the view that it’s terrible to put idiomatic phrases whole into your dictionary. But a person who replaces a meaningful specific metaphor with a similar but meaningless one is in some sense making less meaningful communication. (Note that this also holds if the person has correctly retained the phrase as ‘toe the line’ but has failed to comprehend the metaphor.)

aysja calls this failing to notice that words have referents, and I think that gets at the nature of the problem. These words are meant to point at a specific image, and in some people’s minds they point at a null instead. It’s not a big deal in this specific example, but a) some people seem to have an awful lot of null pointers and b) sometimes the words pointing at a null are actually important. For example, think of a scientist who can parrot that results should be ‘statistically significant’ but literally doesn’t understand the difference between doing one experiment and reporting the significance of the results, and doing 20 experiments and only reporting the one ‘significant’ result.
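
To make the 20-experiments point concrete, here is a rough sketch (my own illustration, not something from the original thread) of why reporting only the one ‘significant’ result out of 20 is misleading, assuming the usual 5% significance threshold:

```python
# Hypothetical illustration: in a world with no real effect, each experiment
# still has a 5% chance of looking 'significant' by luck alone.
import random

ALPHA = 0.05          # per-experiment false-positive rate
N_EXPERIMENTS = 20    # experiments run before picking the 'best' one
N_TRIALS = 100_000    # simulated research programmes

hits = sum(
    any(random.random() < ALPHA for _ in range(N_EXPERIMENTS))
    for _ in range(N_TRIALS)
)

print(f"Analytic:  1 - 0.95**20 = {1 - 0.95**20:.2f}")  # about 0.64
print(f"Simulated: {hits / N_TRIALS:.2f}")              # also about 0.64
# So 'one significant result out of 20' shows up roughly two-thirds of the
# time even when there is nothing to find.
```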

NB: the link to the original blog on the Copenhagen Interpretation of Ethics is now broken and redirects to a shopping page.

Yes. But I think most of us would agree that coercively breeding or sterilising people is a lot worse than doing the same to animals. The point here is that intelligent parrots could be people who get treated like animals, because they would have the legal status of animals, which is obviously a very bad thing.

And if the breeding program resulted in gradual increases in intelligence with each generation, there would be no bright line where the parrots at t-minus-1 were still animals but the parrots at time t were obviously people. There would be no fire alarm to make the researchers switch over to treating them like people, getting informed consent etc. Human nature being what it is, I would expect the typical research project staff to keep rationalising why they could keep treating the parrots as animals long after the parrots had achieved sapience.

(There is separate non-trivial debate about what sapience is and where that line should be drawn and how you could tell if a given creature was sapient or not, but I’m not going down that rabbit hole right now.)

You ask if we could breed intelligent parrots without any explanation of why we would want to. In short, ‘because we can’ doesn’t mean we should. I’m not 100% against the idea, but anyone trying this seriously needs to think about questions like:

  • At what point do the parrots get legal rights? If a private effort succeeds in breeding intelligent parrots without government buy-in, it will in effect be creating sapient people who will be legally non-persons and property. There are a lot of ways for that to go wrong.
  • ETA: presumably the researchers will want to keep controlling the parrots’ reproduction, even as the parrots become more intelligent. What happens if the parrots have their own ideas about who to breed with? Or the rejected parrots don’t want to be sterilised? Will the parrot-breeders end up repeating some of the atrocities of the 20th century eugenics movement because they act like they’re breeding animals even once they are breeding people?
  • Is there a halfway state where they have bred semi-intelligent parrots that are smarter than normal parrots but not as smart as people? (This could also be the result of a failed project.) What happens to them? At what stage does an animal become intelligent enough that keeping it as a pet is wrong? What consequences will there be if you just release the semi-intelligent parrots into the wild?
  • What protections are there if the parrot-breeding project runs out of funds or otherwise fails? Will it end up doing the equivalent of releasing a bunch of small children or mentally handicapped people into the wild where they’re ill-equipped to survive, because young intelligent parrots don’t get the legal protections granted to human children? 

If there was a really compelling reason to breed intelligent parrots, then these objections could be overcome. But I don’t get any sense from you of what that compelling reason is. “Somebody thinks it sounds cool” is a good reason to do a lot of things, but not when the consequences involve something as ethically complex as creating a sapient species.

Metaculus lets you write private questions. Once you have an account, it’s as simple as selecting ‘write a question’ from the menu bar and then setting the question to private rather than public via a drop-down in the question settings. You can resolve your own questions, i.e. mark them as yes/no or whatever, and then it’s easy to use Metaculus’ tools for examining your track record, including your Brier score.
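
If you want to check the arithmetic outside Metaculus, the Brier score for yes/no questions is just the mean squared error between your stated probabilities and the outcomes. A minimal sketch (my own illustration, not Metaculus code):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and binary outcomes.

    forecasts: probabilities you assigned to 'yes' (0.0 to 1.0)
    outcomes:  1 if the question resolved yes, 0 if it resolved no
    Lower is better; always answering 0.5 scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Example: three resolved questions, forecast at 80%, 60%, and 10%.
print(brier_score([0.8, 0.6, 0.1], [1, 1, 0]))  # about 0.07
```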
