cata

Programmer, rationalist, chess player, father, altruist.

Comments

I specifically think it's well within the human norm, i.e. that most of the things I read are written by a person who has done worse things, or who would do worse things given equal power. I have done worse things, in my opinion. There's just not a blog post about them right now.


Speaking for myself, I don't agree with any of it. From what I have read, I don't agree that the author's personal issues demonstrate "some amount of poison in them" outside the human norm, or in some way that would make me automatically skeptical of anything they said "entwined with soulcrafting." And I certainly don't agree that a reader "should be aware" of nonspecific problems that an author has which aren't even clearly relevant to something they wrote. I would give the exact opposite advice -- to try to focus on the ideas first before involving preconceptions about the author's biases.

If you wanted other people to consider this remark, you shouldn't have deleted whatever discussion you had that prompted it, so that we could go look.

Yes, I basically am not considering that, because I am not aware of the arguments for why that's a likely kind of risk (vs. the risk of simple annihilation, which I understand the basic arguments for). If you think the future will be super miserable rather than simply nonexistent, then I understand why you might not have a kid.

I don't agree with that. I'm the parent of a 4-year-old, and I take AI risk seriously. I think childhood is great in and of itself, and if the fate of my kid is to live until 20 and then experience some unthinkable AI apocalypse, that was 20 more good years of life than he would have had if I hadn't done anything. If that's the deal of life, it's a pretty good deal, and I don't think there's any reason to be particularly anguished about it on your kid's behalf.

Thanks for the post. Your intuition as someone who has observed lots of similar arguments and the people involved in them seems like it should be worth something.

Personally as a non-involved party following this drama the thing I updated the most about so far was the emotional harm apparently done by Ben's original post. Kat's descriptions of how stressed out it made her were very striking and unexpected to me. Your post corroborates that it's common to take extreme emotional damage from accusations like this.

I am sure that LW has other people like me who are natural psychological outliers on "low emotional affect" or maybe "low agreeableness" who wouldn't necessarily intuit that it would be a super big deal for someone to publish a big public post accusing you of being an asshole. Now I understand that it's a bigger deal than I thought, and I am more open to norms that are more subtle than "honestly write whatever you think."

I am skeptical of the gender angle, but I think it's being underdiscussed that, based on the balance of evidence so far, the person with the biggest, most effective machine gun is $5,000 richer and still anonymous, whereas the people hit by their bullets are busy pointing fingers at each other. Alice's alleged actions trashing Nonlinear (and 20-some former people???) seem IMO much worse than anything Lightcone or Nonlinear is even being accused of.

(Not that this is a totally foregone conclusion - I noticed that Nonlinear didn't provide any direct evidence on the claim that Alice was a known serial liar outside of this saga.)

I just had a surprisingly annoying version of a very mundane bug. I was working in Javascript, and I had some code that read some parameters from the URL and then did a bunch of math. I had translated the math directly from a different codebase, so I was absolutely sure it should be right; yet I was getting the wrong answer. I console.logged the inputs and intermediate values and was totally flummoxed, because all the inputs looked clearly right, until at some point a totally nonsensical value was produced by one equation.

Of course, the inputs and intermediate values were strings that I forgot to parse into Javascript numbers, so everything looked perfect until finally it plugged them into my equation, which silently did string operations instead of numeric operations, producing an apparently absurd result. But it took a good 20 minutes of me plus my coworker staring at these 20 lines of code and the log outputs until I figured it out.

Answer by cata, Dec 10, 2023

I think the most basic and true explanation is that the companies we are thinking about started out with unusually high-quality products, which is why they came to our notice. Over time, the conditions that enabled them to do especially good work change and their ability tends to regress to the mean. So then the product gets worse.

Related ideas:

  • High-quality product design is not very legible to companies and it's hard for them to select for it in their hiring or incentive structure.

  • Companies want to grow for economy-of-scale reasons, but the larger a company is the more challenging it is to organize it to do good work.

  • Of course, not growing at all seems ridiculous, particularly for companies whose investors all invested on the premise of dramatic growth.

  • In many cases, a company probably originally designed a product that they themselves liked, and they happened to be representative enough of a potential market that they became successful and their product was well-liked. The next step is to try to design for a mass market that is typically unlike themselves (since companies are usually made up of a fairly homogeneous employee base). That's much harder, and they may guess wrong about what that mass market will like.
