Yes, obviously, but they use different strategies. Male sociopaths rarely paint themselves as helpless victims because it is not an effective tactic for men. One does notice that, while the LW community is mostly male, ~every successful callout post against a LW community organization has been built on claims of harm to vulnerable female victims.

When you say "it's clearly the right call to wait a week or two until we have another round of counter-evidence before jumping to conclusions", is this a deliberate or accidental echo of the similar request from Nonlinear which you denied?

Like, on the deliberate way of reading this, the subtext is "While Lightcone did not wait a week or two for counter-evidence and still defends this decision, you should have waited in your case because that's the standard you describe in your article." Which would be a hell of a thing to say without explicitly acknowledging that you're asking for different standards. (And it would also misunderstand TracingWoodgrains's actual standard, which is about the algorithm used and not how much clock time has elapsed, as described in their reply to your parent comment.) Or on the accidental way of reading this, the subtext is "I was oblivious to how being publicly accused of wrongdoing feels from the inside, and I request grace now that the shoe is on the other foot." Either of these seems kind of incredible, but I can't easily think of another plausible way of reading this. I suppose your paragraph on wanting to take the time to make a comprehensive response (which I agree with) updates my guess towards "oblivious".

On Pace's original post I wrote:

"think about how bad you expect the information would be if I selected for the worst, credible info I could share"

Alright. Knowing nothing about Nonlinear or about Ben, but going on the rationalist milieu: for an org that's weird but basically fine, I'd expect to see stuff like ex-employees alleging a nebulously "abusive" environment based on their own legitimately bad experiences and painting a gestalt picture that suggests unpleasant practices, but without any smoking-gun allegations of really egregious concrete behavior (as distinct from very bad effects on the accusers); allegations of nepotism based on social connections between the org's leadership and their funders or staff; accusations of shoddy or motivated research which require hours to evaluate; sources staying anonymous for fear of "retaliation" but without being able to point to any legible instances of retaliation or concrete threats to justify this; and/or thirdhand reports of lying or misdirection around complicated social situations.

[reads post]

This sure has a lot more allegations of very specific and egregious behavior than that, yeah.

Having looked at the evidence and documentation which Nonlinear provides, it seems like the smoking-gun allegations of really egregious concrete behavior are probably just false. I have edited my earlier comment accordingly.

This is a bit of a tangent, but is there a biological meaning to the term "longevity drug"? For a layman like me, my first guess is that it'd mean something like "A drug that mitigates the effects of aging and makes you live longer even if you don't actively have a disease to treat." But then I'd imagine that e.g. statins would be a "longevity drug" for middle-aged men with a strong family history of heart disease, in that they make the relevant population less susceptible to an aging-related disease and thereby increase longevity. Yet the posts talk about the prospect of creating the "first longevity drug", so clearly the term is being used in a way that doesn't include statins. Is there a specific definition I'm ignorant of, or is it more of a loose marketing term for a particular subculture of researchers and funders, or what?

We can certainly debate whether liability ought to work this way. Personally I disagree, for reasons others have laid out here, but it's fun to think through.

Still, it's worth saying explicitly that, as regards the motivating problem of AI governance, this is not currently how liability works. Any liability-based strategy for AI regulation must either work within the existing liability framework or (much less practically) overhaul the liability framework as its first step.

Cars are net positive, and also cause lots of harm. Car companies are sometimes held liable for the harm caused by cars, e.g. if they fail to conform to legal safety standards or if they sell cars with defects. More frequently the liability falls on e.g. a negligent driver or is just ascribed to accident. The solution is not just "car companies should pay out for every harm that involves a car", partly because the car companies also don't capture all or even most of the benefits of cars, but mostly because that's an absurd overreach which ignores people's agency in using the products they purchase. Making cars (or ladders or knives or printing presses or...) "robust to misuse", as you put it, is not the manufacturer's job.

Liability for current AI systems could be a good idea, but it'd be much less sweeping than what you're talking about here, and would depend a lot on setting safety standards which properly distinguish cases analogous to "Alice died when the car battery caught fire because of poor quality controls" from cases analogous to "Bob died when he got drunk and slammed into a tree at 70mph".

It's fun to come through and look for interesting threads to pull on. I skim past most stuff, but there's plenty of good and relevant writing to keep me coming back. Yeah, sure, it doesn't do a super great job of living up to the grandiose ideals expressed in the Sequences, but I don't really mind; I don't feel invested in ~the community~ that way, so I'll gladly take this site for what it is. This is a good discussion forum and I'm glad it's here.

Toner's employer, the Center for Security and Emerging Technology (CSET), was founded by Jason Matheny. Matheny was previously the Director of the Intelligence Advanced Research Projects Activity (IARPA) and is currently CEO of the RAND Corporation. CSET is currently led by Dewey Murdick, who previously worked at the Department of Homeland Security and at IARPA. Much of CSET's initial staff consisted of former (or "former") U.S. intelligence analysts, although IIRC they were from military intelligence rather than the CIA specifically. Today many of CSET's researchers list prior experience with U.S. civilian intelligence, military intelligence, or defense intelligence contractors. Given the overlap in staff and mission, U.S. intelligence clearly and explicitly has a lot of influence at CSET, and it's reasonable to suspect a stronger connection than that.

I don't see it for McCauley though.

Suppose you're an engineer at SpaceX. You've always loved rockets, and Elon Musk seems like the guy who's getting them built. You go to work on Saturdays, you sometimes spend ten hours at the office, you watch the rockets take off and you watch the rockets land intact and that makes everything worth it.

Now imagine that Musk gets in trouble with the government. Let's say the Securities and Exchange Commission charges him with fraud again, and this time they're *really* going after him, not just letting him go with a slap on the wrist like the first time. SpaceX's board of directors negotiates with SEC prosecutors. When they emerge, they fire Musk from SpaceX and remove Elon and Kimbal Musk from the board. They appoint Gwynne Shotwell as the new CEO.

You're pretty worried! You like Shotwell, sure, but Musk's charisma and his intangible magic have been very important to the company's success so far. You're not sure what will happen to the company without him. Will you still be making revolutionary new rockets in five years, or will the company regress to the mean like Boeing? You talk to some colleagues, and they're afraid and angry. No one knows what's happening. Alice says that the company would be nothing without Musk and rails at the board for betraying him. Bob says the government has been going after Musk on trumped-up charges for a while, and now they finally got him. Rumor has it that Musk is planning to start a new rocket company.

Then Shotwell resigns in protest. She signs an open letter calling for Musk's reinstatement and the resignation of the board. Board member Luke Nosek signs it too, and says his earlier vote to fire Musk was a huge mistake. 

You get a Slack message from Alice saying that she's signed the letter because she has faith in Musk and wants to work at his company, whichever company that is, in order to make humanity a multiplanetary species. She asks if you want to sign.

How do you feel?

I really don't think you can justify putting this much trust in the NYT's narrative of events and motivations here. Like, yes, Toner did publish the paper, and probably Altman did send her an email about it. Then the NYT article tacitly implies, but *doesn't explicitly say*, that this was the spark that set everything off, which is the sort of haha-it's-not-technically-lying that I expect from the NYT. This post depends on that implication being true.
