David Hornbein

> Are the near-term prospects of AGI making long-term prospects like suspension less attractive?

No. Everyone I know who was signed up for cryonics in 2014 is still signed up now. You're hearing about it less because Yudkowsky is now doing other things with his time instead of promoting cryonics, and those discussions around here were a direct result of his efforts to constantly explain and remind people.

I agree with your argument here, especially your penultimate paragraph, but I'll nitpick that framing your disagreements with Groves as him being "less of a value add" seems wrong. The value that Groves added was building the bomb, not setting diplomatic policy.

What is the mechanism, specifically, by which going slower will yield more "care"? What is the mechanism by which "care" will yield a better outcome? I see this model asserted pretty often, but no one ever spells out the details.

I've studied the history of technological development in some depth, and I haven't seen anything to convince me that there's a tradeoff between development speed on the one hand, and good outcomes on the other.

I'm coming to this late, but this seems weird. Do I understand correctly that many people were saying that Anthropic, the AI research company, had committed never to advance the state of the art of AI research, and that they believed Anthropic would keep this commitment? That is just... really implausible.

This is the sort of commitment which very few individuals are psychologically capable of keeping, and which ~zero commercial organizations of more than three or four people are institutionally capable of keeping, assuming they actually do have the ability to advance the state of the art. I don't know whether Anthropic leadership ever said they would do this, and if they said it then I don't know whether they meant it earnestly. But even imagining they said it and meant it earnestly there is just no plausible world in which a company with hundreds of staff and billions of dollars of commercial investment would keep this commitment for very long. That is not the sort of thing you see from commercial research companies in hot fields.

If anyone here did believe that Anthropic would voluntarily refrain from advancing the state of the art in all cases, you might want to check whether there are other things people have told you about themselves which you would really like to be true, which you have no evidence for beyond their assertions, and which would be very unusual if they were true.

> Ben is working on a response, and given that I think it's clearly the right call to wait a week or two until we have another round of counter-evidence before jumping to conclusions. If in a week or two people still think the section of "Avoidable, Unambiguous falsehoods" does indeed contain such things, then I think an analysis like this makes sense.

This was three months ago. I have not seen the anticipated response. Setting aside the internal validity of your argument above, the promised counterevidence did not arrive in anything like a reasonable time.

TracingWoodgrains clearly made the right call in publishing, rather than waiting for you.

Yes, obviously, but they use different strategies. Male sociopaths rarely paint themselves as helpless victims because it is not an effective tactic for men. One does notice that, while the LW community is mostly male, ~every successful callout post against a LW community organization has been built on claims of harm to vulnerable female victims.

When you say "it's clearly the right call to wait a week or two until we have another round of counter-evidence before jumping to conclusions", is this a deliberate or accidental echo of the similar request from Nonlinear which you denied?

Like, on the deliberate way of reading this, the subtext is "While Lightcone did not wait a week or two for counter-evidence and still defends this decision, you should have waited in your case because that's the standard you describe in your article." Which would be a hell of a thing to say without explicitly acknowledging that you're asking for different standards. (And would also misunderstand TracingWoodgrains's actual standard, which is about the algorithm used and not how much clock time has elapsed, as described in their reply to your parent comment.) Or on the accidental way of reading this, the subtext is "I was oblivious to how being publicly accused of wrongdoing feels from the inside, and I request grace now that the shoe is on the other foot." Either of these seems kind of incredible, but I can't easily think of another plausible way of reading this. I suppose your paragraph on wanting to take the time to make a comprehensive response (which I agree with) updates my guess towards "oblivious".

On Pace's original post I wrote:

"think about how bad you expect the information would be if I selected for the worst, credible info I could share"

Alright. Knowing nothing about Nonlinear or about Ben, but based on the rationalist milieu, for an org that’s weird but basically fine I’d expect to see stuff like: ex-employees alleging a nebulously “abusive” environment based on their own legitimately bad experiences and painting a gestalt picture that suggests unpleasant practices, but without any smoking-gun allegations of really egregious concrete behavior (as distinct from very bad effects on the accusers); allegations of nepotism based on social connections between the org’s leadership and their funders or staff; accusations of shoddy or motivated research which require hours to evaluate; sources staying anonymous for fear of “retaliation” but without being able to point to any legible instances of retaliation or concrete threats to justify this; and/or thirdhand reports of lying or misdirection around complicated social situations.

[reads post]

This sure has a lot more allegations of very specific and egregious behavior than that, yeah.

Having looked at the evidence and documentation which Nonlinear provides, it seems like the smoking-gun allegations of really egregious concrete behavior are probably just false. I have edited my earlier comment accordingly.

This is a bit of a tangent, but is there a biological meaning to the term "longevity drug"? For a layman like me, my first guess is that it'd mean something like "A drug that mitigates the effects of aging and makes you live longer even if you don't actively have a disease to treat." But then I'd imagine that e.g. statins would be a "longevity drug" for middle-aged men with a strong family history of heart disease, in that they make the relevant population less susceptible to an aging-related disease and thereby increase longevity. Yet the posts talk about the prospect of creating the "first longevity drug," so clearly the term is being used in a way that doesn't include statins. Is there a specific definition I'm ignorant of, or is it more of a loose marketing term for a particular subculture of researchers and funders, or what?
