
Would you leak that statement to the press if the board definitely wasn't planning these things, and you knew they weren't? I don't see how it helps you. Can you explain?

I don't have a strong opinion about Altman's trustworthiness, but even granting that he just isn't trustworthy, I still don't get doing this.

What, you don't think Plasmodium falciparum is a living being with a right to exist? Don't be such a humanity chauvinist.

I think we've gone well past the point of productivity here. I've asked some lawyers for their opinions on it. I'll just address a few things briefly.

If the whistleblower doesn't have evidence of the meeting taking place, and no memos, reports or e-mails documenting that they passed their concerns up the chain, it's perfectly reasonable for a representative of the corporation to reply, "I don't recall hearing about this concern."

I agree this is true in general, but my point is limited to cases where documentation would in fact exist were it not for the company's communication policy or data retention policy. If there had been a point in the Google case you brought up earlier where Google attempted to cast doubt on a DOJ witness by pointing out the lack of corroborating evidence (evidence that would have been deleted per Google's policy), I'd strongly reconsider my opinion.

The article about the case said only that the DOJ complained it would have liked to have all the documentation Google destroyed, and that this material probably contained evidence that would have proven its case. It did not say that Google challenged DOJ witnesses on a lack of corroboration between their testimony and the discoverable record.

It doesn't have to be the Google case. Any case would do in which the defense tried to impeach a witness for lack of corroborating evidence when that evidence had been intentionally destroyed under a data retention policy.

There are other things I disagree with, but as I said, we're being unproductive.

I've been trying to figure out how someone who appears to believe deeply in the principles of effective altruism could do what SBF did. ... It seems important to me to seek an understanding of the deeper causes of this disaster to help prevent future such disasters.

There's a part of my brain screaming "Why are you leaving yourself wide open to affinity fraud? Are you trying to ensure 'SBF 2: This Time It's Personal' happens, or what?" However, I'll ask it to be quiet and explain.

The problem is that you should never go around thinking "Somebody who believes in EA wouldn't screw me, therefore this investment must be safe." Instead you should think "The rate of return on this investment is not possible without crime, so I don't know why somebody who claims to be an EA would offer it, but I don't have to know; I just have to stay away." Or, as I said in response to Zvi's book review:

You have to think: this man wouldn't offer me free candy just to get me into his unmarked van; that doesn't make sense. I wouldn't give anyone candy for that. What's going on here?

It doesn't matter why something is too good to be true. If it is, it must be a lie, and thus bad. Don't take the deal. In case it's not clear, "taking the deal" can mean more than just investing with FTX; it also encompasses other sorts of relationships one might enter into with SBF or FTX, like taking their money or allowing them to be a public symbol of you.

The point here is that understanding human psychology and motivations, especially when the human you're trying to understand might be trying to trick you, is far harder than just knowing what sorts of returns are possible on capital investments at given levels of risk. You can try to understand the SBFs of the world in the hope of being able to identify them, but why do all that extra work? Just don't trust anyone who says they can make you a 50% return on your investment in a year with zero risk (or risk comparable to T-bills), because every single one of them is lying and committing crimes.

But not much worse. Against counterexamplebot, ringer tit-for-tat will defect (so the match is almost entirely mutual defection) while plain tit-for-tat will always cooperate, so for that match ringer tit-for-tat is down about 50 points (assuming 50 rounds, the score is 100 to 49). Ringer tit-for-tat then picks up 150 points in each match against its 2 shills, so the running total is 300 to 349. And it's only this close because the modified strategy is tit-for-tat rather than something more clever.
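
To spell out the arithmetic: a quick sketch, assuming the standard payoff matrix (mutual cooperation 2 each, mutual defection 1 each, temptation 3, sucker 0), which is what reproduces the numbers above.

```python
# Assumed payoffs: mutual cooperation = 2 each, mutual defection = 1 each,
# temptation (defect vs. cooperate) = 3, sucker (cooperate vs. defect) = 0.
ROUNDS = 50

# vs. counterexamplebot: plain tit-for-tat cooperates all 50 rounds,
# while ringer tit-for-tat is suckered once and then mutually defects.
tft_vs_counter = 2 * ROUNDS                 # 100
ringer_vs_counter = 0 + 1 * (ROUNDS - 1)    # 49

# vs. each of the 2 shills: plain tit-for-tat mutually cooperates (100 each),
# while the ringer defects against a bot that always cooperates (150 each).
tft_total = tft_vs_counter + 2 * (2 * ROUNDS)        # 300
ringer_total = ringer_vs_counter + 2 * (3 * ROUNDS)  # 349

print(tft_total, ringer_total)  # 300 349
```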

Also, this assumes that a bot even can defect specifically against ringer tit-for-tat. Insofar as the ringer's source is secret and the identification is a function of the source of both ringer and shill, this may not be possible. If I understood correctly, we only have access to ourselves and our opponent during the match, so we can't ask whether some third bot would always cooperate against our opponent while our opponent always defects against that third bot.

It seems unlikely to me that this ringer/shill strategy will be particularly good compared to the other options

It is guaranteed to be better than the equivalent strategy without ringer/shill. Remember that ringer/shill modifies an existing strategy like tit-for-tat: ringer tit-for-tat will always beat plain tit-for-tat, since it scores the same as tit-for-tat everywhere except against shill tit-for-tat, where it always gets the best possible score.
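
For concreteness, here's roughly what I have in mind. This is a sketch under assumed interfaces: SHILL_TAG and the (opponent_source, opp_moves) signature are my inventions for illustration, not the tournament's actual API.

```python
SHILL_TAG = "shill-handshake-7f3a"  # secret marker carried in all 3 of my bots

def ringer_tft(opponent_source, opp_moves):
    if SHILL_TAG in opponent_source:
        return "D"  # my shill rolls over, so take the maximum payoff every round
    if not opp_moves:
        return "C"  # plain tit-for-tat: open with cooperation...
    return opp_moves[-1]  # ...then mirror the opponent's previous move

def shill_tft(opponent_source, opp_moves):
    if SHILL_TAG in opponent_source:
        return "C"  # feed the ringer points unconditionally
    if not opp_moves:
        return "C"  # against everyone else, just play tit-for-tat
    return opp_moves[-1]
```

One thing this makes obvious: anyone who learns the tag can embed it in their own bot and farm my shills the same way, which is why it matters that the ringer's source stays secret.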

This means that whatever the strongest ~160-character strategy is, the ringer/shill version of it will win 2 more matches than it does. Intuitively, it seems unlikely that anyone will come up with a 240-character strategy that is that much stronger than the best ~160-character strategy. Partly this is because I suspect the more sophisticated strategies people actually come up with will start running up against the execution-time barrier and won't have time to make positive use of all that complexity.

you haven't provided a compelling reason why I need to disallow it.

You don't need to disallow it; I'm just saying that it would be ideal if it could be disallowed. It could easily not be worth the trouble.

My default is that people shouldn't be judged by random strangers on the internet over the claims of other random strangers on the internet. As random strangers to Sam, we shouldn't want to sit in judgment of him over the claims of some other random stranger. This isn't good or normal or healthy.

Moreover, it is unlikely that we will devote the time and effort required to really know what we're talking about, which we should do if we're going to attack him or signal-boost attacks on him. And if we are going to devote the great amount of time necessary, couldn't we be doing something more useful or fun with it? A lot of good video games have come out recently, for example.

It would be different if I knew Sam personally. I would ask him about it, see what he had to say, and draw a conclusion. It might be worth it to me to know the truth. But I don't. This has the same flavor to me as being really invested in any more conventional celebrity. Like apparently there was some kerfuffle with Johnny Depp and Amber Heard a while ago. My response to that was that I genuinely could not care less. In fact, I actively did not want this bullshit taking up space in my brain. I intentionally avoided learning anything about it. And I'm glad I did. Please don't tell me what it was about.

I mean, yes, it's not currently against the rules, but it obviously should be (or technical measures should be added to render it impossible, like adding random multiline comment blocks to programs so that bots can't recognize their accomplices in an opponent's source).
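
A sketch of the sort of measure I mean, assuming submissions arrive as Python source strings; randomize_source is my name for it, nothing official.

```python
import random
import string

def randomize_source(source: str) -> str:
    """Prepend a random multiline comment block to a submitted program.

    This is enough to defeat bots that recognize an accomplice by hashing
    or exact-matching the opponent's source. A fixed substring tag would
    survive a prepended block, so a serious version would also need to
    mangle or strip string literals and comments throughout the program.
    """
    junk = "".join(random.choices(string.ascii_letters + string.digits, k=40))
    return f'"""tournament noise: {junk}"""\n' + source
```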

Presumably the purpose of having 3 bots is to let me try 3 different strategies. But if my strategy takes less than about 160 characters to express, I have to use the ringer/shill version of that same strategy, since otherwise I will always lose to a ringer/shill player running the same strategy who also shills for his own ringer. And the benefit of the ringer/shill trick is constant, since it should only be triggered by my own bots, never by other players'. So all it does is inflate every strategy's score by a constant amount, while requiring everyone to adopt it, which is a waste of effort.
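
Using the same assumed payoffs as before, the constant inflation looks like this:

```python
# Assumed payoffs as above: mutual cooperation = 2, temptation = 3, 50 rounds.
# Relative to three sibling bots that would simply cooperate with each other,
# a ringer defecting against its 2 always-cooperating shills gains a fixed
# bonus, and that bonus is identical for every entrant who adopts the trick.
ROUNDS = 50
per_shill_bonus = (3 - 2) * ROUNDS  # 50 points over mutual cooperation
total_bonus = 2 * per_shill_bonus   # 100 points added to every ringer's score
print(total_bonus)  # 100
```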

Academia is sufficiently dysfunctional that if you want to make a great scientific discover[y] you should basically do it outside of academia.

I feel like this point is a bit confused.

A person believing this essentially has to have a kind of "Wherever I am is where the party's at" mindset, in which case he ought to take an instrumental view of academia. Obviously, if I want to maximize the time I spend reading math books and solving math problems, doing it inside academia would involve wasting time and is suboptimal. However, if my goal is to do particle accelerator experiments, the easiest way to do this may be to convince the people who have one to let me use it, which may mean getting a PhD. Since getting a PhD will still involve spending a lot of time studying (if slightly suboptimally compared to doing it outside of academia), this might be the way to go.

See, we still think that academia is fucked, we just think they have all the particle accelerators. We only have to think academia is not so fucked that it's a thoroughly unreasonable source of particle accelerator access. We can still publish all our research open access, or even just on our blog (or also on our blog).
