cata

Programmer, rationalist, chess player, father, altruist.

Comments

I like classic style. I think the thing classic style reflects is that most people are capable of looking at object-level reality and saying what they see. If I read an essay describing things that happened, when they happened, what people said and did, and how they said and did it, then I am often comfortable more or less taking the author at their word about those things. (It's unusual for people to flatly lie about them.)

However, most people don't seem very good at figuring out how likely their syntheses of things are to be right, or which of their beliefs they might be wrong about, or how many important things they don't know, and so on. So when people write all that stuff in an essay, unless I personally trust their judgment enough that I want to just import their beliefs, I don't really do much with it. I end up shrugging, reading the object-level stuff they wrote, and then doing my own synthesis and judgment. So the self-aware style really did end up being a lot of filler for me, and it crowds out the more valuable information.

(If I do personally trust their judgment enough that I want to just import their beliefs, then I like the self-aware style. And I am not claiming that literally all self-aware content is totally useless. But I think the heuristic is good.)

If that resembles you, I don't know whether it's a problem for you. Maybe not, if you like it. I was just saying that when I see someone who appears to be doing that, like the FTX people, I don't take very seriously their suggestion that the way they are going about it is really good and important.

My priors about this stuff seem really different from those of a lot of EAs and rationalists, so it's hard to have useful arguments. But here are some related things I believe, based mostly on my experience and common sense rather than actual evidence. ("You" here is referring to the average LW reader, not you specifically.)

  • Most of the important abilities for doing most useful work (like running a hedge fund) are not fixed at, e.g., age 25, and can be greatly improved upon. FTX didn't fail because SBF lacked "working memory." It seems to have failed because he sucked at a bunch of stuff that you could easily get better at over time. (Reportedly he was a bad manager and didn't communicate well; he was clearly bad at making decisions under pressure; he clearly behaved overly impulsively; etc.)
  • Trying to operate on 5 hours of sleep with constant stimulants is idiotic. You should have an incredibly high prior that this doesn't work well, and trying it out and it feeling OK for a little while shouldn't convince you otherwise. It blows my mind that any smart person would do this. The potential downside is so much worse than "an extra 3 hours per day" is good.
  • Common problems with how your mind works, like "can't pay attention, can't motivate myself, irrationally anxious," aren't always things where you either need to find a silver-bullet quick fix or else live with them forever. They are typically amenable to gradual, directional improvement.
  • If you are e.g. 25 years old and you have serious problems like that, now is a dumb time to try to launch yourself as hard as possible into an ambitious, self-sacrificing career where you take a lot of personal responsibility. Get your own house in order.
  • If you want to do a bunch of self-sacrificing, speculative burnout stuff anyway, I don't believe for a minute that it's because you are making a principled, altruistic, +EV decision due to short AI timelines, or something. That's totally inhuman. I think it's probably basically because you have a kind of outsized ego and you can't emotionally handle the idea that you might not be the center of the world.

P.S. I realize you were trying to make a more general point, but I have to point out that all this SBF psychoanalysis is based on extremely scanty evidence, and having a conversation framed as if it is likely basically true seems kind of foolish.

From the HN comments:

If my test suite never ever goes red, then I don't feel as confident in my code as when I have a small number of red tests.

That seems like an example of this that I have definitely experienced, where A is "my code is correct", B is "my code is not correct", and the failure case is "my tests appear to be exercising the code but actually aren't."
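To make that failure case concrete, here is a minimal Python sketch (hypothetical names, my own illustration rather than anything from the original comment): the test looks like it exercises the code, but a bug in the test means the assertion never actually runs, so the suite can never go red no matter how broken the implementation is.

```python
# Hypothetical example: a test that appears to exercise the code but doesn't.

def is_valid_email(address: str) -> bool:
    # Deliberately broken implementation: it rejects every address.
    return False

def test_is_valid_email() -> None:
    # Bug in the test: the fixture list was accidentally left empty, so the
    # loop body (and its assertion) never executes. The test passes
    # unconditionally and the suite stays green.
    valid_addresses: list[str] = []
    for address in valid_addresses:
        assert is_valid_email(address)

if __name__ == "__main__":
    test_is_valid_email()
    print("all tests passed")  # prints even though the code is broken
```

A suite like this is always green, which is exactly why an occasional red test is weak evidence that the tests are really hooked up to the code.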

I don't really think that cost is an important bottleneck anymore. I, and many others, have a Rift collecting dust because we don't really care to use it regularly. Many people have spent more money on cameras, lighting, microphones, and other tinkering for Zoom than it would cost them to buy a Quest.

Any technology is more useful if everyone owns it, but to get there, it has to be useful at reasonable levels of adoption (e.g. a quarter of your friends own it), or it's not going to happen.

To me, the plausible route towards getting lots of people into VR for meetings is to have those people incidentally using a headset for all kinds of everyday computing stuff -- watching movies, playing games, doing office work -- so that they are already wearing and using it, and then it's easy to have meetings with everyone else who is also already wearing and using it. That's clearly achievable, but also clearly not ready yet.

I don't think it's going to be transformative until you are happy to wear a headset for hours on end. In and of themselves, VR meetings are better than Zoom meetings, but having a headset on sucks compared to sitting at your computer with nothing on your face.

I used to think it was a good idea to experiment with basically every psychoactive drug, but nowadays I am more skeptical of anyone's understanding of the effects of basically any chemical intervention on the human body, and I adopt more of a "if it's not broken, don't fix it" principle towards all of it. It's a lot easier to make my body or mind work worse than to make it work better.

(Of course, if you were already "pretty sure" you were trans, then that's a different story.)

It may be entirely a myth, or may have been true only long ago, or may be applicable to specific sub-industries. It doesn't have anything to do with my experience of interviewing applicants for random Silicon Valley startups over the last decade.

There is a grain of truth to it, which is that some people who can muddle through accomplishing things given unlimited tries, unlimited Googling, unlimited help, unlimited time, and no particular quality bar, do not have a clear enough understanding of programming or computing to accomplish much of anything, even a simple thing, by themselves, on the first try, in an interview, quickly.

If alignment is about getting models to do what you want and not engage in certain negative behaviors, then researching how to get models to censor certain outputs could theoretically produce insights for alignment.

I was referred by 80k Hours to talk to a manager on the OpenAI safety team who argued exactly this to me. I didn't join, so I have no idea to what extent it actually makes sense vs. just being a nice-sounding idea.
