I have tried meditation a little bit although not very seriously. Everything I've heard about it makes me think it would be a good idea to do it more seriously.
Not sure how to be weird without being useless, though. What does a weird but useful background look like?
Also, I've already been trying to read a lot, but I'm still somewhat dissatisfied with my pace. You mentioned you could read at 3x your previous speed. How did you do that?
I'm pretty anxious about posting this, since it's my first post on LessWrong and it's on a fairly confusing topic, but I'm probably not well calibrated on this front, so oh well. Also, thanks to NicholasKross for taking a look at my drafts.
I have a strong inside view of the alignment problem and what a solution would look like. The main reason I don't have an equally concrete inside-view AI timeline is that I don't know enough about ML, so I have to defer to others to get a specific decade. The biggest gap in my model of the alignment problem is what a solution to inner misalignment would look like, though I think it would be something like finding a way to avoid wireheading.
I've checked out John Wentworth's study guide before, mostly doing CS50.
Part of the reason I'm considering getting a degree is so I can get a job if I want and not have to bet on living rent-free with other rationalists or something.
The people I've talked to the most have timelines centered around 2030. However, I don't have a detailed picture of why, because their reasons are capabilities exfohazards. From what I can tell, their reasons are tricks you could implement to get recursive self-improvement even on hardware that exists right now, but I think most good-sounding tricks don't actually work (no one expected transformers, of all architectures, to be the closest to AGI), and I think superintelligence depends more on compute and training data than they think. It also seems like other people in AI alignment disagree in a more optimistic direction. Now that I think about it, though, I probably overestimated how long the timelines of optimistic alignment researchers were, so it's probably more like 2040.
The difference between an expected utility maximizer using updateless decision theory and an entity who likes the number 1 more than the number 2, or who cannot count past 1, or who has a completely wrong model of the world that nonetheless makes it one-box, is that the expected utility maximizer wins in scenarios outside Newcomb's problem, where you may have to choose $2 over $1, count more than one object, or believe true things. Similarly, an entity that "acts like it has a choice" generalizes well to other scenarios, whereas these other possible entities don't.
One-boxers win because they reasoned in their heads that one-boxers win (because of updateless decision theory or something), so they "should" be one-boxers. The decision is predetermined, but the reasoning acts like it has a choice in the matter (and people who act like they have a choice in the matter win). What carado is saying is that people who act like they can move the realityfluid around tend to win more, just like how people who act like they have a choice in Newcomb's problem, and therefore one-box, win even though they don't have a choice in the matter.
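The payoff logic behind "one-boxers win" can be made concrete with a toy model (my own illustration, not carado's or any formal decision theory): if the predictor conditions on the agent's disposition rather than on some free choice at decision time, then the disposition itself fixes the payoff, and the one-boxing disposition comes out ahead.

```python
# Toy Newcomb's problem: a perfectly accurate predictor reads the
# agent's disposition (its policy), not a "free choice" made later.

def payoff(one_boxer: bool) -> int:
    # The opaque box contains $1,000,000 iff the predictor
    # predicts the agent will one-box.
    opaque = 1_000_000 if one_boxer else 0
    transparent = 1_000
    # One-boxers take only the opaque box; two-boxers take both.
    return opaque if one_boxer else opaque + transparent

print(payoff(one_boxer=True))   # 1000000
print(payoff(one_boxer=False))  # 1000
```

The point the sketch makes: even though each agent's action is fully determined by its disposition, the disposition that "acts like it has a choice" and one-boxes walks away with more money.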
I think the prior for aliens having visited Earth should be lower, since a priori it seems unlikely to me that aliens would interact with Earth but not to an extent that makes it clear to us that they have. My intuition is that it's probably rare to reach other planets with sapient life before building a superintelligence (which would almost certainly be obvious to us if it did arrive), and even if you do manage to reach other planets with sapient life, I don't think aliens would refrain from contacting us if they're anything like humans.