Rank: #10 out of 4859 in peer accuracy at Metaculus for the time period of 2016-2020.
It may actually be more affordable to build some kinds of high cost-per-kg structures (e.g. datacenters, high-tech factories) in space than on land.
This seems to be part of the reason for Google's Suncatcher project. However, it's not clear that they have a solution to the cooling problem that would allow anything on the scale of a datacenter to exist in space.
Nobody builds datacenters on Earth that run without water, because cooling without water is too expensive. It will only get more expensive in space.
The application you didn't list is surveillance satellites. The cost of providing 24/7 video surveillance of the whole world is dropping.
Ilya has 5-20 year timelines to a potentially superintelligent learning model.
Those seem to be the timelines necessary for investors to invest in him. If the timelines are shorter, then his startup doesn't have time to compete with the incumbents. If they are longer, then the company has a problem as well.
Your calculation needs to take into account all the physiological and psychological short- and long-term consequences of taking this compound, and how these consequences change based on what dose you take, and how often you take it. But if that all checks out, if that drug makes you more of who you want to be, then take it.
This phrasing suggests that you actually know all the physiological and psychological short- and long-term consequences of taking this compound. In reality, in most cases you don't know all the consequences.
You generally want the positive consequences to be strong enough to balance the risk you are taking on from the unknown, undesired effects of the drug.
The quote is about Sam Altman, who now leads OpenAI, and there are some moral problems with the way he leads it.
When I meditate I don't seek the relevant states and I also haven't taken psychedelics, so what I'm saying here is mostly from a third-party perspective.
"I do it" suggest agency which is something different than "being". Clustering both together clusters a lot together.
In cases of self-dissolution, people often talk about things like universal consciousness. It's not "no self" -> "no consciousness".
Transit systems should ban non-payers, not to punish them, but to save the expense and hassle of trying to monitor them
Don't you need to monitor people to enforce the ban if you ban them? That sounds to me like a hassle as well.
Public transport systems should check if you're not paying for your tickets, and ban you.
That sounds like punishing people who fail to buy a ticket with an inability to buy tickets? That seems like a strange choice to me.
Amazon should check if you're producing fraudulent products and ban you. This is because they're unusually skilled and experienced with this kind of thing, and have good info about it.
Why do you believe that Amazon is unusually skilled or experienced with it? Louis Rossmann's investigation of fuses that Amazon sells suggests that Amazon is quite willing to sell fraudulent fuses and doesn't really do anything about it. Fraudulent fuses are especially bad for Amazon to sell because they are safety-critical products. Houses might burn down because Amazon sells fraudulent products.
Recent whistleblowing from Meta suggests that 10% of their ad revenue (or 25% of their profits) used to come from fraud, according to their own estimates. That leaves the question of what percentage of Facebook's revenue would need to come from defrauding customers for them to be on the level of SBF.
For Facebook, the situation seems to be: "The fines we have to pay for facilitating our customers' fraud are much lower than the profit we make, so we do it."
For Amazon, it's less clear to me why they aren't doing more about fraud. It seems to be more a matter of just not really caring. It isn't really the job of anyone with power at Amazon to reduce fraud, and many people have KPIs to reach that are about other priorities.
It's unclear to me why you think that SBF meets the threshold of being evil in the sense of "prefers bad outcomes, and is working to make them happen". I think he was certainly wrong in using customer funds, but I don't think he was in any way intending to get into a situation where he couldn't return the funds. To me that doesn't look like sadism but more like narcissism.
I remember one conversation at a LessWrong community weekend where I made a contrarian argument. The other person responded with something like: "I don't know the subject matter well enough to judge your arguments; I'd rather keep believing the status quo. The topic isn't really relevant enough for me to invest time into it."
That's the kind of answer you can get when speaking with rationalists but don't really get when talking to non-rationalists. That person wasn't "glad to learn that they were wrong", but they were far from irrational. They had a model of their own beliefs, and of when it makes sense to change them, that was the result of reasoning in a way that non-rationalists don't tend to do.
Adam sounds naive to me about what goes into actually changing your mind. He seems to take "learn that you were wrong" as a goal in itself. The person in my example above didn't have the goal of having a sophisticated understanding of the domain I was talking about, and that was probably completely in line with their utility function.
When it comes to issues where it's actually important to change your mind, it's complex in another way. Someone might give you a convincing rational argument but in the back of your mind there's a part of you that feels wary. While you could ignore that part at the back of your mind and just update your belief, it's not clear that this is always the best idea.
There are a few people who were faced with pretty convincing arguments about the central importance of AI safety and about how important it is for them to do everything they can to fight for AI safety. Then, a year later, they burned out because they invested all their energy into AI safety. They ignored a part of themselves, and their readiness to change their mind turned out to be to their detriment. A lot of what CFAR did with Focusing and internal double crux is about listening to more internal information instead of suppressing it.
Another problem when it comes to teaching rationality is that even if someone does the right thing 99% of the time but does the wrong thing in the 1% of cases where it actually matters, the result is still a failure. Just because someone can do it in the dojo where they train katas doesn't mean that they can do it when it's actually important.
Julia Galef offered the Scout vs. Soldier mindset as one alternative to the paradigm of teaching individual skills. The idea is that the problem often isn't that people lack the skills but that they are in soldier mindset and thus don't use the skills they have.
So I see much of Vassarism as claiming: These protections against high-energy memes are harmful. We need to break them down so that we can properly hold people in power accountable, and freely discuss important risks.
It's been a while since this was written, but I don't think this summarizes what Vassar says well.
If anyone wants to get a good idea of what kind of arguments Vassar makes, his talk with Spencer Greenberg is a good source.
One of the reasons you might see him as dangerous is that he advocates that people should see a lot more interactions as being about conflict theory. Switching from assuming good intent in other people to using conflict theory can be quite disruptive to many interactions.
Getting someone to stop assuming that the people around them have good intent can be quite bad for their mental health even if it's true.
ChatGPT suggests air freight China → US costs about $6.5 per kg and US → China (backhaul) about $1.2 per kg. Averaged over both directions, that's a bit less than $4 per kg per leg. Of that, around a fourth is fuel, so you have around $1 in fuel cost per kg.
Starship, on the other hand, needs around $1,000k in fuel to transport 150 tons, which means about $6.6 per kg. Even if you double the efficiency, you still won't reach the same numbers.
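For concreteness, here is that back-of-the-envelope arithmetic as a short Python sketch. All inputs are just the rough figures quoted above (ChatGPT's freight prices, an assumed 25% fuel share, a ~$1M fuel bill per Starship launch, a 150 t payload), not measured data.

```python
# Rough per-kg fuel cost comparison: air freight vs. Starship.
# All figures are the back-of-the-envelope numbers from the comment above.

# Air freight (China <-> US), price billed to customers, per kg:
price_china_to_us = 6.5   # $/kg
price_us_to_china = 1.2   # $/kg (backhaul)
avg_price_per_leg = (price_china_to_us + price_us_to_china) / 2  # ~$3.85/kg
fuel_share = 0.25                                                # assumed fuel share of the price
air_fuel_cost_per_kg = avg_price_per_leg * fuel_share            # ~$0.96/kg

# Starship, fuel only:
starship_fuel_cost = 1_000_000   # $ per launch (rough figure)
starship_payload_kg = 150_000    # 150 t payload (rough figure)
starship_fuel_cost_per_kg = starship_fuel_cost / starship_payload_kg  # ~$6.7/kg

print(f"Air freight, fuel portion:   ~${air_fuel_cost_per_kg:.2f}/kg")
print(f"Starship, fuel only:         ~${starship_fuel_cost_per_kg:.2f}/kg")
print(f"Starship at 2x efficiency:   ~${starship_fuel_cost_per_kg / 2:.2f}/kg")
```

Even with doubled fuel efficiency, the Starship fuel-only figure stays above the air-freight fuel cost per kg.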
When it comes to non-fuel costs, I would not expect Starship's to be much cheaper than an airplane's, especially when we talk about prices billed to customers.
The airline market is very competitive, with profit margins so low that airlines regularly go bankrupt and need to be bailed out. SpaceX will likely target higher profit margins, and I don't see a market with airline-like competition in 2035.