Google immediately jumps to mind. The quality of its search results, combined with the infrastructural investment required to copy Google, seems like it would take even an entity with no budget constraints more than 3 years, and that's just search; Google also has maps, email, etc. Does your question assume any budget constraints? (I've been using DuckDuckGo as my default search engine for a few weeks, and the results are noticeably worse than Google's. And DDG has been trying pretty hard for over a decade, with substantial, though less than unlimited, resources.)
I think the programming language could be key both to a self-improving AI being able to prove that the new implementation achieves the same goals as the old one, and to us humans being able to prove that the AI is going to do what we expect.
To me it seems like memory safety is the price of entry, but I expect the eventual language will need to be quite friendly to static analysis and theorem proving. That probably means heavily restricted side effects and mutation, as well as statically checkable memory and compute limits. It should possibly also take hardware unreliability into account, although I have no idea how to do that.
The language should be easy to write code in — if it's too hard to write the code, you're going to be out-competed by unfriendly AI projects — but also easy to write and prove fancy types/static assertions/contracts in, because humans are going to need to prove a lot of stuff about code in this language, and it seems like the proofs should live in the code itself. My current vision would be some combination of Coq, Rust, and Liquid Haskell.
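To make the "statically checkable compute limits" idea concrete, here is a toy sketch in Rust (not any existing system's design — the `Fuel` type and everything around it is invented for illustration). It approximates a compute budget with a fuel value that strictly decreases, so the loop is total by construction; a Liquid-Haskell-style refinement type could in principle verify this statically rather than at runtime.

```rust
// Sketch: a runtime-checked stand-in for statically provable compute limits.
// `Fuel` models a compute budget; each step consumes one unit, and the type
// makes "ran out of compute" an explicit case rather than a crash or a hang.

#[derive(Debug, Clone, Copy)]
struct Fuel(u32);

impl Fuel {
    fn new(n: u32) -> Fuel {
        Fuel(n)
    }
    // Consuming a step returns the remaining fuel, or None when exhausted.
    fn step(self) -> Option<Fuel> {
        self.0.checked_sub(1).map(Fuel)
    }
}

// A bounded loop: it provably terminates, because fuel strictly decreases.
// The "work" here is just a toy pseudo-random state update.
fn run_bounded(mut fuel: Fuel, mut state: u64) -> (u64, bool) {
    loop {
        match fuel.step() {
            Some(rest) => {
                fuel = rest;
                state = state
                    .wrapping_mul(6364136223846793005)
                    .wrapping_add(1442695040888963407);
                if state % 97 == 0 {
                    return (state, true); // "goal reached" within budget
                }
            }
            None => return (state, false), // budget exhausted
        }
    }
}

fn main() {
    let (final_state, reached) = run_bounded(Fuel::new(1000), 42);
    println!("state={final_state} reached_goal={reached}");
}
```

Rust's ownership rules already give the memory safety and controlled mutation mentioned above; the missing piece this sketch gestures at is proving resource bounds in the types themselves rather than checking them at runtime.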
This is great! I've bookmarked it. I really appreciate that you listed brands -- that will be a generator of lots of useful fashion ideas.
Uniqlo has been my clothing go-to for years now (I probably have bought 20+ t-shirts from them, all my jeans for the last few years, all my underwear, and even a few jackets and such), so I second that recommendation, especially for skinnier men.
I would additionally recommend going shopping in person at thrift stores. Thrift stores are a good way to get a taste of styles or brands that you're not sure will fit into your wardrobe -- if you take a risk on a piece of clothing and it ends up not working out, at a thrift store you're usually only out $15 or so. (Though it's worth noting that most expensive clothing stores have at least a 30-day return policy, usually quite a bit more than that.)
I would also add shoes -- in the US, I see men wearing a variety of shoe types:
Ezra seemed to be arguing both at the social-shaming level (implying things like "you are doing something normatively wrong by giving Murray airtime") and at the epistemic level (saying "your science is probably factually wrong because of these biases"). The mixture of those levels muddles the argument.
In particular, it signaled to me that the epistemic-level argument was weak -- if Ezra had been able to get away with arguing exclusively at the epistemic level, he would have done so (because, in my view, such arguments are more convincing), so choosing not to suggests weakness on that front.
(Why do I think this? I came away from the debate podcast frustrated with Ezra. Sam was being insistent about arguing exclusively on the epistemic level. Ezra was having none of it. After thinking about it for a long time, I came to the summary I wrote above, which I felt was more favorable / more of a steelman to Ezra than my initial impression from the debate.)
So, at least to convince me, if Ezra wanted to make the points you are suggesting he make, he should have stuck to debating Sam on epistemic grounds and avoided all normative implications.
Thanks. This is a useful distinction, and I'm not sure yet what it means for my understanding of the arguments, but I'll have to process it and hopefully update my thinking on this matter.
For the same reason the site exists, which is to spread rationality. This seems like the default position.
If you disagree, I think it should be because you think "spreading rationality" is not the goal (perhaps LW exists as a place for a certain group of people to hang out?) or that the current size is optimal or too large for its purpose (which seems quite unlikely).
Not quite ready to announce publicly. Will PM you, though.
My startup recently closed an incredibly important deal, and had amazing growth in July, and important people are interested in investing. Proud to have gotten this far. Excited to see what's next. A bit overworked but it feels kinda good.
It's not obvious that you've gained anything here. We can reduce to total utilitarianism -- just assume that everyone's utility is zero at the decision point. You still have the repugnant conclusion issue where you're trying to decide whether to create more people or not based on summing utilities across populations.
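A toy numeric illustration of why summing utilities across populations leads to the repugnant conclusion (all numbers invented for this example): a huge population of barely-positive lives can outscore a smaller population of very good lives.

```rust
// Total utilitarianism ranks worlds by summed utility.
// Each entry is (number of people, utility per person).
fn total_utility(population: &[(u32, f64)]) -> f64 {
    population.iter().map(|&(n, u)| n as f64 * u).sum()
}

fn main() {
    let small_happy = [(1_000u32, 10.0)]; // 1,000 people at utility 10 -> 10,000
    let huge_barely = [(1_000_000u32, 0.2)]; // 1,000,000 people barely above zero -> 200,000

    let a = total_utility(&small_happy);
    let b = total_utility(&huge_barely);
    // The sum prefers the huge, barely-worth-living world: the repugnant conclusion.
    assert!(b > a);
    println!("small_happy={a}, huge_barely={b}");
}
```

Re-zeroing everyone's utility at the decision point doesn't change this structure, which is the point being made above: the comparison between creating more people or not still comes down to these sums.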