
lincolnquirk's Comments

Go F*** Someone

I really enjoyed this post. The analogy of capital vs. labor hit home in particular: I realized that's exactly how I've been implicitly treating dating, so I think this post is likely to change my behavior in the future. Thanks for writing it.

We run the Center for Applied Rationality, AMA

What aspects of CFAR's strategy would you be most embarrassed by if they were generally known? :P

We run the Center for Applied Rationality, AMA

Ok, I'll bite. Why should CFAR exist? Rationality training is not so obviously useful that an entire org needs to exist to support it; especially now that you've iterated so heavily on the curriculum, why not dissolve CFAR and merge back into (e.g.) MIRI and just reuse the work to train new MIRI staff?

This is even more true if CFAR is effective recruitment for MIRI, since merging back in would allow you to separately optimize for that.

One Million Dollars

Congrats Jeff! That's an incredible milestone. I have the comical image of you as Doctor Evil right now.

How common is it for one entity to have a 3+ year technological lead on its nearest competitor?

Google immediately jumps to mind. Matching Google's search result quality, combined with the infrastructural investment required to copy it, seems like it would take even an entity with no budget constraints more than 3 years, and that's just search; Google also has maps, email, etc. Does your question assume any budget constraints? (I've been using DuckDuckGo as my default search engine for a few weeks and the results are obviously substantially worse than Google's. And DDG has been trying pretty hard for over a decade, with less than unlimited resources but still a lot.)

What Programming Language Characteristics Would Allow Provably Safe AI?

I think the programming language could be key both to a self-improving AI being able to prove that a new implementation achieves the same goals as the old one, and to humans being able to prove that the AI is going to do what we expect.

To me it seems like memory safety is the price of entry, but I expect the eventual language will need to be quite friendly to static analysis and theorem proving. That probably means very restricted side effects and mutation, as well as statically checkable memory and compute limits. Possibly also taking hardware unreliability into account, although I have no idea how to do that.

The language should be easy to write code in (if it's too hard to write the code, you're going to be out-competed by unfriendly AI projects), but it should also be easy to write and prove fancy types/static assertions/contracts, because humans are going to need to prove a lot of things about code in this language, and it seems like the proofs should live alongside the code. My current vision is some combination of Coq, Rust, and Liquid Haskell.
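
To make the "proofs live alongside the code" idea concrete, here's a minimal sketch in the Liquid Haskell style (the refinement annotations sit inside ordinary Haskell comments, so the file is still plain Haskell; the names and the budget example are purely illustrative, not from any real project):

```haskell
-- A minimal sketch of statically checked contracts, Liquid Haskell style.
-- The {-@ ... @-} annotations are refinement types that the checker
-- discharges at compile time; everything here is illustrative only.
module Sketch where

-- Contract: the divisor must be provably nonzero at every call site.
{-@ safeDiv :: Int -> {d:Int | d /= 0} -> Int @-}
safeDiv :: Int -> Int -> Int
safeDiv n d = n `div` d

-- Contract: spending can never push a (hypothetical) compute budget
-- negative; callers must prove the cost fits within the budget.
{-@ type Budget = {n:Int | n >= 0} @-}
{-@ spend :: b:Budget -> {c:Int | c >= 0 && c <= b} -> Budget @-}
spend :: Int -> Int -> Int
spend budget cost = budget - cost
```

The particular checker isn't the point; the point is that the assertion and its proof obligation sit next to the code they constrain, which is the property I'd want the eventual language to have.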

Clothing For Men

This is great! I've bookmarked it. I really appreciate that you listed brands -- that will be a generator of lots of useful fashion ideas.

Uniqlo has been my clothing go-to for years now (I probably have bought 20+ t-shirts from them, all my jeans for the last few years, all my underwear, and even a few jackets and such), so I second that recommendation, especially for skinnier men.

I would additionally recommend people go shopping in person at thrift stores. Thrift stores are a good way to get a taste of styles or brands that you're not sure will fit into your wardrobe -- if you take a risk on a piece of clothing and it ends up not working out, at a thrift store you're usually only out $15 or so. (Though it's worth noting that most expensive clothing stores have at least a 30 day return policy, usually quite a bit more than that.)

Clothing For Men

I would also add shoes -- in the US, I see men wearing a variety of shoe types:

  • Fashion sneakers instead of athletic sneakers
  • Boat shoes
  • Brown leather shoes (black is essential for formal occasions, but it comes off as too formal to me most of the time); a brown belt is also important to go with them.

Moral frameworks and the Harris/Klein debate

Ezra seemed to be arguing both at the social-shaming level (implying things like "you are doing something normatively wrong by giving Murray airtime") and at the epistemic level (saying "your science is probably factually wrong because of these biases"). The mixture of those levels muddles the argument.

In particular, it signaled to me that the epistemic-level argument was weak -- if Ezra had been able to get away with arguing exclusively at the epistemic level, he would have (because, in my view, such arguments are more convincing), so choosing not to do so suggests weakness on that front.

(Why do I think this? I came away from the debate podcast frustrated with Ezra. Sam insisted on arguing exclusively at the epistemic level; Ezra was having none of it. After thinking about it for a long time, I came to the summary I wrote above, which I felt was more favorable to Ezra / more of a steelman than my initial impression from the debate.)

So, at least to convince me, if Ezra wanted to make the points you are suggesting he make, he should have stuck to debating Sam on epistemic grounds and avoided all normative implications.
