As you highlight, asking everyone to set up their own cross-posting solution is probably not viable. But if there were a service run by the LW team with a simple setup guide (e.g. go to your LW account, get your Twitter API key, paste it here, grant permission, done) that takes ~5 minutes, that would lower the barrier to entry a lot and would be a huge step forward.
Not even fitting for Quick Takes? We could have a "Quicker Quick Takes" or "LW Shitposts" section for all I care. (More seriously, if you really wanted a separate category for this, you could name it something like "LW Microblog".)
Also, a lot of Eliezer's tweets are quite high-effort, to the point where some of them get cross-posted as top-level posts. (E.g. https://www.lesswrong.com/posts/fPvssZk3AoDzXwfwJ/universal-basic-income-and-poverty )
Why is a significant amount of content by some rationality adjacent people only posted on X/Twitter?
I hope I don't have to explain why some people would rather not go near X/Twitter with a ten-foot pole.
The most obvious example is Eliezer, who is much more active on Twitter than LW. I "follow" some people on Twitter by reading their posts using Nitter (e.g. xcancel.com ).
What triggered me to post this today is that it seems @Aella set her account to followers-only (I assume due to some recent controversy), so now the only way for me to read her tweets would be to create a Twitter account.
Why can't some of this content be mirrored, e.g. as LW Quick Takes, or on basically any other platform outside Twitter's walled garden? A lot of internet personalities cross-post their stuff to multiple platforms these days. One of the best examples I have seen is Molly White, who has her own self-hosted microblog feed that is pushed to Twitter, Bluesky, and Mastodon simultaneously: https://www.mollywhite.net/micro
I think even just having Eliezer's content available somewhere else would be valuable enough to the community for the LW team to possibly assist with some technical solution here.
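For what it's worth, the core of a one-way mirror like that is quite small. Below is a rough sketch, not a working integration: the endpoint and bearer-token auth are the real Twitter API v2 user-timeline interface, but everything on the receiving side (the `mirror_to_quick_takes` function, the credentials, deduplication) is a placeholder I made up, since I haven't checked what the LW side would actually expose.

```python
import requests

TWITTER_BEARER_TOKEN = "..."  # the user's own Twitter API credential
TWITTER_USER_ID = "..."       # numeric id of the account to mirror


def fetch_recent_tweets(user_id: str, bearer_token: str) -> list[dict]:
    """Fetch the user's most recent original tweets (no retweets or replies)."""
    resp = requests.get(
        f"https://api.twitter.com/2/users/{user_id}/tweets",
        headers={"Authorization": f"Bearer {bearer_token}"},
        params={"max_results": 20, "exclude": "retweets,replies"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])


def mirror_to_quick_takes(tweets: list[dict]) -> None:
    """Placeholder for the receiving half: push each tweet somewhere outside
    the walled garden (LW Quick Takes, Bluesky, Mastodon, a self-hosted feed
    a la mollywhite.net/micro) and remember which ids were already mirrored."""
    for tweet in tweets:
        print(f"[would mirror] {tweet['id']}: {tweet['text']}")


if __name__ == "__main__":
    mirror_to_quick_takes(fetch_recent_tweets(TWITTER_USER_ID, TWITTER_BEARER_TOKEN))
```

The hard parts are exactly the ones glossed over here: handling auth for the user (the ~5 minute setup flow from the other comment) and keeping track of what was already mirrored, which is why this probably only happens if someone like the LW team runs it as a service.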
A guide to taking the perfect dating app photo. This area of your life is important, so if you intend to take dating apps seriously then you should take photo optimization seriously, and of course you can then also use the photos for other things.
It has been ~1 year since this was posted, and photorealistic image generation has since gone mainstream, with e.g. ChatGPT now offering it natively. People can now generate "improved" photos of themselves.
How has this affected dating apps? Could anyone actively using them weigh in on this?
I imagine the equilibrium to be everyone having extremely attractive AI-generated photos of themselves. (If the person is attractive to begin with, the AI version probably only has some minor tweaks compared to the reference photos, but if they are not so attractive originally, the difference could be quite jarring.)
How far are we from that equilibrium right now and how fast are things changing? Does anything other than a race to the bottom with AI photos seem to be happening? (E.g. do people already have an aversion to AI photos in sufficient numbers that they penalize photos that look "too good"?)
What? I am telling you it is.
and preferentially read things that take less cognitive effort (all else equal, of course)
Sorry, no offense meant; I am just genuinely surprised. But I believe you now, so I guess our experiences are just very different in this regard.
that sentence is semantically dense and grammatically complicated. I have to put in some work to break it down into noun phrases and such and figure out how it fits together. requiring cognitive work of potential readers before they've even decided if they want to read your thing is extremely anti-memetic
Sorry, but I call bullshit on this being a problem for you, or any other LW reader.
Now, you are probably right that in the general population there is a significant number of people for whom parsing anything but the simplest grammatical structures imposes noticeable extra cognitive load, lowering overall memetic fitness.
But as the post outlined, we are not optimizing for the number of clicks; we are optimizing for something like P(loves article|clicked). See also https://www.lesswrong.com/posts/vidXh2DJtnqH5ysrZ/a-blog-post-is-a-very-long-and-complex-search-query-to-find
So if you are worried about someone bouncing off the title because of its grammatical complexity, you had better also write the article itself with simple grammar (and simple content). Are there situations where your main goal is to reach as many people as possible? Sure, but then you probably want to optimize both the title and the content with that in mind. And at that point what you are doing is probably closer to "political communication" than to "writing something for like-minded people".
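One toy way to see the tradeoff (my framing, not something from the linked post): if $N$ people see the title, the number of readers you actually care about is roughly

$$N \cdot P(\text{click} \mid \text{title}) \cdot P(\text{loves article} \mid \text{clicked}),$$

and a title optimized for accessibility mostly buys you the middle factor at the expense of the last one, by pulling in clicks from people the article was never going to serve.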
the tone of it sounds like a dry academic paper. those are typically not very fun to read. it signals that this will also not be fun to read
For me it signals more positive things like seriousness and better epistemics[1], but you probably have a point that there is space to signal the tone of the article in the title. Still, I don't think reducing its information density is the right way to do it.
Well, for blog posts at least. In actual academic papers everyone is expected to write in a serious-sounding academic style, so there is much less signal there. ↩︎
I'm worried about Chicago
In what world is this a good title? It gives basically zero information about the topic of the article; it is exactly the kind of clickbait that purposefully omits any relevant information just to make people click. I personally associate such clickbait titles with terrible content, and they make me much less likely to click.[1]
What would be wrong with just using the subtitle as the actual title? It's much more informative:
Slowing national population growth plus remote work spell big trouble for Midwestern cities
Unfortunately, various platforms actively push authors towards clickbait titles, and even authors who dislike them often don't have much of a choice. Good content often hides behind terrible clickbait titles (and thumbnails) nowadays, so I sometimes have to make an active effort to click on clickbait anyway, since otherwise I would miss out on discovering it. ↩︎
Interesting. While the post resonates with me, I feel like I am currently trying to go in the opposite direction: avoiding getting nerd sniped by all the various fields I could be getting into, and instead strategically choosing skills that are most useful for resolving the bottlenecks on my other goals, the ones that are not "learning cool technical things".
Which is interesting, because based on your posts so far, you struck me as the kind of person I am trying to be more like in this regard, i.e. more strategic about their goals. So maybe the pendulum swings back, and eventually you will find out that letting yourself get nerd sniped by random things does have some hidden benefits? I guess I will find out in a few years (if we are still alive).
Wonder if correctness proofs (checked by some proof assistant) can help with this.[1]
I think the main bottleneck for correctness proofs has been that writing the proofs takes much more effort than writing the programs themselves, and automated theorem provers have so far been nowhere near good enough to close that gap.
Writing machine-checked proofs is a prime RL target, since proof assistant kernels should be adversarially robust. We have already seen great results from systems like AlphaProof.
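To make "machine-checked correctness property" concrete, here is a minimal sketch in Lean 4 (the function `rev` and the lemma names are made up for this example): the kernel either accepts these proofs or rejects them, which is exactly the kind of hard-to-game signal RL needs.

```lean
-- A naive list reversal, plus a machine-checked correctness property for it.
def rev {α : Type} : List α → List α
  | []      => []
  | x :: xs => rev xs ++ [x]

-- Helper lemma: reversing an append reverses the parts and swaps them.
theorem rev_append {α : Type} (xs ys : List α) :
    rev (xs ++ ys) = rev ys ++ rev xs := by
  induction xs with
  | nil => simp [rev, List.append_nil]
  | cons x xs ih => simp [rev, ih, List.append_assoc]

-- Correctness property: reversing twice is the identity.
theorem rev_rev {α : Type} (xs : List α) : rev (rev xs) = xs := by
  induction xs with
  | nil => simp [rev]
  | cons x xs ih => simp [rev, rev_append, ih]
```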
One counterargument I could see is that writing the correctness properties themselves could turn out to be a major bottleneck. It might be that for most real-world systems you can't write succinct correctness properties. ↩︎
Out of curiosity, why do you post on Twitter? Is it network effects, or does it just have such a unique culture that's not available anywhere else? (Or something else?) Do you not feel any kind of aversion towards the platform that would potentially discourage you from interacting? (I don't mean this to sound accusatory; if your position is "yes, Twitter is awesome and more rationalists should join", I would also like to hear about that.)