One quirk about the political donation problem is that activists respond to negative messaging but the broader base -- people who may donate/volunteer/vote based on the candidate not the party -- responds to positive messaging. I think it's fair to say this creates a "poisoning the well" problem, where the fundraising emails are doom, activists respond to doom, but the base is cut off from the party. Relevant to this forum, something like "ensure AI benefits everyone" might message better with the base, and "prevent AI from killing everyone" might message better with the activists.
Key point: minimalism the design aesthetic, and minimalism meaning actually having less stuff, are opposites. Minimalist design means an object highlights the one function it performs, very pleasingly. Minimalist stuff-having means you own fairly cheap-looking multipurpose tools.
Ironically the most important domain this applies to is storage containers. Your path to decluttering succeeds if your first instinct is to grab random boxes/cartons/bags to start storing stuff, and it fails if you start by buying new containers.
Marie Kondo's book is very good, you should read it.
Your take is consistent with political messaging advice that people like water and they like drinks but they don't like watered-down drinks. Swing voters react to what's said most often, not to the average of things that get said around them.
I'm not all that sure how AI search works. It searches, then reads and indexes the top 20 hits, or something like that. Is reading a webpage the expensive part? If so then caching/context window management might matter a lot. Plain text might backfire if you actually lose table structure and stuff. You can probably ignore styles at least.
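(To make the table point concrete: a minimal sketch of dropping styles while keeping table structure, assuming you're cleaning pages yourself with BeautifulSoup; the pipe-separated-row format is just my arbitrary choice.)

```python
# Minimal sketch: strip styling/scripts from a page but keep table structure
# before handing it to a model. Assumes BeautifulSoup; rendering tables as
# pipe-separated rows is arbitrary, just something the model can still parse.
from bs4 import BeautifulSoup

def page_to_model_text(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")

    # Styles, scripts, and inline SVG are almost certainly ignorable.
    for tag in soup(["style", "script", "svg", "noscript"]):
        tag.decompose()

    # Flatten each table into pipe-separated rows so the structure survives.
    for table in soup.find_all("table"):
        rows = []
        for tr in table.find_all("tr"):
            cells = [c.get_text(" ", strip=True) for c in tr.find_all(["th", "td"])]
            rows.append(" | ".join(cells))
        table.replace_with("\n".join(rows))

    return soup.get_text("\n", strip=True)
```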
How do I use AI to search for information on 1000 companies or so? This turns out to be harder than I thought. The difficulty can be expressed computationally: I'm requesting output that's linear in the size of the input list (one answer per company), and at current token prices that's a lot of hard cash. This is interesting because to non-computer-scientist normies, this wrinkle is really not apparent. AIs can do basic agentic reasoning (like looping over a list of 1000 companies) and a bunch of searches, so surely you can combine them? Yes, it turns out, but it'll cost you like $38 for one query.
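Back-of-envelope version of where the money goes, with made-up numbers (pages read per company, tokens per page, and per-token prices are all assumptions, not anyone's real rates):

```python
# Rough cost model for "research N companies with a search-capable model".
# Every number here is an assumption for illustration, not a real price sheet.
N_COMPANIES = 1000
PAGES_PER_COMPANY = 5          # search hits the agent actually reads
TOKENS_PER_PAGE = 3_000        # one scraped page, after cleanup
OUTPUT_TOKENS_PER_COMPANY = 200

INPUT_PRICE = 3 / 1_000_000    # $ per input token (assumed)
OUTPUT_PRICE = 15 / 1_000_000  # $ per output token (assumed)

input_tokens = N_COMPANIES * PAGES_PER_COMPANY * TOKENS_PER_PAGE
output_tokens = N_COMPANIES * OUTPUT_TOKENS_PER_COMPANY
cost = input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE
print(f"~{input_tokens:,} input tokens, ~{output_tokens:,} output tokens, ~${cost:.0f}")
# The point: cost scales linearly with the list, and the input side (reading
# pages) dominates, which is invisible if you think of it as "one query".
```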
I went with Ottogrid.ai, which was fun, but expensive.
If I really had time to hack at this, I would try to do breadth passes then depth passes. Like google "top 20 AI companies", scrape stats, then do narrower searches as time goes on. Is there a name for this algo? Idk. But people use it constantly, even when cleaning the kitchen and so on.
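Something like this sketch, where `search` and `extract_stats` are hypothetical placeholders for whatever search tool and extraction call you'd actually use:

```python
# Sketch of breadth-then-depth research: a cheap broad query first, then
# progressively narrower queries only for companies that still have gaps.
# `search` and `extract_stats` are hypothetical stand-ins, not a real API.

def search(query: str) -> list[str]:
    """Placeholder: return page texts for a query."""
    return []

def extract_stats(pages: list[str], company: str) -> dict:
    """Placeholder: pull whatever fields you can find for one company."""
    return {}

REQUIRED_FIELDS = {"founded", "headcount", "funding"}

def research(companies: list[str]) -> dict[str, dict]:
    results = {c: {} for c in companies}

    # Pass 1 (breadth): one broad query can fill in facts for many companies at once.
    broad_pages = search("top AI companies funding headcount")
    for c in companies:
        results[c].update(extract_stats(broad_pages, c))

    # Pass 2 (narrower): one query per company that still has gaps.
    for c in companies:
        missing = REQUIRED_FIELDS - results[c].keys()
        if missing:
            pages = search(f"{c} " + " ".join(sorted(missing)))
            results[c].update(extract_stats(pages, c))

    # Pass 3 (narrowest): one query per company per field that's still missing.
    for c in companies:
        for field in sorted(REQUIRED_FIELDS - results[c].keys()):
            results[c].update(extract_stats(search(f"{c} {field}"), c))

    return results
```

The broad pass amortizes one search across many companies; you only pay for the narrower searches where gaps survive.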
But is that not done already???? This seems like the first practical AI problem you'd solve once you had search-capable AI or early agentic or early reasoning models.
No it really doesn't, it sells you problems you already have and want to hear more about. People are not using their System 2 when they read the news, it's all just low-effort scanning and pattern matching against present experiences. I mean you can riff on that and get somewhere with alliance-building when it comes to AI, but I can tell you all the "trust the science" liberals are already exempting AI scientists.
I agree with creating alliances. Remember that only activists like being given new problems. Most people dislike being told about a brand new problem that they don't even have yet.
This is a very novel and not-useless way to break down the aphorism "don't worry about things not in your control." Morality is supposed to be over the action-utility space, not over the "how good is this state" space. So if you're guilt-prone... and use logical obsession to convert guilt into morality... you might notice you're making an incorrect leap in feeling guilty. (Or try CBT.)
OK the thesis makes sense. Like, you should be able to compare "people generally following rationalist self-improvement methods" and "people doing some other thing" and find an effect.
It might have a really small effect size across rationalism as a whole. And rationalism might have just converged with other self-improvement systems. (Honestly, if your self-improvement system is just "results that have shown up in 3 unrelated belief systems" you would do okay.)
It might also be hard to improve, or accelerate, winningness in all of life through System 2 thinking. Then what are we actually doing when we're thinking in System 2 and believe we're improving? Idk. Good questions, I guess.