This is a special post for quick takes by RomanHauksson. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.


Sometimes I stumble across a strange genre of writing on the internet.

From GameB Home:

We're gaining the power of gods, but without the love, wisdom and discernment of gods—that is a self-extinctionary scenario. Welcome to Game B, a transcontextual inquiry into a new social operating system for humanity. Game A is what got us to this time of metacrisis and collapse. Game B is what emerges in response. Come play with us as we learn to become wiser, together, gain coherence and begin to move towards a new social operating system emphasizing human wellbeing, metastability, and built on good values that we will be happy to call home and we will be proud to leave to our descendants.

From Bryan Johnson's Blueprint:

The enemy is Entropy. The path is Goal Alignment via building your Autonomous Self; enabling compounded rates of progress to bravely explore the Zeroth Principle Future and play infinite games.

If you've done enough random walks through cyberspace, you might have read this kind of language in some deep corner of the internet too. It's characteristically esoteric and stuffed with complex vocabulary. I can tell that the writers have something in mind when they're writing it – that they're not totally off the rails – but it still comes off as spiritual nonsense.

Look, I get it! Rationalist writing is stuffed with jargon and machine learning analogies, and self-help books feature businessy pseudoframeworks and vapid motivational prose. It's okay for your field to have its own linguistic subculture! But when you try to dress up your galaxy brain insights in similarly galaxy brain vocabulary, you lose me.

This kind of writing makes me uncomfortable in a way I can't put into words, like the feeling one gets when they look at a liminal photograph. Maybe because it's harder for me to judge the epistemics of the writing. I feel it trying to unfairly hijack the part of my brain which measures the insightfulness of text by presenting itself as mystical, like forbidden knowledge I've finally revealed. But if these insights were really all that, they'd have the balls to present themselves candidly!

Yes, metaphors and complex language are sometimes necessary to get your point across and make text engaging. In Cyborgism, @janus writes:

Corridors of possibility bloom like time-lapse flowers in your wake and burst like mineshafts into nothingness again. But for every one of these there are a far greater number of voids–futures which your mind refuses to touch. Your Loom of Time devours the boundary conditions of the present and traces a garment of glistening cobwebs over the still-forming future, teasing through your fingers and billowing out towards the shadowy unknown like an incoming tide.

But unlike the previous examples, this beautifully flowery depiction of GPT-assisted writing works because it's clearly demarcated within a more down-to-earth post. Good insights survive scrutiny even when nude.

dr_s:

This kind of writing makes me uncomfortable in a way I can't put into words, like the feeling one gets when they look at a liminal photograph.

I think it's a fair feeling. There's a certain very famous (at least in our country) Italian 19th-century novel in which at one point a priest sets out to bamboozle a peasant boy to get out of doing something he doesn't want to do. His solution is to pile up complexity and add a healthy dose of Latin on top, so that the illiterate farmer is left confused and frustrated, but can't quite put his finger on where he was cheated.

To put it bluntly: talking all difficult is a good way to get away with making stupid stuff sound smart and simple stuff sound complex. You don't even necessarily do it on purpose; sometimes entire groups simply drift into it as a result of trying to one-up each other at sounding legitimate and serious (hello, academic writing). Jargon is useful for summarizing complex concepts in simple expressions, but you often don't need that much of it, and the more you overload your speech with it, the fewer people will be able to follow the whole thing. Even for people who do know the jargon, recalling what each term means isn't always immediate. So, given how easy it is to overuse jargon to fool people or to position oneself above them, it's not that strange that we sometimes develop a heuristic that makes us suspicious of what looks like too much of it.

With the two extracts you posted, the first one sounds to me like just another variation on the theme of "to stop [bad thing] we should all become GOOD", which is always a very "no shit, Sherlock" thing. As for the second extract, I honestly can't quite tell what it's saying, which is worrying in its own way.

So, yeah, +1 for talking as simply as possible. Not any simpler, hopefully, but there's rarely a risk of that.

You don't even necessarily do it on purpose; sometimes entire groups simply drift into it as a result of trying to one-up each other at sounding legitimate and serious (hello, academic writing).

Yeah, I suspect some intellectual groups write like this for that reason: not actively trying to trick people into thinking it's more profound than it is, but a slow creep into too much jargon. Like a frog in boiling water.

Then, when I look at their writing, it seems needlessly unintelligible to me, even when it's writing designed for a newcomer. How do they not realize this? Maybe the water just feels warm to them.

Microsolidarity

Microsolidarity is a community-building practice. We're weaving the social fabric that underpins shared infrastructure.

The first objective of microsolidarity is to create structures for belonging. We are stitching new kinship networks to shift us out of isolated individualism into a more connected way of being. Why? Because belonging is a superpower: we’re more courageous & creative when we "find our people".

The second objective is to support people into meaningful work. This is very broadly defined: you decide what is meaningful to you. It could be about your job, your family, or community volunteering. Generally, life is more meaningful when we are being of benefit to others, when we know how to contribute, when we can match our talents to the needs in the world.

When the human tendency to detect patterns goes too far

And, apophenia might make you more susceptible to what researchers call ‘pseudo-profound bullshit’: meaningless statements designed to appear profound. Timothy Bainbridge, a postdoc at the University of Melbourne, gives an example: ‘Wholeness quiets infinite phenomena.’ It’s a syntactically correct but vague and ultimately meaningless sentence. Bainbridge considers belief in pseudo-profound bullshit a particular instance of apophenia. To find it significant, one has to perceive a pattern in something that is actually made of fluff, and at the same time lack the ability to notice that it is actually not meaningful.

Hello RomanHauksson,

speaking of epistemics, what is it that you actually Feel in response to what is written? I mean, your analogy about being "uncomfortable in a way I can't put into words, like the feeling one gets when they look at a liminal photograph" seems vaguely reminiscent of the kind of language you are uncomfortable with, does it not?

If I would "translate" the first paragraph from Game B, I believe it means something like this:

"We have this vague sense built upon various experiences that rapid growth in technology doesn't seem to make the world safer - rather, it increases the deadliness of its consequences.
We call ourselves Game B(etter), and really believe that looking up from our respective specializations, and using social/self-development tools to improve cooperation (see page 2), we can shift the feeling of doom, to one of solid good feelings. We connect this increase in deadliness with Capitalism, but believe our focused effort will shift the crisis it has created, into a grand opportunity.
How we treat each other now will have lasting consequences, and if you believe that, join us and treat us well - we will treat you well too (see page 3 for details). If we are more altruistic, we can create relationships that oppose the egoistical trend, that also increases deadliness of technology - And so we will achieve two goals at the same time: Make technology safe, and do something that will guarantee that we will feel proud in the face of our descendant."

Now, this is just an interpretation, but I wonder if it makes more sense to you now? There are a lot of things that could be added, but to me it is just Jargon, or simply that the meaning of words shifts towards a different spectrum.

I would agree that it is complex language, but I wouldn't say it is spiritual. It talks about 'gods', but that is much more 'religious' lingo than spiritual. Shouldn't it have more sentences that say things like: "Now is the time for our personal and collective Transformation" - "Reconnect with your true essence" - "Find others that harmonize with your wish for a more vibrant frequency". - "An increased connection with the Source cleansing the dregs we have come to Earth to cleanse, ushering in a new Age of Humanity".

It is of course a minor point, but I also wonder about what you define as Spiritual. Simply because the expression "new social operating system", in and by itself, doesn't seem that spiritual to me. It is abstract, but so are information and bits. Human relationships and social systems might be more difficult to map, but they follow the same underlying principle, and can be directly traced to things like psychology, social sciences, pedagogy and learning theory. And none of those necessarily has any Spiritual undertones.

I do agree in a more abstract sense however, even if the minor points above still stand. The quotes you have found and the Spiritual lingo are probably much more closely related in their applications. The same sciences can be applied to companies, schools or businesses that 'make money', or, as above, a 'Community' that works towards "increasing certain values".

Kindly,
Caerulea-Lawrence

From Pluriverse:

A viable future requires thinking-feeling beyond a neutral technocratic position, averting the catastrophic metacrisis, avoiding dystopian solutionism, and dreaming acutely into the techno-imaginative dependencies to come.

A common and disappointing pattern I've noticed is articles simply introducing a topic without making any claims about it. Example: an article in my university's student magazine purportedly about the ethics of AI art, but all it does is mention various issues without taking any stance on them. Like, duh, of course people can make deepfake nudes and Stable Diffusion was trained on copyrighted works; that's old news! What do you, the writer, think humanity should do about it, and why?

Did you know that caffeine is not a magical property of certain drinks? It's just a chemical you can buy in pill form. You don't "need coffee in the morning", you need caffeine. So if you don't like the taste or the preparation time or the price or that it stains your teeth and messes with your digestion, you can just supplement the caffeine and drink something healthier. And don't get me started on energy drinks...

Plus, caffeine pills give you more control than caffeinated drinks. Unlike coffee or tea, you know exactly how much caffeine is in each dose, and you can buy "extended release" pills which have a smoother peak. Many drink green tea because it contains L-theanine, which reduces jitteriness, but it doesn't have the ideal amount of L-theanine relative to the caffeine. Fear not, because L-theanine isn't magical either! You can also buy pills that give you the right dosage.
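For example, a commonly cited pairing in supplement discussions (an assumption here, not a verified recommendation) is roughly twice as much L-theanine as caffeine. A sketch of the arithmetic:

```python
# Hypothetical dosing arithmetic; the 2:1 theanine:caffeine ratio is an
# often-cited figure from supplement discussions, not medical advice.
caffeine_mg = 100                    # one typical pill's worth
theanine_ratio = 2.0                 # assumed target ratio
print(caffeine_mg * theanine_ratio)  # 200 mg of L-theanine to pair with it
```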

Caffeinated drinks are a front! They're a sham! Peel back the curtain! Free your mind! :P

an analogy between longtermism and lifespan extension

One proposition of longtermism is that the extinction of humanity would be especially tragic not just because of the number of people alive today who would die, but because this would eliminate the possibility for astronomical numbers of future people to exist. This implies that humanity should reprioritize resources away from solving near term problems and towards safeguarding itself from extinction.[1]

On a personal level, I have a chance of living to experience longevity escape velocity, the point at which anti-aging technology advances quickly enough that I would only die from accidents rather than natural causes. I may live for thousands of years, and these years would be much better than my current life because of improvements in general quality of life. Analogous to the potential of many future generations, this future would be so awesome for me that I should be willing to sacrifice a lot to increase the chance that it happens.

I could follow a version of Bryan Johnson's "Blueprint" lifestyle, which he designed to slow or reverse aging as much as possible, for around $12,000 per year. This might not be worth it. Suppose this protocol would extend my expected lifespan by 20%, but the extra $12,000 per year, if spent elsewhere, would increase my quality of life by 30%. This would mean I could gain more (quality of life × lifespan) by spending that money elsewhere.[2]
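As a toy calculation with the post's hypothetical numbers (a sketch, not a real estimate):

```python
# Toy comparison of the two spending options, using only the numbers above.
baseline_lifespan = 1.0  # normalized expected remaining lifespan
baseline_quality = 1.0   # normalized quality of life

# Option A: spend ~$12k/year on the longevity protocol (+20% lifespan).
protocol = (baseline_lifespan * 1.20) * baseline_quality  # 1.20

# Option B: spend the same money on quality of life instead (+30% quality).
elsewhere = baseline_lifespan * (baseline_quality * 1.30)  # 1.30

print(protocol < elsewhere)  # True: under these assumptions, B wins
```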

However, a lifestyle intervention that would nominally increase my expected lifespan by 20% would actually increase it by much more than 20%, because our knowledge of how to extend lifespan grows as time passes: every extra year I survive buys access to better anti-aging technology. In other words, spending money on lifestyle interventions to promote longevity instead of quality of life increases the chance that I live to experience longevity escape velocity, so it may be worth it.
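Here's a minimal simulation of that dynamic; the starting lifespans, rate of medical progress, and growth rate are all made-up assumptions, chosen only to illustrate how a 20% head start can snowball:

```python
def years_survived(remaining, progress=0.005, growth=1.08):
    """Toy model: each calendar year you age one year, but medical progress
    hands back `progress` extra years of expected lifespan, and progress
    itself compounds by `growth` annually. Longevity escape velocity (LEV)
    is reached once progress adds >= 1 year per year. All parameter values
    are illustrative assumptions, not empirical estimates."""
    lived = 0
    while remaining > 0:
        if progress >= 1.0:
            return float("inf")  # LEV: natural death no longer expected
        remaining += progress - 1.0
        progress *= growth
        lived += 1
    return lived

print(years_survived(50))        # finite (~54 years): dies before reaching LEV
print(years_survived(50 * 1.2))  # inf: the 20% head start is enough to reach LEV
```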

society spending resources on neartermist issues : me spending money on immediate quality of life :: society spending money on longtermist issues : me spending money on lifespan extension


  1. And ensuring that the project of populating the universe goes well. E.g., preventing S-risks. ↩︎

  2. This also means less money to donate to charity and/or less slack to be in a position to work directly on the world's important problems. ↩︎

[anonymous]:

It's always interesting to see what type of people are interested in longevity. Most people would like to have longevity, but some people are obsessed. I wonder whether people historically viewed their children as a copy of themselves more than we do now. It seems like people had similar lives for multiple generations in the past, compared to the social and geographical mobility we now enjoy. Does this detach us from our children existentially? Also, what kind of people would view others as a viable path to their own gene propagation, and what kind of people wouldn't see others that way, but rather as competition?

I wonder whether people historically viewed their children as a copy of themselves more than we do now.

I would guess that both in the past and now, some people see their children as copies of themselves, and some do not. (Though it is possible that the relative numbers have changed.) Seems to be more about personality traits than about... calendar.

Life extension does provide an alternative to having kids as a way of self-extension. They should, in my view, be seen as deeply related, so long as the parent does enough memetic work to fully encode their personality. I wouldn't mind being a trill. But it is an immense loss to lose the contents of a mind. My offspring should have my knowledge available, as a trill would. And I'd like my knowledge to be available to anyone. In the meantime, I'd still like my form and personality to continue as one for much longer than humans historically have, and I'd like the same for both my children and everyone's children. We can extend lifespan very significantly without messing up the replicator equation, if we also create technologies for dramatically more efficient (lower-temperature) forms of life than biology. When true ascension is possible, my family will be deep-space extropians, every part of the body participating in brainlike computation and every part of the body an intelligence-directed work of art, rather than living on the surface of planets.

Ideally, a competitive market would drive the price of goods close to the cost of production, rather than the price that maximizes revenue. Unfortunately, some mechanisms prevent this.

One is the exploitation of the network effect, where a good is more valuable simply because more people use it. For example, a well-designed social media platform is useless if it has no users, and a terrible addictive platform can be useful if it has many users (Twitter).

This makes it difficult to break into a market and gives popular services the chance to charge what people will pay instead of the minimum amount required to keep the lights on. Some would say this is the price that things should be, but I disagree. Life should be less expensive for consumers, and diabetic people shouldn't need to pay an arm and a leg for insulin.

Or maybe I'm just seething that I just willingly paid $40 for a month's access to a dating app's "premium" tier 🤢.

Yes, the increased chance that I find a good person to date in the next month is worth ≥$40 to me. It's still the most efficient way to discover and filter through other single people near me. But I doubt it costs this much to maintain a dating app, even considering that the majority of people don't pay for the premium tier.

The other thing that irks me about the network effect is that I don't always like the thing that matches the public's revealed preferences. I think this dating app is full of dark patterns – UI tricks that make it as addictive as possible. And it encourages shallow judgement of people. I would truly rather see people's bio front and center, rather than their face, and I want them to have more space to talk about themselves. I wish I could just fill out a survey on what I'm looking for and be matched with the right person. Alas, OKCupid has fallen out of fashion, so instead I must dodge dark patterns and scroll past selfies because human connection has been commercialized.

Most of the "mechanisms which prevent competitive pricing" is monopoly.  Network effect is "just" a natural monopoly, where the first success gains so much ground that competitors can't really get a start.  Another curiosity is the difference between average cost and marginal cost.  One more user does not cost $40.  But, especially in growth mode, the average cost per user (of your demographic) is probably higher than you think - these sites are profitable, but not amazingly so.  

None of this invalidates your anger at the inadequacy of the modern dating equilibrium.  I sympathize that you don't have parents willing to arrange your marriage and save you the hassle.

I didn't know about either of those concepts (network effects being classified as a natural monopoly and the average vs. marginal cost). Thanks!

While I am frustrated by the current dating landscape, I think dating apps are probably a net positive – before they were popular, it was impossible to discover as many people. And while arranged marriages probably have the same level of satisfaction as freely chosen marriages, I'm glad that I have to find my own partner. It adds to my life a sense of exploration and uncertainty, incentivizes me to work on becoming more confident/attractive, and helps me meet more cool people as friends.

Or maybe I'm just rationalizing.

Does anyone know whether added sugar is bad for you if you ignore the following points?

  1. It spikes your blood sugar quickly (it has a high glycemic index)
  2. It doesn't have any nutrients, but it does have calories
  3. It does not make you feel full, so it makes it easier to eat more calories, and
  4. It increases tooth decay.

I'm asking because I'm trying to figure out what carbohydrate-dense foods to eat when I'm bulking. I find it difficult to cram in enough calories per day, so most of my calories come from fat and protein at the moment. I'm not getting enough carbs. But most "carby foods for bulking" (e.g. potatoes, rice) are very filling! E.g., a cup of rice has 200 kcal, but a cup of nuts has 800.

I did some stats to figure out which carby foods have a low glycemic index but also a low satiety index, i.e., foods that don't fill you up much per calorie. My analysis showed that sponge cake was a great choice, having a glycemic index of only 40 while being the least filling of all the foods I analyzed!
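For the curious, here's a sketch of that kind of analysis; the index values below are placeholders for illustration, not the published tables:

```python
import pandas as pd

# Placeholder values; real glycemic index and satiety index numbers come
# from published tables and will differ.
foods = pd.DataFrame({
    "food": ["sponge cake", "white rice", "boiled potatoes", "oatmeal"],
    "glycemic_index": [40, 73, 78, 55],
    "satiety_index": [65, 138, 323, 209],  # white bread = 100
})

# For a low-GI, easy-to-overeat bulking food, lower is better on both axes.
foods["rank"] = foods["glycemic_index"].rank() + foods["satiety_index"].rank()
print(foods.sort_values("rank"))  # sponge cake comes out on top here
```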

But common sense says that cake would be classified as a "dirty bulk" food, which I'm trying to avoid. If it's not dirty for its glycemic index, what makes it dirty? Is it because cake has a "dirty" kind of fat, or is there something bad about sugar besides its glycemic index?

Just going off of the points I listed, eating cake to bulk up isn't "dirty", except for tooth decay. That's because

  1. Cake has a low glycemic index, I think because it has a lot of fat?
  2. I would be getting enough nutrients from the rest of what I eat; cake would make up the surplus.
  3. The whole point of me eating cake is to get more calories, so this point is nil.

What am I missing?

Ascorbic acid seems to be involved in carbohydrate metabolism, or at least in glucose metabolism, which may be why the small amounts of vitamin C in an all-meat diet seem to be sufficient to avoid scurvy - negligible carbohydrate intake means a reduced need for vitamin C.  Both raw unfiltered honey and fruits seem like they don't cause the kind of metabolic derangement attributed to foods high in refined carbohydrates like refined grains and sugar.  Empirically, high-carbohydrate foods in the ancestral diet are usually high in vitamin C.  Honey seems like an exception, but there might be other poorly understood micronutrients in it that help as well.  So it seems probable but not certain that taking in a lot of carbohydrates without a corresponding increase in vitamin C (and/or possibly other micronutrients they tend to come with in fresh fruit) could lead to problems.

Seeds (including grains) also tend to have high concentrations of antinutrients, plant defense chemicals, and hard to digest or allergenic proteins (these are not mutually exclusive categories), so it might be problematic in the long run to get a large percentage of your calories from cake for that reason.  Additionally, some B vitamins like thiamine are important for carbohydrate metabolism, so if your sponge cake is not made from a fortified flour, you may want to take a B vitamin supplement.  

Finally, sponge cake can be made with or without a variety of adulterants and preservatives, and with higher-quality or lower-quality fats.  There is some reason to believe that seed and vegetable oils are particularly prone to oxidation and may activate torporific pathways causing lower energy and favoring accumulation of body fat over other uses for your caloric intake, but I haven't investigated enough to be confident that this is true.

I wouldn't recommend worrying about glycemic index, as it's not clear high glycemic index causes problems.  If your metabolism isn't disordered, your pancreas should be able to release an appropriate amount of insulin, causing the excess blood sugar to be stored in fat or muscle cells.  If it is disordered, I'd prioritize fixing that over whatever you're trying to do with a "bulk."  Seems worth reflecting on the theory behind a "bulk," though, as if you're trying to increase muscle mass, I think the current research suggests that you want to:

  • Take in enough protein
  • Take in enough leucine at one time to trigger muscle protein synthesis
  • Take in enough calories to sustain your activity level

There is no consensus on the cause of metabolic syndrome (which is responsible for great amounts of cardiovascular disease and cognitive decline), but some experts like UCSF professor Robert Lustig, MD, believe that the main cause is fructose in the diet. Table sugar is half fructose, and about half of the carbs in most fruits and vegetables are also fructose (with apples and pears being about 70% fructose and cherries being about 30%).

Cultures that have traditionally relied heavily on carbs, e.g., East Asia, get almost all of their carbs from starchy foods that contain zero fructose. Also, fructose is about 7 times worse than glucose at producing advanced glycation end products (AGEs).

I find it difficult to cram in enough calories per day, so most of my calories come from fat and protein at the moment. I'm not getting enough carbs.

Why do you believe that you need calories from carbs to bulk?

I personally created a mix that might be interesting for you as well; it has a lot of calories but isn't very filling:

300ml water + 30ml Liquid Aminoacids + 30ml peanut oil + ~5ml honey + one spoon of pulverized beetroot powder

Theoretically, it makes a lot of sense to me that consuming amino acids is less filling than consuming proteins because the body doesn't need to do work to break them down. That also seems to match my experience that I can easily drink it after another meal. 

Generally consuming oil directly doesn't taste good and amino acids directly also doesn't taste very good, but mixing them together tastes a lot better. 

Could you give specific examples for the liquid amino acids you use?

I used to consume thousands of calories of pure sugar (mixed with water) on long gym days. I did this in line with recommendations for athletes that simpler carbohydrates are better for endurance activity because of their ease of digestion, and because the calories were used to fuel exercise rather than being converted to excess adipose tissue. Cake is typically 'dirty', in my opinion, because regular cake consumption tends not to correlate with a healthful diet, and because the calories cake takes up can push out more nutrient-dense foods. But I don't think cake, or almost any 'food', is bad per se, only insofar as it contributes to a dietary pattern that is lacking in nutrients. If you're bulking and adequately meeting nutrition targets, then eating calorically dense foods is, I think, neutral wrt health, though eating lots of fatty nuts might be more healthful. Lmk if studies for any of the above claims would be helpful; for a less evidence-based example, I think of Michael Phelps eating lots of candy and 'unhealthy' foods when training.

Concision is especially important for public speakers

If I were going to give a talk in front of 200 people, making it one minute less concise than necessary would waste ~3 hours of the audience's time in total (200 person-minutes ≈ 3.3 hours), so I should be willing to spend up to 3 hours trimming that minute.

In 95%-ile isn't that good, Dan Luu writes:

Most people consider doing 30 practice runs for a talk to be absurd, a totally obsessive amount of practice, but I think Gary Bernhardt has it right when he says that, if you're giving a 30-minute talk to a 300 person audience, that's 150 person-hours watching your talk, so it's not obviously unreasonable to spend 15 hours practicing (and 30 practice runs will probably be less than 15 hours since you can cut a number of the runs short and/or repeatedly practice problem sections).

Maybe someone should make a dating app for effective altruists, where people can indicate which organizations they work for / which funds they receive, and conflicts of interest are automatically detected. Potential solution to the conflict between professional and romantic relationships in this weird community. Other ideas:

  • Longer profiles, akin to dating docs
  • Calendly date button
  • OK Cupid-style matching algorithm, complete with data such as preferred cause area
  • Tools for visualizing your polycule graph
  • Built-in bounties or prediction markets to incentivize people to match-make
  • A feature which is just a clone of reciprocity.io, where you can anonymously indicate who you'd be open to dating and, if two people indicate each other, they both get notified (a sketch of this logic appears below)

This is half a joke and half serious. At least it's an interesting design challenge. How would you design the ideal dating app for a unique community without traditional constraints like "must be gamified to make people addicted", "needs a way to be profitable", "must overcome network effects", and "users aren't open-minded to strange features"?
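For what it's worth, the core logic of the reciprocity feature and the conflict-of-interest detection is simple. Here's a minimal sketch (all names and data structures are hypothetical, and shared affiliation stands in for a fuller grantmaker/grantee check):

```python
from itertools import combinations

# Hypothetical data: private interest lists and org/funder affiliations.
interests = {"alice": {"bob"}, "bob": {"alice", "carol"}, "carol": set()}
affiliations = {"alice": {"OpenPhil"}, "bob": {"MIRI"}, "carol": {"OpenPhil"}}

def mutual_matches(interests):
    """Notify a pair only if both privately indicated each other."""
    return [(a, b) for a, b in combinations(interests, 2)
            if b in interests[a] and a in interests[b]]

def conflicts_of_interest(a, b, affiliations):
    """Flag shared orgs/funders before a match is shown."""
    return affiliations[a] & affiliations[b]

print(mutual_matches(interests))                              # [('alice', 'bob')]
print(conflicts_of_interest("alice", "carol", affiliations))  # {'OpenPhil'}
```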

Manifold.love is in alpha, and the MVP should be released in the next week or so. On this platform, people can bet on the odds that a given pair will enter a relationship lasting at least 6 months.

Prioritizing subjects to self-study (advice wanted)

I plan to do some self-studying in my free time over the summer, on topics I would describe as "most useful to know in the pursuit of making the technological singularity go well". Obviously, this includes technical topics within AI alignment, but I've been itching to learn a broad range of subjects to make better decisions about, for example, what position I should work in to have the most counterfactual impact or what research agendas are most promising. I believe this is important because I aim to eventually attempt something really ambitious like founding an organization, which would require especially good judgement and generalist knowledge. What advice do you have on prioritizing topics to self-study and for how much depth? Any other thoughts or resources about my endeavor? I would be super grateful to have a call with you if this is something you've thought a lot about (Calendly link). More context: I'm an undergraduate sophomore studying Computer Science.

So far, my ordered list includes:

  1. Productivity
  2. Learning itself
  3. Rationality and decision making
  4. Epistemology
  5. Philosophy of science
  6. Political theory, game theory, mechanism design, artificial intelligence, philosophy of mind, analytic philosophy, forecasting, economics, neuroscience, history, psychology...
  7. ...and it's at this point that I realize I've set my sights too high and I need to reach out for advice on how to prioritize subjects to learn!

Some advice (with less justification):

Pick one (1) technical subject[1]. Read the textbook carefully (maybe take notes). Do all the exercises (or at least try to spend >20 minutes on exercises you can't solve). Potentially make flashcards. Study those flashcards. Do the real thing.[2]

I regret having spent so much time reading philosophy, and not learning technical subjects. I have gained remarkably little from "learning how to learn" (except the stuff above) or productivity or epistemology (excluding forecasting)[3]. I remember reading about a heuristic (it might've been on Gwern's site, but I can't find it right now): spend 90% of your time on object-level stuff, 9% on meta stuff, 0.9% on meta-meta stuff, and so on.

Learning forecasting is great. Best learned by doing a thousand forecasts (flows through to probability theory).


  1. I think linear algebra, causal inference or artificial intelligence are good candidates. I am unsure about game theory, it's been useful only in metaphors in my own life—too brittle and dependent on initial conditions. But in general anything where you can do exercises (so most things from 6.) and have them be wrong or right is good (so stuff like coding is better than math because checking a proof depends on knowing what a good proof looks like). ↩︎

  2. I predict you won't finish the textbook. No problem. ↩︎

  3. I think I learned more from a course on social choice theory than all philosophy from before 1950 I have read. ↩︎

socialhacks

A characteristic feature of the effective altruism and rationalism communities is what I call "socialhacks", or unusual tricks to optimize social or romantic activity, akin to lifehacks. Examples include

  • Dating documents
  • Monetary bounties for those who introduce someone to a potential romantic partner if they hit it off
  • A custom-printed T-shirt listing topics one enjoys discussing, their name, or a QR code to their website
  • Booking casual one-on-one calls using Calendly
  • Maintaining an anonymous feedback form
  • Reciprocity: a site where people can choose which others they would hang out with / date, and it only reveals the preference of the other party if they also want to do that activity

Lifehacks live in the fuzzy boundary between useful and non-useful: if an activity is not useful at all, it's not a good lifehack, but if it's too universally useful, it becomes common practice and no longer worthy of being called a "hack" (e.g. wearing a piece of cloth in between one's foot and their shoe to make it easier to put on the shoe and reduce odor, i.e. socks).

Similarly, socialhacks are useful but live on the boundary between socially acceptable and unacceptable. They're never unethical, but they are weird, which is why they're only popular within open-minded, highly coordinated, and optimizing-mindset groups like EAs and rats. Some things would totally be considered socialhacks if they weren't mainstream, like dating apps and alcohol.

I asked GPT-4 to generate ideas for new socialhacks. Here's a curated list. Do you have any other ideas?

  • Hosting regular "speed friend-dating" events where participants have a limited time to talk to each other before moving on to the next person, helping to expand social circles quickly.
  • Using personalized business cards that include not only one's contact information but also a brief description of their hobbies and interests to hand out to potential friends or romantic interests.
  • Developing a "personal brand" that highlights one's unique qualities, interests, and strengths, making it easier for others to remember and connect with them.
  • Establishing a regular "friend check-in" routine, where you reach out to friends you haven't spoken to in a while to catch up and maintain connections.
  • Using a digital portfolio, such as a personal website or blog, to showcase one's interests, hobbies, and achievements, making it easier for potential romantic partners or friends to learn more about them.
  • Utilizing a "get-to-know-me" quiz or survey app, where you can create a personalized questionnaire for friends or potential partners to fill out, discovering shared interests and compatibility.
  • Developing a personal "social calendar" app or tool that helps you track and manage social events, as well as set reminders to reach out to friends and potential romantic partners.

Unfortunately, "social hacking" is already a term in security. The only good suggestion I got out of GPT-4 was "socialvation". So, a second question: do you have any other suggestions?

Sometimes I notice that people in communities formed around an attribute display a higher intensity of that attribute than one would expect – here's my hypothesis for why this might be.

  1. Selection effects: if you pluck a random person from the gym, they will probably be more of a fitness enthusiast than the average person who regularly goes to the gym, because more enthusiastic exercisers work out more often, so you're more likely to have picked them. So if you peer across the gym room, you'll see a bunch of super buff people – not because the average person who goes to the gym is super buff, but because super buff people go to the gym more frequently. (A quick simulation of this follows the list.)
  2. Evaporative cooling: out of every student at my university who would attend its queer pride club at least sometimes (call this set A), some are just on the edge of interest. One of them (call them x) attends a club social and observes a subset B of A: the people who actually showed up. The average member of B is more enthusiastic about queer culture than the average member of A, dissuading x from attending future meetings (they don't fit in) and raising the average enthusiasm of the members of B.
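A quick simulation of the selection effect, with made-up membership numbers:

```python
import random

# Made-up gym: 90 members visit once a week, 10 buff enthusiasts visit six times.
members = [{"buff": False, "visits": 1} for _ in range(90)] + \
          [{"buff": True, "visits": 6} for _ in range(10)]

# Sampling a random person *at the gym* weights each member by visit frequency.
at_the_gym = [m for m in members for _ in range(m["visits"])]
samples = [random.choice(at_the_gym)["buff"] for _ in range(10_000)]

print(sum(m["buff"] for m in members) / len(members))  # 0.10: share of members who are buff
print(sum(samples) / len(samples))                     # ~0.40: share of people you *see* who are buff
```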

My explanation rests on two assumptions which don't apply to every community.

  1. People who display a higher intensity of some trait around which a community is formed attend meetings with that community more often.
  2. More enthusiastic members dissuade more members who are on the edge than they attract.

rational vanity

epistemic status: literal shower thought, uncertain

Gaining social status and becoming more attractive are useful goals because more attractive and higher-status people are treated better (see Halo Effect) and because they increase one's self-confidence. But the desire to improve on these fronts is seen as vanity, a vice. I think there are three reasons this could be:

  1. Our evolutionary ancestors who sought status were more likely to have children, so this desire is biologically hardwired into us. Someone who takes action to increase their status might be doing so out of a cold, rational calculation that says it would help them to achieve their goals, but it's more likely that they're doing it because it feels good.
  2. Often, out of this irrational motivation, people take actions which increase their status but aren't useful in achieving their goals. For example, buying a fancy, expensive car probably isn't worth the money because there are more efficient ways to convert money into status and it might make one come off as extravagant and douchey instead of classy.
  3. Status and attractiveness are zero-sum games because they only mean anything in relation to other people. Everyone buying cosmetic plastic surgery would be a waste of surgeons' time because humanity wouldn't be better off overall.[1] This means that spending resources to move up the status ladder is like defecting against the rest of humanity (see tragedy of the commons). To prevent people from defecting, society established a norm of judging negatively those who are clearly chasing status, and so people have to be sneaky about it. They either have to have some sort of plausible deniability ("I only bought these expensive clothes because I like how they look, not because I'm chasing status") or a genuine reason other than "I want people to treat me better" ("I only got plastic surgery because my unattractiveness was having an especially negative effect on my mental health").

So, here's the result: someone who rationally chases status and makes themselves more attractive in order to better achieve their goals, even if altruistic, is seen instead as someone succumbing to their natural instinct and defecting against the societal norm of not playing the status game.


  1. Although, I've heard the argument that if beauty is intrinsically valuable, humanity would be better off if everyone bought plastic surgery because there would be more beauty in the world. ↩︎

Potential issues with this thought:

  • Conflates attractiveness and status. Talks about them as if they have the same properties, which might not be the case.
  • Do people actually see the pursuit of increased attractiveness and status negatively? For example, if someone said "I want to go to the gym to look better", I think that would be seen as admirable self-improvement, not vanity.
  • Is the norm of judging vanity negatively actually a result of the zero-sum game? I don't know enough about sociology to know how societal norms form.
  • Is irrational vanity actually common? Maybe doing things like buying extravagant cars is less common than I think, or these sorts of acts are more "rationally vain" than I think, i.e. they really are cost-effective ways to increase status.

I try to avoid most social media because it's addictive and would probably have negative effects on my self-image. But I've found it motivating to join social media websites that are catered to positive habits: Strava and Goodreads leverage my innate desire for status and validation to make me run and read more often. They still have dark patterns, and probably still negatively affect my self-image a bit, but I think my use of them is net positive.

Originally, I wanted to write this as a piece of advice, as in "you should consider using social media to motivate yourself in positive habits", but I tentatively think this is a bad writing habit of mine and that I should be more humble until I'm more confident about my conclusions. What do you think?

altruism is part of my self-improvement feedback loop

One critique of utilitarianism is that if you seriously use it to guide your decisions, you would find that for any given decision, the choice that maximizes overall wellbeing is usually not the one that does any good for your personal wellbeing, so you would turn into a "happiness pump": someone who only generates happiness for others to their own detriment. And, wouldn't you know it, we see people like this pop up in the effective altruism movement (whose philosophy stems mostly from utilitarianism), particularly those who pursue earning to give. While most are happy to give away 10% of their income to effective charities, I've heard of some who have taken it to the extreme, to the point of calculating every purchase they make in terms of days of life they could have counterfactually saved via a donation.

However, since its beginnings, EA has shifted its focus away from earning to give and closer to encouraging people to pursue careers where they can work directly on the world's most important problems. For someone with the privilege to consider this kind of career path, I believe this has changed the incentives and made the pursuit of self-fulfillment more closely aligned with maximizing expected utility.

the self-improvement feedback loop

Self-improvement is a feedback loop, or rather, a complicated web of feedback loops. For example,

  • The happier you are, the more productive you are, the more money you make, the happier you are.
  • The more often you exercise, the better your mental health, the better your executive function, the less often you skip your workouts, the more often you exercise.
  • The more often you exercise, the stronger you become, the more attractive you become, the more you benefit from the halo effect, the more likely you are to get a promotion, the more money you make.

It all feeds into itself. Maybe this is just another way of phrasing the effect of accumulated advantage.

let's throw altruism into the loop

In my constant battle to nudge this loop in the right direction, I don't see altruism as a nagging enemy that would take away energy I could use to get ahead. Rather, I see it as part of the loop.

Learning about the privilege I have (not only in the US but also globally) and how I can meaningfully leverage that privilege as an opportunity to help massive numbers of worse-off people has given me an incredible amount of motivation to better myself. Before I discovered EA, my plan was to become a software developer and retire as early as possible. Great life plan, don't get me wrong – but when I learned I could take a shot at solving the world's most important problems, I realized it was a super lame and selfish waste of privilege in comparison.

Instead of thinking about "how do I make as much money as possible?", I now think about

  • How do I form accurate beliefs about the world?
  • What does the world look like, where will it be in the future, and where can I fit in to make it better?
  • Which professional skills are the best fit for me and the most important for having a positive impact?
  • How do I become as productive and agentic as possible?

Notice how this differs from the happiness pump situation. It's more focused on "improving the self to help others" than "sacrificing personal wellbeing to help others". This paradigm shift in what it looks like to try to do as much good as possible brings altruism into the self-improvement feedback loop. It gives my life a sense of meaning, something to work towards. Altruism isn't a diametrically opposed goal to personal fulfillment; it's mostly aligned.

The happier you are, the more productive you are, the more money you make, the happier you are. The more often you exercise, the better your mental health, the better your executive function, the less often you skip your workouts, the more often you exercise. The more often you exercise, the stronger you become, the more attractive you become, the more you benefit from the halo effect, the more likely you are to get a promotion, the more money you make.

And this effect could be even stronger in a group. In addition to the individual loops, seeing other people happy makes you happy, seeing other people productive inspires you to do something productive, people in the group could help each other financially, exercise together, etc.

Yeah definitely! It gets even more complicated when you throw other humans in the loop (pun not intended).

One critique of utilitarianism is that if you seriously use it to guide your decisions, you would find that for any given decision, the choice that maximizes overall wellbeing is usually not the one that does any good for your personal wellbeing, so you would turn into a "happiness pump": someone who only generates happiness for others to their own detriment.

I think this only happens if you take an overly restrictive/narrow/naive view of consequences. Humans are generally not productive if they're not happy, so the happiness pump strategy is probably not actually good for the net well-being of other people in the long term.

I agree, maybe I should state that overtly in this post. It's essentially an argument against the idea of a happiness pump, because of the reason you described.

Three related concepts.

  • On redundancy: "two is one, one is none". It's best to have copies of critical things in case they break or go missing, e.g. an extra cell phone.
  • On authentication: "something you know, have, and are". These are three categories of ways you can authenticate yourself.
    • Something you know: password, PIN
    • Something you have: key, phone with 2FA keys, YubiKey
    • Something you are: fingerprint, facial scan, retina scan
  • On backups: the "3-2-1" strategy.
    • Maintain 3 copies of your data:
    • 2 on-site but on different media (e.g. on your laptop and on an external drive) and
    • 1 off-site (e.g. in the cloud).

Inspired by these concepts, I propose the "2/3" model for authentication:

Maintain at least three ways you can access a system (something you have, know, and are). If you can authenticate yourself using at least 2 out of the 3 ways, you're allowed to access the system.

This guards against both false positives (an attacker needs to breach at least two methods of authentication) and false negatives (you don't have to prove yourself using all three methods). It provides redundancy on both fronts.
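A minimal sketch of the rule (factor names and checks are hypothetical placeholders):

```python
from enum import Enum

class Factor(Enum):
    KNOW = "something you know"  # password, PIN
    HAVE = "something you have"  # YubiKey, phone with 2FA keys
    ARE = "something you are"    # fingerprint, face, retina

def authenticate(verified: set, required: int = 2) -> bool:
    """Grant access if at least `required` of the three factor
    categories were successfully verified."""
    return len(verified) >= required

print(authenticate({Factor.KNOW, Factor.ARE}))  # True: lost your key, but password + fingerprint suffice
print(authenticate({Factor.KNOW}))              # False: a phished password alone isn't enough
```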

This was originally a comment, found here.