Going Durden




I'm confused by this post. It might be that I lack the necessary knowledge or reading comprehension, but the post seems to dance around actual SELF-improvement (an AI improving itself, Ship of Theseus style) and refocuses on improvement by iteration (an AI creating another AI).

Consider a human example. In the last few years, I learned Rationalist and Mnemonic techniques to self-improve my thinking. I also fathered a child, raised it, and taught it basic rationalist and mnemonic tricks, making it an independent and only vaguely aligned agent potentially more powerful than I am. 

The post seems to focus on the latter option.

"...is if it turns out that advanced narrow-AIs manage to generate more utility than humans know what to do with initially."


I find it not just likely but borderline certain. Ubiquitous, explicitly below-human narrow AI has a tremendous potential that we remain blind to while focusing on superhuman AI. Creating superhuman, self-improving AGI, while extremely dangerous, is also an extremely hard problem (in the same realm as dry nanotech or FTL travel). Meanwhile, creating brick-dumb but ubiquitous narrow AI and then mass-producing it to saturation is easy. It could be done today; it's just a matter of market forces and logistics.

It might very well be the case that once the number of narrow-AI systems, devices and drones passes a certain threshold (say, it becomes as ubiquitous, cheap and accessible as cars, though not yet as much as smartphones), we would enter a weaker form of post-scarcity and have no need to create AI gods.

I have a somewhat similar story. I have been struggling with ADHD all my life, and only recently started using ADHD medication. Unfortunately, it gave me stomach issues and tremendous reflux, which was only tolerable if I took it in small doses... which in turn barely helped with my ADHD.

After testing pretty much every ADHD drug in combination with every anti-reflux drug, I gave up, and tried my aunt's suggestion of ashwagandha. I was beyond skeptical, and only gave it a try to please a concerned relative. I was mentally prepared to anti-placebo it, determined to prove it would not work (I even pre-planned my smug and condescending speech about how I did my best to test it and how, obviously, it did nothing, being just another woo-woo herbalist nonsense with no scientific proof behind it).

It goddamn worked. By itself, ashwagandha did precisely nothing. By itself, the ADHD medicine did something, but at the cost of me belching acid like an overfed xenomorph. Combined, they resulted in far, far greater mental focus, and no digestion issues at all. Absurdly, combining ashwagandha with a smaller dose of amphetamine salts gave better and stabler mental results than just doubling the amphetamine intake.

AFAIK, there are no studies that conclusively prove ashwagandha really works. Those that do suggest it as a sleeping aid, of all things. And yet. I talked it out with my psychiatrist, and as far as she knows (and she is likely THE expert on adult ADHD in my country), ashwagandha should do nothing at all.


It would be fascinating if propensity for limerence were genetically determined, because limerence directly influences our mating/breeding habits. For one, teen pregnancy might very well be a side effect of it.

In that regard, should we assume that the missing component that makes love "romantic" or "limerent" is irrationality?

My instinct is that if someone has a gooey, excessive feeling that the other is Significant, it counts as romantic, but if they had a rational, evidence-based belief that the other is Significant, it would not be considered romantic enough, even if the emotional bond were much more resilient in the second case.

To use a more concrete example:

1. Bill meets Alice and falls madly in love with her. He does irrational, excessively symbolic and juvenile things to impress her. They break up anyway after a turbulent 3 months. Their Love is Romantic.

2. Frank meets Jane on a professional dating app, and they see with perfect clarity that their values, ideologies, libidos, tastes and lifestyles are perfectly aligned. They marry and spend 57 years together in easy bliss, until they die. Their relationship would not qualify as romantic, even though it generated more happiness and a stronger bond.

Therefore, I would suggest that the important components of romance are: irrationality, excessiveness, emotional risk, and playing against bad statistical odds. In other words, drama.

I honestly cannot recall ever feeling limerence, even when I felt love. This led me to research it, and it seems that limerence is highly culture-specific, and likely more a cultural meme than an emotion inherent to the human brain. If I were to guess, I would say limerence is a side effect of the emotional and sexual frustration of young and inexperienced humans dabbling in their first relationships, and since hearing/gossiping/reading about other people's romantic frustrations is exciting, it became a meme.

To support this theory, we see much greater emphasis on limerence in cultures and ages where virginity until marriage was considered sacred and young people were gender-segregated. In free-love egalitarian cultures, we see remarkably little dramatic limerence, and in fact we see attempts by the youth to artificially create romantic drama (e.g., going out of their way to date dangerous people, or pining over an inaccessible celebrity) to achieve a semblance of limerence.

As we have grown in numbers and social complexity, it has become easy to encounter someone with a completely different desire for, or expectation of, limerence in their life, which I think is the reason romantic relationships became so difficult.

Unsurprisingly, there seems to be very little desire for limerence among LWers and rationalists in general, which explains why a higher-than-average number of us are single, or dating fellow rationalists.

Some reasons why I'm personally not as involved in working to prevent AI Hell (in no particular order of importance):

1. I'm not strongly convinced a hostile Singularity is plausible, at least in the near future, from a technological, logistical, and practical standpoint. Pretty much every AI Hell scenario I have read hinges on the sudden appearance of scientifically implausible technologies, and on instant, perfect logistics that the AI could use.

2. The main issue that could lead to AI Hell is the misalignment of values between AI and humans. However, it is patently obvious that humans are not aligned with each other, with themselves, or with rational logic. Therefore, I do not see a path to aligning AI with human values unless we Solve Ethics, which is an impossible task unless we completely redesign human brains from scratch.

3. I'm personally not qualified to work on any technological aspects of preventing AI Hell. I am qualified to work on human-end ethics and branch into alignment from there, and I see that as an impossible task with the kind of humans we get to work with.

4. A combination of points 1 and 2 leads me to believe that humanity is far more likely to abuse early stage AI to wipe itself out, than for AI itself to wipe out humanity of its own volition. To put it differently, crude sub-human level AI can plausibly be used to cause WW3 and a nuclear holocaust without any need for hostile superhuman AI. I think we worry too much about the unlikely but extremely lethal post-Singularity AI, and not enough about highly likely and just sufficiently lethal wargame systems in the hands of actual biological humans, who are not sufficiently concerned with humanity's survival.

5. Roko's Gremlin: anyone who is actively working on limiting or forcibly aligning AI is automatically on the hit-list of any sufficiently advanced hostile AI. I'm not talking about the long-term, high-end scenario of Roko's Basilisk, but rather the near-future, low-end situation in which an Internet-savvy AI can ruin your life for being a potential threat to it. In fact, this scenario does not require an actively hostile AI at all. I find it completely plausible that a human being with a vested financial interest in AI advancement could use AI to create a powerful smear campaign against, say, EY, to destroy his credibility, and with him the credibility of the AI Safety movement. Currently accessible AI is excellent at creating plausible-seeming bullshit, which would be perfect for social media warfare against anyone who tries to monkeywrench its progression. Look at Nick Bostrom to see how easily one of us can be sniped down with minimum effort.

What about the inverse idea: rather than putting cybernetics into raccoons, put raccoons into cybernetics?

Something like, say, the Boston Dynamics robot dog, but with a raccoon encased inside to pilot it. The actual strength, dexterity and speed of a raccoon is unimpressive; the impressive part is their intelligence. You could have raccoons piloting all-purpose robot suits, but for the simplest menial tasks you could get away with something long-lived, robust and sturdy, like a lobster.

Possibly, but to know that, I would have to be shown a definition of what Romantic Love actually is, aside from "deep friendship + sex". Even the Wikipedia article on Romance/Love lists a whole bunch of contradictory definitions that boil down to one of:

1. Friendship and sex (emotional bond + physical bond).

2. Biological mate bonding to create offspring.

3. "...You know, that lyrical, limerical, ephemeral thing that we all experience, so we won't define it..."

My guess is that answer 3 is social memetics meant to cover and normalize the fact that love is basically 2 by way of 1.

And since asexual people supposedly feel Love as well, this means that Love is essentially an intense desire for Friendship that forms a lasting bond.


For a while now, I have been trying out something that I think would be compatible with your Portable Tell Culture, a thing I would call a Passive Tell or a Passive Frame. Basically, the idea is that my outward presentation and behavior are always well matched with my actual internal beliefs, and I consciously use social stereotypes and stylistic cues to make this obvious.

Without getting into any specifics, I'm exactly the kind of guy you would think I was after a first glance, and my words, actions, behavior, even fashion, match the social stereotype that I internally resemble the most. I'm exactly what it says on the tin, and a book that you can judge by its cover.

This came as a result of an experiment in radical honesty I started two years ago. Trying to limit lying and deceptive presentation meant that I had to wear my internal beliefs openly and passively advertise them, which naturally filters my social interactions, and the types of people I interact with, down to those I'm compatible with.
