Cause X candidate: Buddhists claim that they can put brains in a global maximum of happiness, called enlightenment. Assuming that EA aims to maximize happiness plain and simple, this claim should be taken seriously. It currently takes decades for most people to reach an enlightened state. If some sort of medical intervention could reduce this to mere months, it might drive mass adoption and create a huge amount of utility.
Relevant: the non-adversarial principle of AI alignment
Whereas if you're good at your work and you think that your job is important, there's an intervening layer or three—I'm doing X because it unblocks Y, and that will lead to Z, and Z is good for the world in ways I care about, and also it earns me $ and I can spend $ on stuff...
Yes, initially there might be a few layers, but there's also the experience of being really good at what you do, being in flow, at which point Y and Z just kind of dissolve into X, making X feel valuable in itself, like jumping on a trampoline. Seems like this friend wants to be in this...
A thought experiment: would you A) murder 100 babies or B) murder 100 babies? You have to choose!
Sidestepping the politics here: I've personally found that avoiding (super)stimuli for a week or so, either by not using any electronic devices or by going on a meditation retreat, tends to be extremely effective in increasing my ability to regulate my emotions. Semi-permanently. I have no substitute for it; it's my panacea against cognitive dissonance and mental issues of any form. This makes me wonder: why aren't we focusing more on this from an applied rationality point of view?
This seems to be a fully general counterargument against any kind of advice. As in: "Don't say 'do X' because I might want to do not-X, which will give me cognitive dissonance, which is bad." You seem to essentially be affirming the Zen concept that any kind of "do X" implies that X is better than not-X, i.e. a dualistic thought pattern, which is the precondition for suffering. But besides that idea I don't really see how this post adds anything. Not to mention that identity tends to already be an instance of "X is better than not-X". Paul Graham is sayi...
Well, I don't know if I'd call it a fully general argument against taking any kind of advice. I'd say it's more like a fully general argument against directly trying to optimize for the thing being measured, i.e. an argument that you should avoid Goodhart's curse.
As to the post not adding anything, I guess that's true in a certain sense, but I find many people, myself included, are kinda bad at taking general advice and successfully applying it in all specific situations, so it's often helpful to take something general and consider specific instances of...
Since this is the first Google result and seems out of date, how do we get the RSS link nowadays?
I may have finally figured out the use of crypto.
It's not currency per se, but the essential use case of crypto seems to be to automate the third party.
This "third party" can be many things. It can be a securities dealer or broker. It can be a notary. It can be a judge that is practicing contract law.
Whenever there is a third party that somehow allows coordination to take place, and the particular case doesn't require anything but mechanical work, then crypto can do it better.
A securities dealer or broker doesn't beat a protocol that matches buyers and sel...
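The "protocol that matches buyers and sellers" idea can be made concrete with a minimal sketch. This is purely illustrative, not any real protocol's API; the function name and price lists are made up:

```python
# A minimal sketch of mechanical order matching: when the work is
# purely mechanical, no trusted broker is needed to pair up trades.
def match_orders(bids, asks):
    """bids/asks: lists of prices. Returns matched (bid, ask) pairs."""
    bids = sorted(bids, reverse=True)   # highest bid first
    asks = sorted(asks)                 # lowest ask first
    matches = []
    # Keep pairing while the best bid meets or exceeds the best ask.
    while bids and asks and bids[0] >= asks[0]:
        matches.append((bids.pop(0), asks.pop(0)))
    return matches

match_orders([105, 99, 101], [100, 98, 110])  # [(105, 98), (101, 100)]
```

The point of the sketch is that the matching rule is pure computation over public inputs, which is exactly the kind of third-party work the comment says a protocol can automate.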
I was in this "narcissist mini-cycle" for many years. Many Google searches and no luck. I can't believe that I finally found someone who recognizes it. Thank you so much. FWIW, what got me out of it was attending a Zen temple for 3 months or so. This didn't make me less narcissistic, but somehow gave me the stamina to actually achieve something that befit my inflated expectations, and now I just refer back to those achievements to quell my need for greatness. At least while I work on lowering my expectations.
It does not, but consider two adaptations:

A: responds to babies, and more strongly to bunnies
B: responds to babies only

B would seem more adaptive. Why didn't humans evolve it? Plausible explanation: A is simpler and therefore more likely to result from a random DNA fluctuation. Is anyone doing research into which kinds of adaptations are more likely to appear like this?
Can you come up with an example that isn't AI? Most fields aren't rife with infohazards, and 20% certainty of funding the best research will just divide your impact by a factor of 5, which could still be good enough if you've got millions. For what it's worth, given the scenario that you've at least got enough to fund multiple AI researchers and your goal is purely to fix AI, I concede your point.
I don't like this post because it ignores that instead of yachts you can simply buy knowledge with money. There's plenty of research that isn't happening because it isn't being funded.
A Shor-to-Constance translation would be lossy because the latter language is not as expressive or precise as the former.
I wonder just how far this concept can be stretched. Is focusing a translation from the part of you that thinks in feelings to the part of you that thinks in words? If you're translating some philosophical idea into math, are you just translating from the language of one culture to the language of another? And if so, it strikes me that some languages are more effective than others. Constance may have had better ideas, but if Shor knew the same stuff as Constance (in his own language) perhaps he would have done better. Shor's language seems...
Personalized mythic-mode rendition of Goodhart's law:
"Everyone wants to be a powerful uncompromising force for good, but spill a little good and you become a powerful uncompromising force for evil"
The parent-child model is my cornerstone of healthy emotional processing. I'd like to add that a child often doesn't need much more than your attention. This is one analogy of why meditation works: you just sit down for a while and you just listen.
The monks in my local monastery often quip about "sitting in a cave for 30 years", which is their suggested treatment for someone who is particularly deluded. This implies a model of emotional processing which I cannot stress enough: you can only get in the way. Take all distractions away from someone and t...
Look, if you can't appreciate the idea because you don't like its delivery, you're throwing away a lot of information.
It's supposed to read like "this idea is highly unpolished"
Here's an idea: we hold the Ideological Turing Test (ITT) world championship. Candidates compete to pass as broad a range of views as possible.
Points awarded for passing a test are commensurate with the number of people that subscribe to the view. You can subscribe to a bunch of them at once.
The awarding of failures and passes is done anonymously. Points can be awarded partially, according to what % of judges give a pass.
The winner is made president (or something)
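The scoring rule above could be sketched like this. Everything here is illustrative: the function name, the subscriber counts, and the judge tallies are assumptions, not part of the proposal:

```python
# Hypothetical scoring for the proposed ITT championship:
# partial credit per view is the fraction of judges giving a pass,
# weighted by how many people subscribe to that view.
def itt_score(attempts):
    """attempts: list of (subscribers, judge_passes, judge_total) tuples."""
    total = 0.0
    for subscribers, passes, judges in attempts:
        total += subscribers * (passes / judges)
    return total

# Half-passing a view held by 1M people, fully passing one held by 100k:
itt_score([(1_000_000, 3, 6), (100_000, 5, 5)])  # 600000.0
```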
It might be hard to take a normative stance, but if culture 1 makes you feel better AND leads to better results AND helps people individuate and makes adults out of them, then maybe it's just, y'know, better. Not "better" in the naive mistake-theorist assumption that there is such a thing as a moral truth, but "better" in the correct conflict-theorist assumption that it just suits you and me and we will exert our power to make it more widely adopted, for the sake of us and our enlightened ideals.
Case study: A simple algorithm for fixing motivation
So here I was, trying to read through an online course to learn about cloud computing, but I wasn't really absorbing any of it. No motivation.
Motives are a chain, ending in a terminal goal. Lack of motivation meant that my System 1 did not believe what I was doing would lead to achieving any terminal goal. The chain was broken.
So I traversed the chain to see which link was broken.
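That traversal can be pictured as a toy sketch. The motive names and belief flags below are illustrative, not a claim about how System 1 actually stores anything:

```python
# Walk the motive chain from the immediate action toward the terminal
# goal, and report the first link System 1 no longer believes in.
def first_broken_link(chain):
    """chain: list of (motive, system1_believes) pairs, terminal goal last."""
    for motive, believed in chain:
        if not believed:
            return motive
    return None  # chain intact: no motivation problem here

chain = [
    ("read the cloud computing course", True),
    ("learn cloud computing", True),
    ("get a better job", False),   # the broken link
    ("financial security", True),
]
first_broken_link(chain)  # 'get a better job'
```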
Question for the Kegan levels folks: I've noticed that I tend to regress to level 3 if I enter new environments that I don't fully understand yet, and that this tends to cause mental issues because I don't always have the affirmative social environment that level 3 needs. Do you relate? How do you deal with this?
As someone who never came across religion before adulthood, I've been trying to figure it out. Some of its claims seem pretty damn nonsensical, and yet some of its adherents seem pretty damn well-adjusted and happy. The latter means there's gotta be some value in there.
The most important takeaway so far is that religious claims make much more sense if you interpret them as phenomenological claims. Claims about the mind. When Buddhists talk about the 6 worlds, they talk about 6 states of mood. When Christians talk about a coven...
As for being on ibogaine, a high dose isn't fun for sure, but microdoses are close to neutral and their therapeutic value makes them net positive
Have you tried opiates? You don't need to be in pain for it to make you feel great
Ibogaine seems to reset opiate withdrawal. There are many stories of people with 20-year-old heroin addictions being cured within one session.
If this is true, and there are no drawbacks, then we basically have access to wireheading. A happiness silver bullet. It would be the hack of the century. Distributing ibogaine + opiates would be the best known mental health intervention by orders of magnitude.
Of course, that's only if there are no unforeseen caveats. Still, why isn't everybody talking about this?
Did Dominic Cummings in fact try a "Less Wrong approach" to policy making? If so, how did it fail, and how can we learn from it? (if not, ignore this)
I did all the epistemic virtue. I rid myself of my ingroup bias. I ventured out on my own. I generated independent answers to everything. I went and understood the outgroup. I immersed myself in lots of cultures that win at something, and I've found useful extracts everywhere.
And now I'm alone. I don't fully relate to anyone in how I see the world, and it feels like the inferential distance between me and everyone else is ever increasing. I've lost motivation for deep friendships; it just doesn't seem compatible with learning new t...
I don't recall, this is one of those concepts that you kind of assemble out of a bunch of conversations with people that already presuppose it
Here's another: probing into their argument structure a bit and checking if they can keep it from collapsing under its own weight.
Probably the skill of discerning skill would be easier to learn than... every single skill you're trying to discern.
The outgroup is evil, not negotiating in good faith, and it's an error to give them an inch. Conflict theory is the correct one for this decision.
Which outgroup? Which decision? Are you saying this is universally true?
Forgive me for stating things more strongly than I mean them. It’s a bad habit of mine.
I’m coming from the assumption that people are much more like Vulcans than we give them credit for. Feelings are optimizers. People who do things that aren’t in line with their stated goals aren’t always biased. In many cases they misstate their goals but don’t actually fail to achieve them.
See my last shortform for more on this
So here are two extremes. One is that human beings are a complete lookup table. The other is that human beings are perfect agents with just one goal. Most likely both are somewhat true. We have subagents that are more like the latter, and subsystems more like the former.
But the emphasis on "we're just a bunch of hardcoded heuristics" is making us stop looking for agency where there is in fact agency. Take for example romantic feelings. People tend to regard them as completely unpredictable, but it is actually possible to predict to some extent whether y
I am aware that confessing to this in most places would be seen as a huge social faux pas, I'm hoping LW will be more understanding.
You're good. You're just confessing something that is true for most of us anyway.
Where I have a big disagreement is in the lesson to take from this. Your argument is that we should essentially try to turn off status as a motivator. I would suggest it would be wiser to try to better align status motivations with the things we actually value.
Up to a point. It is certainly true that status motivations have led to g...
Right, right. So there is a correlation.
I'll just say that there is no reason to believe that this correlation is very strong.
I once won a mario kart tournament without feeling my hands.
People generally only discuss 'status' when they're feeling a lack of it
While this has been true for other posts that I wrote about the subject, this post was actually written from a very peaceful, happy, almost sage-like state of mind, so if you read it that way you'll get closer to what I was trying to say :)
I appreciate your review.
Most of your review assumes that my intent was to promote praise regardless of honesty, but quite the opposite is true. My intent was for people to pause, take a breath, think for a few moments what good things others are doing, and then thank them for it, but only if they felt compelled to do so.
Or I'll put it this way: it's not about pretending to like things, it's about putting more attention to the things about others that you already like. It's about gratefulness, good faith and recognition. It's abou... (read more)
I have gripes with EAs who try to argue about which animals have consciousness. They assume way too readily that consciousness and valence can be inferred from behavior at all.
It seems quite obvious to me that these people equate their ability to empathize with an animal with the ability for the animal to be conscious, and it seems quite obvious to me that this is a case of mind projection fallacy. Empathy is just a simulation. You can't actually see another mind.
If you're going to make guesses about whether a species is conscious, you sho...
I'm looking forward to a bookshelf with LW review books in my living room. If nothing else, the very least this will give us is legitimacy, and legitimacy can lead to many good things.
To me, the most useful part of this post is that it introduces this idea that affordances are personal, i.e. some people are allowed to do X while others are not. I like to see this as part of the pervasive social machinery that is Omega.
I imagine people of a certain political background to want to sneer at me, as in, "why did it take someone in your in-group to tell you this?"
To which I admit that, indeed, I should have listened. But I suppose I didn't (enough), and now I did, so here we are with a post that made my worldview more empathetic. The bottom line is what matters.
This post has been my go-to definition of Transhumanism ever since I first read it.
It's hard to put into words why I think it has so much merit. To me it just powerfully articulates something that I hold as self-evident, that I wish others would recognize as self-evident too.
To me, this is exactly what the LW community (and the broader progressive tribe surrounding it) needs to hear. This post, along with other developments of thought in the same direction, has caused a major shift in how I think about changing things.
The first quote is most important, and I find myself using it quite often if I'm met with a person my age (or even older) that dismisses a tradition as obviously dumb. Why do you think the tradition exists in the first place? If you don't know, how can you be so sure it doesn't serve some function?... (read more)
I still find myself using this metaphor a lot in conversations. That's a good benchmark for usefulness.
I could write a paragraph to explain some concept underlying a decision I made. OR there could be a word for this concept, in which case I can just use the word. But I can't use that word if it's not commonly understood.
The set of things that are common knowledge in a group of people is the epistemic starting point. Imagine you had to explain your niche ideas about AI without using any concepts invented past 1900. You'd be speaking to toddlers.
I needed "common knowledge" to be common knowledge. It is part of our skill of upgrading skills. It's at the core of group rationality.
This post introduced me to a whole new way of thinking about institutional/agency design. The most merit, for me, was pointing out this field existed. The subject is close to one of the core subjects of my thoughts, which is how to design institutions that align selfish behavior with altruistic outcomes on different hierarchical levels, from the international, to the cultural, national, communal, relational, and as far down as the subagent level.
I don't think this is the right question to ask. Even if the net alertness gain of a cup of coffee is 0, it is still worth consuming during those moments that alertness is worth more, and abstaining during those moments where relaxation is worth more. Net alertness is not net EV.
Good catch, fixed it.
100x is obviously a figure of speech. I'd love to see someone do some research into this and publish the actual numbers
I suppose I was naive about the amount of work that goes into creating an online course. I had been a student assistant where my professor would meet with me and the other assistants to plan the entirety of the course a day before it started. Of course this was different because there was already a syllabus and the topic was well understood and well demarcated.
Also, I had visited Berkeley around that time, and word was out about a new prediction that the singularity was only 15 years ahead. I felt like I had no choice but to try and do something. Start mov...