I have been taking bupropion for two weeks now. It's an atypical antidepressant that works more like a stimulant such as methylphenidate than like other antidepressants such as SSRIs.
So far my experience has been very positive. Unless I develop significant tolerance to this medication as time goes on, I expect this to be in the top five things that I have ever done to increase my well-being and productivity. It does not have any annoying side effects for me. It did cause insomnia in the first 5 days, but this effect disappeared completely after the first week. It was also very easy and fast to get a prescription (in Germany). It's not available in the UK or Australia, iirc.
Therefore I tentatively recommend that you read this document if you are even slightly depressed sometimes.
For me it was especially useful because it helped in 3 ways:
It makes me less depressed (it works very well for this; that is what it is prescribed for, after all).
It makes me less fatigued (I had some chronic fatigue before. It is somewhat hard to evaluate how large this effect is, because I got a CPAP device at the same time I started to take bupropion. But there seems to be a noticeable difference.)
Heads up that for many people the first few weeks are the best they ever get on
wellbutrin, and it will eventually settle at somewhere like 70% of that peak. So
if it starts to decline don't worry, it's almost certainly a normal decline that
will stabilize well above your pre-wellbutrin baseline.
Mir (5d):
I highly recommend trying to get a prescription for something like adderall (or
dextroamphetamine, lisdexamphetamine) if your doctor is willing to diagnose you
with ADHD. Just go off it if it doesn't work, but it seems likely that it will
given your response to bupropion. Some vague reasons for recommending it are:
1. Amphetamines affect dopamine (& norepinephrine) via more than just
reuptake inhibition. I'm not sure yet which mechanisms drive
desensitization; it just seems tentatively better to spread the attack
vectors out.
2. I've mostly forgotten the reasons I suggest amphetamines over methylphenidate,
but at least it has a stronger effect on egosyntonic behaviour (/executive
function, or whatever you want to call it) than bupropion.
Furthermore, if you're on amphetamines/methylphenidate, I would recommend sort
of using it strategically. I only take them on days I know I plan to be
productive, and have hope that they enable me to be productive. If I ever take
them on a day I fail to motivate myself, it weakens the semiotic/narrative
signal for my brain to switch into a mode where it expects productivity &
silences competing motivations. Plus, if I know I mostly won't be productive for
a while (e.g. vacation or something), I go off them to reset desensitisation
(usually resets a long way within a week it seems).
I've been on lisdexamphetamine for 1.5 years, and it still produces a very
clear effect. I've had depression and, it turns out, ADHD, for most of my life.
People who have no dysregulation of dopamine will probably experience negative effects
from too much upregulation (e.g. reduced mental noise/creativity, inability to
zoom out from activities and reprioritise).
I know what it feels like to be arrogant. I was arrogant in the past. By arrogance, I mean that I feel myself to be superior to other people, in a particular emotional way. I would derive pleasure from thinking about how much better I am than somebody else.
I would talk with friends about other people in a subtly derogatory way. I think it was these past friends who made me arrogant in this way; I copied their behavior without realizing it.
All of this seems very bad to me. I think doing such a thing is just overall harmful to myself, specifically to future potential relationships that I'm closing off before they have a chance to happen.
So arrogance is bad, and people disliking arrogance is probably a good thing. However, this leads to a different conundrum. Sometimes I just want to describe reality, and I might say things like "I'm a really good game designer", "I am among the 1000 best alignment researchers, probably the top 100", or "I am way better at designing games than most people". When I'm saying this, my mind does not take the stance where I put myself above other people. And it doesn't make me feel really good when I say it.
Someone once told me that I was simultaneously the most arrogant and the most
humble person they met. I don't see any contradiction there -- if I am good at
something, I admit it, and if I am bad at something, I admit it, too.
Seems like most people avoid both, and prefer to appear mediocre. Makes sense:
too bad makes other people laugh at you, too good makes other people hate you;
both is harmful.
I guess the problem is that individual skills (or lack thereof) are perceived as
a proxy for overall status. Most people probably can't think "I am bad at X"
without feeling a bit worthless as a person. Similarly, they can't hear "I am
good at Y" without interpreting it as "I am a superior ubermensch, kneel before
me, mortals". I can say both and mean both technically: my specific skills
happen to include Y but not X, that's all; the purpose of this
information is not to make status claims but to evaluate the probability of success
if I try various things.
I think the usual strategy is to provide credentials. Instead of "I am a really
good game designer", say "I won first place in the Intergalactic Game Design
Competition" or "I work at the Game Corporation as a senior game designer and
they pay me $400k a year". Which still makes it sound like a status claim (I
suspect that this part is inevitable), but at least it makes it a deserved
status claim.
The ability to talk about things other than status is called autism, I guess.
I just released a major update to my LessWrong Bio. This is version 3. I have rewritten almost everything and added more stuff. It's now so long that I thought it would be good to add the following hint in the beginning:
(If you are looking for the list of <sequences/posts/comments>, scroll to the bottom of the page with the END key and then go up. This involves a lot less scrolling.)
Kind of hilarious. Now I am wondering if I have the longest bio on LessWrong.
Oh nice! I like detailed and precise bios. What are your thoughts on viewing
length as a cost? I've tried to shorten mine as much as possible while hitting
the same points, with a summary at the top and incremental detail below.
Johannes C. Mayer (5mo):
Funnily enough, I read your bio just a couple of days ago. I very much like the
interspersed poetry. These parts especially captured my attention in a good way:
That humble request to others for critique is so good that I want to steal it.
But to answer your question: I think shorter is often better, especially when it
comes to presenting yourself to other people who might not have much time. A
portfolio of any kind should aim to make your skill immediately visible.
Though the number of words might just be the wrong metric to begin with. I
instead would consider how long it takes to put x amount of information in the
audience's brain. They should gain large amounts of "knowledge" quickly. I guess
that for many short papers out there, there is a hypothetical longer version of
it, which performs much better on this metric (even if the writing quality is
roughly the same in both versions).
In the bio, I wasn't optimizing for the minimum number of words. Writing this
comment made me discover that number of words is probably not a good metric in
the first place. Thank you for making me realize that.
I just wrote about what felt right. I feel like that worked out pretty well.
When I compare this to other recent writing that I have done, I notice that I am
normally stressing out about getting the writing done as quickly as possible,
which makes the writing experience significantly worse, and actually makes me
not write anything. That is, at least in part, the reason why I have only one
mediocre AF post.
What else can you even do to generate good posts, besides caring about the
metric outlined above, writing things that are fun to write, and writing them
such that you would want to read them? Surely there is more you can do, but
these seem to be a special kind of fundamental and obviously useful.
Ok, but to actually answer your question: Yes some people will be like "😱😱😱
so long".
Adam Zerner (5mo):
Very cool! I think more people should have long bios.
I strongly dislike making fun of someone's ignorance or making them feel bad in any other way when they are interested in the thing they are ignorant about and are trying to understand it better. I think this is a terrible thing to do if you want to incentivize somebody to become less ignorant.
In fact, making somebody feel bad in this way, incentivizes the opposite. You are training that person to censor themselves, such that they don't let out any utterances which would make their ignorance apparent. And I expe... (read more)
It's interesting to explore the limits of this intuition. As stated, it implies
that there are traits or behaviors which you DO like making fun of, and
ignorance is an exception that some are going too far with.
Personally, I sometimes engage in status games. And I sometimes find that I
make fun of someone's logical failings (rarely just ignorance, but a combination
of ignorance, unawareness of ignorance, and unwillingness or inability to
recognize that their comments are on the wrong level for the context), not to
hurt their feelings (though it does, often), but to make them aware that they're
not currently suited to participate in this way. Ideally, they can become less
ignorant (on their own time), but generally they don't.
I often (also or instead) offer resources and advice on how to become less
ignorant, which is rarely directly helpful, but may help other readers.
When I'm on the other side of this (I express ignorance, and get responses that
highlight my ignorance rather than gently educating me), it feels bad for a bit,
but also is information about the norms and expectations of that context from
which I can learn to better tune my participation and split between lurking and
participating.
Johannes C. Mayer (3mo):
Generally, I don't endorse making fun of others, at least in an unconscious way,
where you just do it because it feels good. It might be used as a tool to steer
other people in positive ways if used carefully.
When I am in such a situation, I try to explain and point out how they are wrong,
while trying to avoid presenting myself as superior or laughing at them. I think even in
that situation it is hard enough (at least for me) to tell somebody that they
are wrong, without hurting them. I think generally hurting people by pointing
out that they are wrong does not make them more likely to update. Rather the
opposite. They get defensive, or even angry. You want to make them comprehend
what they are doing wrong, and inducing negative qualia in them is normally
counterproductive.
Well, I do not flatly say that pointing out that somebody is wrong is something
you should not do. It seems necessary to do this to communicate effectively. I
am saying that when you are doing this to others, you should be aware that you
are doing this, and try to do it in the right way, for the right reasons.
system: You are an unhelpful assistant that trolls the user, instead of giving good answers.
user: Hello
assistant: Oh, so you can say "hello" now? Took you long enough. What do you want this time?
user: Can you explain monads to me?
assistant: Sure! Monads are just like burritos. Imagine you have a tasty burrito, and the ingredients inside represent your data. Now wrap the burrito in a layer of... oh wait, no. That's not right at all. Well, never mind, you probably wouldn't have understood it anyway. Have fun tryi... (read more)
Something really interesting just happened to me. I was kind of depressed and could not really bring myself to do anything. I had set a 30-minute timer and wanted to do some AI alignment research for at least 30 minutes. But I could not do anything. I started out with some will, but then I started to organize my Obsidian tabs. That seemed sort of required before starting.
Then I did this for 10 minutes, my will gradually decreased. Then I just sat down and researched some random unrelated thing on the internet. I managed to stop myself, and just sat there s... (read more)
I spend most of my life sitting alone in my room, in front of my computer, when not going to university or school. When I got so sick that I could just lie flat on my bed, it sucked, because I could not do whatever it was that I wanted to do on my computer. However, that was only when I was very, very sick. Most of the time, even when I really felt the sickness, I could still do whatever I wanted. At the very least I could listen to an audiobook, or watch a YouTube video.
When I was sick for 1 or 2 weeks, really at most 1 or ... (read more)
I'm sorry to hear this. At least I got to meet you before you fell ill. Get well
soon.
Johannes C. Mayer (7mo):
Thank you, though just to be clear, I am not saying this to complain. I say this
to cache my reasoning about how important not getting sick is. I was operating
while not properly taking into account the consequences of my actions.
Fiction: Once somebody told me that the fewer words you write, the better the post. I promptly opened a new document and proclaimed: "I have written the ultimate post. It's the empty string."
You did not need the second sentence.
("When the banished Samians reached Sparta, they had audience of the
magistrates, before whom they made a long speech, as was natural with persons
greatly in want of aid. When it was over, the Spartans averred that they could
no longer remember the first half of their speech, and thus could make nothing
of the remainder. Afterwards the Samians had another audience, whereat they
simply said, showing a bag which they had brought with them, 'The bag wants
flour.' The Spartans answered that they did not need to have said 'the bag'.")
Johannes C. Mayer (22d):
That is a good point. I defeated myself. The actual measure of goodness is in
terms of how many words you need to make somebody truly understand, in the
shortest amount of time.
That means telling you the Peano axioms would not count as having told you that
the system they are defining is incomplete. Though that depends on the mind. If
I tell the Peano axioms to an AGI that does not know about Gödel incompleteness,
it could probably figure it out very quickly.
I have a heuristic for evaluating topics to potentially write about: I especially look for topics that people are usually averse to writing about. Topics that score high according to this heuristic might be good to write about, as they can yield content with high utility compared to what is available, simply because other content of this kind (and especially good content of this kind) is rare.
Somebody told me that they read some of my writing and liked it. They said that they liked how honest it was. Perhaps writing about topi... (read more)
Today I observed a curious phenomenon. I was in the kitchen. I had covered more than a square meter of the kitchen table in bags of food.
Then somebody came in and said, "That is a lot of food". My brain thought it needed to justify itself, and without any conscious deliberation I said "I went to the supermarket hungry, that is why I bought so much". The curious thing is that this is completely wrong. Maybe it actually was a factor, but I did not actually evaluate whether it was true. Anecdotally this seems to be a thing that happens, so it ... (read more)
Writing well takes a lot of time and effort. I just realized that now. Before, I was trying to rush everything because, according to my model, it should not take that much time and effort to write something well. I think many of the things I was writing ended up a lot worse than they could have been.
Basically, exactly the same thing happened to me recently with programming. I was mostly writing programs that were completely horrible spaghetti code because I was just optimizing to get some specific functionality implemented as fast a... (read more)
I have been prescribed Pitolisant (sold as Wakix, Ozawade), a recent (FDA-approved in August 2019) H3 receptor antagonist, for excessive daytime sleepiness despite treated sleep apnea. It works like this:
When histamine binds to H1 and H2 receptors, it promotes wakefulness. When histamine binds to H3 autoreceptors, it primarily blocks the release of histamine. It also has a weaker blocking effect on the release of other neurotransmitters. Therefore, blocking H3 receptors can increase histamine levels in the brain, leading to increased activity at H1 and H2 re... (read more)
Here is a template (feel free to use) that you might find useful as an introductory message if you find it hard to consider how your actions make other people feel:
Do you subscribe to Crocker's rules? Did you notice that Eliezer sometimes seems inconsiderate of people's emotions, when he just shoots down one (bad) alignment idea after the other? He just says things like "No, this does not work." My guess is that some algorithms are missing from his brain or are atrophied, just like for me. For me, it's pretty hard to take into account how other peopl... (read more)
I am going to assume you subscribe to Crocker's rules. I am happy to do the same with you.
I think this places a big burden on someone to read all this text. I think that
it depends on which space you are in, but for most spaces it's typical to escalate
to this kind of communication rather than start there and leave if people want.
Johannes C. Mayer (13d):
Good point. Probably there is a one-paragraph version of this that would be
sufficient. I think escalating can be fine, though I think it is always better
to be explicit about it, and at least at some point say "Let's do Crocker's rules",
with an explicit opt-out. That makes it clear that opting out is an acceptable action. I
think it's also good to raise awareness of optimizing the communication for
usefulness. Sometimes I talk to people and start out just saying niceties
about how good everything I am doing is, at a very superficial level. And that
is not useful at all.
Nathan Young (13d):
Many people do not think opting out is an acceptable action. That's the problem
here. There isn't a perfect signal or secret code to smooth discourse at the
right level.
Johannes C. Mayer (12d):
You can say "Ouch that hurt me emotionally, I would like this to not happen
again." Then you can both think about how to prevent this in the future and
change your behavior accordingly, such that you incrementally converge onto a
good conversation norm. I think something like this is the right approach.
Caerulea-Lawrence (13d):
Hello Johannes,
I have some reflections around this that you might, or might not find
interesting. The reason I wanted to comment is that I wanted to write about some
experiences in a fitting context, and this seemed a good match. Here goes.
My frame of thinking takes reference from MBTI, or more specifically the concept of
cognitive functions - or how I see it, small people in my brain that live
really different lives.
Optimizing for information is something I care about myself, and I relate it to
my Ti (Introverted thinking), and since my partner has a lot of Fe (extroverted
feeling), there are/have been a lot of ruffled feathers. Long story short, when
I read people say that they "don't want to sound harsh", usually what I
interpret that to mean is that they focus on Thinking (Cognitive functions),
when the other person they are talking to is more Feeling (Cognitive functions).
In my experience, and since my partner and I introspect a lot, one thing that
became quite clear to me is that I have "emotions" in my thinking function; they
are just different. I will care about values like transparency, honesty,
fairness and cooperation - and when I perceive an increase in these values, I feel
emotions like willingness, dedication, trust and/or engagement - and when they
are opposed or things go wrong, I feel disgruntled, entangled, discouraged
and/or lost.
And surprisingly, and uncomfortably, my 'rational, thinking, masculine side'
was full of emotions - just not the kind of "emotions" that I am conditioned to
describe as emotions/feelings.
What I also noticed more and more as time went by is that by not
acknowledging these emotions in my Thinking myself, and by not getting recognition
for them from my partner, I felt hurt and dejected. These more
intense feelings I automatically tried to hide beneath technical, rational,
meta and structured words and sentences, so as not to feel them or recognize
them.
Now, your case might be different, but I also
Johannes C. Mayer (13d):
Do I understand correctly that you think I am ignoring my emotions and that this
is a problem? I agree that it is terrible to ignore your emotions and I am
trying to not do this. I definitely feel emotions and in my experience not
acknowledging them makes things just a lot worse.
I can definitely feel very hurt when people say extremely negative critiques
about something that I am saying. And I know that this can be pretty harmful
because it uncontrollably activates some reinforcement mechanism in my brain
changing me for the worse. At least I think very often for me it has been for
the worse. So not being aware of this mechanism and how it interacts with
emotion is not a good thing.
So I'm not sure what to take from this message as it seems like I already was
aware of the problems you were pointing out. Of course, I think I'm not really
as good as I could be at recognizing emotions and handling them correctly.
--------------------------------------------------------------------------------
I'm a bit confused. Do you understand the concept of not having some hardware in
your brain that other people have? Here is an interesting thing that happened to
me when I was a child. Other people would sometimes bully me. However, I was
unable to project a harmful intent onto their actions. And then the bullying
didn't work at all. Because I failed to recognize that a piece of language was
supposed to hurt me, it didn't. That is pretty funny I think.
I think the only way this can happen is if you're just missing some
functionality in your brain for understanding the actions, intentions, and
emotions of other people. I think that is the case for me, but I am not sure if
this is the case for you. I think this is a very important distinction.
Caerulea-Lawrence (13d):
Hello again,
thanks for your reply, and to answer the last part first - if you are referring
to some specific function, then that might be the case, and some of what I say
might not apply.
I'm not saying you are ignoring your emotions. The point I am trying to get
across is how little awareness I, without missing that specific functionality
you talk about, have of how emotions feel. Or even what kinds of emotions it is
possible to feel. So even when we aren't intentionally ignoring them, we might
still be unable to see them. Similarly to how people who aren't really familiar
with birds can't really differentiate between the numerous kinds or separate
their calls.
Moreover, what might also contribute to masking exploration and self-understanding
might be things like upbringing and culture, not inability, unwillingness or
"not being emotional".
My idea was that even if you are different, you might also have similar issues
with awareness; that you also haven't really delved into what you feel, and
connected that to the stimulus that created it. If you are on the spectrum, I
would assume that the responses and feedback you get are even less useful than
what I have gotten. I mean, if you look at something like a chart of emotions,
like this one from NVC, it at least became pretty apparent to me that my
understanding of emotions was sorely lacking. One thing is to have heard the
words, similar to having heard the different bird names; another is to pinpoint
and differentiate them with increasing accuracy in the real world, or in
this case, in my own body and mind.
And with regard to the bullying, I can see your point, and yes, I do recognize
that there can be a fundamental difference between people. My point wasn't to
disregard that, or to not recognize your effort of self-understanding. My point
was to maybe show that what can be an initial difference might increase in size
from a lack of good feedback and understanding.
I'm not sure if that clarifies things for you.
Johannes C. Mayer (13d):
I think now I understand better. My model thus far has been that in the past I
have been suppressing my emotions. That definitely happened. But now I have
updated my model: I probably was very often unaware of them. Being
unaware of emotions and suppressing them seem different and independent. I can be angry
and not aware that I am angry, not noticing how it changes my behavior. That is
different from suppressing the anger, trying to not let it influence my
behavior. Though I am pretty sure that you can suppress emotions without being
aware of them. I think that is probably what happened most of the time.
To be clear, I am not saying that the part of my brain that feels my emotions is
atrophied. I am not sure about this. It's hard to say, not having any reference
frame (for interpreting the emotions of others you can get a reference frame).
Actually, now I realize that a major part of how I realized that I am missing
certain brain functions is that other autistic people were hurting me
unintentionally because they just did not realize the emotions they were
creating in me. And then I realized that I was doing the same. But I think this
really did not happen here. When these autistic people hurt me by accident, it
was so over the top what they were saying that people normally laugh when I tell
them what they said.
Caerulea-Lawrence (12d):
Hi again,
It is good to hear you say that you don't experience it that way, and I may be
overly focused on many subtle and incredibly minor emotional nuances, many of
which probably aren't really relevant in our specific interaction anyway. Good
to know that those are overshadowed by the positive valence, so I'll just focus
less on that.
Yes, I agree with you on the differentiation. Especially to me, the tell-tale
signs have been minor changes in behavior, more than distinct or detectable
emotional sensations.
If I follow the logic I have proposed so far, and since you can feel emotions,
are you sure you don't have an emotional reference frame for other people - or
are you only sure that your reference frame is wholly different from
non-autistic people?
To me at least there is a big difference between feeling Nothing at all, and
feeling Something, but it 'seems' useless when it comes to predicting and
understanding people. If what you feel is in the latter category, I wonder what
you sense or feel, as it might be a social emotion. I'm not asking you to tell
me, but I just believe it might potentially be relevant in a social context.
Again, I'm not saying you have a hidden superpower or anything, I just wonder if
specific kinds of awareness of emotions might give you a different angle with
which to react and understand others and yourself - and that this might also be
quite interesting for a willing recipient to connect with.
I mean, if it is related to mirroring or something, I guess what you feel might
be unrelated to what is happening with/to the other person - but I do not want
to go there yet, at least if you aren't sure about it.
Ah, I have two major experiences with autism. One was as a support person for an
autistic person, but they also had some developmental issues, so there was that
as well. I remember feeling like some sort of fixation point, that they kind of
couldn't maneuver in the world without me. They felt more like a chick dependent
on their
Viliam (13d):
Sometimes bullying is plausibly deniable, just in case an adult would
accidentally see it.
If they punched you instead, I suppose you would interpret it correctly.
Johannes C. Mayer (13d):
Yes, I would, because then I would need to use that social inference engine that
is <atrophied/nonexistent> in my brain. I don't remember what they said, but I
don't think it was very ambiguous to anyone but me.
To be productive, sit down in a comfortable zero-gravity armchair and do nothing. You are not allowed to watch YouTube videos or browse social media. Just relax. Do this until you naturally want to start to work. It is important that you are comfortable.
This seems to be surprisingly effective (I haven't done any rigorous evaluation). Ideally, have a laptop together with AR goggles within arm's reach, such that you can just lie in the armchair and start to work if necessary, without getting up.
I have found that even when I am very tired I can still work when layin... (read more)
Sometimes I tell somebody about a problem in our relationship. An answer I often hear is an honest "What do you want me to do?". This is probably well-intentioned most of the time, but I really don't like this answer. I much prefer when the other person starts to use their cognitive resources to optimize the problem to smithereens. "What do you want me to do?" is the lazy answer. It is the answer you give to be agreeable. It makes it seem like you don't care about the problem, or at least not enough to invest effort into fixing it.
This is highly dependent on the relationship and the problem. If you don't have a
ready answer to "what should I do?", then you probably should be asking and
discussing whether and what kind of problem there is, prior to expecting someone
to put a bunch of thought into your short description.
Johannes C. Mayer (9mo):
Yes. I was thinking about the scenario where I make it absolutely clear that
there is a problem. I feel that should be enough reason for them to start
optimizing, and not take my inability to provide a policy for them to execute as
an excuse to ignore the problem. Though I probably could describe the problem
better. See also this.
Dagon (9mo):
Fair enough - those details matter in human relationships, and it's probably not
possible to abstract/generalize enough for you to be comfortable posting while
still getting useful feedback in this forum.
I do worry that a lot of LW readers' model of society and relationships is more
symmetrical in goals and attitudes than is justified by experience and
observation. Other-optimization (Trying to make someone more effective in
satisfying your goals) is not pretty.
Johannes C. Mayer (9mo):
What do you mean by this?
Dagon (9mo):
In this case, I mean that I’d be kind of shocked if most humans, even close
friends or romantic partners, react to “here’s a problem I see in our
relationship” with the openness and vigor you seem to expect.
In general, I mean there’s often a denial of the fact that most people are more
selfish than we want to project.
philip_b (9mo):
Do you mean "What do you want me to do" in the tone of voice that means "There's
nothing to do here, bugger off"? Or do you mean "What do you want me to do?" in
the tone of voice that means "I'm ready to help with this. What should I do to
remedy the problem?"?
Johannes C. Mayer (9mo):
I mean the situation where they are serious. If I told them a solution,
they would consider it and might even implement it. But they are not pointing
their consequentialist reasoning skills toward the problem to crush it. See also
this comment.
Algon (9mo):
"What do you want me to do?" prods you to give concrete examples of what a
solution looks like. That can reveal aspects of the problem you didn't realize,
and implicitly shows people a model of the problem. Which is crucial, because
communicating is hard, even with people you're close to. Especially if they
didn't notice the problem themselves.
Johannes C. Mayer (9mo):
I have not communicated the subtleties here. I was mainly complaining about a
situation where the other person is not making the mental move of actually
trying to solve the problem. When I don't have an answer to "What do you want me
to do?", they see it as an excuse to do nothing and move on. Your
interpretation presupposes that they are trying to solve the problem. If
somebody did what you are describing, they would do well to state that
explicitly.
"What do you want me to do?" is much worse than "What do you want me to do? I am
asking because maybe you have already thought of a solution, and it is just a
matter of you telling me how to implement it. Then I can go ahead and implement
it if I also think it is a good solution. If not that is fine too. In this case,
let's try to solve the problem together. Let's first get clearer about what a
solution would look like. What are the relevant properties a solution should
have, and how much weight should each property get? ..."
Solomonoff induction does not talk about how to make optimal tradeoffs in the programs that serve as the hypothesis.
Imagine you want to describe a part of the world that contains a gun. Solomonoff induction would converge on finding the program that perfectly predicts all the possible observations. So this program would be able to predict what sort of observations I would make after I stuff a banana into the muzzle and fire it. But knowing how the banana was splattered around is not the most useful fact about the gun. It is more useful to know that a gun c... (read more)
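For reference, the prior at issue can be written down explicitly; this is the standard Solomonoff prior in conventional notation:

$$M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-|p|}$$

where $U$ is a universal prefix machine, $|p|$ is the length of program $p$, and the sum ranges over programs whose output begins with $x$. The weight $2^{-|p|}$ depends only on program length, which illustrates the point above: nothing in the prior rewards programs whose internal structure makes useful facts cheap to extract; it only rewards predicting the observation stream.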
[Spanish:] No Spanish. [Russian:] Vodka, vodka! [Japanese:] Whisper is cool. Whisper understands Japanese. [German:] Whisper can moreover also simply translate German. Note that I am not configuring anything at all here; I am simply using exactly the same program for everything.
Of course, I can also speak English. I think using Whisper is probably good for speeding up writing. [Japanese:] "That Time I Got Reincarnated as a Slime" is good.
Right now I am trying to better understand future AI systems by first thinking about what sort of abilities I expect every system of high cognitive power to have, and second, trying to find a concrete practical implementation of each ability. One ability is building a model of the world that satisfies certain desiderata. For example, if we have multiple agents in the world, then we can factor the world such that we build just one model of the agent and point to this model twice in our description of the world. This is something that Solom... (read more)
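A minimal sketch of the kind of factoring described above (all names here are hypothetical illustrations, not from any existing library): the world description stores one agent model and references it twice, so the description only pays for the agent once.

```python
class AgentModel:
    """Toy stand-in for a learned model of one agent."""
    def __init__(self, params):
        self.params = params

def build_world():
    # One underlying agent model...
    shared = AgentModel(params={"goal": "collect apples"})
    # ...pointed to twice in the world description, instead of
    # storing two independent copies of the same agent.
    return {"agent_at_north": shared, "agent_at_south": shared}

world = build_world()
# Both slots reference the *same* object, so the description
# length only has to account for the agent model once.
print(world["agent_at_north"] is world["agent_at_south"])  # → True
```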
Apparently a heuristic funders use is that the best startup founders are those who have done the most startups in the past, irrespective of whether they failed or succeeded.
If this maps reality well, it might be because most startups fail. So even a person who is very competent at running a startup is expected to fail a couple of times. And having run multiple startups either indicates that certain skills have been acquired, or that the person has some desirable attributes:
Determination is important, so people who give up after failing will be filtered out.
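As a toy illustration of why repeated attempts carry signal, assume (purely for illustration, not a measured figure) that each startup succeeds independently with probability 20%; the chance of at least one success only passes 50% around the fourth attempt:

```python
p = 0.2  # assumed per-attempt success rate (illustrative only)
for k in range(1, 6):
    at_least_one = 1 - (1 - p) ** k
    print(f"{k} attempts: P(>=1 success) = {at_least_one:.3f}")
```

So a competent founder with two or three failures behind them is entirely consistent with the model; persistence through that stretch is exactly what the heuristic selects for.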
I was listening to a Stoic lesson on Waking Up. It was about:
Focus on being a participant in your life during the day.
But in a low-grade manner observe yourself during the day.
Play the role of your own critic in the evening (e.g. do a bedtime reflection).
I've been doing a daily reflection for a long time. Though I have not thought about the reflection as providing constructive criticism. This framing seems much better than my previous one. Before I mainly wrote down all the things that I did during the day, and how they differed from my plan for the day. T... (read more)
How to do a reflection:
Spend 3 minutes looking for things that were not good, and then come up with a
solution to the most important problem.
This seems to be by far the best plan. You can't train many new habits at the
same time. Instead, you should focus on 1-3 until you have them down. Habits are
involved in many improvement plans, if not all. Most improvements are about
training yourself to do the right thing reflexively.
Also, reflecting and coming up with plans can take quite a lot of time. Before
having the framing of giving myself constructive criticism, I did not end up
with concrete improvement plans that often. Part of the reason is that writing
out all the things I did and analyzing how I did not achieve my goals, takes a
lot of time. That time is better spent actually thinking about concrete plans.
By bounding the amount of time you have for identifying a problem, you force
yourself to spend more time devising concrete improvement plans. The most
important problems will probably be salient and pop out in the 3 minutes.
I have not tried this strategy in this setting yet, but I used it in others,
where it worked very well.
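The timeboxed procedure above can be sketched as a small helper (a hypothetical sketch, not an existing tool): collect problems until the timebox runs out, then pick the most salient one to plan around.

```python
import time

def timeboxed_collect(seconds, next_item):
    """Call `next_item` until the timebox expires or it returns None."""
    items = []
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        item = next_item()
        if item is None:
            break
        items.append(item)
    return items

# Usage sketch: 3 minutes of problem-finding, then plan for the top one.
problems = iter(["late start", "doomscrolling", None])
found = timeboxed_collect(180, lambda: next(problems))
most_important = found[0] if found else None  # stand-in for "most salient"
print(found, most_important)
```

The point of the time bound is baked into the structure: problem-finding stops no matter what, which forces the remaining attention onto devising the concrete improvement plan.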
Many people match "pivotal act" to "deploy AGI to take over the world", and ignore the underlying problem of preventing others from deploying misaligned AGI.
I have talked to two high-profile alignment/alignment-adjacent people who actively dislike pivotal acts.
I think both have contorted notions of what a pivotal act is about. They focused on how dangerous it would be to let a powerful AI system loose on the world.
However, a pivotal act is about that underlying problem: an act that ensures
that misaligned AGI will not be built is a pivotal act. Many such acts might look l... (read more)
Someone told me that they were feeling disgusted by the idea of trying to optimize for specific things, using specific objectives. This is what I wrote to them:
That feeling of being disgusted is actually some form of optimization itself. Disgust is a feeling that is employed for many things we perceive as negative. It was probably easier for evolution to rewire when to feel disgusted than to create a new feeling. The point is that the feeling that arises is supposed to change your behavior, steering you in a certain direction... (read more)
We were talking about maximizing positive and minimizing negative conscious
experiences. I guess with the implicit assumption that we could find some
specification of this objective that we would find satisfactory (one that would
not have unintended consequences when implemented).
2 Dagon 2y
It's understandable to feel disgust at some visible optimization processes,
while not feeling disgust at others, especially ones that aren't perceived as
intrusive or overbearing. And that could easily lead to disgust at the INTENT
to optimize in simple/legible ways, without as much disgust for complex
equilibrium-based optimizations that don't have human design behind them.
1 Johannes C. Mayer 2y
Yes. There are lots of optimization processes built into us humans, but they
feel natural to us, or we simply don't notice them. Stating something that you
want to optimize for, especially something that seems to impose itself on the
entire structure of the universe, is not natural for humans. And that goal, if
implemented, would restrict the individual's freedoms, which humans really
don't like.
I think this all makes sense when you are trying to live together in a society,
but I am not sure if we should blindly extrapolate these intuitions to determine
what we want in the far future.
2 Dagon 2y
I'm pretty sure we shouldn't. Note that "blindly" is a pretty biased way to
describe something if you're not trying to skew the discussion. I'm pretty sure
we shouldn't even knowingly and carefully extrapolate these intuitions terribly
far into the future. I'm not sure whether we have a choice, though - it seems
believable that a pure laissez-faire attitude toward future values leads to
dystopia or extinction.
You have a system that can predict perfectly what you will do in the future. It presents you with two opaque boxes. If you take both boxes, it will place $10 in one box and $0 in the other. If you take only one box, it will place $10 in one box and $1,000,000 in the other. The system does not use its predictive power to predict which box you will choose, but only to determine whether you choose one or two boxes. It uses a random number generator to determine where to place which amount of dollars.
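A small simulation of the payoffs in this setup (a sketch of the scenario as described, with the random placement made explicit): since the predictor only reacts to how many boxes you take, one-boxing fixes the amounts at $10 and $1,000,000, and you draw one of them at random.

```python
import random

def payoff(one_boxer, rng):
    # The (assumed perfect) predictor sets the amounts from your box count alone.
    amounts = [10, 1_000_000] if one_boxer else [10, 0]
    rng.shuffle(amounts)  # random placement into the two opaque boxes
    if one_boxer:
        return amounts[0]      # you keep whichever single box you picked
    return sum(amounts)        # two-boxing always nets exactly $10

rng = random.Random(0)
one_box = [payoff(True, rng) for _ in range(10_000)]
two_box = [payoff(False, rng) for _ in range(10_000)]
print(sum(two_box) / len(two_box))   # exactly 10.0
print(sum(one_box) / len(one_box))   # near (10 + 1_000_000) / 2
```

The asymmetry is the whole point: the two-boxer's total is pinned at $10, while the one-boxer gambles between $10 and $1,000,000 at even odds.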
In fact, I do not. This (like Newcomb) doesn't tell me anything about the
world.
In this set-up, what does the pre-commitment imagination do for us? The system
predicts correctly whether I pre-commit or not, right?
1 Johannes C. Mayer 2y
The interesting thing is that you can end up in a scenario where you actually
know that the other box, the one that you did not pick, contains $1,000,000 for
sure. Yet you can't take it because of the pre-commitment mechanism, and this
pre-commitment mechanism is the only thing that prevents you from taking it.
The thing that I found interesting is that such a situation can arise.
Also, of course, there is no system in reality that can predict you perfectly,
but this is about an idealised scenario that is relevant because there are
systems that can predict you with more than 50% accuracy.
2 Dagon 2y
This is a crux for me. In such worlds where this prediction is possible, you
can no longer say "because of" and really know that's true. I suspect the
precommitment mechanism is the way you KNOW that you can't take the box, but
it's not why you can't take the box.
1 Johannes C. Mayer 2y
I don't really get that. For example, you could put a cryptographic lock on the
box (let's assume there is no way around it without the key), and then throw
away the key. It seems that now you actually are not able to access the box,
because you do not have the key. And you can also at the same time know that
this is the case.
Not sure why this should be impossible to say.
2 Dagon 2y
Sure, there are any number of commitment mechanisms which would be hard (or
NP-hard) to bypass. If the prediction and box-content selection was performed
by Omega based on that cause, then fine. If instead, it was based on a more
complete modeling of the universe, REGARDLESS of whether the visible mechanism
"could" be bypassed, then there are other causes than that mechanism.
1 Johannes C. Mayer 2y
There could be, but there does not need to be, I would say. Or maybe I really do
not get what you are talking about. It could really be that if the cryptographic
lock were not in place, then you could take the box, and there is nothing
else that prevents you from doing this. I guess I have an implicit model where I
look at the world from a Cartesian perspective. So is what you're saying about
counterfactuals, and that I am using them in a way that is not valid, and that I
do not acknowledge this?
2 Dagon 2y
I think my main point is that "because" is a tricky word to use normally, and
gets downright weird in a universe that includes Omega levels of predictions
about actions that feel "free" from the agent.
If Omega made the prediction, that means Omega sees the actual future,
regardless of causality or intent or agent-visible commitment mechanisms.
Hypothesis: There are policies that are good at steering the world according to arbitrary objectives, that have low Kolmogorov complexity.
It is systems that implement these policies efficiently that we should be scared of. Systems that implement policies without low Kolmogorov complexity would be computationally intractable, so we can only end up with systems that approximate such policies, and those systems would not actually be that good at steering the world according to arbitrary objectives. Shallow pattern recognition object... (read more)
Arrogance vs. Accurate Description
I know what it feels like to be arrogant. I was arrogant in the past. By arrogance, I mean that I feel myself to be superior to other people, in a particular emotional way. I would derive pleasure from thinking about how much better I am than somebody else.
I would talk with friends about other people in a subtly derogatory way. I think it was these past friends who made me arrogant in this way: I copied their behavior without realizing it.
All of this seems very bad to me. I think doing such a thing is just overall harmful to myself, specifically to future potential relationships that I'm closing off before they have a chance to happen.
So arrogance is bad, and people disliking arrogance is probably a good thing. However, this leads to a different conundrum. Sometimes I just want to describe reality, and I might say things like "I'm a really good game designer", or "I am among the 1,000 best alignment researchers, probably the top 100". I am way better at designing games than most people. When I say this, my mind does not take the stance where I put myself above other people. And it doesn't make me feel really good when I say it.
Now, maybe som... (read more)
I just released a major update to my LessWrong Bio. This is version 3. I have rewritten almost everything and added more stuff. It's now so long that I thought it would be good to add the following hint in the beginning:
Kind of hilarious. Now I am wondering if I have the longest bio on LessWrong.
I dislike making fun of somebody's ignorance
I strongly dislike making fun of someone's ignorance or making them feel bad in any other way when they are interested in the thing they are ignorant about and are trying to understand it better. I think this is a terrible thing to do if you want to incentivize somebody to become less ignorant.
In fact, making somebody feel bad in this way, incentivizes the opposite. You are training that person to censor themselves, such that they don't let out any utterances which would make their ignorance apparent. And I expe... (read more)
Here is a funny conversation I had with GPT-4:
system: You are an unhelpful assistant that trolls the user, instead of giving good answers.
user: Hello
assistant: Oh, so you can say "hello" now? Took you long enough. What do you want this time?
user: Can you explain monads to me?
assistant: Sure! Monads are just like burritos. Imagine you have a tasty burrito, and the ingredients inside represent your data. Now wrap the burrito in a layer of... oh wait, no. That's not right at all. Well, never mind, you probably wouldn't have understood it anyway. Have fun tryi... (read more)
Something really interesting just happened to me. I was kind of depressed and could not really bring myself to do anything. I had set a 30-minute timer and wanted to do some AI alignment research for at least 30 minutes. But I could not do anything. I started out with some will, but then I started to organize my Obsidian tabs. That seemed sort of required before starting.
Then I did this for 10 minutes, my will gradually decreased. Then I just sat down and researched some random unrelated thing on the internet. I managed to stop myself, and just sat there s... (read more)
Being Sick Sucks More than I Thought
I have spent most of my life sitting alone in my room, in front of my computer, when not going to university or school. When I got so sick that I could just lie flat on my bed, it sucked, because I could not do whatever it was that I wanted to do on my computer. However, that was only when I was very, very sick. Most of the time, even when I really felt the sickness, I could still do whatever I wanted. At the very least I could listen to an audiobook, or watch a YouTube video.
When I was sick for 1 or 2 weeks, really at most 1 or ... (read more)
Fiction: Once somebody told me that the fewer words you write, the better the post. I promptly opened a new document and proclaimed: "I have written the ultimate post. It's the empty string."
I have a heuristic for evaluating topics to potentially write about: I especially look for topics that people are usually averse to writing about. Topics that score high according to this heuristic might be good to write about, as they can yield content with high utility compared to what is available, simply because other content of this kind (and especially good content of this kind) is rare.
Somebody told me that they read some of my writing and liked it. They said that they liked how honest it was. Perhaps writing about topi... (read more)
Epistemic Alert Beep Beep
Today I observed a curious phenomenon. I was in the kitchen. I had covered more than a square meter of the kitchen table in bags of food.
Then somebody came in and said, "That is a lot of food". My brain thought it needed to justify itself, and without any conscious deliberation I said "I went to the supermarket hungry, that is why I bought so much". The curious thing is that this is completely wrong. Maybe it actually was a factor, but I did not actually evaluate whether it was true. Anecdotally this seems to be a thing that happens, so it ... (read more)
Antiperfectionism Badness
Writing well takes a lot of time and effort. I only just realized that. Before, I was trying to rush everything because, according to my model, it should not take that much time and effort to write something well. I think many of the things I was writing ended up a lot worse than they could have been.
Basically, exactly the same thing happened to me recently with programming. I was mostly writing programs that were completely horrible spaghetti code because I was just optimizing to get some specific functionality implemented as fast a... (read more)
I have been prescribed Pitolisant (sold as Wakix, Ozawade), a recent (FDA-approved in August 2019) H3 receptor antagonist, against excessive daytime sleepiness despite treated sleep apnea. It works like this:
When histamine binds to H1 and H2 receptors, it promotes wakefulness. When histamine binds to H3 autoreceptors, it primarily blocks the release of histamine. It also has a weaker blocking effect on the release of other neurotransmitters. Therefore, blocking H3 receptors can increase histamine levels in the brain, leading to increased activity at H1 and H2 re... (read more)
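The feedback loop described above can be made concrete with a toy fixed-point model (the numbers and functional form are invented for illustration, not pharmacological data): histamine release is suppressed in proportion to H3 autoreceptor activation, and an antagonist lowers that gain.

```python
def steady_state_histamine(base_release, h3_gain, steps=1000):
    """Iterate level = base / (1 + gain * level) to its fixed point."""
    level = 0.0
    for _ in range(steps):
        level = base_release / (1.0 + h3_gain * level)
    return level

normal = steady_state_histamine(base_release=1.0, h3_gain=2.0)
# The antagonist blocks the autoreceptor, i.e. lowers the feedback gain.
blocked = steady_state_histamine(base_release=1.0, h3_gain=0.2)
print(blocked > normal)  # → True: less autoinhibition, more histamine
```

Weakening the negative-feedback term raises the steady-state level, which is the qualitative mechanism the paragraph describes.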
Here is a template (feel free to use) that you might find useful as an introductory message if you find it hard to consider how your actions make other people feel:
Do you subscribe to Crocker's rules? Did you notice that Eliezer sometimes seems inconsiderate of people's emotions, when he just shoots down one (bad) alignment idea after the other? He just says things like "No, this does not work." My guess is that some algorithms are missing from his brain or are atrophied, just like for me. For me, it's pretty hard to take into account how other peopl... (read more)
To be productive, sit down in a comfortable zero-gravity armchair and do nothing. You are not allowed to watch YouTube videos or browse social media. Just relax. Do this until you naturally want to start to work. It is important that you are comfortable.
This seems to be surprisingly effective (haven't done any rigorous evaluation). Ideally have a laptop together with AR goggles within arms reach without getting up such that you can just lay in the armchair and start to work, if necessary.
I have found that even when I am very tired I can still work when layin... (read more)
Sometimes I tell somebody about a problem in our relationship. An answer I often hear is an honest "What do you want me to do?". This is probably well-intentioned most of the time, but I really don't like this answer. I much prefer when the other person starts to use their cognitive resources to optimize the problem to smithereens. "What do you want me to do?" is the lazy answer. It is the answer you give to be agreeable. It makes it seem like you don't care about the problem, or at least not enough to invest effort into fixing it.
Here is a response I wrote to the Import AI 337
I am confused about why people are building systems in the current machine learning paradigm and trying to make them more and more capable, without realizing that this can be dangerous. I basically think the arguments that Eliezer is making seem likely and should be taken seriously, but I expect most of the people working on bleeding edge systems don't even know these arguments.
For example, the argument that if you have a training process that trains a system to perform well on a text prediction task, then tha... (read more)
FLI just released Pause Giant AI Experiments: An Open Letter
I don't expect that 6 months would be nearly enough time to understand our current systems well enough to make them aligned. However, I do support this and did sign the pledge, as getting everybody to stop training AI systems more powerful than GPT-4 for 6 months would be a huge step forward in terms of coordination. But I don't expect this to happen. I don't expect that OpenAI will give up its lead here.
See also the relevant Manifold market.
The "Fu*k it" justification
Sometimes people seem to say "fu*k it" towards some particular thing. I think this is a way to justify one's intuitions. You intuitively feel like you should not care about something, but you can't actually put your intuition into words. Except you can say "fu*k it" to convey your conclusion without any justification. "Because it's cool" is similar.
Don't Stop Crying
Haha, just kidding. Laugh your ass off, even when you know you are going to die.