Epistemic status: personal exploration.
I wanted to share some thoughts and see if they would help me be less wrong. Instead, I ended up, unofficially, as a war correspondent for gwern, who had asked me for stories from my days as a military firefighter. (I thought: would it be very unethical to use this as an opportunity to write four texts?)
To summarize: it starts with my moments of peak satisfaction and ends with despair.
Maybe, all of this leads to the following question:
What could I do when I feel like I'm in despair?
Yudkowsky conceded defeat to AI and announced his new strategy: “Death with Dignity.”
And I ask myself:
- Why do we need a near-certain chance of dying to die with dignity?
- Why didn't we die with dignity before AI?
I don’t even need an “apocalypse AI” to feel despair.
Most of the time, we don’t.
Desperate people steal, and we cage or kill them.
It’s difficult, and expensive, to rebuild a human being.
Maybe the threat of AI’s “imminent demise” will finally push us to start re-engineering ourselves, to debug our despair.
As William MacAskill or some at LessWrong would say: “Even when the odds are low, act as if your actions matter, because in expected value they do.”
Because even when the odds are low, the one who fights aligns with his values, not with his odds.
Should I take care of AI, of the world, or of myself, to be ready for death?
Who do I actually have the best chance of taking care of?
I chose myself.
How can I prevent despair in myself, even without imminent AI death?
Maybe “dying with dignity” just means dying with the greatest possible satisfaction.
Let’s call it:
Dignity = Satisfaction × Coherence / Despair
When Despair → ∞, Dignity → 0… unless Coherence grows faster.
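A minimal toy sketch in Python, just to make that limit behavior concrete (the function, the scales, and the numbers are all made up for illustration; none of these variables are actually operationalized):

```python
def dignity(satisfaction: float, coherence: float, despair: float) -> float:
    """Toy model: Dignity = Satisfaction * Coherence / Despair."""
    return satisfaction * coherence / max(despair, 1e-9)  # guard against division by zero

# Despair grows while coherence stays flat: dignity collapses toward zero.
print(dignity(1.0, 1.0, 10.0))         # 0.1
print(dignity(1.0, 1.0, 1000.0))       # 0.001

# ...unless coherence grows faster than despair.
print(dignity(1.0, 10_000.0, 1000.0))  # 10.0
```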
Eating and sex are satisfying, sure.
But do they really matter at the end?
Leonidas in 300 thought:
“I can go have a little fuck… or fight 10,000 soldiers.”
“Will I win?”
“Probably not.”
“Then I’ll die with the maximum satisfaction I can muster.”
And maybe that’s the point:
even when he knows he’ll lose,
he aligns with his values, not with his odds. Does that make sense?
For the past six years, I’ve kept a kind of inner lab notebook,
a record of my peak satisfaction moments and the factors that preceded them.
Not a mood tracker. Not a diary.
Just notes from times when I felt: “This was worth existing for.”
Instead of asking what changed, I ask:
Where was my focus leading up to that moment?
Was my effort directed at the social, intellectual, emotional, or operational level?
Was I trying to change myself or my environment?
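To make that concrete, here is roughly what one notebook entry looks like as data, a minimal sketch in Python (the field names and types are mine, invented for illustration, not a real schema):

```python
from dataclasses import dataclass
from typing import Literal

Level = Literal["social", "intellectual", "emotional", "operational"]

@dataclass
class PeakMoment:
    """One "this was worth existing for" entry, plus the factors preceding it."""
    description: str
    focus_level: Level                      # where my effort was directed beforehand
    target: Literal["self", "environment"]  # what I was trying to change
    confidence: float                       # how much I'd bet this was the main factor, 0 to 1

# Illustrative entry; the values are made up for the example.
entry = PeakMoment(
    description="field tests of semi-rationality in military training",
    focus_level="operational",
    target="self",
    confidence=1.0,
)
```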
Over time, patterns began to appear:
some forms of dedication brought temporary pleasure, others a coherent afterglow.
I began mapping satisfaction as if debugging a neural net.
Peak satisfaction moment: Field tests of semi-rationality in Brazilian military training.
To analyze it, I compare each factor that led me there,
always against the one that seems most influential:
| Comparison | Result |
|---|---|
| Social or Intellectual? | Social |
| Intellectual or Emotional? | Intellectual |
| Emotional or Operational? | Operational |
| Focus: change the world or myself? | Myself |
| Confidence in this main factor? | 100% |
Then I estimate the “bet value”: how confident am I that this was the main factor behind my satisfaction?
→ I’d bet 100%.
Benefit: 100%.
I also use a reference point for difficulty, something that feels like 100% cost.
For example:
Writing for LessWrong as a dyslexic Brazilian living in Argentina.
Cost: 100%.
Repeating this calibration for every peak moment lets me normalize how strong or efficient each factor was,
making my internal metrics of satisfaction more stable and falsifiable.
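A rough sketch of that calibration step, with my own conventions baked in (benefit and cost as percentages of the two 100% reference points above, and “efficiency”, benefit over cost, as my shorthand for how strong a factor was):

```python
# Each peak moment is scored against the same personal reference points:
# a 100%-benefit moment and a 100%-cost activity.
peak_moments = [
    # (description, benefit %, cost %)
    ("semi-rationality field tests in military training", 100.0, 100.0),
    # ... further peak moments, calibrated against the same references
]

def efficiency(benefit_pct: float, cost_pct: float) -> float:
    """Benefit per unit of cost; 1.0 means the factor 'paid for itself'."""
    return benefit_pct / cost_pct

for description, benefit, cost in peak_moments:
    print(f"{description}: efficiency = {efficiency(benefit, cost):.2f}")
```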
Quantitatively, this reduced my despair background noise:
from roughly 13 suicidal thoughts per day to about 2 per month.
(Still many variables to consider.)
I wanted to ask a question here, but abstractapplic told me I needed to give more context.
Personally, I’ve always struggled with trust. After four psychologists, two psychiatrists, and one stay in a sanatorium, I still couldn’t find trust, not in them, not even in myself. So I told the police I’d committed a crime, just to isolate myself from society.
There were two main threads behind that decision:
1. Social exhaustion.
I had been directing resources toward social projects (firefighting, community programs) expecting, perhaps naïvely, that those I’d bled for would be there for me too.
When I realized they weren’t, when I saw that I was the only one holding the line, I concluded I was simply a burden, and that the most rational course of action was to stop existing, to spare others my weight.
2. A year of preparation.
I slowly detached from almost everyone. But one friend didn’t give up.
I never told him my plan, yet he understood.
Seeing my happiest friend cry, refusing my investment, begging me not to “do anything stupid”, forced me to confront something I’d neglected: the possibility that my own reasoning might be wrong.
Through that crack, light started to leak in.
I began to rebuild from frameworks instead of feelings:
I thought biology was my path.
Then I realized: information theory was cheaper.
So I followed that.
For 11 years I’ve been studying programming, probability, and neural networks,
trying to apply all that to personal debugging.
Because even if the odds are low,
if he fights, he aligns with his values, not with his odds.
Self-updating isn’t just a cognitive practice,
it’s an act of survival.
For me, Gabriel Brito, a dyslexic Brazilian guy living in Argentina, writing in English for LessWrong feels like a paraplegic playing in the major leagues: ambitious, awkward, and, every now and then, a miracle of technology.
I still find meaning in testing whether knowledge can really rewire a mind.
Because even if the odds are low,
if he fights, he aligns with his values, not with his odds.
Or maybe I’m just talking nonsense.
But at least it’s nonsense with data for you.