Defeating Moloch, one optimisation at a time | MSc student in AI, currently working on generative modelling for bioinformatics | Machine learning, game theory, personalised medicine | Co-host of Bit of a Tangent podcast and Narrator of Replacing Guilt audiobook |

Outline of Galef's "Scout Mindset"

This is a superb summary! I'll definitely be returning to this as a cheatsheet for the core ideas from the book in future. I've also linked to it in my review on Goodreads.

it's straightforwardly the #1 book you should use when you want to recruit new people to EA. [...] For rationalists, I think the best intro resource is still HPMoR or R:AZ, but I think Scout Mindset is a great supplement to those, and probably a better starting point for people who prefer Julia's writing style over Eliezer's.

Hmm... I've had great success with the HPMoR / R:AZ route for certain people. Perhaps Scout Mindset has been the missing tool for the others. It also struck me as a nice complement to Eliezer's writing, in terms of both substance and style (see below). I'll have to experiment with recommending it as a first intro to EA/rationality.

As for my own experience, I was delightfully surprised by Scout Mindset! Here's an excerpt from my review:

I'm a big fan of Julia and her podcast, but I wasn't expecting too much from Scout Mindset because it's clearly written for a more general audience and was largely based on ideas that Julia had already discussed online. I updated from that prior pretty fast. Scout Mindset is a valuable addition to an aspiring rationalist's bookshelf — both for its content and for Julia's impeccable writing style, which I aspire to.

Those familiar with the OSI model of networking will know that there are different layers of protocols. The IP protocol that dictates how packets are routed sits at a much lower layer than the HTTP protocol that dictates how applications interact. Similarly, Yudkowsky's Sequences can be thought of as the lower layers of rationality, whilst Julia's work in Scout Mindset provides the protocols for the higher layers. The Sequences are largely concerned with what rationality is, whilst Scout Mindset presents tools for practically approximating it in the real world. It builds on the "kernel" of cognitive biases and Bayesian updating by considering what mental "software" we can run on a daily basis.

The core thesis of the book is that humans default towards a "soldier mindset," where reasoning is like defensive combat. We "attack" arguments or "concede" points. But there is another option: "scout mindset," where reasoning is like mapmaking. 

The Scout Mindset is "the motivation to see things as they are, not as you wish they were. [...] Scout mindset is what allows you to recognize when you are wrong, to seek out your blind spots, to test your assumptions and change course."

I recommend listening to the audiobook version, which Julia narrates herself. The book is precisely as long as it needs to be, with no fluff. The anecdotes are entertaining and relevant and were entirely new to me. Overall, I think this book is a 4.5/5, especially if you actively try to implement Julia's recommendations. Try out her calibration exercise, for instance.
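Her calibration exercise, for reference, has you answer a batch of trivia questions, attach a confidence level (55% up to 95%) to each answer, and then check whether your hit rate in each confidence bucket matches the number you stated. A minimal sketch of that scoring in Python, with made-up answers rather than Julia's actual question set:

```python
from collections import defaultdict

# Hypothetical answer log: (stated confidence, whether the answer was right).
answers = [
    (0.55, True), (0.55, False), (0.65, True), (0.65, True),
    (0.75, False), (0.75, True), (0.85, True), (0.85, True),
    (0.95, True), (0.95, True),
]

# Group answers by confidence bucket, then compare stated confidence
# to the actual hit rate in that bucket.
buckets = defaultdict(list)
for confidence, correct in answers:
    buckets[confidence].append(correct)

for confidence in sorted(buckets):
    hits = buckets[confidence]
    rate = sum(hits) / len(hits)
    print(f"{confidence:.0%} confident: {sum(hits)}/{len(hits)} correct ({rate:.0%})")
```

Well-calibrated answers have hit rates that track the stated confidences; systematic gaps in either direction are exactly what the exercise is designed to expose.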

Experimentation with AI-generated images (VQGAN+CLIP) | Solarpunk airships fleeing a dragon

I present to you VQGAN+CLIP's take on a Bob Ross painting of Bob Ross painting Bob Ross paintings 😂 This surpassed my wildest expectations!


I don't feel like it's the kind of polished thing I'd put on LW. But here it is on my blog:

Intentionally Making Close Friends

One decent way of engineering an authentic 1-1 conversation is to go through a bunch of personal and vulnerability-inducing questions together, à la 36 Questions That Lead to Love (after cutting the ⅔ of questions that I found dull). So I made a list of questions I considered interesting, which I expected to lead to authentic and vulnerable conversations.

This is a fascinating strategy, and I'm surprised it worked so well. Unfortunately, the linked article with the list of questions is on the NYT and paywalled.

After a bit of digging, this seems to be the original study for which the questions were formulated: . The various question sets are listed in the Appendix that starts on page 12.

And this site seems to be an open-access, interactive mirror of the 36 questions from the NYT article:

Experiments with a random clock

Okay, I absolutely love this post! In fact, if you were to break it down into three posts, I would probably have been a serious fan of all of them individually. 

Firstly, the expected utility formulation of lateness is excellent and explains a lot of my personal behaviour. I'm aggressively early for important events like client meetings and interviews, but consistently tardy when meeting for coffee or arriving for a lecture. Whilst your methodology focussed on unobservable shifts to the time axis, I suspect there are also interesting gains to be made in reshaping the utility curve — for instance, by always carrying reading material, like korin43 mentions in another comment.  

Secondly, your approach to self-blinding is fantastic. I do a lot of Quantified-Self research and self-blinding is one of the most challenging and essential components of interventional QS studies. I really like how your protocol builds from the theoretical formulation you created and acts as a convolution on the utility function. I had a little nerdgasm when reading that part!
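For anyone who hasn't read the post, here's a minimal sketch of that convolution idea, with a toy utility curve and made-up numbers rather than the author's exact formulation. A clock that runs fast by an unknown, uniformly distributed offset smears the asymmetric utility curve, which shifts the optimal time to aim for:

```python
import numpy as np

# Toy utility of arriving t minutes relative to the scheduled start:
# a mild cost per minute of waiting around early, a steep cost per minute late.
def utility(t):
    return np.where(t <= 0, 0.1 * t, -2.0 * t)

# A clock set fast by an unknown offset, uniform on [0, 15] minutes,
# means you actually arrive earlier than the time you aimed for.
offsets = np.linspace(0, 15, 151)

# The expected utility of aiming for `aim` is the utility curve averaged
# over the offset distribution, i.e. convolved with its density.
def expected_utility(aim):
    return utility(aim - offsets).mean()

aims = np.linspace(-10, 30, 401)
best = aims[np.argmax([expected_utility(a) for a in aims])]
print(f"Best time to aim for: {best:+.1f} min relative to the start")
```

With these toy numbers, the optimum lands just after the scheduled start as read off the fast clock, because the unknown offset makes you arrive early almost all the time.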

Thirdly, the fact that you collected and visualised data to evaluate the methodology is outdone only by how pretty your plot is.

Finally, it would be remiss of me not to comment on your excellent use of humour. I chuckled multiple times whilst reading. Expertly balanced and timed to resonate with the tone of the technical content. 


Was about to comment the same thing. Saving it to my Wisdom List.

UPDATE: I've published the list here:

We need a standard set of community advice for how to financially prepare for AGI

Most rationalists are heavily invested in AGI in non-monetary ways — career paths, free time, hopes for longevity/coordination breakthroughs. As other commenters have pointed out, if humanity achieves aligned AGI in the future, financial returns will plausibly matter far less. Given that, maybe the best investments are bets against AGI, as a hedge for the case where humanity doesn't achieve it.

There are three possible futures: if we achieve aligned AGI, we win the game and nothing else matters*. If we achieve misaligned AGI, we die and nothing else matters. If we fail to achieve AGI at all, then we've wasted a lot of our time, careers, and hopes, and will want investments to fall back on.

In that third future, which commodities and equities do best? Can we buy those now?

*subject to accepting the singularity-like premise.
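To make the implied decision rule concrete, here's a toy sketch in Python (all probabilities and returns below are placeholders, not claims): if wealth only carries weight in the no-AGI branch, ranking investments reduces to their conditional performance in that world.

```python
# Placeholder scenario probabilities.
p = {"aligned": 0.3, "misaligned": 0.2, "no_agi": 0.5}

# Marginal value of wealth in each future: ~0 if we win or lose the game
# outright, full weight only if AGI never arrives.
wealth_matters = {"aligned": 0.0, "misaligned": 0.0, "no_agi": 1.0}

# Hypothetical multiplicative returns of two strategies in each future.
returns = {
    "bet_on_agi":   {"aligned": 10.0, "misaligned": 0.0, "no_agi": 0.8},
    "hedge_no_agi": {"aligned": 0.5,  "misaligned": 0.0, "no_agi": 2.0},
}

for name, r in returns.items():
    usable = sum(p[w] * wealth_matters[w] * r[w] for w in p)
    print(f"{name}: expected usable return {usable:.2f}")
```

Under that weighting, the hedge dominates no matter how well the AGI bet pays off in the aligned world, which is the crux of the argument above.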
