Half-researcher, half-distiller (see https://distill.pub/2017/research-debt/), both in AI Safety. Funded, and also finishing a PhD in theoretical computer science (distributed computing) in France.
Ordered! Reading the table of contents made me want to look at some posts now, but I'll be a good boy and wait for the physical books.
I was unconvinced at first, but in the end I see this post as a good way to slightly shift perspectives on AGI and make people think a bit more about their answers. Well done!
Thanks a lot for the thoughtful reply! Not having to move the items around could indeed help to lower the cognitive burden.
Really cool post.
While reading, I kept thinking back to the theory of distributed computing. It doesn't apply to everything here, but at least for orienting, there's a clear analogy: how can you propagate information from one node to the whole system? (A classic example is the gossip problem.) Some of the failure modes are also clearly related, in that they make the communication or synchronization between nodes more uncertain.
One good example of this in everyday life: a group of friends is meeting at a movie theater for a new release. If a few are late, the others know to buy them tickets and save seats - they coordinate even without communication.
In the same spirit as my previous paragraph, this reminds me of the big simplifying hypothesis in distributed computing that every node is running the same program. It helps with that kind of synchronization.
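To make the gossip analogy concrete, here's a minimal sketch (my own toy simulation, not from the post) of round-based push gossip: each round, every informed node tells one uniformly random node, so the informed set roughly doubles and full propagation takes on the order of log2(n) rounds.

```python
import random

def push_gossip_rounds(n, seed=0):
    """Simulate round-based push gossip among n nodes.

    Each round, every informed node picks one other node uniformly
    at random and informs it. Returns the number of rounds until
    all n nodes are informed.
    """
    rng = random.Random(seed)
    informed = {0}  # node 0 starts with the rumor
    rounds = 0
    while len(informed) < n:
        newly = set()
        for _ in informed:
            target = rng.randrange(n)
            if target not in informed:
                newly.add(target)
        informed |= newly
        rounds += 1
    return rounds

# Since the informed set can at most double each round, spreading to
# 1000 nodes needs at least ceil(log2(1000)) = 10 rounds.
print(push_gossip_rounds(1000))
```

Note the simplifying assumption baked in: every node runs the same program, which is exactly what makes this kind of synchronization tractable to reason about.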
Thanks a lot for the explanation!
Notably, proteins embedded in the cell membrane—such as the ACE2 receptor that COVID-19 binds to—fold in the lipid bilayer of the cell and are difficult to crystallize.
Could you go into a bit more detail here? Why are these proteins so hard to measure? Can't you just "remove them from the membrane"? (Sorry if this is completely stupid, my knowledge of experimental biology is very limited)
Also, since you worked in the field, I'm curious about your take on the opinion that protein folding is not really useful, as articulated by John in my link post of the DeepMind blog.
Agreed. But that's true for any AI advance. At least this one doesn't seem to directly increase existential risk (from AI at least), and it provides some positive value to the world. So my point is more that if AI advances are unavoidable, I'd prefer to see more like this one.
Fair enough. My idea was focused on AI existential risk; from that perspective, it seems to me that this result doesn't directly increase the existential risk from AI, in the way that GPT-3 does, for example. But the effect of pushing more people into the field is probably a real issue.
I'm especially curious about the second one. Is there any known situation where we have a 3-dimensional arrangement but not the sequence? I can think of two possibilities: either we have an existing protein for which we know the structure but not the sequence (which, from my modest understanding, seems improbable, because sequencing looks easier), or we have an application (medical, for example) that needs a specific kind of structure, and we want to know how to create such a protein.
Are these what you had in mind?
That's an interesting take.
Do you have a simple explanation of why you consider structural biology useless? My outside-view impression was that protein shape and folding were really important to understanding how proteins work. Isn't that useful in practice?
Franklin’s experience suggests that confidence and self-esteem, which both make self-criticism less threatening, are important in this bootstrapping process.
This fits with my personal experience. When I manage to be humble, it's usually because I don't feel like I need to prove to myself that I'm good enough or relevant enough. It can veer into arrogance, but when kept in check, simple confidence all the way up is a great way of being receptive to criticism.