I study computational neuroscience, BCI, and AGI safety. I am mainly exploring the possibility of using BCI to extract human values or to create better data for AI training.

Our attention is one of the most valuable resources we have, and recent AI developments in NLP and machine vision suggest it may well be a fundamental component of intelligence itself.

This post brings that point to attention (pun intended) using video games as examples, and encourages us to optimize how we spend this limited resource: to maximize information gain and to improve our cooperation skills by not being 'sound absorbers'.

It is a nice thought experiment, but I've noticed that many AI researchers are devoted to their work to the point of being comparable to religious fanatics (in either camp, really). I don't think a fat paycheck will make them stop their research so readily.

Very insightful, thanks for the clarification, grim as it is.

A nuclear reactor doesn't try to convince you intellectually, through speech or text, to behave in ways you otherwise wouldn't. And that comparison assumes your statement that 'current LLMs are not agentic' holds true, which seems doubtful.

As much as I agree that things are about to get really weird, that first diagram is a bit too optimistic. There is a limit to how much data humanity has available for AI training (here), and it seems doubtful we can learn to use data 1,000 times more effectively in such a short span of time. For all we know, there could be yet another AI winter coming - I don't think we will get that lucky, though.

Thank you for posting this. I've been 'levelling up' my maths for machine learning lately, and this is just perfect.

Even if the assessment is pessimistic, it is invaluable to know that an idea is unlikely to succeed before you invest your only shot in it.

Thanks for the pointers, I will research them and reformulate my plan.