alex

Developer/Engineer, AGI Researcher, Finance Quant, Entrepreneur, ML Consultant. I have been helping organizations understand the long-term impact of AI since 2008, when I helped develop Google's first click fraud detection algorithms. Interested in meeting like-minded folks serious about building organizations to ensure human-level Artificial General Intelligence is democratized, beneficial for humanity, and respectful of HL+ AI agents.

Posts
Wiki Contributions

Comments

alex

Alignment research is currently a mix of different agendas that need more unity. The alignment agendas of some researchers seem hopeless to others, and one of the favorite activities of alignment researchers is to criticize each other constructively.

Given the risk-landscape uncertainty and conflicting opinions, I would argue that this is precisely the optimal high-level approach for AI Alignment research agendas at this point in time. 'Casting a broader net' lets us identify areas of urgently-needed alignment research sooner and mobilize resources toward them once confidence is sufficient. IMHO, constructive debate about research priorities is hard to argue against. Moreover, much as the lack of published negative results creates significant inefficiencies in scientific R&D, even a shallow understanding of a broader 'space' of alignment solutions has value in itself: it can identify approaches that are ineffective or inapplicable at certain AI capability levels.