Hi, I am a Physicist, an Effective Altruist and AI Safety student/researcher.
I believe you that in some parts of Europe this is happening, which is good.
I "feel shocked that everyone's dropping the ball".
Maybe not everyone:
The Productivity Fund (nonlinear.org)
This project has been marked "Coming soon!" for several months now, though. If you want to help with the non-dropping of this ball, you could check in with them to see if they could use some help.
Funding is not truly abundant.
I don't know to what extent this is because the money doesn't exist, or because grant evaluation is hard and there are some reasons not to give out money too easily.
Is this... not what's happening?
Not by default.
I did not have this mindset right away. When I was new to AI Safety I thought it would require much more experience before I was qualified to question the consensus, because that is the normal situation in all the older sciences. I knew AI Safety was young, but I did not understand the implications at first. I needed someone to prompt me to get started.
Because I've run various events and co-founded AI Safety Support, I've talked to loooots of AI Safety newbies. Most people are too cautious when it comes to believing in themselves and too ready to follow authorities. It usually only takes a short conversation pointing out how incredibly young AI Safety is, and what that means, but many people do need this one push.
Yes, that makes sense. Having a bucket is definitely helpful for finding advice.
I can't answer for Duncan, but I have had similar enough experiences that I will answer for myself. When I notice that someone is chronically typical-minding (not just typical-minding as a prior, but showing signs that they are unable to even consider that others might be different in unexpected ways), then I leave as fast as I can, because such people are dangerous. Such people will violate my boundaries until I have a full meltdown. They will do so in the full belief that they are being helpful, and override anything I tell them with their own prior convictions.
I tried to get over the feeling of discomfort when I felt misunderstood, and it did not work. That's because it's not just a reminder that the world isn't perfect (something I can update on and get over), but an active warning signal.
Learning to interpret this warning signal, and knowing when to walk away, has helped a lot.
Different people and communities are more or less compatible with my style of weird. Keeping track of this is very useful.
I think this comment is pointing in the right direction. But I disagree with this part:
E.g. today we have buckets like "ADHD" and "autistic" with some draft APIs attached
There are buckets, but I don't know what the draft APIs would be. Unless you count "finding your own tribe and staying away from the neurotypicals" as an API.
If you know something I don't, let me know!
Yes, that is a thing you can do with decision transformers too. I was referring to a variant of the decision transformer (see link in the original shortform) where the AI samples the reward it's aiming for.
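For concreteness, here is a minimal sketch of the distinction, assuming the variant works roughly as described. A standard decision transformer is conditioned on a target return chosen by the user at inference time; in the variant, the model instead samples the return it aims for from a distribution learned during training. All names here (`ReturnSamplingAgent`, `return_logits`, the dummy policy) are hypothetical illustrations, not taken from the linked post.

```python
import torch

class ReturnSamplingAgent:
    def __init__(self, return_logits, policy):
        # return_logits: unnormalized log-probabilities over discretized
        # returns, as (hypothetically) learned from the training data.
        self.return_dist = torch.distributions.Categorical(logits=return_logits)
        self.policy = policy  # maps (state, target_return) -> action

    def act(self, state):
        # Instead of being handed a target return by the user, the agent
        # samples the return-to-go it will condition on.
        target_return = self.return_dist.sample()
        return self.policy(state, target_return)

# Toy usage: a uniform distribution over 5 return bins and a dummy policy.
agent = ReturnSamplingAgent(
    return_logits=torch.zeros(5),
    policy=lambda s, r: ("noop", int(r)),
)
print(agent.act(state=None))
```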
I think having something like an AI Safety Global would be very high impact for several reasons.
I don't think that such a conference should double as a peer-review journal, the way many ML and CS conferences do. But I'm not very attached to this opinion.
I think making it not CEA-branded is the right choice. I think it's healthier for AIS to be its own thing, not a subcommunity of EA, even though there will always be an overlap in community membership.
What's your probability that you'll make this happen?
I'm asking because if you don't do this, I will try to convince someone else to do it. I'm not the right person to organise this myself. I'm good at smaller, less formal events. My style would not fit what I think this conference should be. I think the EAG team would do a good job at this, though. But if you don't do it, someone else should. I also think the team behind the Human-aligned AI Summer School would do a good job at this, for example.
I responded here instead of over email, since I think there is a value in having this conversation in public. But feel free to email me if you prefer. linda.linsefors@gmail.com
I think that a closer-to-true model is that most current research directions will lead approximately nowhere, but we don't know which until someone goes and checks. Under this model, adding more researchers increases the probability that at least someone is working on a fruitful research direction. And I don't think you (So8res) disagree, at least not completely?
I do think that researchers stack, because there are lots of different directions that can and should be explored in parallel. So maybe the crux is what fraction of people can do this? Most people I talk to do have research intuitions. I think it takes time and skill to cultivate one's intuition into an agenda that one can communicate to others, but just having enough intuition to guide oneself is a much lower bar. However, most people I talk to think they have to fit into someone else's idea of what AIS research looks like in order to get paid. Unfortunately, I think this is a correct belief for everyone without exceptional communication skills and/or connections. But I'm honestly uncertain about this, since I don't have a good understanding of the current funding landscape.
Aside from money, there are also impostor-syndrome-type effects going on. A lot of people I talk to don't feel like they are allowed to have their own research direction, for vague social reasons. Some things that I have noticed sometimes help: