Linda Linsefors

Hi, I am a Physicist, an Effective Altruist and AI Safety student/researcher.


Comments

Like, as a crappy toy model, if every alignment-visionary's vision would ultimately succeed, but only after 30 years of study along their particular path, then no amount of new visionaries added will decrease the amount of time required from “30y since the first visionary started out”.

 

I think that a closer-to-true model is that most current research directions will lead approximately nowhere, but we don't know until someone goes and checks. Under this model, adding more researchers increases the probability that at least someone is working on a fruitful research direction. And I don't think you (So8res) disagree, at least not completely?

I don't think we're doing something particularly wrong here. Rather, I'd say: the space to explore is extremely broad; humans are sparsely distributed in the space of intuitions they're able to draw upon; people who have an intuition they can follow towards plausible alignment-solutions are themselves pretty rare; most humans don't have the ability to make research progress without an intuition to guide them. Each time we find a new person with an intuition to guide them towards alignment solutions, it's likely to guide them in a whole new direction, because the space is so large. Hopefully at least one is onto something.

I do think that researchers stack, because there are lots of different directions that can and should be explored in parallel. So maybe the crux is what fraction of people can do this? Most people I talk to do have research intuitions. I think it takes time and skill to cultivate one's intuition into an agenda that one can communicate to others, but just having enough intuition to guide oneself is a much lower bar. However, most people I talk to think they have to fit into someone else's idea of what AIS research looks like in order to get paid. Unfortunately, I think this is a correct belief for everyone without exceptional communication skills and/or connections. But I'm honestly uncertain about this, since I don't have a good understanding of the current funding landscape.

Aside from money, there are also imposter-syndrome-type effects going on. A lot of people I talk to don't feel like they are allowed to have their own research direction, for vague social reasons. Some things that I have noticed sometimes help:

  • Telling them "Go for it!", and similar things. Repetition helps.
  • Talking about how young AIS is as a field, and the implications of this, including the fact that their intuitions about the importance of expertise are probably wrong when applied to AIS.
  • Handing over a post-it note with the text "Hero Licence".

I believe you that in some parts of Europe this is happening, which is good.

I "feel shocked that everyone's dropping the ball".

 

Maybe not everyone
The Productivity Fund (nonlinear.org)
That said, this project has been "Coming soon!" for several months now. If you want to help with the non-dropping of this ball, you could check in with them to see if they could use some help.

Funding is not truly abundant. 

  • There are people who have an above-zero chance of helping who don't get upskilling grants or research grants.
  • There are several AI Safety orgs that are for-profit in order to get investment money and/or to be self-sufficient, because given their particular network, it was easier to get money that way (I don't know the details of their reasoning).
  • I would be more efficient if I had some more money and did not need to worry about budgeting in my personal life. 

I don't know to what extent this is because the money doesn't exist, or because grant evaluation is hard and there are some reasons not to give out money too easily.

Is this... not what's happening?


Not by default.

I did not have this mindset right away. When I was new to AI Safety, I thought it would require much more experience before I was qualified to question the consensus, because that is the normal situation in all the old sciences. I knew AI Safety was young, but I did not understand the implications at first. I needed someone to prompt me to get started.

Because I've run various events and co-founded AI Safety Support, I've talked to loooots of AI Safety newbies. Most people are too cautious when it comes to believing in themselves and too ready to follow authorities. It usually only takes a short conversation pointing out how incredibly young AI Safety is, and what that means, but many people do need this one push.

Yes, that makes sense. Having a bucket is definitely helpful for finding advice.

I can't answer for Duncan, but I have had similar enough experiences that I will answer for myself. When I notice that someone is chronically typical-minding (not just typical-minding as a prior, but showing signs that they are unable even to consider that others might be different in unexpected ways), then I leave as fast as I can, because such people are dangerous. Such people will violate my boundaries until I have a full meltdown. They will do so in the full belief that they are being helpful, and override anything I tell them with their own prior convictions.

I tried to get over the feeling of discomfort when I felt misunderstood, and it did not work. It's not just a reminder that the world isn't perfect (something I can update on and get over), but an active warning signal.

Learning to interpret this warning signal, and knowing when to walk away, has helped a lot.

Different people and communities are more or less compatible with my style of weird. Keeping track of this is very useful.  

I think this comment is pointing in the right direction. But I disagree with

E.g. today we have buckets like "ADHD" and "autistic" with some draft APIs attached

There are buckets, but I don't know what the draft APIs would be. Unless you count "finding your own tribe and staying away from the neurotypicals" as an API.

If you know something I don't let me know!

Yes, that is a thing you can do with decision transformers too. I was referring to a variant of the decision transformer (see link in the original shortform) where the AI samples the reward it's aiming for.
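For readers unfamiliar with the setup, here is a minimal sketch of what "sampling the reward it's aiming for" could look like, assuming a standard return-conditioned decision transformer. The function names (`sample_target_return`, `predict_action`), the `model` object, and the softmax weighting are my own illustrative choices, not the exact variant described in the linked shortform.

```python
import numpy as np

def sample_target_return(training_returns, temperature=1.0):
    """Sample a return-to-go to condition on, instead of fixing it by hand.

    Higher temperature -> closer to the empirical return distribution;
    lower temperature -> more weight on the highest observed returns.
    """
    returns = np.asarray(training_returns, dtype=float)
    weights = np.exp((returns - returns.max()) / temperature)  # softmax weights
    weights /= weights.sum()
    return float(np.random.choice(returns, p=weights))

def act(model, state, training_returns):
    """Pick an action by conditioning the sequence model on a sampled return.

    `model.predict_action` is a hypothetical stand-in for however the
    underlying decision transformer maps (return-to-go, state) to an action.
    """
    target_return = sample_target_return(training_returns)
    return model.predict_action(return_to_go=target_return, state=state)
```

The point of the sketch is just that the target return is drawn from a distribution tied to the training data, rather than being set to some arbitrarily high value by the operator.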

I think having something like an AI Safety Global would be very high impact for several reasons. 

  1. Redirecting people who are only interested in AI Safety from EAG/EAGx to the conference they actually want to go to. This would be better for them and for EAG/EAGx. I think AIS has a place at EAG, but it's inefficient that lots of people go there basically only to talk to other people interested in AIS. That's not a great experience either for them or for the people who are there to talk about all the other EA cause areas.
  2. Creating any amount of additional common knowledge in the AI Safety sphere. AI Safety is becoming big and diverse enough that different people are using different words in different ways, and relying on different unspoken assumptions. It's hard to make progress on top of the established consensus when there is no established consensus. I definitely don't think (or want) all AIS researchers to start agreeing on everything. But just some common knowledge of what other researchers are doing would help a lot. I think a yearly conference where each major research group gives an official presentation of what they are doing and their latest results would help a lot.
  3. Networking. 

I don't think that such a conference should double as a peer-reviewed publication venue, the way many ML and CS conferences do. But I'm not very attached to this opinion.

I think making it not CEA-branded is the right choice. I think it's healthier for AIS to be its own thing, not a subcommunity of EA, even though there will always be an overlap in community membership.

What's your probability that you'll make this happen?

I'm asking because if you don't do this, I will try to convince someone else to do it. I'm not the right person to organise this myself. I'm good at smaller, less formal events, and my style would not fit with what I think this conference should be. I think the EAG team would do a good job at this, though. But if you don't do it, someone else should. I also think the team behind the Human-aligned AI Summer School would do a good job at this, for example.

I responded here instead of over email, since I think there is value in having this conversation in public. But feel free to email me if you prefer: linda.linsefors@gmail.com
