This is a good question, but it's hard to answer. I'll try to signal-boost this a little later; first, though, I'll give it a shot myself.
Coming up with interview questions for either role is hard - you'd think I'd know something about the first, but I don't really. Maybe the people to ask are the organizers of SERI? My stab would be general cognitive questions, questions about past research-, science-, or engineering-like projects, and maybe showing them a gridworld AI from Concrete Problems in AI Safety and having them explain what's going on and why the AI does something bad, then asking them for one example of where the toy model seems like it would generalize to the real world, and one example of where it wouldn't.
In theory, though, interpretability work has plenty of places where skilled software engineers would help, in ways that are scalable enough to justify larger organizations. Redwood Research is the org that has likely put the most thought into this, so it might be worth chatting with them.
Hey guys,
I run a major recruiting firm in India working with tech companies, and I wanted to use some of that access to the workforce to get highly talented AI people into alignment. The cool thing about India is that the cost of living is so low that talented full-time people in this field can be snagged at $20k-50k a year.
My question has two parts: