When people first approach the alignment problem for advanced agents, aka "robust and beneficial AGI", aka "Friendly AI", they very commonly try to come up with one idea that solves all of AI alignment: a simple design concept, a simple utility function, a simple development strategy, one simple guideline for everyone to adhere to, or a large diagram full of boxes with lines to other boxes, that is sufficient to realize roughly as much benefit from beneficial superintelligences as can possibly be realized.
Without knowing the details of your current idea, this article can't tell you why it's wrong - though frankly we've got one heck of a prior against it. But some very standard advice would be:
A good initial goal is not "persuade everyone in the field to agree with a new idea" but rather "come up with a contribution to an open discussion that is sufficiently crisply stated that, if it were in fact wrong, somebody else could shoot it down today." That is, aim for an idea such that if you're wrong, this can be pointed out as a crisply derivable consequence of a crisply specified idea, rather than it taking 20 years to see what happens. For there to be sustained progress, ideas need to be stated modularly and crisply enough that the conversation about them can go beyond "does not / does to" - in forms whose consequences are clear and derivable enough that if there's a problem, people can see it and agree on it.
Better yet, for your first contribution, poke a clearly demonstrable flaw in some solution currently under discussion. Since most proposals in alignment theory get shot down, joining the critiquing process has a great advantage over trying to invent solutions: you'll probably have started from the true premise "this proposal is broken or incomplete" rather than the false premise "this proposal works and solves everything".
Most of the respect in the field goes to people who've contributed an important idea that helped to resolve a vague and confusing problem into a crisply stated one, or who offered partial solutions or crisp-yet-flawed solutions that pushed the conversation further. Being able to say sufficiently crisp things, in a common language, about subproblems of building a nice AI is most of the bar to entering the field.
Psychologizing a little about why people might try to solve all of alignment theory in one shot, one might recount Robyn Dawes's advice that:
...and finally remark that building a nice machine intelligence correctly on the first try must be pretty darned difficult, since so many people solve it in the first 10 seconds.
It's possible that everyone working in this field is just missing the obvious and that there is some simple observation which solves all the problems at once. But realistically, you should be aware that everyone in this field has already heard a dozen terrible Total Solutions. You shouldn't be surprised if it's hard to get current workers in the field to listen to you. You should not, in fact, be surprised if your own straightforward yet complete solution is also horrible.
Likewise: If not everyone believes you, or even if it's hard to get people to listen to you rather than talk to other workers in the field, that's not necessarily because of some deep-seated psychological problem on their part, such as being uninterested in outsiders' ideas. Even if you're not an obvious crank, people are still unlikely to take the time out to engage with you unless you start by showing awareness of what they think are the usual issues and obstacles. #include "stdnormaladvice.h"