I don't see the need for this new category of "requiredism"; most philosophical compatibilists have thought that free will requires determinism. Van Inwagen calls the argument that free will requires determinism the "Mind argument" (after several papers in Mind from the mid-20th century that make versions of it), but it is stated quite clearly as early as Hume.
I don't know a standard name for it, but the soul-swap issue is quite old. Locke is interpreted as making a similar point in Book II, chapter XXVII, section 13 of the Essay Concerning Human Understanding; the point is usually attributed to Locke, so he may be the first.
Cyan, what you describe sounds a bit mystical, but there is an observable tendency for people to seek some magic bullet: a single underlying factor that explains everything. Single-factor theories are usually wrong, of course, and phenomena often involve a lot of complex relationships which need to be taken into account. Some who call themselves reductionists are enamored of over-simplified single-factor views (the way certain evolutionary psychologists talk about genes comes to mind), and it is likely that anti-reductionism is partly motivated...
Along the lines of my comment on your previous reductionism post, perhaps there would be fewer howls of protest at the declaration that rainbows are not fundamental were you not contrasting them with other things which you are claiming are fundamental (without evidence, I might add).
One minor quibble: how do we know there is any most basic level?
Levels are an attribute of the map. The territory only has one level. Its only level is the most basic one.
Let's consider a fractal. The Mandelbrot set can be made by taking the intersection of infinitely many successive approximations, each iteration ruling out more points. You could think of each additional iteration as a better map. That being said, either a point is in the Mandelbrot set or it is not. The set itself only has one level.
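To make the map/territory point concrete, here is a minimal sketch (my own illustration, not anything from the original comment) of the standard escape-time test: a point c survives level n if iterating z → z² + c keeps |z| bounded for n steps. Each level-n approximation is a coarser "map"; the territory is the limit.

```python
# Sketch: each escape-time bound n gives a successively finer "map"
# of the Mandelbrot set. A point c is in the level-n approximation
# if |z| stays bounded through n iterations of z -> z*z + c.

def in_set_approx(c: complex, n: int, bound: float = 2.0) -> bool:
    z = 0j
    for _ in range(n):
        z = z * z + c
        if abs(z) > bound:
            return False  # escaped: definitely not in the set
    return True  # not yet escaped: still on the level-n map

# The approximations are nested: surviving n iterations implies
# surviving every earlier level, so each level only rules points out.
print(in_set_approx(-1 + 0j, 1000))  # orbit cycles 0, -1, 0, ... : in the set
print(in_set_approx(1 + 0j, 5))      # orbit 1, 2, 5, ... escapes quickly
```

A point like c = 0.26 (just outside the set) illustrates the map-levels idea: it survives the coarse maps for a few dozen iterations before a finer map finally excludes it, while membership in the set itself was never in question.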
Peter, most of the reasons people give for making exceptions are not themselves meta. For most of the examples you give, the intuitive justification is something along the lines of "the reason killing is wrong is that life is valuable, and in these cases not killing would involve valuing life less than killing would." Nothing meta there.