This, I think, may be too domain-specific to be answerable in any useful way. More broadly, though: when you run into errors, it helps to think like pilots or sysadmins dealing with complex-system failures - doing research is certainly a complex system, with many steps at which errors could be caught. What were the root causes, how did the error start propagating, and what could have been done throughout the stack to reduce it?
constrain the results: Fermi estimates, informative priors, inequalities, and upper/lower bounds are all good for telling you in advance roughly what the results should be, or at least building intuition about what you expect
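A minimal sketch of bracketing a result before computing it (the coupon-collector problem is my choice of illustration, not from the original): the expected number of draws to collect all n coupon types is n·H_n, and elementary inequalities pin that between n·ln(n) and n·(ln(n)+1), so you know in advance roughly where any simulation or derivation must land.

```python
import math
import random

def coupon_collector_trial(n, rng):
    """Count uniform draws until all n coupon types have been seen."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

n = 20
# Constrain the answer before computing anything: E[draws] = n * H_n,
# and ln(n) < H_n < ln(n) + 1, so the result must fall in this bracket.
lower, upper = n * math.log(n), n * (math.log(n) + 1)

rng = random.Random(0)
sim_mean = sum(coupon_collector_trial(n, rng) for _ in range(2000)) / 2000
print(lower, sim_mean, upper)  # the simulated mean should land inside the bracket
```

If the simulated mean fell outside the bracket, you would know immediately that either the bound or the simulation was wrong, before investing in an exact derivation.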
implement it in code or a theorem-checker; these are excellent for flushing out hidden assumptions or errors. As Pierce puts it, proving a theorem about your code uncovers many bugs - and it doesn't matter which theorem!
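A toy illustration of assumption-flushing (my example, not Pierce's): even implementing the quadratic formula forces into the open assumptions that a paper derivation silently glosses over.

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0.
    On paper, 'the' quadratic formula silently assumes a != 0 and a
    non-negative discriminant; code has to decide what happens otherwise."""
    if a == 0:
        raise ValueError("not a quadratic: a == 0")
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("no real roots: discriminant < 0")
    r = math.sqrt(disc)
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))
```

The two `raise` branches are exactly the hidden assumptions that implementing the formula surfaced.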
solve with alternative methods, particularly brute force: solvers like Mathematica/Wolfram are great just for telling you what the right answer is so you can check your work. In statistics/genetics, I often first solve something with Monte Carlo (or ABC) or brute-force approaches like dynamic programming, and only then, after looking at the answers to build intuition (see #1), do I try to tackle an exact solution.
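As a sketch of that workflow (the birthday problem is a stand-in example of mine, not from the original): get the answer by Monte Carlo first, then check the exact solution against it.

```python
import random

def birthday_mc(k, trials, rng):
    """Monte Carlo estimate of P(at least one shared birthday among k people)."""
    hits = 0
    for _ in range(trials):
        days = [rng.randrange(365) for _ in range(k)]
        hits += len(set(days)) < k
    return hits / trials

def birthday_exact(k):
    """Exact probability via the complement: P = 1 - prod (365 - i) / 365."""
    p_distinct = 1.0
    for i in range(k):
        p_distinct *= (365 - i) / 365
    return 1 - p_distinct

rng = random.Random(0)
est, exact = birthday_mc(23, 20000, rng), birthday_exact(23)
print(est, exact)  # the two answers should agree to a couple of decimal places
```

The brute-force estimate is trivial to trust, so any disagreement points a finger at the "clever" exact derivation.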
test the results: unit-test critical values like 0, the small integers, boundaries, or very large numbers; use property-based checking (I think this is also called 'metamorphic testing') in the style of QuickCheck to establish that basic properties seem to hold (e.g. the output is always positive, monotonic, the same length as the input, etc.)
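A hand-rolled sketch of that style of testing in plain Python (real tools like QuickCheck or the Python Hypothesis library do the input generation and shrinking for you; the `running_max` function here is just an illustrative target of mine):

```python
import random

def running_max(xs):
    """Cumulative maximum: out[i] = max(xs[:i+1])."""
    out, cur = [], float("-inf")
    for x in xs:
        cur = max(cur, x)
        out.append(cur)
    return out

# Unit tests of the critical boundary cases first:
assert running_max([]) == []                  # empty input
assert running_max([0]) == [0]                # single element, zero

# Then property checks over random inputs, QuickCheck-style:
rng = random.Random(0)
for _ in range(500):
    xs = [rng.randint(-10**9, 10**9) for _ in range(rng.randint(1, 50))]
    ys = running_max(xs)
    assert len(ys) == len(xs)                       # output same length as input
    assert all(a <= b for a, b in zip(ys, ys[1:]))  # monotonic non-decreasing
    assert all(y >= x for x, y in zip(xs, ys))      # dominates the input
    assert ys[-1] == max(xs)                        # agrees with a trivial oracle
print("all properties held")
```

The properties are cheap to state even when the exact output is hard to predict, which is what makes this style of checking so effective at catching off-by-one and boundary bugs.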
ensemble yourself: wait a while and sleep on it; try to 'rubber duck' it, since explaining it activates your adversarial reasoning skills; go through it in different modalities
generalize the results, so you don't have to re-solve it: the most bug-free code or proof is the one you never write.
When you run into an error, think about how it could have been prevented. If you read something like Site Reliability Engineering: How Google Runs Production Systems, or other books about failure in complex systems, you might find useful inspiration: a well-functioning system should budget for some degree of failure; keep an eye on 'toil' versus genuinely new work, and step back to shave some yaks when toil starts getting out of hand; gradually automate manual workflows, perhaps starting from checklists as skeletons.
Do you need to shave some yaks? Are your tools bad? Is it time to invest in learning to use better programs or formalization methods?
If you keep making an error, how can it be stopped?
If it's a simple error of formula or manipulation, perhaps you could make a bunch of spaced repetition system flashcards with slight variants all stressing that particular error.
Is it machine-checkable? For writing my essays, I flag as many errors or warning signs as possible using two scripts and additional tools.
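As a sketch of what such a machine check might look like (this is a hypothetical toy of mine, not the actual scripts): a few regex rules that flag mechanical slips in prose.

```python
import re

def flag_warnings(text):
    """Hypothetical minimal style pass: return (line_number, message)
    pairs for common mechanical slips in prose."""
    warnings = []
    for i, line in enumerate(text.splitlines(), 1):
        if re.search(r"\b(\w+)\s+\1\b", line, re.IGNORECASE):
            warnings.append((i, "repeated word"))
        if "  " in line:
            warnings.append((i, "double space"))
        if re.search(r"\s[,.;:]", line):
            warnings.append((i, "space before punctuation"))
    return warnings

print(flag_warnings("the the cat sat .\nfine line"))
```

Each rule is dumb in isolation, but running a battery of them after every draft turns a class of recurring errors into something a machine catches for free.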
Can you write a checklist to remind yourself to check for particular errors or problems after finishing?
Follow the 'rule of three': if you've done something 3 times, or argued the same thesis at length 3 times, etc., it may be time to look at it more closely and automate it or write it up as a full-fledged essay. I find this useful for writing because if something comes up 3 times, that suggests it's important and underserved, and that you might save yourself time in the long run by writing it now. (This is my theory for why memory works in a spaced-repetition sort of way: real-world facts seem to follow some sort of long-tailed or perhaps mixture distribution, where there are massed transient facts which can be safely forgotten, and long-term facts which pop up repeatedly at large spacings; so ignoring massed presentation but retaining facts which keep popping up after long intervals is more efficient than simply strengthening memories in proportion to total number of exposures.)