Heuristics and biases are the ways in which human reasoning differs from that of a theoretically ideal agent, due to reasoning shortcuts that don't always work (heuristics) and systematic errors (biases).
See also: Affect Heuristic, Confirmation Bias, Fallacies, Rationality
The field of heuristics and biases was essentially created by Kahneman and Tversky in a series of experiments showing that people consistently make certain errors when judging problems that have exact statistical answers. This isn't to say that humans only err on this type of problem; it's just much easier to track the errors people make when the correct answer is known precisely.
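One classic example is the conjunction fallacy from Tversky and Kahneman's "Linda" experiment: for any two events, the probability of both occurring together can never exceed the probability of either one alone, yet people routinely rank the conjunction as more likely. A minimal sketch with made-up numbers (only the inequality itself is guaranteed):

```python
# Conjunction rule: for any events A and B, P(A and B) <= P(A).
# In the "Linda" experiment, people often rate "bank teller and
# feminist" as MORE probable than "bank teller" alone; an error
# with an exact statistical answer.
p_teller = 0.05                 # hypothetical P(Linda is a bank teller)
p_feminist_given_teller = 0.50  # hypothetical P(feminist | bank teller)

p_both = p_teller * p_feminist_given_teller
assert p_both <= p_teller       # holds for any valid probabilities
print(f"P(teller) = {p_teller}, P(teller and feminist) = {p_both}")
```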
This has a number of implications. Firstly, many people guessing the same thing doesn't necessarily improve the quality of the answer, contrary to naive expectation, because their errors may be shared rather than independent (see the sketch below). Secondly, gathering more empirical information can actually worsen your predictions if it makes you more certain of a conclusion that was biased to begin with. Thirdly, if we could systematically reduce our cognitive biases, it would plausibly help us achieve a great many goals.
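A minimal simulation of that first point, with hypothetical numbers: averaging many guesses cancels independent noise, but if every guesser shares the same systematic bias, the crowd converges on the bias rather than the truth.

```python
import random

random.seed(0)

TRUE_VALUE = 100.0   # the quantity everyone is trying to estimate
SHARED_BIAS = 20.0   # a systematic error common to every guesser
NOISE_SD = 15.0      # each person's independent, idiosyncratic noise

def biased_guess():
    """One person's estimate: truth, plus the shared bias, plus noise."""
    return TRUE_VALUE + SHARED_BIAS + random.gauss(0, NOISE_SD)

for n in (1, 100, 10_000):
    average = sum(biased_guess() for _ in range(n)) / n
    print(f"average of {n:>6} guesses: {average:6.1f}")

# Averaging shrinks the noise, so the crowd agrees ever more tightly,
# but around TRUE_VALUE + SHARED_BIAS (about 120), not TRUE_VALUE (100).
```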
The Sequence Predictably Wrong offers a good introduction to the topic for those who are not familiar with it.
While a bias systematically gives you the wrong answer, a heuristic is simply a shortcut that may or may not give you an accurate one. Just because you know complex mathematical methods for precisely calculating the flight of objects through space doesn't mean you should use them to play volleyball. Heuristics are necessary for actually getting anything done, but because they are only approximations they frequently produce biases, which is where the problem lies. Fallacies are a further source of confusion: a fallacy is an error in an explicit step of reasoning or argument, whereas a bias is a flaw in the underlying algorithm that generates our judgments.
A good example of a heuristic is the affect heuristic: people tend to guess unknown traits of people or things based on the perceived goodness or badness of known traits, whether or not they would explicitly expect those traits to correlate. In some circumstances this is a useful shortcut; you might reasonably assume, for instance, that people who are good singers are more likely to be good dancers too. However, it also frequently produces (unconscious) biases, such as a tendency to believe that people who are tall and good-looking have better moral character (see the sketch after this paragraph). An example of a fallacy, by contrast, is the planning fallacy: people's explicit estimates of how long things will take to get done are consistently far too optimistic.
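A toy simulation of that trade-off, with invented probabilities: the "assume the unknown trait matches the known one" shortcut beats chance when the traits genuinely correlate, but when they are actually independent it produces confident guesses that are no better than a coin flip.

```python
import random

random.seed(1)

def heuristic_accuracy(traits_correlate, trials=100_000):
    """How often the affect-style shortcut guesses trait B from trait A."""
    correct = 0
    for _ in range(trials):
        a = random.random() < 0.5            # known trait (e.g. good singer)
        if traits_correlate:
            # trait B matches trait A 70% of the time
            b = a if random.random() < 0.7 else not a
        else:
            b = random.random() < 0.5        # trait B is independent of A
        guess = a                            # the shortcut: assume B matches A
        correct += (guess == b)
    return correct / trials

print("accuracy when the traits correlate:  ", heuristic_accuracy(True))   # ~0.70
print("accuracy when they are independent:  ", heuristic_accuracy(False))  # ~0.50
```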
Is simply knowing about these biases enough to correct for them? Well, no. If it were that easy, we wouldn't have needed a community initially dedicated to overcoming bias (the name of the blog out of which this website grew). Unfortunately, learning about a bias in the abstract doesn't seem to improve your ability to avoid it in real life. There's also the (major) issue that knowing about biases can hurt people. Rather than focusing purely on removing negative habits, much of the focus at LessWrong is now on building positive ones: skills such as how to update (change your mind) by the correct amount in response to evidence, how to resolve disagreements with others, how to introspect, and many more.