(This is a semi-serious introduction to the metaethics sequence. You may find it useful, but don't take it too seriously.)
Meditate on this: A wizard has turned you into a whale. Is this awesome?

"Maybe? I guess it would be pretty cool to be a whale for a day. But only if I can turn back, and if I stay human inside and so on. Also, that's not a whale.
"Actually, a whale seems kind of specific, and I'd be surprised if that was the best thing the wizard could do. Can I have something else? Eternal happiness, maybe?"
Meditate on this: A wizard has turned you into orgasmium, doomed to spend the rest of eternity experiencing pure happiness. Is this awesome?
...
"Kind of... That's pretty lame, actually. On second thought, I'd rather be the whale; at least that way I could explore the ocean for a while.
"Let's try again. Wizard: maximize awesomeness."
Meditate on this: A wizard has turned himself into a superintelligent god, and is squeezing as much awesomeness out of the universe as it could possibly support. This may include whales and starships and parties and Jupiter brains and friendship, but only if they are awesome enough. Is this awesome?
...
"Well, yes, that is awesome."
What we just did there is called Applied Ethics. Applied ethics is about what is awesome and what is not. Parties with all your friends inside superintelligent starship-whales are awesome. ~666 children dying of hunger every hour is not.
(There is also normative ethics, which is about how to decide if something is awesome, and metaethics, which is about something or other that I can't quite figure out. I'll tell you right now that those terms are not on the exam.)
"Wait a minute!" you cry, "What is this awesomeness stuff? I thought ethics was about what is good and right."
I'm glad you asked. I think "awesomeness" is what we should be talking about when we talk about morality. Why do I think this?
- "Awesome" is not a philosophical landmine. If someone encounters the word "right", all sorts of bad philosophy and connotations send them spinning off into the void. "Awesome", on the other hand, has no philosophical respectability, hence no philosophical baggage.
- "Awesome" is vague enough to capture all your moral intuition by the well-known mechanisms behind fake utility functions, and meaningless enough that this is no problem. If you think "happiness" is the stuff, you might get confused and try to maximize actual happiness. If you think awesomeness is the stuff, it is much harder to screw up.
- If you do manage to actually implement "awesomeness" as a maximization criterion, the results will be actually good. That is, "awesome" already refers to the same things "good" is supposed to refer to.
- "Awesome" does not refer to anything else. You think you can just redefine words, but you can't, and this causes all sorts of trouble for people who overload "happiness", "utility", etc.
- You already know that you know how to compute "awesomeness", and it doesn't feel like it has a mysterious essence that you need to study to discover. Instead it brings to mind concrete things like starship-whale math-parties and not-starving children, which is what we want anyway. You are already able to take joy in the merely awesome.
- "Awesome" is implicitly consequentialist. "Is this awesome?" prompts you to think about the value of a possible world, whereas "Is this right?" prompts you to think about virtues and rules. (Those things can be awesome sometimes, though.)
I find that the above is true about me, and is nearly all I need to know about morality. It handily inoculates against the usual confusions, and sets me in the right direction to make my life and the world more awesome. It may work for you too.
I would add that, if you wrote it out, the dynamic procedure to compute awesomeness would be hellishly complex, and that right now it is only implicitly encoded in human brains, and nowhere else. Also, if the great procedure to compute awesomeness is not preserved, the future will not be awesome. Period.
Also, it's important to note that what you think of as awesome can be changed by considering things from different angles and being exposed to different arguments. That is, the procedure to compute awesomeness is dynamic and created already in motion.
If we still insist on being confused, or if we're just curious, or if we need to actually build a wizard to turn the universe into an awesome place (though we can leave that to the experts), then we can read the metaethics sequence for the full argument, details, and finer points. I think the best post (and the one to read if you read only one) is "Joy in the Merely Good".
Awesome and moral clearly have overlap. How much?
There's a humorous, satirical news story produced by The Onion, in which the US Supreme Court rules that the death penalty is "totally badass". And it is, even though badassness is not a criterion for deciding the death penalty's legality.
Similarly, awesomeness makes me think of vengeance. Though some vengeance is disproportionate to the initial offense, and thus not so awesome, vengeance seems on the whole to have that aura of glorious achievement you'd find at the climax of an action/adventure film. And yet that doesn't really match my ideas of morality, though maybe I just don't feel strongly enough about the restoration of justice.
The idea that vengeance is awesome but not moral might be an artifact of looking at it from the victor's side vs target's side. So maybe we should distinguish between awesome experiences and awesome futures / histories / worlds.
But those were just the first distinctions between morality and awesomeness I thought of while reading. I'm probably missing a lot of stuff, since morality and awesomeness are both big, complicated things. They're probably too big to think about all at once in detail, much less retrieve on a whim. Are there lists of moral and/or awesome stuff we can look at to better define their overlap?
Schwartz is a psychologist who identified ten factors of culturally universal values. I'd say the factors of power, achievement, pleasure, excitement, self-direction, and tradition sound like things you'd find in awesome worlds, while pleasure, universalism, benevolence, conformity, and security sound like things you'd find in worlds that are moral but not as awesome. Lovely and boring lives worth living. I included pleasure in both worlds, because that's a hard one to skip in valuable futures. I wonder how good a weirdtopia someone could write that didn't involve pleasure.
Anyway, that's a less haphazard, but still crude, analysis. I mean, some tradition looks like narrative myths and impressive ceremonies, which are awesome, and some tradition looks like shaming people for being sexually abused, which is not awesome. So "tradition" doesn't cut at the joints of awesome vs. moral.
Is there a more fine grained list?
43 Things is a popular website where people can list their goals, keep track of their progress, talk about why they failed, and the like. It's probably biased toward far-mode endorsements, and misses out on a lot of aesthetics that aren't neatly expressible as goal content, but it's still an interesting source of data on morality and awesomeness. The developers of 43 Things have a blog, where they do shallow statistical analysis, like listing top habit goals, but the lists are very short and have a lot of overlap.
"A Hierarchical Taxonomy of Human Goals" (Chulef et al., 2001) lists 138 unique goals derived from the psychological literature, along with 30 goal clusters.
If you look through the list, you'll see a bunch of goals that start with "being": being ambitious, responsible, respected, etc. There are also some appearance ones, like "looking fit". I think it's fair to say that human goal content includes a fair bit of virtuousness, and that we could make a virtue theory of awesomeness just as much as a virtue theory of morality (though it might be too narrow a theory).
Sorting the big list turns out to be pretty hard, because the goals are a mix of awesomeness, boring morality, and other things. Like "Living close to family" initially sounds like a boring moral thing, but it sounds way cooler when they're riding mechanical rocket dinosaurs with you and helping you take down the Dark Evil's super weapon. Or even just "being a good parent". That doesn't sound as exciting as rocket dinosaurs, but neither can I quite bring myself to say that being a good parent is not totally awesome.
I did start sorting, though. One thing that stood out is that awesome goals are more often about seeking and boring moral goals are more often about having. But that's going off what I remembered after I accidentally closed the document unsaved, so beware the small sample. I think I might have drifted back toward thinking about awesome experiences instead of awesome futures, too. Or even just features of a high-status life. And status clearly isn't equivalent to morality. Oops.
In summary, I still really don't know whether awesome futures are the same as moral futures, or whether awesome moral futures are the same as valuable ones.
--
This comment is stupid. Morality is usually used to describe actions, not experiences or world states or histories. Calling awesomeness the same as morality is a type mismatch. Also, I put universalism in the boring category, even though I had just said that justice-y vengeance is awesome. And I said goals on 43 Things are biased toward far mode, which they're not if you just look at them (neither are they near mode), and it doesn't matter either way because I didn't do anything with 43 Things other than name-drop it, like that stupid Onion sketch.

And why did I bring up goal content at all? Goals are the products of valuation thought processes, not the constituents. We have psychology and neuroscience; we can just look at how awesomeness feelings work instead of imagining situations associated with goals and deciding, "hrm, yes, trench coats definitely sound awesome, I wonder what that tells me about morality."

And goals aren't all that interesting for characterizing human values that aren't moral ones, in that specific, social sense that Haidt talks about. Like, I'm pretty sure not being horribly burned by fire is a common human value, and yet no one on 43 Things wrote about it, and it's only weakly implicit in Schwartz's value taxonomy, under the pleasure and security factors. And yet not burning people alive is probably a more important thing to ensure in the design of humanity's future than making sure people can stay with their families or have high self-esteem or have math parties.
As badass as shooting a fish in a barrel. Which is to say, no, not really.