Relevant: the non-adversarial principle of AI alignment
Whereas if you're good at your work and you think that your job is important, there's an intervening layer or three—I'm doing X because it unblocks Y, and that will lead to Z, and Z is good for the world in ways I care about, and also it earns me $ and I can spend $ on stuff...
Yes, initially there might be a few layers, but there's also the experience of being really good at what you do, being in flow, at which point Y and Z just kind of dissolve into X, making X feel valuable in itself, like jumping on a trampoline. Seems like this friend wants to be in that state by default.

If X inherits its value from Z through an intellectual link, an S2-level association, the motivation to do X just isn't as strong as when the value is hardcoded directly into X itself on the S1 level. "Why was I filling in these forms again? Something about solving global coordination problems? Whatever, it's just my Duty as a Good Citizen." Or: "Whatever, I can do it faster than Greg."

But there is a problem: the more the value becomes a property of X itself, the harder it will be to detach from X when it suddenly stops being instrumental to Z. Here we find ourselves in the world of dogma, essentialism, and lost purposes.

So we're looking at a fundamental dilemma: do I maintain the most accurate model by always deriving my motivation from first principles, or do I declare the daily activities of my job to be intrinsically valuable? In practice I think we tend to go back and forth between these extremes. Why do we need breaks, anyway? Maybe it's to zoom out a bit and rederive our utility function.
A thought experiment: would you A) murder 100 babies or B) murder 100 babies? You have to choose!
Sidestepping the politics here: I've personally found that avoiding (super)stimuli for a week or so, either by not using any electronic devices or by going on a meditation retreat, is extremely effective at increasing my ability to regulate my emotions. Semi-permanently.

I know of no substitute for it; it's my panacea against cognitive dissonance and mental issues of any form. This makes me wonder: why aren't we focusing more on this from an applied rationality point of view?
This seems to be a fully general counterargument against any kind of advice. As in: "Don't say 'do X', because I might want to do not-X, which will give me cognitive dissonance, which is bad."

You seem to essentially be affirming the Zen idea that any kind of "do X" implies that X is better than not-X, i.e. a dualistic thought pattern, which is the precondition for suffering.

But besides that idea I don't really see what this post adds. Not to mention that identity tends to already be an instance of "X is better than not X". Paul Graham is saying "not (X is better than not X) is better than (X is better than not X)", and you just seem to be saying "not (not (X is better than not X) is better than (X is better than not X)) is better than (not (X is better than not X) is better than (X is better than not X))".

At that point you're running in circles, and the only way out is to say mu and put your attention on something else.
Since this is the first Google result and seems out of date, how do we get the RSS link nowadays?
I may have finally figured out the use of crypto.
It's not currency per se, but the essential use case of crypto seems to be to automate the third party.
This "third party" can be many things. It can be a securities dealer or broker. It can be a notary. It can be a judge that is practicing contract law.
Whenever there is a third party that somehow allows coordination to take place, and the particular case requires nothing but mechanical work, crypto can do it better.
A securities dealer or broker doesn't beat a protocol that matches buyers and sellers automatically. A notary doesn't beat a public ledger. A judge in contract law doesn't beat an automatically executed verdict, previously agreed upon in code.
(like damn, imagine contracts that provably have only one interpretation. Ain't that gonna put lawyers out of business)
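To make the "automatically executed verdict" concrete, here's a toy sketch in Python (a real version would live on-chain in a language like Solidity); the class, its field names, and its settlement rules are illustrative assumptions, not any existing protocol:

```python
class Escrow:
    """Toy escrow contract: funds go to the seller on delivery
    confirmation, or back to the buyer after a deadline.
    The 'verdict' is fixed in code up front; no judge interprets it."""

    def __init__(self, amount, deadline):
        self.amount = amount        # funds locked in the contract
        self.deadline = deadline    # cutoff (e.g. a block height)
        self.delivered = False
        self.settled = None         # becomes "seller" or "buyer"

    def confirm_delivery(self):
        self.delivered = True

    def settle(self, now):
        """Mechanically apply the pre-agreed rules. There is only one
        possible interpretation: the one the code executes."""
        if self.settled is None:
            if self.delivered:
                self.settled = "seller"   # payment released
            elif now > self.deadline:
                self.settled = "buyer"    # automatic refund
        return self.settled
```

The point isn't the ten lines of logic; it's that both parties can read exactly what will happen in every case before locking funds, which is the property a human judge can't guarantee.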
And maybe a bank doesn't beat peer-to-peer transactions, with the caveat that central banks are pretty competent institutions, and if anyone wins that race it will be them. So while I'm optimistic about cryptocurrency, I'm still skeptical about private currency.
I was in this "narcissist mini-cycle" for many years. Many Google searches and no luck. I can't believe that I finally found someone who recognizes it. Thank you so much.

Fwiw, what got me out of it was attending a Zen temple for 3 months or so. This didn't make me less narcissistic, but it somehow gave me the stamina to actually achieve something that befit my inflated expectations, and now I just refer back to those achievements to quell my need for greatness. At least while I work on lowering my expectations.
It does not, but consider two adaptations:

A: responds to babies, and more strongly to bunnies
B: responds to babies only

B would seem more adaptive. Why didn't humans evolve it? A plausible explanation: A is simpler, and therefore more likely to result from a random DNA fluctuation. Is anyone doing research into which kinds of adaptations are more likely to appear this way?
Can you come up with an example that isn't AI? Most fields aren't rife with infohazards, and 20% certainty of funding the best research will just divide your impact by a factor of 5, which could still be good enough if you've got millions.

For what it's worth: given the scenario that you've at least got enough to fund multiple AI researchers and your goal is purely to fix AI, I concede your point.