All of Cole Killian's Comments + Replies

Unlike a maximiser, which will attempt to squeeze the universe for every drop of utility that it can, a satisficer will be content when it reaches a certain level of expected utility (a satisficer that is content with a certain level of utility is simply a maximiser with a bounded utility function).

Does it make sense to claim that a satisficer will be content when it reaches a certain level of expected utility, though? Some satisficers may work that way, but they don't all need to; thresholding on expected utility in particular is somewhat arbitrary.

Instead, you could hav... (read more)

3Stuart_Armstrong5mo
If U is the utility and u is the value that it needs to be above, define a new utility V, which is 1 if and only if U>u and is 0 otherwise. This is a well-defined utility function, and the design you described is exactly equivalent to being an expected V-maximiser.
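Spelled out (just restating the definition above in the same symbols U, u, and V):

$$
V(x) \;=\; \begin{cases} 1 & \text{if } U(x) > u \\ 0 & \text{otherwise} \end{cases}
\qquad\Longrightarrow\qquad
\mathbb{E}[V] \;=\; \Pr\left(U > u\right),
$$

so an expected V-maximiser is simply an agent maximising the probability that U clears the threshold u.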

Is there a reason for not link-posting all Overcoming Bias posts to LessWrong?

Could you elaborate on the reasoning behind the high bar for alignment forum membership?

4Ruby7mo
The Alignment Forum is supposed to be a very high signal-to-noise place for Alignment content, where researchers can trust that all content they read will be material they're interested in seeing (even at the expense of some false negatives).

I looked briefly into Ziz. My conclusion is that she had some interesting ideas I hadn't heard before, and some completely ridiculous ideas. I couldn't find her definition of "good" or "bad" or the idea of tiling the future lightcone with copies of herself.

Thanks for reminding me about that scene from the Matrix. Gave it a look on YouTube. Awesome movie.

I'm wondering, how do you look at the question of what we want to tile the future lightcone with?

Yeah, I like the way you describe it.

I'll check out his writings on the history of Buddhism and meditation, thanks.

I agree it can be seen as a destructive meme. At the same time, I wonder why it has spread so little. Maybe because it doesn't have a very evangelical property. People who become infected with it might not have much of a desire to pass it on to others.

Hey, thanks for the link Richard, that was an interesting read. There definitely seem to be some similarities.

I was actually thinking about what we want to tile the future lightcone with the other day. This was the progression I saw:

  • Conventional Morality :: Do what feels right without thinking much about it.
  • Utilitarianism I :: The atomic unit of "goodness" and "badness" is the valence of human experience. The valence of experience across all humans matters equally. The suffering of a child in Africa matters just as much as the suffering of my neighbor.
  • U
... (read more)

I took a look at Meaningness a few months ago but couldn't really get into it. It felt a bit too far from rationality and very hand-wavy.

Did you find Meaningness valuable? I may take another look.

1sig8mo
I think Meaningness has some interesting discussion on what "post-modernity" can mean in terms of epistemology and (scientific) thinking: https://metarationality.com/stem-fluidity-bridge. I think he writes well (unlike OP, sorry :D) and gets to his point with relatively little text. I think his STEM-fluidity-postmodernism idea is on the more useful side, out of those I've seen in the whole rationality scene.

Meaningness is a great example of the art of deferral. Chapman promises much, but always there are preliminaries he has to explain first, and preliminaries to those preliminaries, and the promised meat course never shows up. I have to wonder if the endless hors d'oeuvres and pre-banquet entertainments are the whole of it, and the promises are just the carrot on the stick, jam tomorrow to get people to keep reading.

I have found him illuminating on the history of Buddhism and meditation.

2Mitchell_Porter8mo
No, I couldn't get into it either. 

Your assessment seems very accurate!

It didn't occur to me that there are probably many more people like him than I realize. I'm not sure I've met any. Have you?

2Mitchell_Porter8mo
I've met at least one person who was just giving away their independent writings on the nature of enlightenment.  You might also want to look at "Meaningness", which has been influential among "post-rationalists". 

My response is to say that sometimes it doesn't all add up to normality. Sometimes you learn something which renders your previous way of living obsolete.

It's similar to the idea of thinking of yourself as having free will even if that isn't the case: it can be comforting to think of yourself as having continuity of consciousness even if that isn't the case either.

Wei Dai posts here (https://www.lesswrong.com/posts/uXxoLPKAdunq6Lm3s/beware-selective-nihilism) suggesting that we "keep all of our (potential/apparent) values intact until we have a better handle on how w... (read more)

Thanks for writing this post.

You mention that:

only conscious beings will ask themselves why they are conscious

But at the same time, you support epiphenomenalism, whereby consciousness has no effect on reality.

This seems like a contradiction. Why would only conscious things discuss consciousness if consciousness has no effect on reality?

Also, what do you think about Eliezer's Zombies post? https://www.lesswrong.com/posts/7DmA3yWwa6AT5jFXt/zombies-redacted

I think we mostly agree.

That's only clear if you define "long enough" in a perverse way. For any finite sequence of bets, this is positive value. Read SBF's response more closely - maybe you have an ENORMOUSLY valuable existence.

I agree that it's positive expected value when calculated as the arithmetic mean. Even so, I think most humans would be reluctant to play the game even a single time.

tl;dr: it depends on whether utility is linear or sublinear in aggregation. Either way, you have to accept some odd conclusions.

I agree it's mostly a question of... (read more)

2Dagon10mo
Probably, but precision matters. Mixing up mean vs sum when talking about different quantities of lives is confusing. We do agree that it's all about how to convert to utilities. I'm not sure we agree on whether 2x the number of equal-value lives is 2x the utility. I say no, many Utilitarians say yes (one of the reasons I don't consider myself Utilitarian).

Again, precision in description matters - that game maximizes log wealth, presumed to be close to linear utility. And it's not clear that it shows what you think - it never leaves you nothing, just very often a small fraction of your current wealth, and sometimes astronomical wealth. I think I'd play that game quite a bit, at least until my utility curve for money flattened even more than simple log, due to the fact that I'm at least in part a satisficer rather than an optimizer on that dimension. Oh, and only if I could trust the randomizer and counterparty to actually pay out, which becomes impossible in the real world pretty quickly.

But that only shows that other factors in the calculation interfere at extreme values, not that the underlying optimization (maximize utility, and convert resources to utility according to your goals/preferences/beliefs) is wrong.
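To make the linear-vs-log distinction concrete, here's a small sketch. The payoff numbers are hypothetical, chosen only so the two utility functions disagree; they aren't the parameters of any game discussed in the thread:

```python
import math

# Hypothetical multiplicative bet: with probability p your wealth is
# multiplied by `win`, otherwise by `lose`. Illustrative numbers only.
p, win, lose = 0.5, 3.0, 0.1

# Linear utility: the arithmetic mean of the wealth multiplier.
linear_ev = p * win + (1 - p) * lose                    # 1.55 > 1: take the bet
# Log utility: the expected log-growth of wealth per round (Kelly-style).
log_ev = p * math.log(win) + (1 - p) * math.log(lose)   # about -0.60 < 0: decline

print(f"arithmetic-mean multiplier per round: {linear_ev:.2f}")
print(f"expected log-growth per round:        {log_ev:.2f}")

# A linear-utility maximiser accepts this bet every time; a log-utility
# bettor declines, because repeated play drives wealth toward zero with
# probability approaching 1 even though the arithmetic mean keeps growing.
```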

I posted a V2 of the post here: https://www.lesswrong.com/posts/WYGp9Kwd9FEjq4PKM/sbf-pascal-s-mugging-and-a-proposed-solution. I'm curious what you think.

The new approach is to also incorporate (with more details in the post):

  • A bounded utility function to account for human indifference to changes in utility above or below a certain point.
  • A log or sub-log utility function to account for human risk aversion (a rough sketch of how the two might combine is below).
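A minimal sketch of how those two adjustments could work together. The cap, the plain-log form, and the example numbers are my own illustrative assumptions, not the exact construction from the post:

```python
import math

def bounded_log_utility(payoff: float, cap: float = 1e9) -> float:
    """Toy utility: logarithmic in the payoff (risk aversion), clipped at
    `cap` (indifference to further gains beyond some point). Both the cap
    value and the plain-log form are illustrative assumptions."""
    return math.log(1 + min(payoff, cap))   # the +1 keeps utility(0) == 0

def expected_utility(outcomes) -> float:
    """`outcomes` is an iterable of (probability, payoff) pairs."""
    return sum(p * bounded_log_utility(x) for p, x in outcomes)

# A Pascal's-mugging-style gamble: tiny probability of an astronomically
# large payoff. Under this bounded, log-shaped utility the gamble's expected
# utility is negligible, so it no longer dominates the decision.
mugging = [(1e-12, 1e30), (1 - 1e-12, 0.0)]
safe    = [(1.0, 100.0)]

print(expected_utility(mugging))   # ~2e-11
print(expected_utility(safe))      # ~4.6
```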

Good point, thanks for the comment. I'll think about it some more and get back to you.

Gotcha, thanks, yeah I should have elaborated more.

I think the general consensus is that it's very unlikely bitcoin inevitably takes a monopoly position in the cryptocurrency scene, which is what the bitcoin maxi position is referring to here.

Vitalik goes into reasons here: https://blog.ethereum.org/2014/11/20/bitcoin-maximalism-currency-platform-network-effects/

But I could have been more charitable to the bitcoin maxi position.

2Wei Dai1y
Have you seen Vitalik's recent In Defense of Bitcoin Maximalism? There's some speculation that it was an April Fools joke, which Vitalik addressed in an interview at https://youtu.be/m4vYEn_Twog?t=975. In short, it's not his "primary opinion" but he sees "benefits in both sides". Seems to contradict your "People no longer saw bitcoin maximalism as a defensible position" statement.

Yes good point, I agree that it's bad advice to ask people to dispose of beliefs which actually work.

I'd also say that disposing of the belief that "an environment of multiple competing cryptocurrencies is undesirable, that it is wrong to launch 'yet another coin', and that it is both righteous and inevitable that the Bitcoin currency comes to take a monopoly position in the cryptocurrency scene", does not forbid somebody from investing in bitcoin based on some other belief.

I think the advice of asking somebody to find better grounding for beliefs is dange... (read more)

Gotcha, yeah, I hadn't considered those terms; my thinking was that there isn't an established standard name for this phenomenon. I haven't seen a standard term for it on LessWrong, but you would think that if such a term existed it would be found here pretty easily. Of the ones you list, I think "orthodox" fits the best, but they are all highly overloaded and generally used with religious connotations.

I agree that "maxi" doesn't ring a bell outside of the crypto space right now, but my thinking was to introduce it as a term to represent this idea of "belief in bel... (read more)