All of Snowyowl's Comments + Replies

AI Box Log

Three years late, but: there doesn't even have to be an error. The Gatekeeper still loses for letting out a Friendly AI, even if it actually is Friendly.

TV's "Elementary" Tackles Friendly AI and X-Risk - "Bella" (Possible Spoilers)

There have been other sci-fi writers talking about AI and the singularity. Charles Stross, Greg Egan, arguably Cory Doctorow... I haven't seen the episode in question, so I can't say who I think they took the biggest inspiration from.

Initiation Ceremony

9/16ths of the people present are female Virtuists, and 2/16ths are male Virtuists. If you correctly calculate that 2/(9+2) of Virtuists are male, but mistakenly add 9 and 2 to get 12, you'd get one-sixth as your final answer. There might be other equivalent mistakes, but that seems the most likely to lead to the answer given.

Of course, it's irrelevant what the actual mistake was since the idea was to see if you'll let your biases sway you from the correct answer.
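The hypothesised slip can be checked in a couple of lines; a quick sketch using exact fractions (the 9/16 and 2/16 figures are from the scenario):

```python
from fractions import Fraction

female_virtuists = Fraction(9, 16)
male_virtuists = Fraction(2, 16)

# Correct calculation: what fraction of Virtuists are male?
correct = male_virtuists / (female_virtuists + male_virtuists)

# Hypothesised slip: adding 9 and 2 to get 12 in the denominator
mistaken = Fraction(2, 12)

print(correct)   # 2/11
print(mistaken)  # 1/6
```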

Causal Universes

The later Ed Stories were better.

In the first scenario the answer could depend on your chance of randomly failing to resend the CD, due to tripping and breaking your leg or something. In the second scenario there doesn't seem to be enough information to pin down a unique answer, so it could depend on many small factors, like your chance of randomly deciding to send a CD even if you didn't receive anything.

Good point, but not actually answering the question. I guess what I'm asking is: given a single use of the time machine (Primer-style, you turn it on... (read more)

cousin_it (9y, 0): My original comment had two examples: one had no coinflips, and the other had two coinflips. You seem to be talking about some other scenario which has one coinflip? The structure I have in mind is a branching tree of time, where each branch has a measure. The root (the moment before any occurrences of time travel) has measure 1, and the measure of each branch is the sum of measures of its descendants. An additional law is that measure is "conserved" through time travel, i.e. when a version of you existing in a branch with measure p travels into the past, the past branches at the point of your arrival, so that your influence is confined to a branch of measure p (which may or may not eventually flow into the branch you came from, depending on other factors). So for example if you're travelling to prevent a disaster that happened in your past, your chance of success is no higher than the chance of the disaster happening in the first place. In the scenarios I have looked at, these conditions yield enough linear equations to pin down the measure of each branch, with no need to go through Markov chains. But the general case of multiple time travelers gets kinda hard to reason about. Maybe Markov chains can give a proof for that case as well?
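As a sanity check of the conservation rule described in that comment, here is a minimal numerical sketch; the prior q and in-branch success chance s are made-up numbers, not from the thread:

```python
# Measure-conservation sketch: a traveler from a branch of measure q is
# confined to a new past branch of measure q, so the total measure of
# histories in which the disaster is prevented can never exceed q.
q = 0.3   # prior measure of the "disaster happens" branch (assumed)
s = 0.9   # chance prevention works inside the arrival branch (assumed)

prevented = q * s
print(round(prevented, 2))  # 0.27

assert prevented <= q  # success is capped by the disaster's prior measure
```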
Causal Universes

I wasn't reasoning under NSCP, just trying to pick holes in cousin_it's model.

Though I'm interested in knowing why you think that one outcome is "more likely" than any other. What determines that?

A113 (9y, +2): I said not receiving a CD from the future is the most likely because that's what usually happens. But I do have a pretty huge sampling bias of mainly talking to people who don't have time machines. I would expect "no CD" to be the most common even if you do have one, just because I feel like a closed time loop should take some effort to start. But this is probably a generalization from fiction, since if they happen in the real universe they do "just happen" with no previous cause. So I guess I can't support it well enough to justify my intuition. I will say that if I'm wrong about this, any time traveller should be prepared for these to happen all the time on totally trivial things.
Causal Universes

You make a surprisingly convincing argument for people not being real.

Bugmaster (9y, +2): I could apply the same argument to rocks, or stars, or any other physical object. They can be encoded as bit strings, too -- well, at least hypothetically speaking.

Nornagest (9y, +4): Depends what you mean by "people", and what you mean by "real", really.
Causal Universes

Last time I tried reasoning on this one I came up against an annoying divide-by-infinity problem.

Suppose you have a CD with infinite storage space - if this is not possible in your universe, use a normal CD with N bits of storage; it just makes the maths more complicated. Do the following:

  • If nothing arrives in your timeline from the future, write a 0 on the CD and send it back in time.

  • If a CD arrives from the future, read the number on it. Call this number X. Write X+1 on your own CD and send it back in time.

What is the probability distribution of t... (read more)

Eugine_Nier (9y, -1): Disagree. This example depends fundamentally on having infinite storage density. Edit: would whoever downvoted this care to provide an example with finite storage density?
cousin_it (9y, 0): These are pretty strong arguments, but maybe the idea can still be rescued by handwaving :-) In the first scenario the answer could depend on your chance of randomly failing to resend the CD, due to tripping and breaking your leg or something. In the second scenario there doesn't seem to be enough information to pin down a unique answer, so it could depend on many small factors, like your chance of randomly deciding to send a CD even if you didn't receive anything. Seconding A113's recommendation of "Be Here Now"; that story, along with the movie Primer, was my main inspiration for the model.

[anonymous] (9y, 0): In the first scenario, the answer seems to depend on the chance of you failing to resend the CD. In the second, on the chance of you deciding to send a CD even if you haven't received anything. So as long as you can't make these probabilities literally zero, I think the system can still be made to work. And yeah, seconding A113's recommendation of "Be Here Now". That story, along with the movie Primer, was my inspiration for the model.

A113 (9y, +6): The Novikov Self-Consistency Principle can help answer that. It is one of my favorite things. I don't think it was named in the post, but the concept was there. The idea is that contradictions have probability zero. So the first scenario, the one with the paradox, doesn't happen. It's like the Outcome Pump if you hit the Emergency Regret Button. Instead of saying "do the following," it should say "attempt the following." If it is one self-consistent timeline, then you will fail. I don't know why you'll fail - probably just whatever reason is least unlikely - but the probability of success is zero. The probability distribution is virtually all at "you send the same number you received" (with other probability mass for "you misread" and "transcription error" and stuff). If your experiment succeeds, then you are not dealing with a single, self-consistent universe: the Novikov principle has been falsified. The distribution of X depends on how many "previous" iterations there were, which depends on the likelihood that you do this sequence given that you receive such a CD. I think it would be a geometric distribution?

The second one is also interesting. Any number is self-consistent, so (back to Novikov) none of them are vetoed. If a CD arrives, the distribution is whatever distribution you would get if you were asked "Write a number." More likely, you don't receive a CD from the future. That's what happened today. And yesterday. And the day before. If you resolve to send the CD to yourself the previous day, then you will fail, if self-consistency applies.

Have you read HPMoR yet? I also highly recommend this short story.
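A113's geometric-distribution guess can be sanity-checked with a toy simulation. The model here is my own simplification, not from the thread: the chain of loop traversals would be infinite, except that each re-send independently fails with some probability f, so the number written on the last CD is geometrically distributed:

```python
import random

def chain_length(f, rng):
    """Number of successful re-sends before the loop finally breaks.

    Toy model (an assumption, not from the thread): the first iteration
    starts with no CD and writes 0; every send then independently fails
    with probability f, which is what terminates the chain.
    """
    n = 0
    while rng.random() > f:  # this send succeeds with probability 1 - f
        n += 1
    return n

rng = random.Random(0)
f = 0.25
samples = [chain_length(f, rng) for _ in range(100_000)]

# A geometric distribution predicts P(N = k) = f * (1 - f)**k,
# so the empirical frequency of N = 0 should come out close to f.
print(round(samples.count(0) / len(samples), 2))  # close to f = 0.25
```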
Epilogue: Atonement (8/8)

The flaw I see is: why could the super happies not make separate decisions for humanity and the baby eaters?

I don't follow. They waged a genocidal war against the babyeaters and signed an alliance with humanity. That looks like separate decisions to me.

And why meld the cultures? Humans didn't seem to care about the existence of shockingly ugly super happies.

For one, because they're symmetrists. They asked something of humanity, so it was only fair that they should give something of equal value in return. (They're annoyingly ethical in that regard.) A... (read more)

ikrase (9y, 0): Yeah... I guess I just didn't quite pick up on the whole symmetry thing. It seems like they could have, for example, immediately waged war on the baby eaters (I think it was not actually genocide but rather cultural imperialism, or forced modification so that the baby eaters would cause disutility) and THEN made the decision for the humans.

DaFranker (9y, +6): More accurately:
Causal Reference

I'd say it would make a better creepypasta than an SCP. Still, if you're fixed on the SCP genre, I'd try inverting it.

Say the Foundation discovers an SCP which appears to have mind-reading abilities. Nothing too outlandish so far; they deal with this sort of thing all the time. The only slightly odd part is that it's not totally accurate. Sometimes the thoughts it reads seem to come from an alternate universe, or perhaps the subject's deep subconscious. It's only after a considerable amount of testing that they determine the process by which the divergence is caused - and it's something almost totally innocuous, like going to sleep at an altitude of more than 40,000 feet.

Rationality Quotes November 2012

They came impressively close considering they didn't have any giant shoulders to stand on.


Yep. If nothing of what Archimedes did counts as ‘science’, you're using an overly narrow definition IMO.

DanArmak (9y, +2): Well, all of classical and medieval Europe had writing, and yet science was created much later than writing. There were many other pieces to the puzzle: naturalism, for instance.
Why Are Individual IQ Differences OK?

I think it's more the point that some of us have more dislikable alleles than others.

[This comment is no longer endorsed by its author]
Rationality Quotes June 2012

The latter one doesn't work at all, since it sounds rather like you're ignoring the very advice you're trying to give.

Rationality Quotes June 2012

I agree with Wilson's conclusions, though the quote is too short to tell if I reached this conclusion in the same way as he did.

Using several maps at once teaches you that your map can be wrong, and how to compare maps and find the best one. The more you use a map, the more you become attached to it, and the less inclined you are to experiment with other maps, or even to question whether your map is correct. This is all fine if your map is perfectly accurate, but in our flawed reality there is no such thing. And while there are no maps which state "Th... (read more)

Rationality Quotes June 2012

Or accept that each map is relevant to a different area, and don't try to apply a map to a part of the territory that it wasn't designed for.

And if you frequently need to use areas of the territory which are covered by no maps or where several maps give contradictory results, get better maps.

Eugine_Nier (10y, -1): Basically, keep around a meta-map that keeps track of which maps are good models of which parts of the territory.
Glenn Beck discusses the Singularity, cites SI researchers

Does it matter? People read Glenn Beck's books; this both raises awareness about the Singularity and makes it a more "mainstream" and popular thing to talk about.

Rationality Quotes May 2012

I think this conversation just jumped one of the sharks that swim in the waters around the island of knowledge.

Making Reasoning Obviously Locally Correct

Actually, x=y=0 still catches the same flaw, it just catches another one at the same time.

Perplexed (11y, +1): Our disagreement seems to derive from my use of the words "different flawed step" and your use of "same flaw". Eliezer suggested substituting 1 for x and y in the step (x+y)(x-y) = y(x-y), therefore x+y = y, yielding 2*0 = 1*0, therefore 2 = 1. Thus, since a true equation was transformed into a false one, the step must have been flawed. Under my suggestion, we have (0+0)(0-0) = 0(0-0), therefore 0+0 = 0, i.e. a true equation transformed into a true one. So, under Eliezer's suggested criterion (turning true to false) this is not a flawed step, though if you look carefully enough, you can still notice the flaw - a division by zero.
Rationality Quotes: March 2011

My personal philosophy in a nutshell.

Rationality Quotes: March 2011

Not all of them. Which applies to Old Testament gods too, I guess: the Bible is pretty consistent with that "no killing" thing.

moshez (11y, +9): The bible doesn't say "don't kill". In KJV times, "kill" meant what we mean by "murder", and "slay" was the neutral form (what we now mean by "kill"). (This, by the way, actually corresponds to the Hebrew version.) This post brought to you by the vast inferential distance you have from the people who wrote the KJV.

wedrifid (11y, +7): Except for the countless times when killing is outright mandated on, well, pain of death.
Rationality Quotes: March 2011

Possible corollary: I can change my reality system by moving to another planet.

Lifeism, Anti-Deathism, and Some Other Terminal-Values Rambling

How about: Given the chance, would you rather die a natural death, or relive all your life experiences first?

Normal_Anomaly (11y, +1): I like that formulation. One question: would I be able to remember having lived them while I was reliving them? Because then it would be more boring than the first time.
Lifeism, Anti-Deathism, and Some Other Terminal-Values Rambling

(1) I'm not hurting other people, only myself

But after the fork, your copy will quickly become another person, won't he? After all, he's being tortured and you're not, and he is probably very angry at you for making this decision. So I guess the question is: If I donate $1 to charity for every hour you get waterboarded, and make provisions to balance out the contributions you would have made as a free person, would you do it?

Pavitra (11y, 0): In thought experiment land... maybe. I'd have to think carefully about what value I place on myself as a special case. In practice, I don't believe that you can fully compensate for all of the unknown accomplishments I might have made to society.

wedrifid (11y, 0): Pavitra is a he? I must have guessed wrong.
Blues, Greens and abortion

Well, I was at the time I wrote the comment. I wrote it specifically to get LW's opinions on the matter. I am now pro-choice.

Normal_Anomaly (11y, 0): Oh, I get it now. Thanks. All confusion cleaned up.
Blues, Greens and abortion

And doesn't our society consider that children can't make legally binding statements until they're 16 or 18?

JoshuaZ (11y, +2): That's a) an arbitrary rule that doesn't have any justification other than history and b) not even completely true. For example, children can be witnesses in court cases, and if their parents are getting divorced their preferences in regards to custody can matter a lot. Similarly, in some jurisdictions, kids below 18 can get married if they and the parents agree.
Blues, Greens and abortion

Oh for crying out loud. Please tell me it's fixed now.

Normal_Anomaly (11y, +3): It currently says this: So, you are a pro-life person who values life over freedom, yah?
Blues, Greens and abortion

I think it's been blown rather out of proportion by political forces, so what you're describing seems very likely.

byrnema (11y, 0): I don't think so. I think some people feel very strongly about this issue independent of politics. Their strong feelings are something that politics is trying to harness/exploit.
Blues, Greens and abortion

I reject treating human life, or preservation of the human life, as a "terminal goal" that outweighs the "intermediate goal" of human freedom.

Hmm... not a viewpoint that I share, but one that I empathise with easily. I approve of freedom because it allows people to make the choices that make them happy, and because choice itself makes them happy. So freedom is valuable to me because it leads to happiness.

I can see where you're coming from though. I suppose we can just accept that our utility functions are different but not contradictory, and move on.

TheOtherDave (11y, +7): In some sense, they are contradictory. Or at least mutually opposed. That is, if you and I uncovered the Visitors' plan to forcibly prevent humans from engaging in any activity that lowers our expected lifespans, you would (I infer) endorse that plan, and I might not. Depending on the situation, I might even act to disrupt that plan, and you might act to stop me. Of course, that's not going to happen. But you might vote and donate money to support criminalizing unhealthy practices (because doing so buys life at the cost of mere freedom) while I vote/donate to support legalizing some of them (because sometimes I value freedom more than life). In any case, I'm happy to move on in a pragmatic sense, but I wanted to be clear that there really is a point of pragmatic opposition here; this isn't an entirely academic disagreement.
Blues, Greens and abortion

And a fetus lacks the sentience which makes humans so important, so killing it, while still undesirable, is less so than the loss of freedom which is the alternative. Thanks! I'm convinced again.

Blues, Greens and abortion

I don't think you meant to write "against", I think you probably meant "for" or "in favor of".

Typo, thanks for spotting it.

Also, I'm not entirely sure that Less Wrong wants to be used as a forum for politics.

I posted this on LessWrong instead of anywhere else because you can be trusted to remain unbiased to the best of your ability. I had completely forgotten that part of the wiki though; it's been a while since I actively posted on LW. Thanks for the reminder.

Pavitra (11y, +4): Part of the reason we manage to remain unbiased is because we avoid talking about things that make us stupid.
Blues, Greens and abortion

I naturally take a stance against abortion. It's easy to see why: a woman's freedom is much more important than another human's right to live

Fixed, thanks.

Normal_Anomaly (11y, +5): Um, no it's not. It currently says:
Using the Karma system to call for a show of hands - profitable?

Good point. Since karma is gained by making constructive and insightful posts, any "exploit" that let one generate a lot of karma in a short time would either be quickly reversed or result in the "karma hoarder" becoming a very helpful member of the community. I think this post is more a warning that you may lose karma from making such polls, though since it's possible to gain or lose hundreds of points by making a post to the main page this seems irrelevant.

ewang (11y, +3): [] is VERY RELEVANT.
Subjective Relativity, Time Dilation and Divergence

Are you suggesting that AIs would get bored of exploring physical space, and just spend their time thinking to themselves? Or is your point that a hyper-accelerated civilisation would be more prone to fragmentation, making different thought patterns likely to emerge, maybe resulting in a war of some sort?

If I got bored of watching a bullet fly across the room, I'd probably just go to sleep for a few milliseconds. No need to waste processor cycles on consciousness when there are NP-complete problems that need solving.

jacob_cannell (11y, +1): I'm suggesting AIs will largely inhabit the metaverse - an expanding multiverse of pervasive simulated realities that flow at their accelerated speeds. The external physical universe will be too slow and boring. I imagine that in the metaverse uploads and AIs will be doing everything humans have ever dreamed of, and far more. Yes, divergence or fragmentation seems in the cards, so to speak, because of the relative bandwidth/latency considerations. However, that doesn't necessarily imply war or instability (although nor could I rule that out). Watching the real world would be just one activity; there would be countless other worlds and realities to explore.
How to make your intuitions cost-sensitive

and I think other mathematicians I've met are generally bad with numbers

Let me add another data point to your analysis: I'm a mathematician, and a visual thinker. I'm not particularly "good with numbers", in the sense that if someone says "1000 km" I have to translate that to "the size of France" before I can continue the conversation. Similarly with other units. So I think this technique might work well for me.

I do know my times tables though.

Rationality Quotes: February 2011

Weiner has a blog? My life is even more complete.

Rationality Quotes: February 2011

IIRC, he uses this joke several times.

Sniffnoy (11y, 0): Ah, nevermind then.
Why people reject science

And if you reject science, you conclude that scientists are out to get you. The boot fits; upvoted.

On Charities and Linear Utility

Yes, but that only poses a problem if a large number of agents make large contributions at the same time. If they make individually large contributions at different times or if they spread their contributions out over a period of time, they will see the utility per dollar change and be able to adjust accordingly. Presumably some sort of equilibrium will eventually emerge.

Anyway, this is probably pretty irrelevant to the real world, though I agree that the math is interesting.

whpearson (11y, +2): With perfect information and infinitely flexible charities (that could borrow against future giving if they weren't optimal that time period), then yep. I'd agree it is irrelevant to the real world because most people aren't following the "giving everything to one charity" strategy. If everyone followed GiveWell then things might get hairy for charities as they became, and then lost being, flavour of the time period.

David_Gerard (11y, +4): You mean, like donating to a funding drive with a specific aim?
Rationality Quotes: February 2011

In Dirk Gently's universe, a number of everyday events involve hypnotism, time travel, aliens, or some combination thereof. Dirk gets to the right answer by considering those possibilities, but we probably won't.

A sealed prediction

I made a prediction with sha1sum 0000000000000000000000000000000000000000. It's the prediction that sha1sum will be broken. I'll only reveal the exact formulation once I know whether it was true or false.
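For comparison, an honest sealed prediction works like this (a sketch; the prediction text is a hypothetical placeholder): publish the digest now, reveal the text later, and anyone can verify the match. The joke above works precisely because an all-zeros digest can only match a real prediction if SHA-1's preimage resistance fails.

```python
import hashlib

prediction = b"My prediction: ..."  # hypothetical placeholder text

# Publish this commitment now; it reveals nothing about the text.
commitment = hashlib.sha1(prediction).hexdigest()
print(len(commitment))  # 40 hex characters

# Later, reveal the text; anyone can check it against the commitment.
assert hashlib.sha1(prediction).hexdigest() == commitment
```

(SHA-1 collisions have since been demonstrated in practice, so a modern sealed prediction would use SHA-256 instead.)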

Hindsight Devalues Science

Out of curiosity, which time was Yudkowsky actually telling the truth? When he said those five assertions were lies, or when he said the previous sentence was a lie? I don't want to make any guesses yet. This post broke my model; I need to get a new one before I come back.

It is a process lesson, not a lesson about facts.

But, if you have to know the facts, it is easy enough to click on the provided link to the Meyer article and find out. Which, I suppose, is another process lesson.

TheOtherDave (11y, +6): You might find it a worthwhile exercise to decide what your current model is, first. That is, how likely do you consider those five statements? Once you know that, you can research the actual data and discover how much your model needs updating, and in what directions. That way you can construct a new model that is closer to observed data. If you don't know what your current model is, that's much harder.
Omega can be replaced by amnesia

Sorry, my mistake. I misread the OP.

Omega can be replaced by amnesia

I don't think it's quite the same. The underlying mathematics are the same, but this version side-steps the philosophical and game-theoretical issues with the other (namely, acausal behaviour).

Incidentally: if you take both boxes with probability p each time you enter the room, then your expected gain is p*1000 + (1-p)*1,000,000. For maximum gain, take p=0; i.e. always take only box B.

EDIT: Assuming money is proportional to utility.
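On the comment's own simplification (money proportional to utility, and ignoring the empty first round), the expected-gain formula can be checked directly:

```python
def expected_gain(p):
    # p = probability of taking both boxes on each visit
    return p * 1000 + (1 - p) * 1_000_000

print(expected_gain(1.0))  # 1000.0     (always two-box)
print(expected_gain(0.0))  # 1000000.0  (always one-box: the maximum)
```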

Omega can be replaced by amnesia

The first time you enter the room, the boxes are both empty, so you can't ever get more than $1,000,000. But you're otherwise correct.

[anonymous] (11y, +4): No, I can get $1,001,000. If I randomly choose to take one box the first time, then both boxes will contain money the second time, when I might randomly choose to take both. (Unless randomising devices are all somehow forced to come up with the same result both times.)
Don't plan for the future

Er... yes. But I don't think it undermines my point that we are unlikely to be assimilated by aliens in the near future.

Intrapersonal negotiation

This is a very interesting read. I have, on occasion, been similarly aware of my own subsystems. I didn't like it much; there was a strong impulse to reassert a single "self", and I wouldn't be able to function normally in that state. Moreover, some parts of my psyche belonged to several subsystems at once, which made it apparently impossible to avoid bias (at least for the side that wanted to avoid bias).

In case you're interested, the split took the form of a debate between my atheist leanings, my Christian upbringing, and my rationalist "judge". In decreasing order of how much they were controlled by emotion.

Don't plan for the future

we should assume there are already a large number of unfriendly AIs in the universe, and probably in our galaxy; and that they will assimilate us within a few million years.

Let's be Bayesian about this.

Observation: Earth has not been assimilated by UFAIs at any point in the last billion years or so. Otherwise life on Earth would be detectably different.

It is unlikely that there are no/few UFAIs in our galaxy/universe, but if they do exist it is unlikely that they would not already have assimilated us.

I don't have enough information to give exact probabi... (read more)

PhilGoetz (11y, +2): Given the sort of numbers thrown about in Fermi arguments, believing the former would suggest you are outrageously overconfident in your certainty that your beliefs are correct about the likely activities of AIs. Surely the second conclusion is more reasonable?

Desrtopa (11y, 0): I think we can reasonably conclude that Earth has not been assimilated at any point in its entire existence. If it had been assimilated in the distant past, it would not have continued to develop uninfluenced for the rest of its history, unless the AI's utility function were totally indifferent to our development. So we can extend the observed period over which we have not been assimilated back to a good four and a half billion years or so. The Milky Way is old enough that intelligent life could have developed well before our solar system ever formed, so we can consider that entire span to contain opportunities for assimilation comparable to those that exist today. We could make a slightly weaker claim that no Strong AI has assimilated our portion of the galaxy since well before our solar system formed.