All of CannibalSmith's Comments + Replies

What the Hell, Hero?
Cryonics requires a good singularity.

FAI is no good if you're dead before it comes along

I think FAI is very good even if I'm dead before it comes along.

This is not how I make choices.

Here are my five minutes: nanomachines need to carry a charge to be accelerable, right? Well, it works the other way too - they will decelerate on their own in the destination's Van Allen belts.

They don't actually decelerate in the Van Allen belts, though. Magnetic fields apply a force to a charged particle perpendicular to its direction of motion, so the power delivered, F·v, is zero: the field bends the trajectory but does no work. Also worth noting that a charged nanomachine has a much higher mass-to-charge ratio than the usual trapped particles (He2+, H+, and e-), so it would be much less affected. I was actually thinking of neutralizing the seed at the muzzle to avoid troublesome charge effects.
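The perpendicular-force point can be checked numerically. A minimal sketch - the charge, velocity, and field values below are arbitrary illustrations, not mission parameters:

```python
import numpy as np

# The magnetic part of the Lorentz force is F = q v x B, which is always
# perpendicular to the velocity. The power it delivers, F . v, is therefore
# zero: the field bends the path but cannot slow the particle down.
q = 1.6e-19                      # charge (C), illustrative
v = np.array([1e5, 2e4, 0.0])    # velocity (m/s), illustrative
B = np.array([0.0, 0.0, 3e-5])   # magnetic field (T), roughly Earth-like

F = q * np.cross(v, B)   # magnetic force on the particle
power = np.dot(F, v)     # rate of work done on the particle

print(power)  # effectively zero (up to floating-point noise)
```

Note also that the deflection scales with q/m, which is why a nanomachine-sized object with a modest charge barely notices fields that trap protons and electrons.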

That's why I'm asking how much. How much money do you want? As in, total.

The reason is that the pay-per-download model is detrimental to LessWrong's goals. As DSimon said, we want as many people as possible to be exposed to LW ideas - people who haven't heard of LW, people who would never themselves pay for such content. I want your podcasts to be played on the radio.

Instead I propose the Kickstarter model - you name your price, the LW crowd raises the money, you then have no problem giving copies away for free because you've already been paid.

FYI, The Simple Truth, as an experiment, is now CC-BY-SA licensed.
Hi Cannibal, why would you like us to release an episode under that license?

Wow, your answer exceeds all my expectations! Thank you.

I respectfully request an explanation of the merits of publishing like this when you have the entire fic written already.

Three reasons:

  • Most people who read the beta version know why I specifically want chapter 4 to be released on a Friday. It was serendipitous and unplanned when I released the beta on a Friday evening, and I'm trying to recapture that in the real release. Making sure this happened was a design goal of my schedule.

  • My second roommate tried to read the entire thing in one sitting, and complained that once things got heavy and philosophical, he felt a bit overwhelmed. I want people to have some time to mull things over and think "what would I do in that

... (read more)

Harry Potter is about the wizarding world. Avatar is about a world war. MLP is about people.

Mind blown, then blown again.

After two years of bronydom, I dreamed of ponies for the first time after reading this fic. So that's a 10 out of 10. I'm reminded of Eliezer's warning about imagining worlds so much better than reality that the act sucks out your soul.

In case you're interested in suggestions for improvement:

  • Make this fic more accessible to others, not just super smart LW bronies - the text has words that could be replaced with more common synonyms, for example.
  • Make it less silly. Computers in Earth's mantle and indestructible experience centers while possible are so
... (read more)
At a certain stage of technological development it will become practical, trivial even. I just think it would be nice if more of the action took place earlier, before such a decisive technological advantage is established.

Haven't followed Steve since about 2008. What has he been up to? Is he still newageous?

Yes, he is. Steve's idea of truth differs a bit from the lesswrong consensus.
I haven't looked into his new material in around a year, now, and even then I was focused on his old stuff (I found him through researching Uberman, I think). I believe the answer is "even more than he was then." That quote is from his 2009 book [].

Riga reporting. Will the meetup be in English?

I wouldn't expect that having the meetup in English would be a problem for most of the prospective participants.
Anybody showing up and speaking English will probably get everybody talking in English.

I have no dependents.

How do I estimate the probability of early death or disability?

What other multinational life insurance companies are there?

This [] is my favorite I've seen so far.

Now, all that's left to do is convince the world that nukes are not evil so we can build an Orion drive powered ship and colonize that planet.

Just give us a proper forum already!

Scott Alexander · 13y · 9 points
I made a poll [], one of whose options was that we should have a proper forum. Not only did ZERO of the fifty-odd people choose the forum option, but it actually got down-voted in total contravention of the poll rules.
What benefit does a forum have that this section doesn't?


or something

I was offered life insurance by an Alico agent. What do you guys think? I don't know what I don't know about life insurance, so throw me all you've got.

I would get 36k€ if I were crippled 50% or more, and 20k€ if I died or survived till 62, all for 260€ a year (that's 5% of my current salary). Does anyone know what injuries constitute what crippledness %?
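For what it's worth, the raw numbers can be sanity-checked. A rough sketch, ignoring the disability rider, and assuming 32 years of premiums until age 62 and a 3% annual discount rate - both of those are my assumptions, not figures from the offer:

```python
# Back-of-the-envelope on the endowment part of the policy only.
# ASSUMPTIONS (not from the offer): 32 years of premiums, 3% discount rate.
years = 32
premium = 260.0      # EUR per year
payout = 20_000.0    # EUR, paid at death or at age 62 either way
rate = 0.03

nominal_paid = premium * years
# Present value of the premiums, paid at the start of each year
pv_premiums = sum(premium / (1 + rate) ** t for t in range(years))
# Present value of the guaranteed payout in year 32
pv_payout = payout / (1 + rate) ** years

print(f"paid (nominal): {nominal_paid:.0f} EUR")
print(f"paid (PV):      {pv_premiums:.0f} EUR")
print(f"payout (PV):    {pv_payout:.0f} EUR")
```

Under these made-up assumptions the guaranteed payout's present value exceeds that of the premiums, but the comparison is very sensitive to the discount rate and to the years remaining, so treat it as a template, not a verdict.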


Here's a tiny bit of rationality:

The new arrivals [soldiers who'd died and gone to hell only to keep fighting] didn’t fight the demon way, for pride and honor. Rahab realized they fought for other reasons entirely, they fought to win and woe to anybody who got in their way.

If your enemy is much weaker than you, it may be rational to fight to win. If you are equals, ritualized combat is rational from a game-theoretic perspective; that's why it is so widespread in the animal kingdom, where evolutionary dynamics make populations converge on an equilibrium of behavior, and that's why it was widespread in the medieval era that this Hell is modeled on. So the passage you quoted doesn't work as a general statement about rationality, but it works pretty well as praise of America. Right now, America is the only country on Earth that can "fight to win". Other countries have to fight "honorably" lest America deny them their right of conquest.
It's funny. When describing the history of Hell, the author unwittingly explains the benefits of ritualized warfare while painting them as stupid. It seems he doesn't quite grasp how ritualized combat can be game-theoretically rational and why it occurs so often in the animal kingdom. Fighting to win is only rational when you're confident enough that you will win.
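The game-theoretic point can be made concrete with the standard Hawk-Dove model. A minimal sketch - the payoff values V and C are illustrative, not taken from either comment:

```python
# Hawk-Dove: V is the value of the contested resource, C the cost of injury.
# When C > V (injury costs more than the prize is worth), always escalating
# is NOT an equilibrium; ritualized "Dove" behavior persists in the population.
V, C = 2.0, 10.0

def payoff(me, other):
    if me == "hawk" and other == "hawk":
        return (V - C) / 2   # escalated fight: half chance of costly injury
    if me == "hawk" and other == "dove":
        return V             # dove backs down, hawk takes everything
    if me == "dove" and other == "hawk":
        return 0.0           # back down, lose the resource but stay unhurt
    return V / 2             # ritual display, resource effectively shared

def expected(strategy, p_hawk):
    """Expected payoff against a population playing hawk with prob p_hawk."""
    return p_hawk * payoff(strategy, "hawk") + (1 - p_hawk) * payoff(strategy, "dove")

p_star = V / C  # classic mixed equilibrium: play hawk with probability V/C
print(expected("hawk", p_star), expected("dove", p_star))  # equal at p_star
```

At p_star neither strategy outperforms the other, which is the sense in which "honorable" ritualized combat is an equilibrium among rough equals; fighting to win only dominates when your payoff structure differs sharply from your opponent's.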
Do you think you could describe this image to an arbitrarily talented artist and end up with an image that even looked like it was based on it? [] It's not so much "Such insolence, our ideas are so awesome they cannot be broken down by mere reductionism" as "Wow, words are really bad at describing things that are very different from what most of the people speaking the language do." I think you could make an elaborate set of equations on a Cartesian graph and come up with a drawing that looked like it, and say "fill in RGB value #zzzzzz at coordinates x,y" or whatever, but that seems like a cop-out, since it doesn't tell you anything about how Fragonard did it.

Looks like I've been using "terminal values" incorrectly.

If that distinction exists, my three formulations are not identical. Yes?

Not sure. "Inherently good" could mean "good for its own sake, not good for a purpose", but it seems like it could also mean "by its very nature, it's (instrumentally) good". And the fact that you said "gather or preserve" makes me want to come up with a value system that only cares about gathering or only cares about preserving. I'm not sure one couldn't find similarly sized semantic holes in anything, but there they are regardless.
I think so. "All information is inherently good" could mean "inherently instrumentally good", and the fact that you said "gather or preserve" makes me want to come up with a value system that only cares about gathering or only cares about preserving.
Your 3 formulations should be identical. Here's your argument: My first thought when I read this is, Why are we gathering information? The answer? Because we may need it in the future. What will we need it for? Presumably to attain some other (terminal) end, since if information was a terminal end the argument wouldn't be "we may need it in the future," it would be "we need it." Maybe I am just misunderstanding you?

Well, all agents are resource-constrained. But I get what you mean.

My mom complains I take things too literally. Now I know what she means. :)

Seriously though, I mean readable, usable, computable information - the kind which can conceivably be turned into knowledge. I could also say: we want to lossily compress the Universe, like an mp3, with as good a ratio as possible.

Neither. I guess I shouldn't have used the term "terminal value". See the elaboration - how do you think I should generalize and summarize it?

It sounds like you're trying to say information is an instrumental value, without exception.

We cannot know what information we might need in the future, therefore we must gather as much as we can and preserve all of it. Especially since much (most?) of it cannot be recreated on demand.

That's not an argument for information as a terminal value since it depends on the consequences of information, but it's a decent argument for gathering and preserving information.

Help me, LessWrong. I want to build a case for

  1. Information is a terminal value without exception.
  2. All information is inherently good.
  3. We must gather and preserve information for its own sake.

These phrasings should mean the exact same thing. Correct me if they don't.

Elaboration: Most people readily agree that most information is good most of the time. I want to see if I can go all the way and build a convincing argument that all information is good all of the time, or as close to it as I can get. That misuse of information is problem about the misuser a... (read more)

Scott Alexander · 13y · 7 points
You probably don't mean trivial information eg the position of every oxygen atom in my room at this exact moment. But if you eliminate trivial information and concentrate only on useful information, you've turned it into a circular argument - all useful information is inherently useful. Further, saying that we "must" gather and preserve information ignores opportunity costs. Sure, anything might eventually turn out to be useful, but at some point we have to say the resources invested in disk space would be better used somewhere else. It sounds more like you're trying to argue that information can never be evil, but you can't even state that meaningfully without making a similar error. Certainly giving information to certain people can be evil (for example, giving Hitler the information on how to make a nuclear bomb). See this discussion [] for why I think calling something like "information" good is a bad idea.
Reading what you have said in this thread, I was confident that you were committing the fallacy of rationalization. Your statement is simple, and it seems like reality can be made to fit it, so you do so. But your name looked familiar, and so I clicked on it, and found that your karma is higher than mine, which seems to be strong evidence that you would not commit such a fallacy, using phrases so revealing as "I want to build a case for . . .". Your words say you are rationalizing; your karma says you are not. I am confused.
At first I thought you were saying that you wanted the comments to be flat rather than threaded; I figured that that was because you wanted inbox notification of each new reply. Then I saw you replying to replies yourself, so I was less sure. I take it you actually mean that (for example) I shouldn't include remarks on the main topic in this comment, or vice versa?
  • Storing information has an inherent cost in resources, and some information might be so meaningless that no matter how abundant those resources are, there will always be a better or more interesting use for them. I'm not sure if that's true.
  • "Information" might be an unnatural category [] in the way you're using it. Why are the bits encoded in an animal's DNA worth more than the bits encoded in the structure of a particular rock? Doesn't taking any action erase some information about the state the world was in before that action?
  • EY might call information bad that prevents pleasant surprise [].
A straightforward counter-argument is that forgetting, i.e. erasing information, is a valuable habit to acquire; some "information" is of little value and we would burden our minds uselessly, perhaps to the point of paralysis, by hanging on to every trivial detail. If that holds for an individual mind, it could perhaps hold for a society's collective records; perhaps not all of YouTube as it exists now needs to be preserved for an indefinite future, and a portion of it may be safely forgotten.
May I suggest adding to your list of test cases the blueprints for a non-Friendly AI? By that I mean any program which is expected to be a General Intelligence but which isn't formally or rigorously proven to be Friendly. (I still haven't come to definite conclusions about the plausibility of an AI intelligence explosion, therefore about the urgency of FAI research and that of banning or discouraging the dissemination of info leading to non-F, but given this blog's history it feels as if this test case should definitely be on the list.)
Some counter-arguments. What exactly is the pro-information position here? Because I'm against this being produced, and agree with bans on its distribution and possession as a way of hurting its purveyors. The way such laws are enforced, at least in America, is sometimes disgraceful, but I don't think it is an inherently bad policy. Biological, computer, and memetic? The last one looks like an open-and-shut case to me. If learning information (being infected by a meme) can damage me, then I should think that information should be destroyed. Maybe we want some record around so that we can identify them to protect people in the future? Maybe this stuff is too speculative to flesh out. For the IQ issue, here is my read of the status quo: most people believe the science says there is no innate racial difference in IQ. This is probably what it says, but if we really wanted to know for sure we'd need to gather more data. If we gathered more data there are three possible outcomes: (1) we find out conclusively there is no innate IQ difference; most people's beliefs do not change, and an impassioned minority continues to assert that there is an IQ difference and questions the science, perpetuating the controversy - socially the status quo, but some people paying attention have actually learned something. (2) We don't learn anything conclusive one way or the other; the status quo continues. (3) We learn there are innate racial differences in IQ; all hell breaks loose.
I don't make arguments for terminal values. I assert them. Arguments that make any (epistemic) sense in this instance would be references to evidence to something that represents the value system (eg. neurological, behavioural or introspective observations about the relevant brain).
I'll attempt a counter-example. It's not definitive, but at least it makes me question your notion: does a spy want to know the purpose of his mission? What if (s)he gets caught? Is it easier for them to get through an interrogation not knowing the answers to the questions?
Information takes work to produce, to filter, and to receive, and more work to evaluate it and (if genuinely new) to understand it. There's a strong case that information isn't a terminal value because it's not the only thing people need to do with their time. You wouldn't want your inbox filled with all the things anyone believes might be information for you. Another case of limiting information: rules about what juries are allowed to know before they come to a verdict. There might be an important difference between forbidding censorship vs. having information as a terminal value.
I very much doubt that we have enough understanding of human values / preferences / utility functions to say that anything makes the list, in any capacity, without exception. In this case, I think that information is useful as an instrumental value, but not as a terminal value in and of itself. It may lie on the path to terminal values in enough instances (the vast majority), and be such a major part of realizing those values, that a resource-constrained reasoning agent might treat it like a terminal value, just to save effort. I look at it like a genie bottle: nearly anything you want could be satisfied with it, or would be made much easier with its use, but the genie isn't what you really want.
What would an unfriendly superintelligence that wanted to hack your brain say to you? Does knowing the answer to that have positive value in your utility function? That said, I do think information is a terminal value, at least in my utility function; but I think an exception must be made for mind-damaging truths, if such truths exist.
Do you mean that information already is a terminal value for (most) humans? Arguing that something should be a terminal value makes only a limited amount of sense, terminal values usually don't need reasons, though they have (evolutionary, cultural etc.) causes.
First of all, I recommend clearing away the moral language (value, good, and must) unless you want certain perennial moral controversies to muddy the waters. Example phrasings of the case you may be trying to make: I suppose this is true. If you've ever done a jigsaw puzzle, you can probably think of a counterexample to this.
One thing you may want to address is what you mean by "gather and preserve information." The maximum amount of information possible to know about the universe is presently stored and encoded as the universe. The information that's useful to us is reductions and simplifications of this information, which can only be stored by destroying some of the original set of information.

It's 2 PM and I still haven't done any work today. Thanks. :(

A tangent: if we found extinct life on Mars, it would provide precious extra motivation to go there which is a good thing.

If LessWrong is based on Reddit, and Reddit can spawn subreddits at will, why can't LessWrong do the same?

Reddit's codebase has been heavily modified for Less Wrong, and subreddits can't be introduced without breaking the site. Seriously. People have tried to do this. It's hard.

I value time spent in flow times the amount of I/O between me and the external world.

"Time spent in flow" is a technical term for having a good time.

By I/O (input/output) I mean both information and actions. Talking to people, reading books, playing multiplayer computer games, building pyramids, writing software to be used by other people are examples of high impact of me on the world and/or high impact of the world on me. On the other hand, getting stoned (or, wireheaded) and daydreaming has low interaction with the external world. Some of it is okay though because it's an experience I can talk to other people about.

Where's the "neither" option? I don't like open threads, but neither do I like going off-site. Why can't we have a sub-lesswrong?

Because this is hard to implement. If you want to implement it yourself, please do so.

Actually, fuck it.

Come on, guys, we can do better than image macros.

False. Nothing can ever be better than this thread.

What, really? Wait, what!? Uh.

  1. Could you please answer my question directly in the form of "yes/no, because"?
  2. Do you mean by subjective probability the fact(?) that probability is about the map and not the territory?
  3. If yes, what does it have to do with anthropics?
  4. If yes, what! Contrary?? I learned about it here!
  5. If no, I'm completely confused.

Also, dear reader, vote parent up or down to tell me whether he's correct about you.

No, because it isn't meaningless. No, you can get it from mathematics - even basic arithmetic. Infinite series of events, on the other hand, are hard to come by. I dismiss many examples of (bad) anthropic reasoning because they assume that the probability of their subjective experience is what you get if you draw a random head out of a jar of all things that meet some criteria of self-awareness. Kind of. Read Probability is subjectively objective []. The frequentist dogma was the 'contrary' part, not the 'map/territory' stuff. Probability doesn't come from statistics, and it definitely applies to single events.
No, probability is not "meaningless for singular events". We can meaningfully discuss, in Bayesian terms, the probability of drawing a red ball from a jar, even if that jar will be destroyed after the single draw. The probabilities are assessments about our state of knowledge. Therefore no, we cannot dismiss all anthropic reasoning for the reasons you suggested. If you got "probability is meaningless for singular events" from what you learned here, either you are confused, or I am. (Possibly both.)
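The red-ball point can be made concrete. A small sketch, with a hypothetical coin-flip prior over the jar's contents (the setup is invented for illustration):

```python
from fractions import Fraction

# A one-shot Bayesian credence rather than a long-run frequency. Suppose
# (hypothetically) the jar was filled by a fair coin flip: heads -> 3 red,
# 1 blue; tails -> 1 red, 3 blue. The jar is smashed after a single draw,
# yet P(red) is perfectly well defined: marginalize over jar uncertainty.
priors = {"mostly_red": Fraction(1, 2), "mostly_blue": Fraction(1, 2)}
p_red_given = {"mostly_red": Fraction(3, 4), "mostly_blue": Fraction(1, 4)}

p_red = sum(priors[jar] * p_red_given[jar] for jar in priors)
print(p_red)  # 1/2

# We can even update on the single draw: after seeing one red ball,
posterior_mostly_red = priors["mostly_red"] * p_red_given["mostly_red"] / p_red
print(posterior_mostly_red)  # 3/4
```

No repetition is required anywhere; the numbers describe a state of knowledge about one unrepeatable event, which is exactly the Bayesian reading.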

Can we dismiss all anthropic reasoning by saying that probability is meaningless for singular events? That is, the only way to obtain probability is from statistics, and I cannot run repeated experiments of when, where, and as what I exist.

It seems to me that the disagreement here is because you're looking at different parts of the problem. It might well be said that you can't have a well-calibrated prior for an event that never happened before, if that entails that you actually don't know anything about it (and that might be what you're thinking of). On the other hand, you should be able to assign a probability for any event, even if the number mostly represents your ignorance.
That's entirely contrary to the Bayesian program that this site broadly endorses: throwing out the subjective probability baby with the anthropic bath water, as it were.

When rational argument fails, fall back to dark arts. If that fails, fall back to damage control (discredit him in front of others). All that assuming it's worth the trouble.

Finally get to play Crysis.

Write a real time ray tracer.

Explaining it would ruin the funnies. Also, Google. Also, inevitably, somebody else did the job for me.

Just curious: who downvoted this, and why? I found it amusing, and actually a pretty decent suggestion. It bothers me that there seems to be an anti-humor bias here... it's been stated that this is justified in order to keep LW from devolving into a social, rather than intellectual forum, and I guess I can understand that... but I don't understand why a comment which is actually germane to the parent's question, but just happens to also be mildly amusing, should warrant a downvote.

The voting system is of utmost importance and I'd rather be inconvenienced by the current system than have a karma-free zone on this site.

On a serious note, what is your (the reader's) favorite argument against a forum?

("I voted you down because this is not a meta thread." is also a valid response.)

General risk-aversion; LW is a city on a hill, and the only one, so we should be very wary of fiddling unnecessarily.

Are you guys still not tired of trying to shoehorn a reddit into a forum?

We could just start a forum and stop complaining about it.
I'm tired of it, I'd like to get a real subreddit enabled here as soon as possible.
I don't understand the question. What are we doing that you describe this way, and why do you expect us to be tired of it?

ಠ_ಠ ....ashdkfrflguhhhhhhhhh

Debug output: when I first saw your request, I was in a very, what's the word, eager(?) mood and started writing, then realized it would be very long, then I wanted to chat and brag about coding skills, then later my mood was lower than average, and you said "yes", and I was like, groan, and... aaanyway, my Skype is cannibalsmith. If you catch me, I'll probably be delighted to talk about programming. Yeah, so... uh...

:-) Thanks!

You should clarify that you're talking about epistemic rationality a lot sooner than the 8th paragraph.

when they are after the truth rather than after winning a debate.

To win at life you must be willing to lose at debate.

Please improve upon this slogan/proverb.

  • Learn to enjoy being proven wrong, or you'll never learn anything.
  • If you never lose an argument, then you need to find some better arguments.
  • Winning an argument is satisfying; losing an argument is productive.