All of Dre's Comments + Replies

I'm coming to this party rather late, but I'd like to acknowledge that I appreciated this exchange beyond just upvoting it. Seeing in-depth explanations of other people's emotions seems like the only way to counter the Typical Mind Fallacy, but such explanations are also really hard to come by. So thanks for a very levelheaded discussion.

Going off of what others have said, I'll add another reason people might satisfice with teachers.

In my experience, people agree much more about which teachers are bad than about which are good. Many of my favorite (in the sense that I learned a lot easily) teachers were disliked by other people, but almost all of those I thought were bad were widely thought of as bad. If you're not as interested in serious learning this might be less important.

So avoiding bad teachers requires a relatively small amount of information, but finding a teacher that is not just good, but good for you requires a much larger amount. So people reasonably only do the first part.

I thought this was an interesting critical take. Portions are certainly mind-killing, e.g. you can completely ignore everything he says about rich entrepreneurs, but overall it seemed sound. Especially the proving-too-much argument; the projections involve doing multiple revolutionary things, each of which would be a significant breakthrough on its own. The fact that Musk isn't putting money into doing any of those suggests it would not be as easy/cheap as predicted (not just in a "add a factor of 5" way, but in a "the current predictions are...

Some points from the article you linked:

* The cost estimates don't make much sense.
* The proposed accelerations (and changes in acceleration) would be likely to cause motion sickness in many passengers.
* Capacity would be low, and the proposed headway of 30 seconds is unrealistic.
* Musk's values for HSR energy consumption are not accurate.

If there are generally decreasing returns to measurement of a single variable, I think this is more what we would expect to see. If you've already put effort into measurement of a given variable, it will have lower information value on the margin. If you add in enough costs for switching measurements, then even the optimal strategy might spend a serious amount of time/effort pursuing lower-value measurements.
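The declining-returns point can be illustrated with a toy model (my own sketch in Python, not from the original comment): if repeated measurements of one variable have i.i.d. noise, the standard error of the estimate shrinks like 1/sqrt(n), so each additional measurement buys less information than the one before it.

```python
import math

def standard_error(sigma, n):
    """Standard error of the mean after n i.i.d. measurements with noise sigma."""
    return sigma / math.sqrt(n)

# Marginal reduction in uncertainty from one more measurement, at n = 1, 10, 100.
gains = [standard_error(1.0, n) - standard_error(1.0, n + 1) for n in (1, 10, 100)]

# The marginal gain falls steeply as n grows: diminishing returns on the margin.
assert gains[0] > gains[1] > gains[2]
```

This is only one model of "returns to measurement", of course; the comment's point goes through for any concave information-value curve.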

Further, if they hadn't even thought of some measurements they couldn't have pursued them, so they wouldn't have suffered any declining returns.

I don't think this is the primary reason, but may contribute, especially in conjunction with reasons from sibling comments.

I don't know if this is exactly what you're looking for, but the only way I've found to make philosophy of identity meaningful is to interpret it as about values. In this reading questions of personal identity are what you do/should value as "yourself".

Clearly you-in-this-moment is yourself. Do you value you-in-ten-minutes the same as yourself-now? You-in-ten-years? Simulations of you? Etc. Open Individualism (based on my cursory googling) would then say we should value everyone (at all times?) identically to ourselves. But that's clearly descriptively false, and, at least to me, it seems highly unlikely to be any sort of "true values", so it's false.

First note: I'm not disagreeing with you so much as just giving more information.

This might buy you a few bits (and lots of high-energy physics is done this way, with powers of electronvolts as the only units). But there will still be free variables that need to be set. Wikipedia claims (with a citation to this John Baez post) that there are 26 fundamental dimensionless physical constants. These, as far as we know right now, have to be hard-coded in somewhere: maybe in units, maybe in equations, but somewhere.

As a reference for anyone encountering this discussion, I thought I'd mention Natural Units explicitly. Basically, they are the systems of units that particle physicists use: attempts to normalize out as many fundamental constants as possible, exactly as you discuss. Unfortunately, you can't build a system that gets them all. You are always left with some fraction over pi, or the square root of the fine-structure constant, or something.

Professionals read the Methods section.

Ok, but I am not a professional in the vast majority of fields I want to find studies in. I would go so far as to say I'm a dilettante in many of them.

Even as a dilettante you can often dismiss the conclusions of a paper based on really obvious problems in the methodology (especially in nutrition/exercise/longevity research).

My strategy in situations like that is to try to get rid of all respect for the person. If to be offended you have to care, at least on some level, about what the person thinks then demoting them from "agent" to "complicated part of the environment" should reduce your reaction to them. You don't get offended when your computer gives you weird error messages.

Now this itself would probably be offensive to the person (just about the ultimate in thinking of them as low status), so it might not work as well when you have to interact with the...

Oddly enough, I get much angrier at my computer for not working than I ever do at other humans. Though I wouldn't say I often get "offended" by either. I wonder how common this is?

The problem is that we have to guarantee that the AI doesn't do something really bad while trying to stop these problems; what if it decides it really needs more resources suddenly, or needs to spy on everyone, even briefly? And it seems (to me at least) that stopping it from having bad side effects is pretty close to, if not equivalent to, Strong Friendliness.

I should have made that more clear: I still think Weak-Friendliness is a very difficult problem. My point is simply that we only need an AI that solves the big problems, not an AI that can do our taxes. My second point was that humans seem to already implement weak-friendliness, barring a few historical exceptions, whereas so far we've completely failed at implementing strong-friendliness. I'm using Weak vs Strong here in the sense of Weak being a "SysOP" style AI that just handles catastrophes, whereas Strong is the "ushers in the Singularity" sort that usually gets talked about here, and can do your taxes :)

I worry that this would bias the kind of policy responses we want. I obviously don't have a study or anything, but it seems that the framing of the War on Drugs and the War on Terrorism have encouraged too much violence. Which sounds like a better way to fight the War on Terror, negotiating in complicated local tribal politics or going in and killing some terrorists? Which is actually a better policy?

I don't know exactly how this would play out in a case where no violence makes sense (like the Cardiovascular Vampire). Maybe increased research as part of a "war effort" would work. But it seems to me that this framing would encourage simple and immediate solutions, which would be a serious drawback.

This feels like reading too much into it, but is

and each time the inner light pulsated, the assembly made a vroop-vroop-vroop sound that sounded oddly distant, muffled like it was coming from behind four solid walls, even though the spinning-conical-section thingy was only a meter or two away.

supposed to be something about the fourth wall?

I think you need to start by cashing out "understand" better. Certainly no physical system can simulate itself with full resolution. But there are all sorts of things we can't simulate like this. Understanding (as I would say it's more commonly used) usually involves finding out which parts of the system are "important" to whatever function you're concerned with. For example, we don't have to simulate every particle in a gas because we have gas laws. And I think most people would say that gas laws show more understanding of thermodynamic...

Typically "cashing out".
Good point. It might be that any 1-self-aware system is ω-self-aware.

Took most of it. I pressed enter accidentally after the charity questions. I would like to fill out the remainder. Is there a way I can do that without messing up the data?

Though I don't think it's that simple, because both sides are claiming that the other side is not reporting how they truly feel. One side claims that people are calling things creepy semi-arbitrarily to raise their own status, and the other claims that people are intentionally refusing to recognize creepy behavior as creepy so they don't have to stop it (or, being slightly more charitable, so they don't take a status hit for being creepy).

But all we want is an ordering of choices, and affine transformations (with a positive multiplicative constant) are order preserving.
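The order-preservation claim is easy to verify concretely. Here is a minimal sketch (my own illustration in Python, with made-up utility numbers): applying a positive affine transform u' = a*u + b with a > 0 leaves the ranking of choices unchanged.

```python
# Hypothetical utilities over three choices (illustrative values only).
utilities = {"A": 3.0, "B": -1.5, "C": 7.2}

def affine(u, a, b):
    """Positive affine transform of a utility value (requires a > 0)."""
    assert a > 0
    return a * u + b

original_order = sorted(utilities, key=utilities.get)
transformed_order = sorted(utilities, key=lambda c: affine(utilities[c], a=2.5, b=-4.0))

# The ordering of choices is identical before and after the transform.
assert original_order == transformed_order
```

Since a > 0, u1 < u2 implies a*u1 + b < a*u2 + b, so this holds for any such transform, not just this example.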

I don't think this is the right place to report this, but I don't know where the right place is, and this is closest. In the title of the page for comments for the deleted account (eg) the name of the poster has not been redacted.

Wouldn't this be a problem for tit for tat players going up against other tit for tat players (but not knowing the strategy of their opponent)?

Only if it's common knowledge that both players are human. ETA: Since I got downvoted, maybe I wasn't being clear. I think that the Warren Buffett quote applies to human psychology more than to game theory in general. If outright deception were easy, it would probably become a good strategy to keep your allies in some doubt about your intentions, as a bargaining chip. But we humans don't seem to be good at pulling that off, and so ambivalence is a strong signal of opposition.
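For reference, the tit-for-tat matchup the parent comment asks about can be sketched in a few lines (my own illustration in Python, not part of the original exchange): each player cooperates on the first round and thereafter copies the opponent's previous move, so two tit-for-tat players lock into mutual cooperation even without knowing each other's strategy.

```python
def tit_for_tat(opponent_history):
    """Cooperate first; afterwards, copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

history_a, history_b = [], []
for _ in range(10):
    move_a = tit_for_tat(history_b)
    move_b = tit_for_tat(history_a)
    history_a.append(move_a)
    history_b.append(move_b)

# Both players cooperate on every round.
assert history_a == ["C"] * 10
assert history_b == ["C"] * 10
```

The ambivalence problem arises only when one player's moves are noisy or deceptive; with two deterministic tit-for-tat players, neither ever gives the other a defection to copy.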

In the sense that there are multiple equilibria, or that there is no equilibrium for reflection?

Either would qualify, although I put a higher chance on multiple equilibria.

Not necessarily. See Chalmers's reply to Hilary Putnam, who asserted something similar, especially section 6. Basically, if we require that all of the "internal" structure of the computation be the same in the isomorphism and make a reasonable assumption about the nature of consciousness, all of the matter in the Hubble volume wouldn't be close to large enough to simulate a (human) consciousness.

I found this graph of all links between Eliezer's articles a while ago (scroll down for the majority of articles); it could be helpful. And it's generally interesting to see all the interrelations.

The thing I got out of it was that human brain processes appear to be able to do something (assign a nonzero probability to a non-computable universe) that our current formalization of general induction cannot do, and we can't really explain why.

As I understand it, it is a comparative advantage argument. More rational people are likely to have a comparative advantage in making money compared to less rational people, so the utility-maximizing setup is for more rational people to make money and pay less rational people to do the day-to-day work of implementing the charitable organization. That's the basic form of the argument, at least.

It definitely seems the other way around to me: very high rationality may help a lot in making money, but it's not a necessary condition, while it does appear to be necessary for most actually effective object-level work (at the current margin; rationalist organizations will presumably become better able to use all sorts of people over time).

You are right, I should have said something like "implementing MWI over some morality."

I don't think MWI is analogous to creating extra simultaneous copies. In MWI one maximizes the fraction of future selves experiencing good outcomes. I don't care about parallel selves, only future selves. As you say, looking back at my self-tree I see a single path, and looking forward I have expectations about future copies, but looking sideways just sounds like daydreaming, and I don't place a high marginal value on that.

In MWI, one can do nothing about the proportion of future selves experiencing good outcomes that would not have happened anyway.
Re: "In MWI one maximizes the fraction of future selves experiencing good outcomes." Note that the MWI is physics - not morality, though.
Exactly my view. A clarification: suppose Roko throws such a qGrenade (TM) at me, and I get $100. I will become angry and may attempt to inflict violence upon Roko. However, that is not because I'm sad about the 50% of parallel, untouchable universes where I'm dead. Instead, it is because Roko's behavior is strong evidence that in the future he may do dangerous things; righteous anger now (and, perhaps, violence) is simply intended to reduce the measure of my current "futures" where Roko kills me.

On a slightly different note, worrying about my "parallel" copies (or even about their futures) seems to me quite akin to worrying about my past selves. It simply doesn't mean anything. I really don't care that my past self a year ago had a toothache, except in the limited sense that it's slight evidence that I may in the future be predisposed to toothaches. I do care about the probability of my future selves having aching teeth, because I may become them.

Like Sly, I don't put much value in "versions" of me I can't interact with. (The "much" is there because, of course, I don't know with 100% certainty how the universe works, so I can't be 100% sure what I can interact with.) But my "future selves" are in a kind of interaction with me: what I do influences which of those future selves I'll become. The value assigned to them is akin to the value someone in free fall assigns to the rigidity of the surface below them: they aren't angry because (say) the pavement is hard in itself; they are angry because it implies a squishy future for themselves. On the other hand, they really don't care about the surface they've fallen from.

There is also an opportunity cost to using statistics poorly instead of properly. This cost may be entirely an externality (the person doing the test may actually benefit more from deception), but overall the world would be better off if all statistics were used correctly.

But the important (and moral) question here is "how do we count the people for utility purposes." We also need a normative way to aggregate their utilities, and one vote per person would need to be justified separately.

This scenario actually gives us a guideline for aggregating utilities. We need to prevent Dr. Evil from counting more than once. One proposal is to count people by different hours of experience, so that if I've had 300,000 hours of experience, and my clone has one hour that's different, it counts as 1/300,000 of a person. But if we go by hours of experience, we have the problem that with enough clones, Dr. Evil can amass enough hours to overwhelm Earth's current population (giving ten trillion clones each one unique hour of experience should do it). So this indicates that we need to look at the utility functions. If two entities have the same utility function, they should be counted as the same entity, no matter what different experiences they have. This way, the only way Dr. Evil will be able to aggregate enough utility is to change the utility function of his clones, and then they won't all want to do something evil. Something like using a convergent series for the utility of any one goal might work: if Dr. Evil wants to destroy the world, his clone's desire to do so counts for 1/10 of that, and the next clone's desire counts for 1/100, so he can't accumulate more than 10/9 of his original utility weight.

I don't know game theory very well, but wouldn't this only work as long as not everyone did it? Using the car example, if these contracts were common practice, you could have one for 4000 and the dealer could have one for 5000, in which case you could not reach the Pareto optimum.

In general, doesn't this infinitely regress up meta levels? Adopting precommitments is beneficial, so everyone adopts them; then pre-precommitments are beneficial... (up to some constraint from reality, like being too young, although then parents might become involved)

Is this (like...

The car example has the two actors signing contracts with opposing goals. I can't see why someone would set up beforehand a contract that prevented them from signing a prenup. The reluctance to prenuptial arrangements only appears after you've met "the one", and all that the anti-prenup actor is concerned with is the motivation of the pro-prenup actor, and signing a counter-contract won't allay that.
Schelling's introduction mentions that his work sits in a space between pure theoretical game theory and purely pragmatic or psychological bargaining. A pre-commitment is part of a bargaining process, so if you can pre-commit before the one you're bargaining with (not necessarily chronologically) you win. If you both pre-commit simultaneously, you both lose.

I think the majority of people don't evaluate AGI incentives rationally, especially failing to fully see its possibilities. Whereas this is an easy-to-imagine benefit.

Personally, pseudonymity wasn't that helpful; it's not that I didn't want to risk my good name or something, as much as that I just didn't want to be publicly wrong among intelligent people. Even if people didn't know that the comment was from me per se, they were still (hypothetically) disagreeing with my ideas, and I would still know that the post was mine. For me it was more hyperbolic discounting than rational cost-benefit analysis.

As a semi-lurker, this likely would have been very helpful for me. One problem that I had is a lack of introduction to posting. You can read everything, but it's hard to learn how to post well without practice. As others have remarked, bad posts get smacked down fairly hard, which makes it hard for people to get practice... a vicious cycle. Having this could create an area where people who are not confident enough to post to the full site could get practice and confidence.

I avoided this problem by using a hard-to-Google pseudonym, figuring that I could always make a new account or just stop posting if I majorly screwed up. I don't know if pseudonymity alone would reassure other lurkers, though; framing it as fictional roleplaying might be more useful for people who aren't me. ETA: perhaps adding a reminder to the FAQ that pseudonymity is acceptable would help? And linking the FAQ more prominently.
I wonder if experienced, high-karma posters should offer to take on apprentices, or something. Would that be valuable? Does anybody want to be my apprentice and try it out?

But doesn't this give precommitting a positive expected utility for students? Students would precommit to whatever they thought was most likely to happen, and the teacher would still expect more late papers from having this policy.

Well, they can't pick circumstances that are actually likely to come about. If such circumstances can be foreseen, the professor will have expected the student to finish the paper earlier just in case. The more likely the event, the more likely the professor is to make that determination and not accept the excuse. Presumably there is some ideal rate of unlikelihood that satisfies the professor's utility function.

I don't know that much about the topic, but aren't viruses more efficient at many things than normal cells? Could there be opportunities for improvement in current biological systems through better understanding of viruses?

Or create (or does one exist) some thread(s) that would be a standard place for basic questions. Having somewhere always open might be useful too.

OB has threading (although it doesn't seem as good or as used as on LW).

Paul Crowley:
That may be a recent innovation; it wasn't threaded in the days when Eliezer's articles appeared there.

This seems like both a wonderful idea, and not mutually exclusive with the original. Having this organization could potentially increase the credibility of the entire thing, get some underdog points with the general public (although I don't know how powerful this is for average people), and act as a backup plan.

I think they actually are mutually exclusive. The original plan calls for quickly getting lots of people to support using ads on CraigsList to support inefficient causes that are already popular with the general public. The plan to start with ads on LessWrong supporting high-value organizations such as SIAI, and then expand virally through other blogs, has a long-term goal of getting big sites such as CraigsList to join. If these big targets already have entrenched competing programs, this would be much harder. To be compatible, the original plan needs to involve convincing people to support high-value causes.

It seems interesting that lately this site has been going through a "question definitions of reality" stage (The AI in a box boxes you, this series). It does seem to follow that going far enough into materialism leads back to something similar to Cartesian questions, but it's still surprising.

Surprising? As the nature of experience and reality is the "ultimate" question, it would seem bizarre that any attempt to explain the world didn't eventually lead back to it.
Indeed. My hunch is that upon sufficiently focused intensity, the concept of material reality will fade away in a haze of immaterial distinctions. I label this hunch, 'pessimism'.

My technique to get time is to say "wait" about ten times, or until they stop and give me time to think. This probably won't work very well for comment threads, but in real life not letting the person continue generally works. Probably slightly rude, but more honest and likely less logically rude, a trade-off I can often live with.

I think the first problem we have to solve is what the burden of proof is like for this discussion.

The far view says that science and reductionism have a very good record of demystifying lots of things that were thought to be unexplainable (fire, life, evolution), so the burden is on those saying the Hard Problem does not just follow from the Easy Problems. According to this, opponents of reductionism have to provide something close to a logical inconsistency with reducing consciousness. It would require huge amounts of evidence against reduction to overcome ...

(please note that this is my first post)

I found the phrasing in terms of evidence to be somewhat confusing in this case. I think there is some equivocating on "rationality" here and that is the root of the problem.

For P=NP (if it or its negation is provable), a perfect Bayesian machine will (dis)prove it eventually. This is an absolute rationality: straight rational information processing without any heuristics or biases or anything. In this sense it is "irrational" to not be able to (dis)prove P=NP ever.

But in the sense of "is this...