All of I_D_Sparse's Comments + Replies

First comes some gene A which is simple, but at least a little useful on its own, so that A increases to universality in the gene pool. Now along comes gene B, which is only useful in the presence of A, but A is reliably present in the gene pool, so there's a reliable selection pressure in favor of B. Now a modified version of A* arises, which depends on B, but doesn't break B's dependency on A/A*. Then along comes C, which depends on A* and B, and B*, which depends on A* and C.

Can anybody point me to some specific examples of this type of evolution? I'... (read more)
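
As a toy illustration of the quoted accretion pattern (my own sketch, not part of the original comment; all gene names, fitness values, and parameters here are hypothetical), one can simulate a population in which a gene only pays off once its prerequisites are already reliably present:

```python
# Toy sketch: genes confer fitness only when their prerequisites are already
# common in the population, so interdependent complexity accretes stepwise.
import random

# Hypothetical dependency structure mirroring the quoted A, B, A*, C, B* story.
PREREQS = {
    "A":  set(),           # useful on its own
    "B":  {"A"},           # only useful once A is reliably present
    "A*": {"B"},           # modified A that now depends on B
    "C":  {"A*", "B"},
    "B*": {"A*", "C"},
}

def fitness(genome, fixed):
    """A gene contributes fitness only if its prerequisites are reliably present."""
    return sum(1 for g in genome if PREREQS[g] <= fixed)

def simulate(pop_size=200, generations=2000, mutation_rate=0.01):
    population = [set() for _ in range(pop_size)]
    fixed = set()                      # genes present in nearly every genome
    for _ in range(generations):
        # mutation: occasionally toggle a random gene on or off
        for genome in population:
            if random.random() < mutation_rate:
                genome.symmetric_difference_update({random.choice(list(PREREQS))})
        # selection: resample the next generation in proportion to fitness
        weights = [1 + fitness(g, fixed) for g in population]
        population = [set(g) for g in random.choices(population, weights, k=pop_size)]
        # update which genes currently count as "reliably present"
        fixed = {g for g in PREREQS
                 if sum(g in genome for genome in population) > 0.95 * pop_size}
    return fixed

print(simulate())  # fixation tends to creep down the dependency chain: A, then B, then A*, ...
```
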

2tlhonmey2y
Sure. Your cells have two methods for copying DNA. One of them is fast and highly accurate. The other is quite slow and makes mistakes several times more often. The chemical structure of the accurate method is basically an order of magnitude more complex than that of the inaccurate one. It seems likely that the inaccurate method is the remnant of some previous stage of development.

The inaccurate method has stuck around because the error checking on the accurate method also causes the process to stall if it hits a damaged segment, at which point the strand being copied gets kicked over to the older machinery. The new method, being significantly more complex, depends for its assembly on significantly more complicated structures than the old method, structures which could not have been created without the old method or something like it.

Figuring out exactly how far down the stack of turtles goes is tricky, though, since all the evidence has long since decayed. Maybe as we get better at decoding DNA we'll find leftover scraps of some of those earlier stages lurking in the seemingly unused sections of various genomes.
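
A hedged sketch of the fallback behaviour described above (hypothetical function names and error rates, not real biochemistry): the accurate machinery stalls on a damaged segment and hands the strand off to the older, sloppier machinery.

```python
# Illustration only: an error-checking fast path that stalls on damage and
# falls back to an older, low-fidelity path. Names and rates are made up.
import random

DAMAGED = "X"  # marker for a damaged base in the template strand

def copy_accurate(template):
    """High-fidelity copier: raises (stalls) when it hits a damaged base."""
    copy = []
    for base in template:
        if base == DAMAGED:
            raise RuntimeError("stalled on damaged segment")
        copy.append(base)
    return "".join(copy)

def copy_sloppy(template, error_rate=0.01):
    """Low-fidelity copier: never stalls, but substitutes bases at random."""
    bases = "ACGT"
    return "".join(
        random.choice(bases) if (base == DAMAGED or random.random() < error_rate)
        else base
        for base in template
    )

def replicate(template):
    """Try the accurate machinery first; fall back to the old machinery on a stall."""
    try:
        return copy_accurate(template)
    except RuntimeError:
        return copy_sloppy(template)

print(replicate("ACGTACGT"))   # accurate path
print(replicate("ACGTXCGT"))   # damaged template -> sloppy fallback
```
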

If someone uses different rules than you to decide what to believe, then things that you can prove using your rules won't necessarily be provable using their rules.

0SnowSage44447y
No, really, what? What "different rules" could someone use to decide what to believe, besides "Because logic and science say so"? "Because my God said so"? "Because these tea leaves said so"?

Yes, but the idea is that a proof within one axiomatic system does not constitute a proof within another.
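
A standard example of the point (mine, not from the thread): the law of excluded middle is provable in classical logic but not in intuitionistic logic, so a proof that leans on it establishes nothing for someone reasoning in the weaker system.

```latex
% Classical logic proves excluded middle; intuitionistic logic does not.
\vdash_{\mathrm{CL}} \; P \lor \neg P
\qquad\text{whereas}\qquad
\nvdash_{\mathrm{IL}} \; P \lor \neg P
```
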

Not particularly, no. In fact, there probably is no such method - either the parties must agree to disagree (which they could honestly do if they're not all Bayesians), or they must persuade each other using rhetoric as opposed to honest, rational inquiry. I find this unfortunate.

0snewmark7y
Could you elaborate on that? Sorry, I just don't get it.

Regarding instrumental rationality: I've been wondering for a while now if "world domination" (or "world optimization", as HJPEV prefers) is feasible. I haven't entirely figured out my values yet, but whatever they turn out to be, WD/WO sure would be handy for achieving them. But even if WD/WO is a ridiculously far-fetched dream, it would still be a very good idea to know one's approximate chances of success with various possible paths to achieving one's values. I have therefore come up with the "feasibility problem." Basical... (read more)

2Bound_up7y
Am I reading this right as, basically, crack the alignment problem manually, and then finish science (then proceed to take over the world)?
0Elo7y
Can you do me a favour and separate this into paragraphs (or fix the formatting)? Thanks. The lesswrong slack has a channel called #world_domination.

What if the disagreeing parties have radical epistemological differences? Double crux seems like a good strategy for resolving disagreements between parties that have an epistemological system in common (and access to the same relevant data), because getting to the core of the matter should expose that one or both of them is making a mistake. However, between two or more parties that use entirely different epistemological systems - e.g. rationalism and empiricism, or skepticism and "faith" - double crux should, if used correctly, eventually lead ... (read more)

2gjm7y
Is there good reason to believe that any method exists that will reliably resolve epistemological disputes between parties with very different underlying assumptions?

This is an interesting idea, although I'm not sure what you mean by

It can work without people understanding why it works

Shouldn't the people learning it understand it? It doesn't really seem much like learning otherwise.

0Bound_up7y
You don't have to understand what it does or why it works (or care about those) to successfully perform it. You can put yourself in the other side's shoes without understanding the effects of doing so.

Good point - "aspiring rationalist", perhaps?

0tristanm7y
I think "aspiring rationalist" has basically the same problems, because the word (as Lumifer mentioned) doesn't carry a lot of meaning, and in this case "aspiring" doesn't specify what kind of rationalism we're talking about. In my brain I still think of LW-type rationalists as "Bayesian rationalists", and I'll probably continue to use that label at least mentally for the time being. It's not that much better, but it at least conveys that we're not simply claiming that we think correctly or that we're particularly sane people. Bayesian rationalists make a pretty hefty claim, at least relative to what is commonly believed even by philosophers (who often claim there is no well-defined concept of rationality). That claim is basically that it is possible to define rationality, and we have proof! Like, real, mathematical proof! So, whatever label you use should at least convey that there is a specific claim to be made, and that it's not an intuitively obvious claim that all "sane" people would know. Most of rationality is in fact going against how most people think.

That's a valid point - I suppose there's no harm as long as one is careful. Allowing any part of your map to gain too much autonomy, however - internalizing a belief-label - is something to avoid. That's not to say that identity is bad - there's nothing wrong with being proud that you're a fan of Lost, or of your sexual orientation, etc. There is, I believe, something wrong with being proud that you're an atheist/socialist/republican/absurdist/singularitarian (etc.).

0Dagon7y
Wait, what? Please describe the difference between the first, acceptable identities and the second, not-OK list. I think you're confusing the type of grouping or identity with the level of identification. Acknowledging membership in a group (unsure about pride, but ignore that for now) is fine. Believing that membership is exclusive and exactly describes you is a mistake.

Sorry about the text at the top, it's the wrong size for some reason. Does anybody know how to fix that?

0Viliam7y
You have 10 karma now; act quickly! :D :D :D

I must admit to some amount of silliness – the first thought I had upon stumbling onto LessWrong, some time ago, was: “wait, if probability does not exist in the territory, and we want to optimize the map to fit the territory, then shouldn’t we construct non-probabilistic maps?” Indeed, if we actually wanted our map to fit the territory, then we would not allow it to contain uncertainty – better some small chance of having the right map, than no chance, right? Of course, in actuality, we don’t believe that (p with x probability) with probability 1. We do n... (read more)

1WhySpace_duplicate0.92616921290755277y
I rather like this way of thinking. Clever intuition pump. Hmmm, I guess we're optimizing our meta-map to produce accurate maps. It's mental cartography, I guess. I like that name for it.

So, Occam's Razor and formal logic are great tools of philosophical cartographers. Scientists sometimes need a sharper instrument, so they crafted Solomonoff induction and Bayes' theorem. Formal logic is a special case of Bayesian updating, where only p=0 and p=1 values are allowed.

There are third alternatives, though. Instead of binary Boolean logic, where everything must be true or false, it might be useful to use a third value for "undefined". This is three-valued logic, or more informally, Logical Positivism. You can add more and more values, and assign them to whatever you like. At the extreme is Fuzzy Logic, where statements can have any truth value between 0 and 1. Apparently there's also something which Bayes is just a special case of, but I can't recall the name.

Of all these possible mental cartography tools, though, Bayes seems to be the most versatile. I'm only dimly aware of the ones I mentioned, and probably explained them a little wrong. Anyone care to share thoughts on these, or share others they may know? Has anyone tried to build a complete ontology out of them the way Eliezer did with Bayes? Are there other strong metaphysical theories from philosophy which don't have a formal mathematical corollary (yet)?
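
A small sketch (my own illustration, not from the comment) of the "formal logic as the p ∈ {0, 1} special case of Bayes" point: under Bayes' rule, priors of exactly 0 or 1 are fixed points, so updating on evidence reduces to ordinary deductive certainty.

```python
# Illustration only: Bayes' rule, and the observation that priors of exactly
# 0 or 1 never move, which is the sense in which two-valued logic is the
# degenerate special case of probabilistic updating.

def bayes_update(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    """P(H | E) = P(E | H) P(H) / [P(E | H) P(H) + P(E | ~H) P(~H)]."""
    numerator = likelihood_e_given_h * prior_h
    denominator = numerator + likelihood_e_given_not_h * (1 - prior_h)
    return numerator / denominator

# An ordinary probabilistic belief moves with the evidence:
print(bayes_update(0.5, 0.9, 0.2))   # ~0.818

# "Logical" beliefs (p = 0 or p = 1) are fixed points of the update:
print(bayes_update(1.0, 0.9, 0.2))   # 1.0
print(bayes_update(0.0, 0.9, 0.2))   # 0.0
```
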
2Houshalter7y
I think a concrete example is good for explaining this concept. Imagine you flip a coin and then put your hand over it before looking. The state of the coin is already fixed on one value. There is no probability or randomness involved in the real world now. The uncertainty of its value is entirely in your head.
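
To make the map/territory split in this example explicit, here is a tiny sketch of my own (purely illustrative): the coin's state is a single settled fact about the world, while the 50/50 lives only in the observer's belief state and changes when they look.

```python
# Illustration of the comment above: the territory holds one definite value;
# the 50/50 is a fact about the observer's map, not about the coin.
import random

coin = random.choice(["heads", "tails"])   # the territory: already settled

belief = {"heads": 0.5, "tails": 0.5}      # the map: uncertainty lives here

def peek():
    """Looking at the coin changes the map, not the territory."""
    return {outcome: (1.0 if outcome == coin else 0.0) for outcome in belief}

print(coin)      # fixed all along
print(belief)    # {'heads': 0.5, 'tails': 0.5}
print(peek())    # collapses to certainty after observation
```
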
2Viliam7y
Sure; including probability in the map means admitting that it is a map (or a meta-map as you called it).

I don't see it... do you need a certain amount of karma to vote?

3g_pepper7y
Yes, my understanding is that you need a certain number of karma points to vote. I think the number is 10, but I am not certain of this.
0Elo7y
Have you verified your email address?

Ah, thanks. Uh, this may be a stupid question, but how do I upvote?

1Elo7y
At the bottom left of each post or comment is a thumbs-up button.

I wrote an article, but was unable to submit it to discussion, despite trying several times. It only shows up in my drafts. Why is this, and how do I post it publicly? Sorry, I'm new here, at least so far as having an account goes - I've been a lurker for quite some time and have read the sequences.

1Elo7y
Below the edit area is a drop-down menu where you can select "discussion"; you may need 10 karma to post it.