All of zero_call's Comments + Replies

It's an interesting idea but I feel very skeptical about the generic plan. Personally, a revulsion for organized/standardized education is what drove me to look at things like Less Wrong in the first place. I think this is fairly common in the community, with many people interested in discussion of akrasia and self-work habits.

Also, considering the informality of ideas like "I want to be a good rationalist", I would expect this sort of thing to be much more open-ended and unstructured anyways. It doesn't seem to fit with the idea of a rigid syst... (read more)

If you mouse over those sections on the application, messages should be appearing on the right saying "Don't be discouraged if you haven't read much of Less Wrong, we have other ways of gauging your knowledge during the interview" and "Again, don't be discouraged if these are unfamiliar". If I had put it on the application, it would be to gather information about the applicant's devotion to research on the subject of rationality, not their actual knowledge (reading does not imply understanding). Still, the current best of those ideas that have been actively discussed for thousands of years is what went into the creation of the sequences and (hopefully, I haven't read much of them) the foundation of the SIAI literature. They did that on purpose; it would have been rather silly for them not to.

I've also read it several times before that physicists and scientists tend to achieve their best results by their mid-thirties. But I don't think the characterization necessarily works for physics/math/etc. like it does for baseball and athletics. There's just a major qualitative difference there -- e.g., athletes are forced to retire fairly young, whereas teachers are very rarely forced to retire until they are really nearing the end of their viable lifespan. Although I do agree that in something like physics, there is also a component of "mental ath... (read more)

In the 419991 times this simulation has run, players have won $1811922. And by "won" I mean they have won back $1811922 of the $419991 they spent (431%).
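Read literally, the quoted percentage checks out. A minimal sketch of the bookkeeping, assuming each play cost $1 (the variable names and that assumption are mine; the underlying game isn't specified in the comment):

```python
# Reconstructing the payout figure quoted above; names and the
# $1-per-play assumption are hypothetical, not from the simulation.
total_plays = 419_991        # assuming each play cost $1
total_winnings = 1_811_922   # total paid back to players

return_pct = total_winnings / total_plays * 100
print(f"players won back {return_pct:.0f}% of what they spent")
# → players won back 431% of what they spent
```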

Mating is good. I am somewhat baffled as to why the "PUA" discussion has had a strong negative connotation. As you say, there's a ton of benefits for everyone involved, and it serves as a successful, easy-to-test model for many related skill sets. Personally I think the hesitancy to talk about mating and mating development is likely no more than a sort of vestigial organ of society's ancient associations with religion. It still seems "improper" in ordinary society to talk about how to get into someone's pants. But I see no reason why the sort of thing like "pick-up-artistry" must be unethical or wrong.

It's more than religion. It has components of gender and class memetic warfare, not to mention just plain old signaling.

I am somewhat baffled as to why the "PUA" discussion has had a strong negative connotation. As you say, there's a ton of benefits for everyone involved

There are at least two groups of people who potentially stand to lose from widespread discussion of PUA: women, who may fear that they will be duped into choosing low-quality mates by men emulating the behaviours they use to identify high-quality mates, and men who are already successful with women, who may fear increased competition.

These sources of antipathy to PUA are rarely consciously express... (read more)

Yes -- I agree strongly with this analysis.

The whole "happiness limited by shyness/social awkwardness which results in no dates" stereotype does not apply to many people here.

How's that?

Because some people are in happy long term relationships, where picking new people up or dating new people are not very important.

Hypertext reading has a strong potential, but it also has negative aspects that you don't have as much with standard books. For example, it's much easier to get distracted or side-tracked with a lot of secondary information that might not even be very important.

It's not that books take longer to produce, it's that books just tend to have higher quality, and a corollary of that is that they frequently take longer to produce. Personally I feel fairly certain that the average quality of my online reading is substantially lower than offline reading.

Any problem in government can only be suboptimal relative to a different set of policies, and as such, criticism of government should come with an argument that a solution is possible.

I think most criticism is based on the implicit understanding that a solution is possible. Otherwise you are basically hiding behind a shield of nihilism or political anarchy or something. It seems overly restrictive to say that any criticism without an auxiliary solution is worthless. Just because you see a problem doesn't mean you are able to see a solution. I guess it's a bit like asking all voters to also be politicians.

I think you've touched on something really important when you mention how it is easier to be a strong critic than to have a real, working solution. This is a common retort against strong criticism -- "Oh, but you don't know how to make it any better" -- and it seems to be something of a logical fallacy.

There is a certain sense of energy and inspiration behind good criticism which I've always been fond of. This is important, because criticism seems to be almost always non-conformist or pessimistic in a certain sense, so I think you kind of need encouragement to remind yourself that criticism is generally originating from good intentions.

One of the heartening/depressing parts of "Bridging the Chasm between Two Cultures" by Karla McLaren related to this principle:

I would argue that charity is just plain good, and you don't need to take something simple and kind and turn it into an inconclusive exercise in societal interpretation.

Are you familiar with the Hansonian view of signaling?

This sort of brings to my mind Pirsig's discussions about problem solving in ZATAOMM. You get that feeling of confusion when you are looking at a new problem, but that feeling is actually a really natural, important part of the process. I think the strangest thing to me is that this feeling tends to occur in a kind of painful way -- there is some stress associated with the confusion. But as you say, and as Pirsig says, that stress is really a positive indication of the maturation of an understanding.

That's funny. Well, perhaps Foucault may not have been very accurate -- I'm not at all qualified to comment. But the book still stands as an amazing work of intellectual writing.

Some fiction....

  1. The Color of Magic (Discworld series) -- Terry Pratchett -- pretty funny, top British author. The first book (this one) seems to be unmatched by at least the next five in the series, but there are like 30 in the series total, so...

  2. Neutron Star -- Larry Niven -- a collection of short stories in Larry Niven's fascinating future.

  3. A Fire Upon the Deep -- Vernor Vinge -- just the best picture of a future filled with GAIs that I have read.

  4. Neuromancer -- William Gibson -- incredible action/cyberpunk story, incredible characters. Gets pretty

... (read more)
I liked the end of Neuromancer (and the rest). "Fire" is definitely good.
The mindhacks blog makes some claims that Foucault's "Madness and Civilization" was a terrible example of scholarship.

Pirsig's book is brilliant... I recommend that to everyone as well...

AFAIK there are currently no major projects attempting to send contact signals around the galaxy (let alone the universe). Our signals may be reaching Vega or some of the nearest star systems, but definitely not much farther. It's not prohibitively difficult to broadcast out to, say, a 1000-lightyear-radius ball around Earth, but you're still talking about an antenna far larger than anything currently existing.

Right now the SETI program is essentially focused on detection, not broadcasting. Broadcasting is a much more expensive problem. Detection is f... (read more)
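The difficulty above comes from the inverse-square law. A back-of-envelope sketch of the received flux (the 1 MW figure and the isotropic assumption are mine, not from the comment):

```python
import math

# Flux from a hypothetical isotropic 1 MW beacon at 1000 lightyears.
# High-gain antennas exist precisely because isotropic broadcasting
# at this range delivers an almost immeasurably weak signal.
LIGHT_YEAR_M = 9.461e15            # metres per lightyear
distance_m = 1000 * LIGHT_YEAR_M   # 1000 ly in metres
power_w = 1e6                      # assumed transmitter power

flux = power_w / (4 * math.pi * distance_m**2)  # W/m^2 at the receiver
print(f"{flux:.1e} W/m^2")         # on the order of 1e-33 W/m^2
```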

Signals get sent out fairly often, though.

I don't think this is much of an insight, to be honest. The "anthropic" interpretation is a statement that the universe requires self-consistency. Which is, let's say, not surprising.

The purpose of natural selection, fine-tuning of physical constants in our universe, and of countless other detailed coincidences (1) was to create me. (Or, for the readers of this comment, to create you)

My feeling is that this is a statement about the English language. This is not a statement about the universe.

Note that one could just as easily come up with a two page article about a "Futuristic Life Meme" which represents the cryonics supporters' sense of being threatened by death.

The analysis of a new, emerging science deserves critique. From what I can tell, this particular critique is essentially ad-hominem, in that it attempts to attack a belief based on the characteristics of the individuals, rather than their arguments.

It trivializes the fact that there are reasons for being reluctant to invest in cryonics. Lastly, this writing conflates cryonics skepticism with unwillingness to invest.

Is there anything particularly remarkable about being threatened by death? I would find that strange, given that threatening someone with death has historically been such a popular form of intimidation.

Cryonics has been "emerging" for over 40 years. People still choose not to think about it, despite lots of exposure to the concept. There has to be a reason. If you think there are rational reasons, I'm interested in hearing what they are. Contrary to popular expectation, it's not particularly expensive.

But the real problem I'm attacking is unwillingness to think about the issue. I haven't invested money in this myself yet. I have invested the time and energy to understand it, and come to the conclusion that I should endorse and support it. There's no particularly good humanitarian or ethical reason I can think of not to.

We've argued a lot about the advisability of cryonics. This article takes that advisability as a given and attempts to further discussion among those who agree. If you don't agree, that's fine, but it's OK for an article to move on sometimes.

You're right - it's worth noting that the article does not describe all cryonics objectors.

My take is basically: if their understanding is so deep, why exactly is their teaching skill so brittle that no one can follow the inferential paths they trace out? Why can't they switch to the infinite other paths that a Level 2 understanding enables them to see? If they can't, that would suggest a lack of depth to their understanding.

I would LOVE to agree with this statement, as it justifies my criticism of poor teachers who IMO are (not usually maliciously) putting their students through hell. However, I don't think it's obvious, or I think maybe you... (read more)

I've thought about this some, and I think I see your point now. I would phrase it this way: It's possible for a "Level 3 savant" to exist. A Level 3 savant, let's posit, has a very deeply connected model of reality, and their excellent truth-detecting procedure allows them to internally repair loss of knowledge (perhaps below the level of their conscious awareness). Like an expert (under the popular definition), and like a Level 1 savant, they perform well within their field. But this person differs in that they can also perform well in tracing out where its grounding assumptions go wrong -- except that they "just have all the answers" but can't explain, and don't know, where the answers came from.

So here's what it would look like: Any problem you pose in the field (like an anomalous result), they immediately say, "look at factor X", and it's usually correct. They even tell you to check critical aspects of sensors, or identify circularity in the literature that grounds the field (i.e. sources which generate false knowledge by excessively citing each other), even though most in the field might not even think about or know how all those sensors work. All they can tell you is, "I don't know, you told me X, and I immediately figured it had to be a problem with Y misinterpreting Z. I don't know how Z relates to W, or if W directly relates to X, I just know that Y and Z were the problem."

I would agree that there's no contradiction in the existence of such a person. I would just say that in order to get this level of skill you have to accomplish so many subgoals that it's very unlikely, just as it's hard to make something act and look like a human without also making it conscious. (Obvious disclaimer: I don't think my case is as solid as the one against P-zombies.)

Ah, OK, I read your article. I think that's an admirable task to try to classify or identify the levels of understanding. However, I'm not sure I am convinced by your categorization. It seems to me that many of these "Level 1 savants" as you call them are quite capable of fitting their understanding with the rest of reality. Actually it seems like the claim of "Level 1 understanding" basically trivializes that understanding. Yet many of these people who are bad teachers have a very nontrivial understanding -- else I don't think this wou... (read more)

Thanks for reading it and giving me feedback. I'm interested in your claim: Well, they can fit it in the sense that they (over a typical problem set) can match inputs with (what reality deems) the right outputs. But, as I've defined the level, they don't know how those inputs and outputs relate to more distantly-connected aspects of reality. I had a discussion with others about this point recently. My take is basically: if their understanding is so deep, why exactly is their teaching skill so brittle that no one can follow the inferential paths they trace out? Why can't they switch to the infinite other paths that a Level 2 understanding enables them to see? If they can't, that would suggest a lack of depth to their understanding. And regarding the archetypal "deep understanding, poor teacher" you have in mind, do you envision that they could, say, trace out all the assumptions that could account for an anomalous result, starting with the most tenuous, and continuing outside their subfield? If not, I would call that falling short of Level 2.

Suppose that inventing a recursively self improving AI is tantamount to solving a grand mathematical problem, similar in difficulty to the Riemann hypothesis, etc. Let's call it the RSI theorem.

This theorem would then constitute the primary obstacle in the development of a "true" strong AI. Other AI systems could be developed, for example, by simulating a human brain at 10,000x speed, but these sorts of systems would not capture the spirit (or capability) of a truly recursively self-improving super intelligence.

Do you disagree? Or, how likely is this scenario, and what are the consequences? How hard would the "RSI theorem" be?

This seems like a bad analogy. If you could simulate a group of smart humans going at 10,000 times normal speed, say copies of Steven Chu or of Terry Tao, I'd expect that they'd be able to figure out how to self-improve pretty quickly. In about six months they would have had about 5,000 years' worth of time to think about things. The human brain isn't a great structure for recursive self-improvement (while some aspects are highly modular, other aspects are very much not so), but given enough time one could work on improving that architecture.

I will reply to this in the sense of

"do you believe you are aware of the inferential connections between your expertise and layperson-level knowledge?",

since I am not so familiar with the formalism of a "Level 2" understanding.

My uninteresting, simple answer is: yes.

My philosophical answer is that I find the entire question to be very interesting and strange. That is, the relationship between teaching and understanding is quite strange IMO. There are many people who are poor teachers but who excel in their discipline. It seems to ... (read more)

A lot of the questions you pose, including the definition of the Level 2 formalism, are addressed in the article I linked (and wrote). I classify those who can do something well but not explain or understand the connections from the inputs and outputs to the rest of the world, to be at a Level 1 understanding. It's certainly an accomplishment, but I agree with you that it's missing something: the ability to recognize where it fits in with the rest of reality (Level 2) and the command of a reliable truth-detecting procedure that can "repair" gaps in knowledge as they arise (Level 3). "Level 1 savants" are certainly doing something very well, but that something is not a deep understanding. Rather, they are in the position of a computer that can transform inputs into the right outputs, but do nothing more with them. Or a cat, which can fall from great heights without injury, but not know why its method works. (Yes, this comment seems a bit internally repetitive.)

Cool... that's really close to where I work. I'll probably make it. Thanks for taking the initiative guys.

I'm not sure if I buy that the "frequentist" explanations (as in the disease testing example) are best characterized by being frequentist -- it seems to me that they are just stating the problem and the data in a more relevant way to the question that's being asked. Without those extra statements, you have to decode the information down from a more abstract level.

For example: I've heard vague rumors that GWF Hegel concludes that the Prussian State (under which, coincidentally, he lived) was the best form of human existence. I've also heard that Descartes "proves" that God exists. Now, whether or not Hegel or Descartes may have had any valid insights, this is enough to tell me that it's not worth my time to go looking for them.

This is an understandable sentiment, but it's pretty harsh. Everybody makes mistakes -- there is no such thing as a perfect scholar, or perfect author. And I think that when Desca... (read more)

I understand that there is work supporting the idea that cryonics/regeneration/etc. will eventually be successful. However, I don't feel the need to respond to this work very directly, because this work, after all, is very indirect, in the sense that it is only making plausibility arguments. As a cryonics skeptic, I am not attempting to rule out the plausibility or possibility of cryonics. After all, it seems fairly plausible that this stuff will eventually get worked out, as with the usual arguments for technological advancement. As a cryonics skeptic, I ... (read more)

Paul Crowley · 4 points · 13y:
You're saying the level of confidence to look for is one that would be appropriate for any new medical treatment, rather than (say) the confidence you'd look for when making a change in foreign policy. One reason we're so demanding in the realm of medical evidence is because we can be, and since it can be life and death, if we can be we probably should be. In the case of a pill, we can do an RCT, providing high quality, repeatable statistical evidence on its efficacy - so anyone proposing I take a pill who doesn't have an RCT backing them up is a bit suspicious. In the case of cryonics, I hope it's clear that it's not because of a lack of respect for evidence that we're unable to show you an RCT. There is absolutely insufficient evidence to have the same confidence in cryonics as we do in, say, ibuprofen for reducing inflammation. Because of this lack of evidence, we have great uncertainty. What we're trying to ask is, in the face of that uncertainty, how will you make a decision? Is it always appropriate in the medical sphere to choose inaction whenever the evidence in favour of action is only weak and circumstantial?
That's very much not the case. If one has a hypothesis we don't care which method of bringing evidence for that hypothesis you do as long as it is actual evidence. For example, the neutrino was originally hypothesized based on very indirect evidence. That evidence then became progressively stronger. But at no point did anyone assert that they wouldn't accept neutrinos unless a specific experiment was performed.
In this case, indirect evidence is the only kind of evidence you can hope to obtain, so your current conclusion has to be formed based on indirect evidence. And this applies to any conclusion. If you believe that cryonics won't work, this is also based only on indirect evidence. It has to be. Now, in most cases, the prior of a given idea not working is high enough, so we have a "by default" argument, believing something is impossible until proven possible. But this is a matter of framing: is it possible to implement a manned expedition to Mars, say? Is it possible to travel faster than light? The difference is always in intuitive estimation of detail that goes into the question, and what exactly is being asked matters. The "impossible by default" heuristic is a good tool, but has apparent points of failure, and you have to be aware of these where obtaining explicit evidence is not expected.
So following on from my other comment, I say to you: go ahead. Perform the experiment of whether "believe nothing except that which has been shown by fairly direct evidence" or "incorporate all available evidence to arrive at probabilistic beliefs and then calculate expected utilities" is best. Go ahead and perform it on yourself by not getting cryo. If you do this, and cryo works, then I will be revived and know that you and many others just got proved wrong catastrophically. If cryo fails, then I will be 80 cents a day poorer and I'll be just as dead as you.
You have a point about the epistemology at work in the sciences. But the founders of this rationality movement actually think that they know better, that they are smarter than the average scientist, and that they can prognosticate probabilistically about the future. And really, I think that in this case, it isn't too hard to be smarter than a scientist; scientists know a lot about science and mostly nothing about philosophy of science/epistemology. Scientists (especially biologists) mostly still work with a yes/no epistemology rather than a probabilistic one and so underperform versus a good probabilistic reasoner.
Investment and sociological acceptance seem to me separate from the purely physical and biological aspects. For example, I am signaling optimism about the future very strongly by signing up for cryonics. Even an extremely low probability rating for cryonics working would not change this fact. But in any case, the specific proofs needed for at least some guarded acceptance in physics and biology are already available. We know cells survive in large numbers, memories are structural (not dependent on electric fields), and vitrification limits damage (to the point that a kidney can survive in working condition). If you want to be a scientific skeptic of cryonics, you need to begin by being literate in these facts and give reasons why you are still skeptical. The demand for reanimation of a whole human or complex animal is far in excess of what is necessary to prove this as a good bet based on physical and biological data. The cost of ignoring the evidence in favor of cryonics that can accrue before that particular demonstration is vastly disproportionate to the cost of a false positive.
Ok. So you are talking about the entire process. What then is your objection to the refrigeration aspect? Do you think that the information is irretrievably destroyed? Do you think that the information is not destroyed but that the body is too far damaged to ever be restored in any useful way? Do you think that the preservation process does not do a good enough job at preventing ongoing damage? Do you think that the probability of thawing due to the catastrophic events or economic problems is too high? Or do you have some other objection that I have not listed?
I am something of a cryo-skeptic because I think at best all you will get is a copy of the person who was frozen. I am much more interested in SENS-style rejuvenation efforts. But I am curious about the source of your (and Sam Adams') skepticism. Are you of the opinion that it would be impossible to set all those frozen molecules in motion again, or impossible to make a living copy of the frozen original? Do you doubt that the important information (memory, personality...?) survives the freezing process?

Contemporary science and technology are showing that nature permits atoms to be manipulated with extraordinary precision. Of course, your molecular structure is a lot more complex and dynamic than that of Carbon Monoxide Man. But then we would hardly need to get every atom back to exactly where it was, in order to make something a lot like you. We would just need tissues grown from cells containing your genome, and then arranged in a structure grossly resembling your current body. The brain is presumably the place where certain fine details matter the most. But I really don't see what is to stop us from growing a decerebrated body in your image (having first synthesized a copy of your genome a la Craig Venter), and then carefully filling its skull, layer by layer, with synthetic neural tissue made in imitation of the microstructure of your frozen brain, assuming that we have it available. That is a procedure for making a copy of you; but I would tend to think that something which reanimates the frozen carcass is also possible, albeit more difficult to describe.

These things are very high technology by current standards, but how to do them is not an unfathomable mystery. It's of a level of difficulty more akin to constructing an inhabited space station that will orbit Neptune. A big engineering challenge.

On the other hand I suggest I understand you perfectly and have attempted to respond to the core objection I have with your comment. That is, it is a demand for unobtainable evidence.

The entire purpose of cryonics is to freeze a person pending the availability of future technologies. If that technology were, in fact, available now, it would be evidence that cryonics was unnecessary.

You make the claim:

The entire argument in favor of cryonics is based on projections for future discoveries and technologies, which any cryonics proponent will admit. Thus their a

... (read more)
What ought we discuss if not evidence?
Cryonics either works, or does not; there's no way for it to work "on current evidence" but not work on some other set of evidence. Perhaps you mean that cryonics hasn't worked yet, but this is also what you would expect to see if it would eventually work. In part, this seems to merely be a disagreement over the definition of "evidence".
Expectation can only be obtained based on currently available indirect evidence.

Could you break down your objection?

EDIT to look at it from another angle: it's clear that the first serve in this discussion has to come from the cryonicists, since we're the ones trying to change people's minds. But cryonicists have served and served and served; there's a massive literature arguing in favour, of which I'd pick out Ben Best's "Scientific Justification of Cryonics Practice". If you don't feel that anything in that literature is enough to show that cryonics might be a good idea, you're going to have to make some sort of actual r... (read more)

That's because it's absolutely clear that cryonics (on current evidence) does not work.

Doesn't work? It quite clearly gets heads and freezes them in a static state. You appear to be demanding evidence regarding functional medical nanotechnology, a rather different problem.

This kind of rebuttal absolutely fails, because it simply doesn't address the point. You're taking the OP completely out of context. The OP is arguing against cryonics evidence in the context of having to dish out substantial money. The pro-cryonics LW community asserts that you must pay money if you believe in cryonics, since it's the only rational decision, or some such logic. In response, critics (such as the OP) contend that cryonics evidence isn't sufficient to justify paying money. This is totally different from asserting that you don't believe in cr... (read more)

Perhaps Sam can clarify his remarks but that's strongly not what I got from the context. That argument has some validity, but he actually wrote: He didn't say one needs to assign low value to the probability that it will happen but had a problem assigning "any value" due to a "total lack of evidence." That sounds like a much stronger claim especially when he then refers to making a "bet" by comparison on life extension. If that is what Sam meant, I'd be particularly curious what the monetary level would be where he'd sign up.

Incidentally, the claim that because a technology does not yet exist we must assign it a very low probability of arising seems almost trivially false. The largest hard drives today are in the 2-4 terabyte range. I'm pretty willing to bet that we will see 10 terabyte hard drives pretty soon and almost certainly will eventually. The only major ways for this not to happen are a very large scale catastrophe or the discovery of new technologies that render large hard drives unnecessary. Thus, the tiny chance of this not occurring is even smaller if one instead talks about compact data storage objects in the 10 TB range.

One can use other examples which are slightly less trivial. Currently, the best Go programs are in the mid to low dan rankings. But I don't think anyone seriously thinks that because no one has demonstrated a better program that the probability of such programs arising is therefore very low. The argument type used fails even more badly when one is talking about something like cryonics where we don't even need the technology soon, it just needs to eventually exist.

This argument might be different if Sam focused on technical aspects that would make cryonics difficult in the long-term or if one focused on sociological aspects (which he did briefly touch upon but not in any detail). But the argument being dealt with by my comment seems to focus simply on the claimed lack of "evidence" due to the technology not yet existing. That st
Mm, but that doesn't seem to be what SamAdams said. He didn't just say the probability was low enough for it to not be worth it, he said "There is a total lack of evidence in support of resurrecting a frozen human because its never been done and as of now nobody knows if it is even possible." Admittedly, he did say immediately afterward, "So essentially cryonics is a way to spend money on a one in a million chance you might be revived in the future." So that seems to be a little inconsistent? I would think that if things really were as he described before, one in a million would be quite an overestimate.
So would it be right to say your objection is based on the expected utility of working cryonics instead of its probability?
Take your beliefs seriously. If you believe something, you must accept all consequences; if you don't accept some consequences, you must stop believing. The alternative is hypocrisy, compartmentalization, curiosity-stopping. Paying money is a decision made based on your beliefs, not the other way around. You are not allowed to change your beliefs based on the decisions your beliefs suggest, only on evidence pertaining to the beliefs themselves.

You might think about the zen approach, in which the proposal of solutions is deliberately held off, or treated differently. This is a very common idea, a response to the tendency of solutions to suggest themselves so ubiquitously and prematurely.

Without any way of authenticating the donations, I find this to be rather silly.

I'd also like these donations to be authenticated, but I'm willing to wait if necessary. Here's step 2, including the new "ETA" part, from my original comment []: Would you be willing to match my third $60 if I could give you better evidence that I actually matched the first two? If so, I'll try to get some.

I just saw this and realized I basically just expanded on this above.

I wasn't familiar with this description of "world states", but it sounds interesting, yes. I take it that positing "I am a thing that thinks" is the same as asserting K(E). In asserting K(K(E)), I assert that I know that I know that I am a thing that thinks. If this understanding is incorrect, my following logic doesn't apply.

I would argue that K(K(E)) is actually a necessary condition for K(E). Because if I don't know that I know proposition A, then I don't know proposition A.

Edit/Revised: I think all you have to do is realize that &... (read more)

K(A) is always a stronger statement than A because if you know K(A) you necessarily know A. (To get the terms clear: a "strong" statement corresponds to a smaller set of world states than a "weak" one.) It is debatable whether K(K(A)) is always equivalent to K(A) for human beings. I need to think about it more.

Um, if you're a brain in a vat, then any "brain" you perceive in the real world, like on a "real world" MRI, is nothing but a fictitious sensory perception that the vat is effectively tricking you into thinking is your brain. If you're a brain in a vat, you have nothing to tell you that what you perceive as your brain is actually really your brain. It may be hard to implement the brain-in-a-vat scenario, but once implemented, it's absolutely undetectable.

People don't mention anything like altering the brain itself.

Altering the brain itself? The brain itself is the only thing there is to alter. The only thing that exists in the brain in the vat example is the brain, the vat, and whatever controls the vat. The "human experiences" are just the outcome of an alteration on the brain, e.g., by hooking up electrodes. I really have no idea how else you imagine this is working.

FWIW, my original comment talked about a realistic version of brain in a vat, not the philosophical idealized model. But now that I thought about it some more, the idealized model is seeming harder and harder to implement. The robots who take care of my vat must possess lots of equipment besides electrodes! A hammer, boxing gloves, some cannabis extract, a faster-than-light transmitter so I can't measure the round-trip signal delay... Think about this: what if I went to a doctor and asked them to do an MRI scan as I thought about stuff? Or hooked some electrodes to my head and asked a friend to stimulate my neurons, telling me which ones only afterward? Bottom line, I could be an actual human in an actual world, or a completely simulated human in a completely simulated world, but any in-between situations - like brains in vats - can be detected pretty easily.

You don't seem to be familiar with this concept.

You could posit a brain in the vat where the controllers also have lots of actual drugs or electromagnetic stimulants ready to go to duplicate those effects on the brain,

This is the entire point of the brain in the vat idea. It's not that "you could posit it", you do posit it. The external world as we experience is utterly and completely controlled by the vat. If we correlate "experienced brain damage" (in our world) with "reduced mental faculties", that just means that the v... (read more)

Hmm. Your comment has brought to my attention an issue I hadn't thought of before. Are you familiar with Aumann's knowledge operators? In brief, he posits an all-encompassing set of world states that describe your state of mind as well as everything else. Events are subsets of world states, and the knowledge operator K transforms an event E into another event K(E): "I know that E". Note that the operator's output is of the same type as its input - a subset of the all-encompassing universe of discourse - and so it's natural to try iterating the operator, obtaining K(K(E)) and so on.

Which brings me to my question. Let E be the event "you are a thing that thinks", or "you exist". You have read Descartes and know how to logically deduce E. My question is, do you also know that K(E)? K(K(E))? These are stronger statements than E - smaller subsets of the universe of discourse - so they could help you learn more about the external world. The first few iterations imply that you have functioning memory and reason, at the very least. Or maybe you could take the other horn of the dilemma: admit that you know E but deny knowing that you know it. That would be pretty awesome!
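The knowledge operator described above can be sketched concretely. This is a minimal toy model assuming Aumann's standard partitional setup (the particular states and partition below are my own illustration, not anything from the comment): the agent's information is a partition of the world states, and K(E) is the set of states whose entire partition cell sits inside E.

```python
# Toy sketch of Aumann's knowledge operator on a finite set of world
# states, under the standard partitional model (an assumed idealization).

def make_K(partition):
    """Build the knowledge operator K induced by a partition of the states.

    K(E) is the set of states at which the agent knows E: those states
    whose whole partition cell is contained in E.
    """
    def K(E):
        known = set()
        for cell in partition:
            if cell <= E:      # the agent can't distinguish states in a cell,
                known |= cell  # so E is known exactly where the cell fits in E
        return frozenset(known)
    return K

# Hypothetical example: four world states, agent can't tell 1 from 2.
partition = [frozenset({1, 2}), frozenset({3}), frozenset({4})]
K = make_K(partition)

E = frozenset({1, 2, 3})
assert K(E) <= E          # knowledge implies truth: K(E) lies inside E
assert K(K(E)) == K(E)    # iterating K adds nothing in this model
```

Note that under the partitional idealization K(K(E)) = K(E) always holds, so the question of whether K(K(E)) is strictly stronger than K(E) only becomes live once that idealization is dropped, e.g. for actual human beings, as the replies below debate.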
When I've read about the brain-in-the-vat as an example before, they normally just talk about sensory aspects. People don't mention anything like altering the brain itself. So at minimum, cousin_it has picked up a hole in how this is frequently described. Considering how much philosophy is complete nonsense, I'd think that LWers would be more careful about using the argument that something in philosophy is widely known to be not resolvable. I agree that if, when people are talking about the brain-in-the-vat, they mean one where the vat is able to alter the brain itself in the process, then this is not resolvable.

How to check that you aren't a brain in a vat: inflict some minor brain damage on yourself. If it influences your mind's workings as predicted by neurology, now you know your brain is physically here, not in a vat somewhere.

No, there's no way of knowing that you're not being tricked. If your perception changes and your perception of your brain changes, that just means that the vat is tricking the brain to perceive that.

The "brain in the vat" idea takes its power from the fact that the vat controller (or the vat itself) can cause you to perceive anything it wants.

If you are a brain in a vat then that should alter sensory perception. It shouldn't alter cognitive processes (say, ability to add numbers, or to spell, or the like). You could posit a brain in the vat where the controllers also have lots of actual drugs or electromagnetic stimulants ready to go to duplicate those effects on the brain, but the point is that we have data about how the external world relates to us that isn't purely sensory.

That's a flagrant misinterpretation. The OP's intention was to say that innocent people don't get put in prison intentionally.

I sometimes get various ideas for inventions, but I'm not sure what to do with them, as they are often unrelated to my work, and I don't really possess the craftsmanship capabilities to make prototypes and market them or investigate them on my own. Does anyone have experience and/or recommendations for going about selling or profiting from these ideas?

Sell patents is right, but only if your invention is something that sells and markets itself because it is so obviously awesome and not just an incremental improvement on an existing invention. Even if it's an incredibly awesome invention, you may be better off raising money and doing it all yourself. I'm generally good at telling people whether or not their ideas are any good -- if you want to talk privately sometime, let me know.
Sell patents. (or more specifically, patent your invention and wait until someone else wants to use it. If this seems unethical, remember you will usually be blocking big evil corporations, not other inventors, and that the big evil corporations would always do the same thing to you if they could.)

This comment just seems really harsh to me... I understand what you're saying but surely the author doesn't have bad intentions here...

This seems very well written and I'd like to compliment you in that regard. I find the shaman example amusing and also very fun to read.

For Sophie, if she has a large data set, then her theory should be able to predict a data set for the same experimental configuration, and then the two data sets would be compared. That is the obvious standard and I'm not sure why it's not permitted here. Perhaps you were trying to emphasize Sophie's desire to go on and test her theory on different experimental parameters, etc.

The original shaman example works very w... (read more)

There's a much better, simpler reason to reject cryonics: it isn't proven. There might be some good signs and indications, but it's still rather murky in there. That being said, it's rather clear from prior discussion that most people in this forum believe that it will work. I find it slightly absurd, to be honest. You can talk a lot about uncertainties and supporting evidence and burden of proof and so on, but the simple fact remains the same. There is no proof cryonics will work, either right now, 20, or 50 years in the future. I hate to sound so cynical... (read more)

This is a very bad argument. First, all claims are probabilistic, so it isn't even clear what you mean by proof. Second of all, I could under the exact same logic say that one shouldn't try anything that involves technology that doesn't exist yet because we don't know if it will actually work. So the argument has to fail.
That's a widely acknowledged fact. And, if you make that your actual reason [] for rejecting cryonics, there are some implications that follow from that: for instance, that we should be investing massively more in research aiming to provide proof than we currently are. The arguments we tend to hear are more along the lines of "it's not proven, it's an expensive eccentricity, it's morally wrong, and besides even if it were proved to work I don't believe I'd wake up as me so I wouldn't want it".
I have no idea whether it will work, but right now, the only alternative is death. I actually think it's unlikely that people preserved now will ever be revived, more for social and economic reasons than technical ones.

This looks somewhat similar to what I was thinking and the attempt at formalization seems helpful. But it's hard for me to be sure. It's hard for me to understand the conceptual meaning and implications of it. What are your own thoughts on your formalization there?

I've also recently found something interesting where people denote the criterion of mathematical existence as freedom from contradiction. This can be found on pg. 5 of Tegmark here, attributed to Hilbert.

This looks disturbingly similar to my root idea and makes me want to do some reading on this... (read more)

I'm inclined to think that it doesn't really show anything metaphysically significant. When we encode facts about S as propositions, we are conceptually slicing and dicing the-way-S-is into discrete features for our map of S. No matter how we had sliced up the-way-S-is, we would have gotten a collection of features encoded as propositions. Finer or coarser slicings would have given us more or less specific propositions (i.e., propositions that pick out minuter details). When we put those propositions back together with propositional formulas, we are, in some sense, recombining some of the features to describe a finer or coarser fact about the system. The fact that T is closed under all the formulas in C just says that, when we slice up the-way-S-is, and then recombine some of the slices, what we get is just another slice of the-way-S-is. In other words, my remark about T and C is just part of what it means to pick out particular features of a physical system.

B meant "This rock is heavier than this pencil." So, "B or ~B" means "Either this rock is heavier than this pencil, or this rock is not heavier than this pencil." Surely that is something that I can say truthfully regardless of where the pencil's weight lies. So I don't understand why you say that we can't say "B or ~B" if the pencil's weight lies in a certain range.

My idea was that the rock weighs 1.5 plus/minus sigma. If the pencil then weighs 1.5 plus/minus sigma, then you can't compare their weights with absol... (read more)

Little note to self:

I guess my original idea (i.e., the idea I had in my very first question in the open thread) was that the physical systems can be phrased in the form of tautologies. Now, I don't know enough about mathematical logic, but I guess my intuition was/is telling me that if you have a system which is completely described by tautologies, then by (hypothetically) fine-graining these tautologies to cover all options and then breaking the tautologies into alternative theorems, we have an entire "mathematical structure" (i.e., propositio... (read more)

Tell me whether the following seems to capture the spirit of your observation: Let C be the collection of all propositional formulas that are provably true in the propositional calculus whenever you assume that each of their atomic propositions are true. In other words, C contains exactly those formulas that get a "T" in the row of their truth-tables where all atomic propositions get a "T". Note that C contains all tautologies, but it also contains the formula A => B, because A => B is true when both A and B are true. However, C does not contain A => ~B, because this formula is false when both A and B are true.

Now consider some physical system S, and let T be the collection of all true assertions about S. Note that T depends on the physical system that you are considering, but C does not. The elements of C depend only on the rules of the propositional calculus.

Maybe the observation that you are getting at is the following: For any actual physical system S, we have that T is closed under all of the formulas in C. That is, given f in C, and given A, B, . . . in T, we have that the proposition f(A, B, . . .) is also in T. This is remarkable, because T depends on S, while C does not. Does that look like what you are trying to say?
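The membership condition defining C can be checked mechanically with truth tables. A quick sketch (the encoding of formulas as Python functions of booleans is my own, not from the comment):

```python
from itertools import product

def implies(p, q):
    """Material implication: p => q."""
    return (not p) or q

def in_C(formula, n_atoms):
    """f is in C iff f comes out true when all of its atoms are true."""
    return formula(*([True] * n_atoms))

def is_tautology(formula, n_atoms):
    """f is a tautology iff f is true in every row of its truth table."""
    return all(formula(*row) for row in product([False, True], repeat=n_atoms))

a_implies_b = lambda a, b: implies(a, b)          # A => B
a_implies_not_b = lambda a, b: implies(a, not b)  # A => ~B

assert in_C(a_implies_b, 2)              # A => B is in C, as claimed above
assert not in_C(a_implies_not_b, 2)      # A => ~B is not in C
assert not is_tautology(a_implies_b, 2)  # so C is strictly larger than the
                                         # set of tautologies
```

Since every tautology is true in every truth-table row, it is in particular true in the all-true row, which is why C contains all tautologies and more.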
Though the word "tautology" is often used to refer to statements like (A v ~A), in mathematical logic any true statement is a tautology. Are you talking about the distinction between axioms and derived theorems in a formal system?

Sorry, I caught that myself earlier and added a sidenote, but you must have read before I finished:

Side-note: I suppose these particular examples are all tautological so they don't quite show the full richness of a logical system. However, it would be easy to make theorems, such as "if A AND C, then B" (where C could be specified similar to A or B.) Then we would see not only tautologies but also theorems and other propositions which are all encoded as we would expect from a typical logical system.

Edit: Or, sorry, just to complete, in case yo... (read more)

First, I, at least, am glad that you're asking these questions. Even on purely selfish grounds, it's giving me an opportunity to clarify my own thoughts to myself. Now, I'm having a hard time understanding each of your paragraphs above.

B meant "This rock is heavier than this pencil." So, "B or ~B" means "Either this rock is heavier than this pencil, or this rock is not heavier than this pencil." Surely that is something that I can say truthfully regardless of where the pencil's weight lies. So I don't understand why you say that we can't say "B or ~B" if the pencil's weight lies in a certain range.

I didn't say that the consequent can imply anything "by logical explosion". On the contrary, since the consequent is a tautology, it only implies TRUE things. Given any tautology T and false proposition P, the implication T => P is false.

More generally, I don't understand the principle by which you seem to say that A => ~~A is "too simple", while other tautologies are not. Or are you now saying that all tautologies are too simple, and that you want to focus attention on certain non-tautologies, like "if A AND C, then B"? But surely this is just a matter of our computational power, just as some arithmetic claims seem "obvious", while others are beyond our power to verify with our most powerful computers in a reasonable amount of time. The collection of "obvious" arithmetic claims grows as our computational power grows. Similarly, the collection of "obvious" tautologies grows as our computational power grows. It doesn't seem right to think of this "obviousness" as having anything to do with the territory. It seems entirely a property of how well we can work with our map.
Little note to self: I guess my original idea (i.e., the idea I had in my very first question in the open thread) was that the physical systems can be phrased in the form of tautologies. Now, I don't know enough about mathematical logic, but I guess my intuition was/is telling me that if you have a system which is completely described by tautologies, then by (hypothetically) fine-graining these tautologies to cover all options and then breaking the tautologies into alternative theorems, we have an entire "mathematical structure" (i.e., propositions and relations between propositions, based on logic) for the reality. And this structure would be consistent, because we had already shown that the tautologies could be formed consistently using the (hypothetically) available data. Then physics would work by seizing on these structures and attempting to figure out which theorems were true, refining the list of theorems down into results, and so on and so forth.

I'm beginning to worry I might lose the reader due to the impression I am "moving the goalpost" or something of that nature... If this appears to be the case, I apologize and just have to admit my ignorance. I wasn't entirely sure what I was thinking about to start out with, and that was really why I made my post. This is really helping me understand what I was thinking.

I think Pigliucci is somewhat hung up on the technicality of whether a computer system can instantiate (a) an intelligence or (b) a human intelligence. Clearly he is gravely skeptical that it could be a human intelligence. But he seems to conflate or interchange this skepticism with his skepticism about a general computer intelligence. I don't think anybody really thinks an AI will be exactly like a human, so I'm not that impressed by these distinctions. Whereas it seems like Pigliucci thinks that's one of the main talking points? I wish Pigliucci read these comments so we could talk to him... are you out there, Massimo?

Thank you for the comment, and I hope this reply isn't too long for you to read. I think your last sentence sums up your comment somewhat:

...the territory ought not to be thought of as a logical system of which the features are axioms or theorems.

In support of this, you mention:

What about a tautology such as "A => ~~A"? Tautologies do give us true statements about the territory. But, importantly, such a statement is not true in virtue of any feature of the territory. The tautology would have been true no matter what features the territory had... (read more)
I'm a little confused by this example. The proposition A => (if A then B) OR (if A then not B) is a logical tautology. Its truth doesn't depend on whether "the pencil does not lie in the weight range 1.5 plus/minus sigma". In fact, just the consequent (if A then B) OR (if A then not B) by itself is a logical tautology. So, I have two questions: (1) Is there a reason why you didn't use just the consequent as your example? Is there a reason why it wouldn't "get to the heart" of your point? (2) Just to be perfectly clear, are you claiming that the truth of some tautologies, such as A => ~~A, is "trivial and just a property of human language", while the truth of some other tautologies is not?
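The claim that the consequent is a tautology can be verified by brute force over its truth table. A quick sketch (the Python encoding is my own):

```python
from itertools import product

def implies(p, q):
    """Material implication: p => q."""
    return (not p) or q

# The consequent discussed above: (if A then B) OR (if A then not B).
consequent = lambda a, b: implies(a, b) or implies(a, not b)

# True in all four rows of the truth table, i.e. a tautology:
assert all(consequent(a, b) for a, b in product([False, True], repeat=2))
```

Intuitively: if A is false both implications hold vacuously, and if A is true then one of B, ~B holds, so one of the two implications holds.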

I can see how that phrasing would strike you as being redundant or inaccurate. To try to clarify --

The rocks occupying the same point in space is a logical contradiction in the following sense: if it weren't a logical contradiction, there wouldn't be anything preventing it. You might claim this is a "physical" contradiction or a contradiction of "reality", but I am attempting to identify this feature as a signature example of a sort of logic of reality.
