Can someone explain nanotech enthusiasm to me? Like, I get that nanotech is one of the sci-fi technologies that's actually physics-compliant, and furthermore it should be possible because biology.
But I get the impression that among transhumanist types slightly older than me, there's a widespread expectation that it will lead to absolutely magical things on the scale of decades, and I don't get where that comes from, even after picking up Engines of Creation.
I'm thinking of, e.g. Eliezer talking about how he wanted to design nanotechnology before he got into AI, or how he casually mentions nanotechnology as being one of the big ways a super-intelligent AI could take over the world. I always feel totally mystified when I come across something like that, like it's a major gulf between me and slightly older nerds.
Predicting chemistry from physics should be easy with a quantum computer, but appears hard with a classical computer. People often say that even once you make a classical approximation, i.e., assume that the dynamics are easy on a classical computer, the problem of finding the minimum energy state of a protein is NP-hard. That's true, but a red herring, since the protein isn't magically going to know its minimum energy state either. Though it's still possible that there's some catalyst pushing it into the right state, so simulating the dynamics in a vacuum won't get you the right answer (cf. prions). Anyhow, there's some hope that evolution has found a good toolbox for designing proteins, and that if we can figure out the abstractions evolution is using, it will all become easy. In particular, there are building blocks like the alpha helix. Certainly an engineer, whether evolution or us, doesn't need to understand every protein, just needs to know how to make enough of them.
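To illustrate the "red herring" point, here's a minimal sketch (my own toy example, nothing protein-specific): overdamped Langevin dynamics, i.e., noisy gradient descent, on a made-up rugged one-dimensional energy landscape. The trajectory settles into whichever minimum is kinetically reachable from its starting point, so a simulator only has to follow the dynamics; it never needs to solve the NP-hard global minimization.

```python
# Toy illustration, not a protein model: physical dynamics find a kinetically
# accessible minimum, not necessarily the global one.
import math
import random

def energy(x):
    # Made-up rugged landscape with basins near x = -2 and x = 3.
    return 0.5 * (x - 3) ** 2 * (x + 2) ** 2 + 0.3 * math.sin(8 * x)

def grad(x, h=1e-5):
    # Numerical derivative of the energy.
    return (energy(x + h) - energy(x - h)) / (2 * h)

def simulate(x0, steps=20000, dt=1e-3, temperature=0.05):
    # Overdamped Langevin dynamics: drift down the gradient plus thermal noise.
    x = x0
    for _ in range(steps):
        x += -grad(x) * dt + math.sqrt(2 * temperature * dt) * random.gauss(0, 1)
    return x

for x0 in (-2.5, 2.5):
    xf = simulate(x0)
    print(f"start {x0:+.1f} -> end {xf:+.2f}, energy {energy(xf):+.3f}")
```

Started on the left, the walker typically stays trapped near x = -2 even though the basin near x = 3 is deeper: the "answer" the dynamics gives you is the kinetic one, which is exactly what you'd need to reproduce.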
I think the possibility that a sufficiently smart AI would quickly find an adequate toolbox for designing proteins is quite plausible. I don't know what Eliezer means, but the possibility seems to me adequate for his arguments.
New ligament discovered in the human knee as a result of surgeons trying to figure out why some people didn't recover fully after knee injuries.
I'm tempted to deduce "Keep paying attention, you never know what might have been missed." I really would have expected that all the ligaments had been discovered a long time ago.
Another conclusion might be "Try to solve real problems, you're more likely to find out something new that way than by just poking around."
Does someone have the medical knowledge to explain how this is possible? My layperson guess is that once you cut up a knee, you can more or less see all the macroscopic structures. Did they just think this one was unimportant?
I'm more impressed, actually, by the unevenness of progress: it took ~134 years to confirm the original surgeon's postulate? It's not like corpses were unavailable for dissection in 1879.
The media giveth sensationalism, and the media taketh away.
reddit - "So that "new" ligament? Here's a study from 2011 that shows the same thing. It's not even close to a new development and has been seen many times over the past 100 years." Summary quote: "The significance of the Belgian paper was to link [the ligament's] functionality to what they called "pivot shift", and knee reinjuries after ACL surgery. The significance of this paper, I believe, is that in the near future surgeons performing these operations will have an additional ligament to inspect and possibly repair during ACL surgery, which will hopefully reduce recurrence rates, and likely the rates of developing osteoarthritis in the injured knee down the line."
So I get home from a weekend trip and go directly to the HPMOR page. No new chapter yet. But there is a link to what seems to be a rationalist Death Note.
The way he saw it, the world was a pretty awful place. Corrupt politicians, cruel criminals, evil CEOs and even day-to-day evil acts made it that way, but everyday stupidity ensured it would stay like that. Nobody could make even a simple utility calculation. The only saving grace was that this was as true for the villains as for the heroes.
I am going to read it. Here are my next thoughts:
So, it seems like Eliezer succeeded in creating a whole new genre of literature: rationalist fiction. Nice job!
Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature? There is something horribly wrong with this world if this is true.
Discussing with my girlfriend which stories should be x-rationalized next, she suggests HPMOR. Someone should make an HPMOR fanfic where the protagonist is even more rational than the rational Harry. Would that lead to a spiral of more and more rational heroes?
What exactly could the MoreRational!Harry do? It would be pretty awesome if he could someho...
Discussing with my girlfriend which stories should be x-rationalized next, she suggests HPMOR. Someone should make an HPMOR fanfic where the protagonist is even more rational than the rational Harry.
An idea came to mind. Would it be possible to write a story in which Harry is less intelligent, such that he would score lower on an IQ test, for example, but at the same time more rational? HJPEV seems to be a highly intelligent prodigy even without the added rationality. I would like to see how a more normal boy would do.
Is "a story where the protagonist behaves rationally" really a new genre of literature?
I think what you are referring to here is "a story where the protagonist describes their actions and motivations using rationality terminology" or maybe "a story where the rational thinking of the protagonist motivates the plot or moves it along". At least some of the genre of detective fiction — early examples being Poe's Auguste Dupin stories — would be along these lines.
Stories where protagonists behave rationally (without using rationality terminology) wouldn't look like stories about rationality. They would look like stories where protagonists do things that make sense.
Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature? There is something horribly wrong with this world if this is true.
Yup. At least sort of. If you haven't read Eliezer's old post, Lawrence Watt-Evans's Fiction, I recommend it. However, conspicuous failures of rationality in fiction may be mostly an issue with science fiction and fantasy. If you want to keep the characters in your cop story from looking like idiots, you can do research on real police methods, etc., and if you do it right, you have a decent shot at writing a story that real police officers will read without thinking your characters are idiots.
On the other hand, when an author is trying to invent an entire fictional universe, with futuristic technology and/or magic, it can be really hard to figure out what would constitute "smart behavior" in that universe. This may be partly because most authors aren't themselves geniuses, but even more importantly, the fictional universe, if it were real, would have millions of people trying to figure out how to make optimal use of the resources that exist in that universe. It's hard for one person, however smart, to compete with that.
For that matter, it's hard for one author to compete with an army of fans dissecting their work, looking for ways the characters could have been smarter.
which stories should be x-rationalized next
This leads to another comment on rationalist fiction: most of it seems to be restricted to fan-fiction. The mold appears to be: "Let's take a story in which the characters underutilized their opportunities and endow them with intelligence, curiosity, common sense, creativity, and genre-awareness." The contrast between the fanfic and the canon is a major element of the story, and the canon provides an existing scaffold that saves the writer from having to create a context.
This isn't a bad thing necessarily, just an observation.
Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature?
So, the question becomes, how do you recognize "rationalist" stories in non-fan-fic form? Is it simply the presence of show-your-work-smart characters? Is simply behaving rationally sufficient?
Every genre has a theme: romance, adventure, etc.
So where are the stories which are, fundamentally, about stuff like epistemology and moral philosophy?
Have Eliezer's views (or anyone else's who was involved) on the Anthropic Trilemma changed since that discussion in 2009?
Hmm, conditional on that being the case, do you also believe that the closer to physics a mind's implementation is, the more of a person there is in it? Example: action potentials encoded in the positions of rods in a Babbage engine vs. spread over fragmented RAM used by a functional programming language with lazy evaluation in the cloud.
Brian Leiter shared an amusing quip from Alex Rosenberg:
...So, the... Nobel Prize for “economic science” gets awarded to a guy who says markets are efficient and there are no bubbles—Eugene Fama (“I don’t know what a credit bubble means. I don’t even know what a bubble means. These words have become popular. I don’t think they have any meaning”—New Yorker, 2010), along with another economist—Robert Shiller, who says that markets are pretty much nothing but bubbles, “Most of the action in the aggregate stock market is bubbles.” (NY Times, October 19, 2013) Imagine the parallel in physics or chemistry or biology—the prize is split between Einstein and Bohr for their disagreement about whether quantum mechanics is complete, or Pauling and Crick for their dispute about whether the gene is a double helix or a triple, or between Gould and Dawkins for their rejection of one another’s views about the units of selection. In these disciplines Nobel Prizes are given to reward a scientist who has established something every one else can bank on. In economics, “Not so much.” This wasn’t the first time they gave the award to an economist who says one thing and another one who asserts its direct d
Ugh. The prize was first and foremost in recognition of Fama, Shiller, and Hansen's empiricism in finance. In the sixties, Fama proposed a model of efficient markets, and it held up to testing. Later, Fama, Shiller, and Hansen all showed that the model didn't hold up to further tests. Their shared conclusion: the efficient market hypothesis is mostly right; while there is no short-term predictability based on publicly available information, there is some long-term predictability. Since the result is fairly messy, Fama and Shiller differ in what they emphasize (and both are over-rhetorical in their emphasis). Does "mostly right" mean false or basically true?
What's causing the remaining lack of agreement, especially over bubbles? Lack of data. Shiller thinks bubbles exist, but are rare enough he can't solidly establish them, while Fama is unconvinced. Fama and Shiller have done path-breaking scientific work, even if the story about asset price fluctuation isn't 100% settled.
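For concreteness, here's a minimal sketch of the kind of horizon-dependent test this debate turns on. Everything below is synthetic and illustrative (my own toy parameters, not anything from Fama's or Shiller's actual work): under a pure random walk the variance of k-period returns grows linearly in k, so a variance ratio near 1 at short horizons but well below 1 at long horizons is the signature of "no short-term predictability, some long-term predictability."

```python
# Variance-ratio sketch on synthetic data: near-random-walk day to day,
# slowly mean-reverting over long horizons.
import random

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def variance_ratio(returns, k):
    # Var of k-period (summed) returns divided by k * Var of 1-period returns;
    # equals 1 under the random-walk null.
    k_period = [sum(returns[i:i + k]) for i in range(len(returns) - k)]
    return variance(k_period) / (k * variance(returns))

def synthetic_returns(n=20000, reversion=0.02):
    # Price pulled weakly toward 0: almost unpredictable at lag 1.
    price, rets = 0.0, []
    for _ in range(n):
        r = -reversion * price + random.gauss(0, 1)
        price += r
        rets.append(r)
    return rets

rets = synthetic_returns()
for k in (2, 50, 250):
    print(f"horizon {k:>3}: variance ratio = {variance_ratio(rets, k):.2f}")
```

On this toy series the ratio sits near 1 at a two-period horizon and falls toward 0.2 by 250 periods; in real markets the disagreement persists because history provides so few independent long-horizon windows, which is exactly the "lack of data" problem above.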
SPOILERS FOR "FRIENDSHIP IS OPTIMAL"
Why is "Friendship is Optimal" considered "dark" and "creepy"? I've read many people refer to it that way. The only things that are clearly bad are the killing of all the other lifeforms; otherwise this scenario is one of the best that humanity could come across. It's not perfect, but it's good enough and much better than the world we have today. I'm not sure it's realistic to ask for more. Considering how likely it is that humanity will end in some incredibly fucked-up way full of suffering, I would definitely defend this kind of utopia.
(Comment cosmetically edited in response to Kaj_Sotala, and again to replace a chunk of text that fell in a hole somewhere)
OK, I'll have a go (will be incomplete).
People in general will find the Optimalverse unpleasant for a lot of reasons I'll ignore; major changes to status quo, perceived incompatibility with non-reductionist worldviews, believing that a utopia is necessarily unpleasant or Omelas-like (a variant of this fallacy?), and lots of even messier things.
People on LessWrong may be thinking about portions of the Fun Theory Sequence that the Optimalverse conflicts with, and in some cases they may think that these conflicts destroy all of the value of the future, hence horror.
(rot13 some bits that might constitute spoilers)
Humans want things to go well, but they also want things to have been able to go badly, such that they made the difference. Relevant: Living By Your Own Strength, Free to Optimize.
The existence of a superintelligence makes human involvement superfluous, and humans do not want this to happen. Relevant: Amputation of Destiny.
Gur snpg gung gur NV vf pbafgenvarq gb fngvfsl uhzna inyhrf gur cbal jnl zrnaf gung n uhtr nzbhag bs cbffvoyr uhzna rkcrevra
Let's ignore, for now, the creepiness in creating artificial sentients who value being people that make your life better.
No, let's not ignore it. Let's confront it, because I want a better explanation. Surely a person who values being a person that makes my life better, AND who is a person such that I will value making their life better, is absolutely the best kind of person for me to create (if I'm in a situation such that it's moral for me to create anyone at all).
I mean, seriously? Why would I want to mix any noise into this process?
Am I the only one who is bothered that these threads don't start on Monday anymore?
Posting a request from a past open thread again: does anyone have a table of probabilities for major (negative) life events, like divorce or being in a car accident? I ask so I can build a priority list of events to be prepared for, either physically or mentally.
ETA: Apparently a new WHO recommendation for filling out death certificates was introduced in 2005-2006, and this caused a significant drop in recorded pneumonia mortality in Finland.
I'm not entirely sure if it works this way in the whole EU, but it probably does. It's more complicated than what I explain below, but it's the big picture that matters.
The most common way mortality statistics get recorded is that the doctor who was treating the patient fills out a death certificate. Three types of cause of death can be recorded on it: immediate causes, underlying causes, and intermediate causes (nobody really cares about the intermediate ones, because recording them is optional). The statistics department in Finland is interested in recording only the underlying causes of death, and that's what gets published as mortality statistics. Only one cause of death per patient gets recorded.
If someone with advanced cancer gets pneumonia and dies, the doctor fills out a death certificate saying that the underlying cause of death was cancer and the immediate cause of death was pneumonia. Cancer gets recorded as the one and only cause of death...
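A toy sketch of the convention just described (the field names are my own invention, not any real WHO schema): a certificate can carry several causes, but the published statistic keeps only the underlying one, which is why a recording-rule change can move pneumonia "mortality" without a single extra death.

```python
# One cause per patient enters the statistics: the underlying cause.
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeathCertificate:
    immediate_cause: str                      # e.g. pneumonia
    underlying_cause: str                     # e.g. advanced cancer
    intermediate_cause: Optional[str] = None  # optional, rarely recorded

def published_statistics(certificates):
    # The immediate cause never reaches the published mortality table.
    return Counter(cert.underlying_cause for cert in certificates)

certs = [
    DeathCertificate(immediate_cause="pneumonia", underlying_cause="cancer"),
    DeathCertificate(immediate_cause="pneumonia", underlying_cause="pneumonia"),
]
print(published_statistics(certs))  # Counter({'cancer': 1, 'pneumonia': 1})
```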
In honor of NaNoWriMo, I offer up this discussion topic for fans of HPMOR and rationalist fiction in general:
How many ways can we find that stock superpowers (magical abilities, sci-fi tech, whatever), if used intelligently, completely break a fictional setting? I'm particularly interested in subtly game-breaking abilities.
The game-breaking consequences of mind control, time travel, and the power to steal other powers are all particularly obvious, but I'm interested in things like Eliezer pointing out that he had to seriously nerf the Unbreakable Vow in HPMOR to keep the entire story from being about it.
That's because those are among the worst possible ways to use those abilities.
Energy blasts as usually depicted break conservation of energy; with a bit of physics trickery you can get time travel out of that. Even if not, they make you an extremely portable and efficient energy source, perfect for a spaceship where mass is critical and a human needs to come along anyway for PR reasons, so it doesn't particularly matter who.
Mind reading is a means of communication that doesn't require cooperation or any abilities in the target, and can't be lied through. Communication with locked-in patients, interrogation, extraction of testimony from animals. And if you can find a way to precommit yourself, you also have fully reliable precommitment checking for everyone, lie detection for political promises, and the ultimate forensics tool.
If you combine the strengths of two kinds of system, you get something greater than the sum of its parts. So it is with human senses and digital sensors. The key here is bandwidth and analysis. Sure, you can get all the same data onto a computer, but it won't do much good there. Someone with true super-senses as flexible and integrated as the...
Does anyone here have any serious information regarding tulpas? When I first heard of them, they immediately seemed like the kind of thing that is obviously a very bad idea, and may not even exist in the sense that people describe. A very obvious sign of a person who is legitimately crazy, even.
Naturally, my first reaction is the desire to create one myself (one might say I'm a bit contrarian by nature). I don't know any obvious reason not to (ignoring social stigma and the time-consuming initial investment), and there may be some advantag...
I'm reminded of many years ago, a coworker coming into my office and asking me a question about the design of a feature that interacts with our tax calculation.
So she and I created this whole whiteboard flowchart working out the design, at the end of which I said "Hrm. So, at a high level, this seems OK. That said, you should definitely talk to Mark about this, because Mark knows a lot more about the tax code than I do, and he might see problems I missed. For example, Mark will probably notice that this bit here will fail when $condition applies, which I... um... completely failed to notice?"
I could certainly describe that as having a "Mark" in my head who is smarter about tax-code-related designs than I am, and there's nothing intrinsically wrong with describing it that way if that makes me more comfortable or provides some other benefit.
But "Mark" in this case would just be pointing to a subset of "Dave", just as "Dave's fantasies about aliens" does.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.