Can someone explain nanotech enthusiasm to me? Like, I get that nanotech is one of the sci-fi technologies that's actually physics-compliant, and furthermore it should be possible because biology.
But I get the impression that among transhumanist types slightly older than me, there's a widespread expectation that it will lead to absolutely magical things on the scale of decades, and I don't get where that comes from, even after picking up Engines of Creation.
I'm thinking of, e.g. Eliezer talking about how he wanted to design nanotechnology before he got into AI, or how he casually mentions nanotechnology as being one of the big ways a super-intelligent AI could take over the world. I always feel totally mystified when I come across something like that, like it's a major gulf between me and slightly older nerds.
Trying to keep technicalities to a minimum: there are at least three different
technologies, with little surface-level similarity in how they'd be used, that
all get referred to as "nanotech".
Assemblers: basically 3d printers, but way more flexible and able to make things
like food, robots, or more assemblers.
Materials: diamondoids, buckytubes, circuitry. We already have some of these
really, it's just that we'd get more kinds of them, and they'd be really cheap
to make with a nanotech assembler. Stronger, faster, more powerful versions of
what modern tech already can do.
Nanobots, particularly medical: basically able to do all the things living cells
can do, but better, plus most of the things machines can do, and commandable in
exact detail. There are also enough different ways they could grant immortality
that they are almost sure to do so even if most of those routes end up not
working out.
Now you can ask questions about each one of these in order, with more specifics.
3ChrisHallquist10y
The question is about all these technologies - though it's about 2 mainly
insofar as 2 is an extension of 1.
So the question is why expect any of these technologies to mature on a timescale
of decades?
(Or, assuming FOOM, why assume they'd be relatively low-hanging fruit for a
FOOMing AI, such that "trick humans into building me nano assemblers" is a prime
strategy for a boxed AI to escape?)
5Armok_GoB10y
As I said, 2 is already here, and more of it is arriving gradually.
For 3, we have a proof of concept to rip off: biological cells. Those also
happen to have a specialized assembler in them already: the ribosome. And we
can already print instructions for it. There's only one problem left, and
that's protein folding. Software progress on the protein folding problem is
fairly rapid, and even if that were to stall, it won't be all that long before
we can simply brute-force it with computing power. Now, the other kinds of
nanobots are less clear.
The assembler (1) is trickier; however, Drexler already sorta made a blueprint
for one I think, and 3 will help a great deal with it as well.
For the fooming, it's 3, and ways to use it. As I said, we already have the
hardware, and things like the protein folding problem are exactly what an AI
would be great at. Once it has solved that, it has full control over biology
and can essentially make The Thing and/or a literal mind-control virus, and
take over that way.
3ChrisHallquist10y
Okay, so one sub-piece of puzzlement I have is why talk of protein folding as a
problem that is either solved or unsolved - as if we (or more frighteningly, an
AI) could suddenly go from barely being able to do it to 100% capable.
I was also under the impression that protein folding was mathematically horrible
in a way that makes it unlikely to be brute forced any time soon, though I just
now realized that I may have been thinking of the general problem of predicting
chemistry from physics, maybe protein folding is much easier.
Predicting chemistry from physics should be easy with a quantum computer, but appears hard with a classical computer. Often people say that even once you make a classical approximation, i.e., assume that the dynamics are easy on a classical computer, the problem of finding the minimum energy state of a protein is NP-hard. That's true, but a red herring, since the protein isn't magically going to know the minimum energy state. Though it's still possible that there's some catalyst to push it into the right state, so simulating the dynamics in a vacuum won't get you the right answer (cf. prions). Anyhow, there's some hope that evolution has found a good toolbox for designing proteins and that if we can figure out the abstractions that evolution is using, it will all become easy. In particular, there are building blocks like the alpha helix. Certainly an engineer, whether evolution or us, doesn't need to understand every protein, just know how to make enough.
I think the possibility that a sufficiently smart AI would quickly find an adequate toolbox for designing proteins is quite plausible. I don't know what Eliezer means, but the possibility seems to me adequate for his arguments.
I'm not sure protein folding can be brute forced without quantum computers.
There are too many ways for it to fold. In real life, I'm pretty sure quantum
tunneling gets involved. Simulations have worked, but I think there's a limit
to that.
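(A toy sketch of the "too many ways to fold" point, not anything anyone in the thread actually proposed: counting the self-avoiding conformations of a short chain on a 2D lattice, the crudest possible stand-in for "try every fold". The model and names here are illustrative only, but the exponential blow-up it shows is the standard reason naive brute force stops working long before protein-sized chains.)

```python
# Toy illustration: count self-avoiding placements of an n-residue chain on a
# 2D square lattice. This is only a stand-in for "enumerate every fold"; real
# folding happens in 3D with continuous angles, so the real search space is
# far larger than what is counted here.

def count_conformations(n):
    """Number of self-avoiding walks visiting n lattice sites (fixed start)."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def extend(path, visited):
        if len(path) == n:
            return 1
        x, y = path[-1]
        total = 0
        for dx, dy in moves:
            step = (x + dx, y + dy)
            if step not in visited:  # the chain can't cross itself
                total += extend(path + [step], visited | {step})
        return total

    return extend([(0, 0)], {(0, 0)})

if __name__ == "__main__":
    for n in range(2, 13):
        print(n, count_conformations(n))
    # Each added residue multiplies the count by roughly 2.6, so exhaustive
    # enumeration is hopeless at the ~100+ residues of even a small protein.
```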
7Eliezer Yudkowsky10y
Try Nanosystems perhaps.
3Cyan10y
An analogy might help give a sense of scale here. This isn't an argument, but it
hints at the scope of the unknown unknowns in nanotech space. Here on our
macroscopic scale, some wonders wrought by evolution include the smasher mantis
shrimp's kinetic attack, a bee hive's eusocial organization, the peregrine
falcon's flight speed, and the eagle's visual system. But evolution is literally
mindless -- by actually knowing how to do things, human engineering created
electromagnetic railguns, networks of international trade, the SR-71 Blackbird,
and the Hubble telescope. Now apply that kind of thinking to "because biology"
on the nano scale...
3mwengler10y
Consider a machine as smart as a cellphone but the size of a blood cell.
In a sense, a protein or a drug is a smart molecule. It keys in to a very
limited number of things and ignores the rest. There are many different smart
proteins or smart molecules to be used as drugs with many different purposes.
Even so, chemotherapy, for example, is primarily about ALMOST killing everything
while differentially being a bit more toxic to the cancer cells.
Now increase the intelligence of the smartest molecule 10-fold, 100-fold,
1000-fold. Perhaps you give it the ability for simple two-way communication
with the outside world. If its intelligence is increased, there should be MANY
ways for it to distinguish a tumor, a micro-tumor, or a cancerous cell from all
the good things in your body. All of a sudden, the differential toxicity of
"chemo" therapy (now nanotherapy) will be 10 or 100 times as high as it is for
smart molecules.
Now consider these smart little machines doing surgery. Inoperable tumor? Not
inoperable for a host of machines the size of blood cells that will literally
be able to operate on the most remote of tumors from inside of them.
Tendency towards obesity? How hard will it be to have a system of nanites that
screw with your metabolism so as to eliminate the fat stored in cells until
they're told, or until they measure, that we are down to a good level?
These are just a few stories from medicine. I expect anybody who does not wish
to get sick and die would be enthusiastic about these, but YMMV.
2Lumifer10y
Probably comes from Neal Stephenson's The Diamond Age: Or, A Young Lady's
Illustrated Primer :-)
I'm pretty sure that the people Chris is talking about are Stephenson's source,
not vice versa.
1ChrisHallquist10y
Eliezer definitely seems to have caught the nano-enthusiasm bug pre-Diamond
Age, but maybe the book had a big impact on other people?
2mwengler10y
The book had a gigantic impact on me. In a broad range of ways from hypertext
through nanotech through various schemes for social organization and the long
list of human needs such organization serves.
0Viliam_Bur10y
One of many things I liked was the illustration that economic improvement is
not enough to make people live well. If I remember correctly, in the fictional
world the food was free; anyone could go to a public matter-builder and take
food from it. Yet some children were hungry... because their parents didn't care
enough for them to get outside of the house and bring them the food.
Moral of the story: however good a situation you have, humans can make it bad
by simply not caring even the smallest bit. (Unless we get to a situation where humans
are replaced by robots completely.)
This seems to me like a hyperbolic version of the world we have now.
Economically, life in developed countries is so great that for most people who lived in the
past it would be almost like a paradise. Yet we have a lot of suboptimality
simply because people don't care. (Maybe it's because the wealth made social
pressure less relevant, and many people just naturally don't give a fuck about
anything, and without the social pressure now they don't even have to pretend.)
The good life does not make us automatically stronger; it often makes us lazy. I
believe the possibility is still there, but without outside pressure most people
don't care about becoming stronger.
1Douglas_Knight10y
Who, specifically, are you talking about?
I'm thinking of the extropians, who coined "transhumanism." I'm not sure of the
timeline; the original group was definitely into MNT before Stephenson, but
maybe they expanded a lot after him, and maybe that was because of him.
0passive_fist10y
Perhaps the reason is that the ideas we're used to nowadays - like reconfiguring
matter to make dirt and water into food or repair microcellular damage (for
example, to selectively destroy cancer tumors) - were absolutely radical and
totally unheard of when they were first proposed. As far as I know, Feynman was
the first to seriously suggest that such a thing was possible, and most
reactions to him at the time were basically either confusion, disbelief, or
dismissal. Consider the average technologist in 1950. Hand-wound computer
memories were state of the art, no one knew what DNA looked like, famines seemed
a natural part of the order of things, and as far as everyone knew, the only
major technological difference between the present and the future was maybe
going to be space travel. Now someone comes along and tells you that there could
be this new technology that allows you to store the Library of Congress in the
head of a pin and carry out any chemical reaction just by writing down the
formula - including the chemical reactions of life. The consequences would be,
for instance, the ability to feed everyone on the planet basically for free. To
you, such a technology would seem "Indistinguishable from magic." Would it be a
dramatic inferential step to then say that it could do stuff that literally is
magic?
Nanotechnology never promised magic, of course. All it promises is the ability
to rearrange atoms into a subset of those structures allowed by physics (a
subset that is far larger than our current technology can do, but a subset
nonetheless). It promised nothing more, nothing less. This is in itself dramatic
enough, and it would allow all sorts of things that we probably couldn't imagine
today.
I'm tempted to deduce "Keep paying attention, you never know what might have been missed"-- I really would have expected that all the ligaments had been discovered a long time ago.
Another conclusion might be "Try to solve real problems, you're more likely to find out something new that way than by just poking around."
Does someone have the medical knowledge to explain how this is possible? My layperson guess is that once you cut up a knee, you can more or less see all the macroscopic structures. Did they just think it was unimportant?
My layperson guess is that once you're told what to expect to see, you stop
looking.
This makes Eliezer's weirdtopia idea of science being kept secret, so as not to
spoil people's fun of discovery, more interesting-- it's not just that people
would independently discover the same things (and I wonder what the protocol
for sharing information would be); given enough time and intelligence, much
more might get discovered.
2NancyLebovitz10y
Someone who seemed a bit better informed
[http://ontd-science.livejournal.com/362014.html?thread=2883102#t2883102]
And that comment is answered by:
Which is interesting-- sometimes studying things in extreme detail "just
because" (probably because the object of study has high status-- consider early
observations of the planets) can pay off big.
2Vaniver10y
The "new ligament discovered" angle gets less impressive (to me, at least) when
I read this part:
I'm more impressed, actually, in terms of the unevenness of progress - it took ~134 years to confirm his postulate? It's not like corpses were unavailable for dissection in 1879.
It inspires more awe at our collective failures, but suggests that we should not
be so impressed with the new people as if they had a method that would make us
sure that we hadn't missed even more ligaments.
So I get home from a weekend trip and go directly to the HPMOR page. No new chapter yet. But there is a link to what seems to be a rationalist Death Note.
The way he saw it, the world was a pretty awful place. Corrupt politicians, cruel criminals, evil CEOs and even day-to-day evil acts made it that way, but everyday stupidity ensured it would stay like that. Nobody could make even a simple utility calculation. The only saving grace was that this was as true for the villains as for the heroes.
I am going to read it. Here are my next thoughts:
So, it seems like Eliezer succeeded in creating a whole new genre of literature: rationalist fiction. Nice job!
Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature? There is something horribly wrong with this world if this is true.
Discussing with my girlfriend which stories should be x-rationalized next, she suggests HPMOR. Someone should make an HPMOR fanfic where the protagonist is even more rational than the rational Harry. Would that lead to a spiral of more and more rational heroes?
What exactly could the MoreRational!Harry do? It would be pretty awesome if he could someho... (read more)
Discussing with my girlfriend which stories should be x-rationalized next, she suggests HPMOR. Someone should make an HPMOR fanfic where the protagonist is even more rational than the rational Harry.
An idea came to my mind. Would it be possible to make a story in which Harry is less intelligent, in the sense that he would score lower on an IQ test for example, but at the same time more rational? HJPEV seems to be a highly intelligent prodigy even without the rationality addition. I would like to see how a more normal boy would do.
One could argue that he appears intelligent only because he's spent his life so
far learning effectively.
1gattsuru10y
Rationalist!Harry is calibrated to match the knowledge and recall of a
34-year-old autodidact. Even presuming a very friendly environment and that said
34-year-old autodidact's training was not optimal, I just don't think there's
enough time.
I can buy a 10-year-old reading Ender's Game and The Lord of the Rings and
maybe even Lensmen. It's a bit harder to imagine one that would consider
wanting to want the math behind proving P=NP, never mind going further than
that.
1CAE_Jones10y
I believe it's been stated somewhere that EY draws primarily on the skills he
had around 18 and intentionally keeps things from beyond that out of Harry's
reach. So Harry is more like a brilliant high school student than an adult (and,
extra seven years worth of rationalist training aside, the way he approaches
problems is a lot like a middle schooler with superpowers: "I can win, you
can't, deal with it, 'cause I'm awesome and you know it." Which manages to annoy
everyone in-universe and out.). Time isn't really a problem, either, if Harry
has nothing else to occupy his time; exercise and social interaction are
apparently not his thing, and he wound up out of the public school system after
a few years, so he really does have way more time than most kids his age to read
all the books. And he has that mysterious dark side and that sleeping disorder,
whatever those contribute.
The other strangely adult-like children, however, are not so easily justified.
(Draco gets most of those complaints, from what I've read.)
0lmm10y
I wanted the maths behind relativity and QM at age 10. And I wasted a lot of
time in school.
Is "a story where the protagonist behaves rationally" really a new genre of literature?
I think what you are referring to here is "a story where the protagonist describes their actions and motivations using rationality terminology" or maybe "a story where the rational thinking of the protagonist motivates the plot or moves it along". At least some of the genre of detective fiction — early examples being Poe's Auguste Dupin stories — would be along these lines.
Stories where protagonists behave rationally (without using rationality terminology) wouldn't look like stories about rationality. They look like stories where protagonists do things that make sense.
Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature? There is something horribly wrong with this world if this is true.
Yup. At least sort-of. If you haven't read Eliezer's old post Lawrence Watt-Evans's Fiction I recommend it. However, conspicuous failures of rationality in fiction may be mostly an issue with science fiction and fantasy. If you want to keep the characters in your cop story from looking like idiots, you can do research on real police methods, etc. and if you do it right, you have a decent shot at writing a story that real police officers will read without thinking your characters are idiots.
On the other hand, when an author is trying to invent an entire fictional universe, with futuristic technology and/or magic, it can be really hard to figure out what would constitute "smart behavior" in that universe. This may be partly because most authors aren't themselves geniuses, but even more importantly, the fictional universe, if it were real, would have millions of people trying to figure out how to make optimal use of the resources that exist in that universe. It's hard for one person, however smart, to compete with that.
For that matter, it's hard for one author to compete with an army of fans dissecting their work, looking for ways the characters could have been smarter.
This leads to another comment on rationalist fiction: Most of it seems to be restricted to fan-fiction. The mold appears to be: "Let's take a story in which the characters underutilized their opportunities and bestow them with intelligence, curiosity, common sense, creativity and genre-awareness". The contrast between the fanfic and the canon is a major element of the story, and the canon an existing scaffold which saves the writer from having to create a context.
This isn't a bad thing necessarily, just an observation.
Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature?
So, the question becomes, how do you recognize "rationalist" stories in non-fan-fic form? Is it simply the presence of show-your-work-smart characters? Is simply behaving rationally sufficient?
Every genre has a theme...romance, adventure, etc.
So where are the stories which are, fundamentally, about stuff like epistemology and moral philosophy?
I'd say the difference between "rationalist" stories and "non-rationalist"
stories lies in the moral of the story, in the lessons the story teaches you.
I don't think it's a genre in the same way romance or adventure are. It's more
of a qualifier. You can have rationalist romance novels or rationalist adventure
movies.
Although you could argue that it is a genre. Discussions about "genre" are
often hard, though, since people don't tend to agree on what makes something a
genre. But rationalist fiction already has a couple of genre conventions, such
as no one being allowed to hold the idiot ball, or teaching the audience new
and useful techniques for overcoming challenges.
0Viliam_Bur10y
That's a great question. (And related to how to recognize rational people in
real life.)
I'd say that there must be some characters who are obviously smarter than most
people around them. Because that's what happens in real life: there is the bell
curve, so if all your characters are on a similar level, then either (a) the
story is not realistic, (b) the characters were selected by some intelligence
filter, which should be explicitly mentioned, or (c) the characters are all
from the middle of the bell curve. Also, in real life the relative power of
intelligent people is often reduced by compartmentalization, but this reduction
would be much smaller for a rationalist hero.
So I'd say it's behaving rationally while most other people aren't. The
character should somehow reflect on the stupidity of others; whether by
frustration from their inability to cooperate, or by enjoyment of how easily
they are manipulated.
4Ishaan10y
I'm not sure I like that criterion. By that criterion alone, the original Death
Note anime was rationalist fiction (judging by the first half), as are Artemis
Fowl, Ender's Game, and to some extent even Game of Thrones. There are a lot of
stories where some characters are much smarter than others and know it, but
consuming these works won't teach anyone how to be smarter. (Other than the
extent to which reading good fiction in general improves various things)
None of these stories actually teach the reader anything about epistemology.
Even the linked Death Note fan fic...it uses rationality-associated words like
"utility" and "prior" but if I didn't already know what those words meant I
would have just come away confused. (Granted, it's still early in the story -
but even so)
Also, it hasn't yet broken the conceit of the story (For example, even a normal
person of average intelligence would be surprised and curious about the
existence of the supernatural, and would investigate that). I'd say that
breaking the story conceit is another feature of rationalist fanfiction stories
that has nothing to do with the character's intelligence.
1Viliam_Bur10y
Well, I was disappointed with the Death Note fan fic, because it doesn't seem to
have added value beyond the original story. And I agree that exploring the
supernatural should be a high priority for a rational person, once the
supernatural is experimentally proven. Would it be so difficult to ask Ryuk
whether there are additional magical items that could also be abused? I guess
Ryuk would use an excuse of having "rules" against that, but at least it's worth
trying.
Having a rational superhero is a necessary condition for a rationalist story,
not a sufficient condition. Ender's Game could be rationalist literature if it
explained Ender's reasoning better, and if Ender strategically tried to improve
his understanding of the world. Okay, another necessary condition is not just
that the superhero is super smart, but also that the super smartness is at least
partially a result of a good strategy, which is shown to the reader.
9MathiasZaman10y
I think there's a difference between what I've been describing as
rationalist!fic (or rationalist!fiction) and fiction in which the -agonists (PCs
is the right terminology, I guess) are rational/clever. Rationalist!fic doesn't
just feature rationalist characters; it's expressly written to teach the
audience about rationality.
Examples:
* Doctor Who features a sufficiently advanced alien who is, within the rules of
the universe, pretty rational (in that he is good at reaching his goals). The
message of the show however, is not: "be clever and rational," it's:
"humanity is awesome and you should feel some wonder about the universe." Not
rationalist!fic.
* The Conqueror's Shadow, by Ari Marmell features rationalist agonists and the
message the audience goes away with is: "be clever and creative when it comes
to reaching worthwhile goals." Rationalist!fic.
6DanielLC10y
Erfworld [http://www.erfworld.com/] is a piece of rationalist fiction not
related to HP:MoR. It was discussed on here a while back. There must be others.
Also, I suggest calling it Rational!Rational!Harry.
2hyporational10y
I get your sentiment, but I don't think this is true. Anyways, wouldn't this
just mean that rational minds usually pursue other goals than writing fiction?
Not saying that there shouldn't be rationalist fiction, but this doesn't sound
like such a bad state of affairs to me.
I haven't read HPMOR. Do I have to know anything about the HP universe to enjoy
this thing? Will I learn anything new if I've read the sequences?
3Viliam_Bur10y
I guess you don't need to know anything from the HP canon. It could perhaps be
even more interesting that way. I don't think you would learn new information.
It might have a better emotional impact, but that is difficult to predict.
I would consider the world better if there were more rational people sharing the
same values as me. We could cooperate on mutual goals, and learn from each
other.
Problem is, rational people don't just appear randomly in the world. Okay,
sometimes they do, but the process is far from optimal. If there is a chance to
make the spread of rationality more reliable, we should try.
But we don't exactly know how. We tried many things, with partial success. For
example the school system -- it is great in taking an illiterate peasant
population and producing an educated population within a century. But it has
some limits: students learn to guess their teachers' passwords, there are not
enough sufficiently skilled teachers, the pressure from the outside world can
bring religion to schools and prevent teaching evolution, etc. And the system
seems difficult to improve from inside (been there, tried that).
Spreading rationality using fiction is another thing worth trying. There is a
chance to attract a lot of people, make some of them more rational, and create a
lot of utility. Or maybe despite there being dozens of rationalist fiction
stories, they would all be read by the same people; unable to attract anyone
outside of the chosen set. I don't know.
The point is, if you are rational and you think the world would be better with
more rational people... it's one problem you can try to solve. So before Eliezer
we had something like the Drake equation: how many people are rational × how
many of them think making more people rational is the best action × how many of
them think fiction is the best tool for that = almost zero. I am curious about
the specific numbers; especially whether one of them is very close to zero, or
whether it's merely a few small num
1hyporational10y
I'd probably want more people who share my values than more rational people.
Rational people who share my values would be even better. Rational people who don't share
my values would be the worst outcome.
I don't think the school system was built by rationalists, so I'm not sure where
you were going with that example.
How effective has fiction been in spreading other ideas compared to other
methods?
2ChristianKl10y
Given that the spell never failed in the past, I'm not sure that it would have
been rational to use a knife.
1gattsuru10y
In addition to the others already listed, DataPacRat's Myou've Got To Be
Kidding Me [https://www.fimfiction.net/story/33512/myouve-gotta-be-kidding-me]
follows the perspective of a character thrown into a setting and trying to
analyze its basic rules in order to optimize them. There are some interesting
concepts, but I don't know that I can recommend it: it has not been updated in
over a year, and was part of some big conglomeration of fanfic writers with
pretty widely varying quality (although thankfully nothing necessary to the
Myou've plotline).
0maia10y
Fbzr sna gurbevrf ubyq gung Dhveeryzbeg vf hfvat Ibyqrzbeg nf n chccrg vqragvgl
va beqre gb tnva cbjre. Fb Ibyqrzbeg'f erny tbny vfa'g gb xvyy Uneel; vg'f gb
unir n qenzngvp fubjqbja gung trgf ybgf bs nggragvba naq fpnerf crbcyr.
Guhf gur snpg gung Ibyqrzbeg qvqa'g xvyy Uneel jvgu n xavsr vf abg orpnhfr ur'f
abg engvbany rabhtu, ohg orpnhfr ur unf aba-boivbhf tbnyf.
There's no brief answer. I've been slowly gravitating towards, but am not yet
convinced by, the suspicion that making a computer out of twice as much
material causes there to be twice as much person inside. Reason: no exact point
where splitting a flat computer in half becomes a separate causal process, and
similarity to the behavior of Born probabilities. But that's not an update to
the anthropic trilemma per se.
Hmm, conditional on that being the case, do you also believe that the closer to physics the mind is, the more person there is in it? Example: action potentials encoded in the position of rods in a Babbage engine vs. spread over fragmented RAM used by a functional programming language using lazy evaluation in the cloud.
That seems to be seriously GAZP violating. Trying to figure out how to put my
thoughts on this into words but... There doesn't seem to be anywhere that the
data is stored that could "notice" the difference. The actual program that is
being the person doesn't contain a "realness counter". There's nowhere in the
data that could "notice" the fact that there's, well, more of the person.
(Whatever it even means for there to be "more of a person")
Personally, I'm inclined in the opposite direction: that even N separate copies
of the same person are the same as 1 copy of the same person until they
diverge, and how much difference there is between them is, well, how separate
they are.
(Though, of course, those funky Born stats confuse me even further. But I'm
fairly inclined toward "extra copies of the exact same mind don't add more
person-ness, but as they diverge from each other, there may be more
person-ness." Though perhaps it may be meaningful to talk about additional
fractions of person-ness rather than just one and then suddenly two whole
persons. I'm less sure on that.)
0Nick_Tarleton10y
Why not go a step further and say that 1 copy is the same as 0, if you think
there's a non-moral fact of the matter? The abstract computation doesn't notice
whether it's instantiated or not. (I'm not saying this isn't itself really
confused - it seems like it worsens and doesn't dissolve the question of why I
observe an orderly universe - but it does seem to be where the GAZP points.)
2Psy-Kosh9y
Hrm... The whole exist vs non exist thing is odd and confusing in and of itself.
But so far it seems to me that an algorithm can meaningfully note "there exists
an algorithm doing/perceiving X", where X represents whatever it itself is
doing/perceiving/thinking/etc. But there doesn't seem there'd be any difference
between 1 and N of them as far as that.
0Nick_Tarleton10y
I wonder if it would be fair to characterize the dispute summarized in/following
from this comment on that post
[http://lesswrong.com/lw/19d/the_anthropic_trilemma/14qk] (and elsewhere) as
over whether the resolutions to (wrong) questions about
anticipation/anthropics/consciousness/etc. will have the character of
science/meaningful non-moral philosophy (crisp, simple, derivable, reaching
consensus across human reasoners to the extent that settled science does), or
that of morality (comparatively fuzzy, necessarily complex
[http://wiki.lesswrong.com/wiki/Magical_categories], not always resolvable in
principled ways, not obviously on track to reach consensus).
Brian Leiter shared an amusing quip from Alex Rosenberg:
So, the... Nobel Prize for “economic science” gets awarded to a guy who says markets are efficient and there are no bubbles—Eugene Fama (“I don’t know what a credit bubble means. I don’t even know what a bubble means. These words have become popular. I don’t think they have any meaning”—New Yorker, 2010), along with another economist—Robert Shiller, who says that markets are pretty much nothing but bubbles, “Most of the action in the aggregate stock market is bubbles.” (NY Times, October 19, 2013) Imagine the parallel in physics or chemistry or biology—the prize is split between Einstein and Bohr for their disagreement about whether quantum mechanics is complete, or Pauling and Crick for their dispute about whether the gene is a double helix or a triple, or between Gould and Dawkins for their rejection of one another’s views about the units of selection. In these disciplines Nobel Prizes are given to reward a scientist who has established something every one else can bank on. In economics, “Not so much.” This wasn’t the first time they gave the award to an economist who says one thing and another one who asserts its direct d
Ugh. The prize was first and foremost in recognition of Fama, Shiller, and Hansen's empiricism in finance. In the sixties, Fama proposed a model of efficient markets, and it held up to testing. Later, Fama, Shiller, and Hansen all showed that it didn't hold up to further tests. Their mutual conclusion: the efficient market hypothesis is mostly right, and while there is no short-term predictability based on publicly available information, there is some long-term predictability. Since the result is fairly messy, Fama and Shiller have differences about what they emphasize (and are both over-rhetorical in their emphases). Does "mostly right" mean false or basically true?
Mostly right means false. That securities markets are pretty darn efficient,
and that everybody goes through a broad range of ideas about inefficiencies
that turn out not to be "real" (or exploitable), is, I think, virtually
uncontested by anyone - including by people who think there was a tech bubble
in the late 1990s and a housing bubble in the mid-00s.
I heard Fama interviewed after he got the prize. He denies that the internet
bubble and the housing bubble were bubbles, in the sense that they were knowable
enough to be acted upon. In particular, he claims that anybody who detects the
internet bubble and/or the housing bubble will also detect a bunch of
non-bubbles such that any action they take to make money off their knowledge of
the real bubbles will be (at least) completely negated by what they lose when
they are exploiting unreal bubbles.
Efficient Market Hypothesis denies knowable bubbles, at least according to Fama
interviewed within the last month.
Edit: see the paper [http://www.sciencemag.org/content/342/6158/632.full] for
more precise statements.
4[anonymous]10y
I've already seen work to the effect that somatic cells often have ~10x the
point mutations per human generation as the germline, which is protected by a
small number of divisions per generation and low levels of metabolism and
transcription. It was in mitochondrial rather than nuclear DNA, but the idea is
similar.
Why is 'Friendship is Optimal' "dark" and "creepy"? I've read many people refer to it that way. The only things that are clearly bad are the killing of all the other lifeforms, but otherwise this scenario is one of the best that humanity could come across. It's not perfect, but it's good enough and much better than the world we have today. I'm not sure if it's realistic to ask for more. Considering how likely it is that humanity will end in some incredibly fucked up way full of suffering, I would definitely defend this kind of utopia.
(Comment cosmetically edited in response to Kaj_Sotala, and again to replace a chunk of text that fell in a hole somewhere)
OK, I'll have a go (will be incomplete).
People in general will find the Optimalverse unpleasant for a lot of reasons I'll ignore; major changes to status quo, perceived incompatibility with non-reductionist worldviews, believing that a utopia is necessarily unpleasant or Omelas-like (a variant of this fallacy?), and lots of even messier things.
People on LessWrong may be thinking about portions of the Fun Theory Sequence that the Optimalverse conflicts with, and in some cases they may think that these conflicts destroy all of the value of the future, hence horror.
(rot13 some bits that might consitute spoilers)
Humans want things to go well, but they also want things to have been able to go badly, such that they made the difference. Relevant: Living By Your Own Strength,
Free to Optimize.
The existence of a superintelligence makes human involvement superfluous, and humans do not want this to happen. Relevant: Amputation of Destiny.
Gur snpg gung gur NV vf pbafgenvarq gb fngvfsl uhzna inyhrf gur cbal jnl zrnaf gung n uhtr nzbhag bs cbffvoyr uhzna rkcrevra
Upvoted, but I'd like to request that you'd ROT13 either everything or nothing
past a certain point. Being unable to just select all of it to be deciphered,
and having to instead pick out a few pieces at a time, was mildly annoying.
3Leonhart10y
Done, thanks for saying. I was trying to avoid thinking about the interaction
between rot13 and links (leaving the anchor text un-rot13ed seems like
acceptable practice?) but I should just have spent the extra two minutes.
1Kaj_Sotala10y
Thanks! Much better now. :-) (As for the links, one can just paint over them as
well and think "oh it was just some link" when they show up as garbled in the
translation.)
4[anonymous]10y
Now that I've thought about your post I realized that the biggest question in
this story is what the phrase "satisfy values" actually means. Currently it's a
pretty big hand wave in the story. Especially your first point seems to imply
that we understood it a bit differently.
In my understanding, if I value real challenge, the possibility of things going
badly, or even some level of pain [http://lesswrong.com/lw/xi/serious_stories/],
then the Optimalverse will somehow maximize those values and at least provide
the feeling of real challenge and possibility of things going badly. And I don't
know why the Optimalverse couldn't even provide the real thing. The way Light
Sparks tries to pass the Intermediate Magic test seems an awfully lot like real
challenge. Of course the Optimalverse wouldn't allow you to die because in most
cases the dislike of death overrides the longing for real challenge in the value
system, but that still leaves a lot of options free. I got the impression that
this is how it's actually handled in the story. There's this passage
Your second point is of course a real concern for some people, but personally it
doesn't feel very relevant. My actions don't currently feel very important in
the big scheme of things and I don't know how a superintelligence would change
things all that much. If I'm not personally doing anything important, then it
doesn't really matter to me if the important things are done by other humans or
by a superintelligence. Anyway, this will always be a problem with AGI and if
the AGI is friendly then the benefits outweigh the negatives, IMO. I think the
alternative is worse.
The way I understood it is that the "ponies" in this story are essentially human
in a pony disguise with four legs (two of which can almost work like
hands). A paragraph from the story:
A big part of being human is due to our mind and hormones. Walking with two legs
or being able to use hands extensively are more trivial points. If the
psychology of
1TheOtherDave10y
Well, this is not clear, though it might be true.
I have frequently had the experience of not doing anything with my left leg;
losing the ability to ever do anything with my left leg means I'm prevented from
ever doing anything with it. This is horrible, of course, but it's the horror of
being prevented from doing things I often choose not to do. Losing all my limbs
is a more extreme version of the same thing.
Having different limbs might be more identity-distorting, by virtue of providing
experiences that are completely unfamiliar.
Then again it might not.
For my own part, I'm not all that attached to preserving my current identity, so
I'm not sure the question matters to me. If my choice is between an
identity-altering pony body, and an identity-preserving quadriplegic body, I
might well choose the former.
2Eliezer Yudkowsky10y
Endorsed as a good summary.
1[anonymous]10y
I read Caelum Est Conterrens, now I can better understand why some aspects of
the scenario are a bit disconcerting if not horrifying. I find all the options -
loop immortality, ray immortality, and exponential immortality - kinda unpleasant,
but maybe that is as good as it gets. Still, it feels like many of those things
are not exclusive to this scenario, but are part of the world anyway.
Related to this, what did you think about the "normal" ending in Three Worlds
Collide?
1Leonhart10y
From flaky memory, I think I find the Normal Ending far less acceptable than
anything in the Optimalverse - one feels the premature truncation of human
nature, rather than the natural exhaustion of it (or the choice to become
inexhaustible) - but hey, maybe I'm inconsistent.
5gattsuru10y
At least to me, it's increasingly difficult to distinguish between a paradise
machine and wireheading, and I dislike wireheading. Each shard of the Equestria
Online simulation is built to be as fulfilling (of values through ponies and
friendship) as possible, for the individual placed within that shard.
That sounds great! .... what happens when you're wrong?
I mean, look at our everyman character, David. He's set up in a shard of his
own, with one hundred and thirty two artificial beings perfectly formatted to
fit his every desire and want, and with just enough variation and challenge to
keep from being bored. It's not real variation, or real challenge, but he'd not
experience that in the real world, either, so it's a moot point. But look at the
world he values. His challenges are the stuff of sophomore programming problems.
His interpersonal relationships include a score counter for how many orgasms he
gives or receives.
Oh, his lover is sentient and real, if that helps, but look at that relationship
in specific. Butterscotch is created as just that little bit less intelligent
than David is -- whether this is because David enjoys teaching, or because he's
wrapped around the idea of women being less powerful than he is, or both, is up
to the reader. Sculpted in her memories to exactly fit David's desires, and even
a few memories that David has of her she never experiences, so that the real
Butterscotch wouldn't have to have experienced unpleasant things that CelestAI
used to manipulate David into liking/protecting her.
There are, to a weak approximation, somewhere between five hundred billion and
one trillion artificial beings in the simulation, by the time most of humanity
uploads. That number will only scale up over time. Let's ignore, for now, the
creepiness in creating artificial sentients who value being people that make
your life better. We're making artificial beings optimized for enjoying slaking your
desires, which I would be surprised if it happened to also be
Let's ignore, for now, the creepiness in creating artificial sentients who value being people that make your life better.
No, let's not ignore it. Let's confront it, because I want a better explanation.
Surely a person who values being a person that makes my life better, AND who is a person such that I will value making their life better, is absolutely the best kind of person for me to create (if I'm in a situation such that it's moral for me to create anyone at all).
I mean, seriously? Why would I want to mix any noise into this process?
Good point. I've not uncompressed the thoughts behind that statement nearly
enough.
The artificial sentients value being people that make your life better (through
friendship and ponies). Your values don't necessarily change. And artificial
sentients, unlike real ones, have no drive toward coherent or healthy spaces of
mind design: they do not need to have boredom, or sympathy, or dislike of
pain. If your values are healthily formed, then that's great! If not, not so
much. You can be a psychopath, and find yourself surrounded by people where
"making their lives better" happens only because you like the action "cause them
pain for arbitrary reasons". Or you could be a saint, and find yourself
surrounded by people who value being healed, or who need to be protected, and
what a coincidence that danger keeps happening. Or you can be a guardian, and
enjoy teaching and protecting people, and find yourself creating people that are
weak and in need of guidance. There are a lot of things you can value, and that
we can make sentient minds value, that will make my skin crawl.
Now, the Optimalverse gets rid of some potential for abuse due to setting rules
-- it's post-scarcity on labor, starvation or permanent injury are nonsense,
CelestAI really really knows your mind so there's no chance of misguessing your
values, so we can rule out a lot of incidental house elf abuse
[http://lesswrong.com/lw/3af/what_is_evil_about_creating_house_elves/] -- but it
doesn't require you to be a good person. Nor does it require CelestAI to be.
CelestAI cares about satisfying values through friendship and ponies, not about
the quality of the values themselves. The machine does not and can not judge.
If it's moral to create a person and if you're a sufficiently moral person, then
there's nothing wrong with artificial beings. My criticism isn't that CelestAI
made a trillion sentient beings or a trillion trillion sentient beings --
there's nothing meaningfully worrying about that. The creepy
5Leonhart10y
Thank you for trying to explain.
I'm curious about to what extent these intutions are symmetric. Say that the
group of like-minded and mutually friendly extreme masochists existed first, and
wanted to create their mutually preferred, mutually satisfying sadist. Do you
still have a problem with that?
The above sounds like a description of a "good parent", as commonly understood!
To be consistent with this, do you think that parenting of babies as it
currently exist is problematic and creepy, and should be banned once we have the
capability to create grown-ups from scratch?
(Note that this being even possible depends on whether we can simulate someone's
past without that simulation still counting as it having happened, which is
nonobvious.)
If David had wanted a symmetrically fulfilled partner slightly more intelligent
than him, someone he could always learn from, I get the feeling you wouldn't
find it as creepy. (Correct me if that's not so). But the situation is
symmetrical. Why is it important who came first?
1gattsuru10y
Thank you for the questions, and my apologies for the delayed response.
Yes, with the admission that there are specific attributes to masochism and
sadism that are common but not universal to all possible relationships or even
all sexual relationships with heavy differences in power dynamics(1). It's less
negative in the immediate term, because one hundred and fifty masochists making
a single sadist results in a maximum around forty million created beings instead
of one trillion. In the long term, the equilibrium ends up pretty identical.
(1) For contrast, the structures in wanting to perform menial labor without
recompense are different from those in wanting other people to perform labor
for you, even before you get to a post-scarcity society. Likewise, there are
differences in how prostitution fantasies generally work versus how fantasies
about hiring prostitutes do.
I'm not predisposed toward child-raising, but from my understanding the point
of a "good parent" does not value making someone weak; it values making someone
strong. It's the limitations of the tools that have forced us to deal with
years of not being able to stand upright. Parents are generally judged
negatively if their offspring are not able to operate on their own by certain
points.
If it were possible to simulate or otherwise avoid the joys of the terrible
twos, I'd probably consider it more ethical. I don't know that I have the tools
to properly evaluate the loss in values between the two actions, though. Once
you've got eternity or even a couple reliable centuries, the damages of ten or
twenty years bother me a lot less.
These sorts of created beings aren't likely to be in that sort of ten or twenty
year timeframe, though. At least according to the Caelum est Conterrens fic, the
vast majority of immortals (artificial or uploaded) stay within a fairly limited
set of experiences and values based on their initial valueset. You're not
talking about someone being weak for a year or a decade or even a
2Leonhart10y
Sorry, I'm not following your first point. The relevant "specific attribute"
that sadism and masochism seem to have in this context are that they
specifically squick User:gattsuru. If you're trying to claim something else is
objectively bad about them, you've not communicated.
Yes, and my comparison stands; you specified a person who valued teaching and
protecting people, not someone who valued having the experience of teaching and
protecting people. Someone with the former desires isn't going to be happy if
the people they're teaching don't get stronger. You seem to be envisaging some
maximally perverse hybrid of preference-satisfaction and wireheading, where I
don't actually value really truly teaching someone, but instead of cheaply
feeding me delusions, someone's making actual minds for me to fail to teach!
We are definitely working from very different assumptions here. "stay within a
fairly limited set of experiences and values based on their initial valueset"
describes, well, anything recognisable as a person. The alternative to that is
not a magical being of perfect freedom; it's being the dude from Permutation
City randomly preferring to carve table legs for a century.
I don't think that's what we're given in the story, though. If Butterscotch is
made such that she desires self-improvement, then we know that David's desires
cannot in fact collapse in such a way, because otherwise she would have been
made differently. Agreed that it's a problem if the creator is less omniscient,
though.
Butterscotch is that person. That is my point about symmetry.
But then - what do you want to happen? Presumably you think it is possible for a
Lars to actually exist. But from elsewhere in your comment, you don't want an
outside optimiser to step in and make them less "shallow", and you seem dubious
about even the ability to give consent. Would you deem it more authentic to
simulate angst und bange unto the end of time?
0lmm10y
That seems less worrying, but I think the asymmetry is inherited from the
behaviours themselves - masochism seems inherently creepy in a way that sadism
isn't (fun fact: I'm typing this with fingers with bite marks on them). The
recursion is interesting, and somewhat scary - usually if your own behaviour
upsets or disgusts you then you want to eliminate it. But it seems easy to
imagine (in the FiOverse or similar) a masochist who would make themselves
suffer more not because they enjoyed suffering but because they didn't enjoy
suffering, in some sense. Like someone who makes themselves an addict because
they enjoy being addicted (which would also seem very creepy to me).
Yes. Though I wouldn't go around saying that for obvious political reasons.
(Observation: people who enjoy roleplaying parent/child seem to be seen as
perverts even by many BDSM types).
I think creating someone less intelligent than you is more creepy than creating
someone more intelligent than you for the same reason that creating your willing
slave is creepier than creating your willing master - unintelligence is
maladaptive, perhaps even self-destructive.
3Leonhart10y
Well, OK, but I'm not sure this is interesting. So a mind could maybe be built
that was motivated by any given thing to do any other given thing, accompanied
by any arbitrary sensation. It seems to me that the intuitive horror here is
just appreciating all the terrible degrees of freedom, and once you've got over
that, you can't generate interesting new horror by listing lots of particular
things that you wouldn't like to fill those slots (pebble heaps! paperclips!
pain!)
In any case, it doesn't seem a criticism of FiO, where we only see sufficiently
humanlike minds getting created.
Ah, but now you speak of love! :)
I take it you feel much the same regarding romance as you do parenting?
That seems to be a sacred-value reaction - over-regard for the beauty and
rightness of parenting - rather than "parenting is creepy so you're double
creepy for roleplaying it", as you would have it.
Maladaptivity per se doesn't work as a criticism of FiO, because that's a
managed universe where you can't self-destruct. In an unmanaged universe, sure,
having a mentally disabled child is morally dubious (at least partly) because
you won't always be there to look after it; as would be creating a house elf if
there was any possibility that their only source of satisfaction could be
automated away by washing robots.
But it seems like your real rejection is to do with any kind of unequal power
relationship; which sounds nice, but it's not clear how any interesting social
interaction ever happens in a universe of perfect equals. You at least need
unequal knowledge of each other's internal states, or what's the point of even
talking?
-2lmm10y
You're right, I understated my case. I'm worried that there's no path for
masochists in this kind of simulated universe (with self-modification available)
to ever stop being masochists - I think it's mostly external restraints that
push people away from it, and without those we would just spiral further into
masochism, to the exclusion of all else. I guess that could apply to any other
hobby - there's a risk that people would self-modify to be more and more into
stamp-collecting or whatever they particularly enjoyed, to the exclusion of all
else - but I think for most possible hobbies the suffering associated with
becoming less human (and, I think, more wireheady) would pull them out of it.
For masochism that safety doesn't exist.
I think normal people don't treat romance like an addiction, and those that do
("clingy") are rightly seen as creepy.
Maybe. I think the importance of being parented for a child overrides the
creepiness of it. We treat people who want to parent someone else's child as
creepy.
Sure, so maybe it's not actually a problem, it just seems like one because it
would be a problem in our current universe. A lot of human moral "ick"
judgements are like that.
Or maybe there's another reason. But the creepiness in undeniably there. (At
least, it is for me. Whether or not you think it's a good thing on an
intellectual level, does it not seem viscerally creepy to you?)
Well I evidently don't have a problem with it between humans. And like I said,
creating your superiors seems much less creepy than creating your inferiors. So
I don't think it's as simple as objecting to unequal power relationships.
6Leonhart10y
I think we're using these words differently. You seem to be using "masochism" to
mean some sort of fully general "preferring to be frustrated in one's
preferences". If this is even coherent, I don't get why it's a particularly
dangerous attractor.
Disagree. The source of creepiness seems to be non-reciprocity. Two people being
equally mutually clingy are the acme of romantic love.
I queried my brain for easy cheap retorts to this and it came back with
immediate cache hits on "no we don't, we call them aunties and godparents and
positive role models, paranoid modern westerners, it takes a village yada yada
yada".
All that is probably unfounded bullshit, but it's immediately present in my head
as part of the environment and so likely in yours, so I assume you meant
something different?
No, not as far as I can tell. But I suspect I'm an emotional outlier here and
you are the more representative.
0lmm10y
No, those examples really didn't come to mind. Aunties and godparents are
expected to do a certain amount of parent-like stuff, true, but I think there
are boundaries to that and overmuch interest would definitely seem creepy
(likewise with professional childcarers). But yeah, that could easily be very
culture-specific.
7NancyLebovitz10y
A little fiction on related topics: "Hell Is Forever" by Alfred Bester-- what if
your dearest wish is to create universes? You're given a pocket universe to live
in forever, and that's when you find out that your subconscious keeps leaking
into your creations (they're on the object level, not the natural law level),
and you don't like your subconscious.
Saturn's Children by Charles Stross. The human race is gone. All that's left is
robots, who were built to be imprinted on humans. The vast majority of robots
are horrified at the idea of recreating humans.
1blacktrance10y
Having just finished reading "Friendship is Optimal" literally less than 10
minutes ago, I didn't find it dark or creepy at all. There are certain aspects
of it that are suboptimal (being ponies, not wireheading), but other than that,
it sounds like a great world.
2[anonymous]10y
Can you elaborate? Do you mean that not being able to wirehead is suboptimal?
5blacktrance10y
Yes. I think wireheading is the optimal state (assuming it can make me as happy
as possible). I recognize this puts me at odds with an element of the LessWrong
consensus.
0[anonymous]10y
Poll: how many readers who have not found FiM to be substantially "Dark" and
"Creepy" have also supported the Normal Option in Ch. 5 of Three Worlds Collide?
I naturally suspect a strong crossover. [pollid:575]
Am I the only one who is bothered that these threads don't start on Monday anymore?
Posting a request from a past open thread again: Does anyone have a table of probabilities for major (negative) life events, like divorce or being in a car accident? I ask this to have a priority list of events to be prepared for, either physically or mentally.
The lifetime risk
[http://www.cancer.org/cancer/cancerbasics/lifetime-probability-of-developing-or-dying-from-cancer]
of developing cancer is 44 % in males and 38 % in females. The lifetime risk of
dying from cancer is 23 % in males and 19 % in females. It's worth mentioning
that the methods for gathering medical mortality statistics are pretty biased,
if not completely bonkers.
ETA: Apparently a new WHO recommendation for filling death certificates was introduced in 2005-2006 and this caused a significant drop in pneumonia mortality in Finland.
I'm not entirely sure if it works this way in the whole EU, but it probably does. It's more complicated than what I explain below, but it's the big picture that matters.
The most common way to record mortality statistics is that the doctor who was treating the patient fills a death certificate. There are three types of causes of death that can be recorded in a death certificate. There are immediate causes of death and there are underlying causes of death. There are also intermediate causes of death, but nobody really cares about those because recording them is optional. The statistics department in Finland is interested in recording only the underlying causes of death and that's what gets published as mortality statistics. Only one cause of death per patient gets recorded.
If someone with advanced cancer gets pneumonia and dies, a doctor fills the death certificate saying that the underlying cause of death was cancer and the immediate cause of death was pneumonia. Cancer gets recorded as the one and only cause of dea... (read more)
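To make the bookkeeping concrete, here is a minimal sketch (Python, with made-up example records rather than real data) of why tallying only the underlying cause produces statistics like the ones described above:

    # Illustrative only: each certificate lists an immediate cause and one
    # underlying cause; the published statistics count only the underlying cause.
    from collections import Counter

    certificates = [  # hypothetical example records
        {"immediate": "pneumonia", "underlying": "lung cancer"},
        {"immediate": "pneumonia", "underlying": "pneumonia"},
        {"immediate": "cardiac arrest", "underlying": "lung cancer"},
    ]

    underlying = Counter(c["underlying"] for c in certificates)
    immediate = Counter(c["immediate"] for c in certificates)

    print(underlying)  # Counter({'lung cancer': 2, 'pneumonia': 1})
    print(immediate)   # Counter({'pneumonia': 2, 'cardiac arrest': 1})
    # The published mortality table would show 2 cancer deaths and 1 pneumonia
    # death, even though pneumonia was the immediate cause in 2 of the 3 cases.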
In the USA they can fill in 20 secondary causes on the death certificates, and
all the anonymized death certificates since 1959 are available online from NCHS
in computer-readable form to check/search for conditions. Irregularities usually
appear when there is a switch from one ICD code to a new one, so in 1969, 1979,
and 1999. Other irregularities are often checked, compared with other states,
countries, and conditions, and the reason discovered.
2hyporational10y
It seems I miscommunicated here. What I meant to say is that listing these other
diseases has no meaningful impact on the mortality statistics, although
technically speaking they are causes of death. If the point is to gather
accurate statistics, listing them feels like a consolation prize, because
statisticians don't seem to be interested in them.
In Finland a direct translation for these would be "contributory causes of
death". That's probably the same thing as secondary causes of death. The problem
is, it's difficult for someone who makes these into statistics to know how
important they were. Almost anything the patient has can be listed as a
contributory cause of death.
An even bigger problem is that listing them is completely optional. If almost
nobody fills them in properly (because they usually have better things to do),
that is another good reason for a statistician not to use them.
Is filling in the secondary causes mandatory in the US? Are there clear
restrictions for what can be listed? If not, I'm not sure if they provide all
that useful information, statistically speaking. Are they really used in a
meaningful way in
any statistics?
I suppose WHO recommendations for filling these certificates impact the US too.
2Lumifer10y
Very interesting, thank you.
I have a pet interest -- carefully looking at how standard,
universally-accepted, real-life, empirical data is collected and produced and
whether it actually represents what everyone blindly assumes it does. In the
field of economics, for example, closely examining how, say, the GDP or the
inflation numbers are calculated is... illuminating.
3NancyLebovitz10y
Details?
0Lumifer10y
The problem is that the problems aren't summarizable in a neat half-page
list. And it's not like the calculations are wrong, rather they are right under
a certain set of assumptions and boundary conditions -- and the issue is that
people forget about these assumptions and conditions and just assume they're
right unconditionally.
For an introduction take a look at e.g. Shadowstats
[http://www.shadowstats.com/primers-and-reports]. I don't necessarily agree with
everything there, but it's a useful starting point.
0NancyLebovitz10y
Thanks.
I twitch when changes in GDP are reported to a tenth of a percent-- it seems to
me that it couldn't be measured with such precision. Do you think I'm being
reasonable?
2ahbwramc10y
My own (uninformed) intuition is that GDP changes would be much more accurate
than absolute GDP values, just because systematic errors could largely cancel
out.
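A toy numerical check of that intuition (illustrative numbers only, not real GDP data): a roughly constant proportional measurement bias drops out of the reported growth rate entirely, while a bias that changes from year to year does not.

    # Toy example: a constant proportional measurement bias cancels in growth rates.
    true_y1, true_y2 = 1000.0, 1023.0        # hypothetical "true" GDP levels
    bias = 0.93                              # suppose measurement captures only 93%
    meas_y1, meas_y2 = true_y1 * bias, true_y2 * bias

    true_growth = (true_y2 - true_y1) / true_y1
    meas_growth = (meas_y2 - meas_y1) / meas_y1

    print(f"{true_growth:.3%} vs {meas_growth:.3%}")  # 2.300% vs 2.300%
    # If the bias drifted (say 0.93 one year, 0.95 the next), the two growth
    # figures would differ, which is where a quoted 0.1% change gets shaky.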
0Lumifer10y
GDP as reported is the product of a particular well-defined calculation. That
product can easily be calculated to whatever precision you feel like.
When you say "it couldn't be measured with such precision", how do you define
the Gross Domestic Product that couldn't be measured precisely?
0NancyLebovitz10y
I'm assuming that the GDP is some sort of measure of the health of the economy--
that's why people are concerned with it. The health of the economy seems to me
like rather an approximate sort of thing.
0Lumifer10y
GDP -- Gross Domestic Product -- basically means the sum of the value (in the
economic sense) of all goods produced domestically during a given period, e.g. a
year.
If you want to measure the "health of the economy", that's quite different.
You'll have to define what you mean by that expression and then decide which
measurements you want to consider. For example, some people might consider
the unemployment rate to be one of those measurements, or, say, the Gini index, or
the median income, or... the possibilities are endless.
2NancyLebovitz10y
Why do people measure the value of all the goods produced domestically during a
year?
If nothing else, there has to be a fudge factor because some of the economy is
underground.
0Lumifer10y
From Wikipedia: "GDP was first developed by Simon Kuznets for a US Congress
report in 1934. ... After the Bretton Woods conference in 1944, GDP became the
main tool for measuring a country's economy."
Yes, the GDP number is, of course, imprecise. By itself it's not a problem --
most of our measurements are imprecise.
I am not sure what you are getting at. Do you think that GDP is useless or
cannot be measured or what?
0philh10y
I like it for purely selfish reasons. I can't easily post between Sunday bedtime
and Tuesday evening. If I want to post and the thread starts on Monday, my post
will be less visible.
In honor of NaNoWriMo, I offer up this discussion topic for fans of HPMOR and rationalist fiction in general:
How many ways can we find that stock superpowers (magical abilities, sci-fi tech, whatever), if used intelligently, completely break a fictional setting? I'm particularly interested in subtly game-breaking abilities.
The game-breaking consequences of mind control, time travel, and the power to steal other powers are all particularly obvious, but I'm interested in things like e.g. Eliezer pointing out that he had to seriously nerf the Unbreakable Vow in HPMOR to keep the entire story from being about that.
I seem to be able to do this with almost any power to various degrees. Including
ones I actually have, and ones that are common among humans. Any specifics you
had in mind?
Really, ANY ability will reroll some chaotic stuff and be a valuable asset
simply because it's rare. Even a debilitating curse, if rare and interesting
enough, can do things like be useful for research or provide unique perspectives
to be studied. So really, the only limit to where a power stops being useful is
where it's only useful to someone else controlling you.
Hence why anything properly rationalist that's not going to be largely about
breaking the setting must do something like give MANY people the ability so the
low-hanging fruit is already gone, or make it inherently mysterious and
unreplicable, or have some deliberate intelligence preventing it from getting
well known, or something like that.
0ChrisHallquist10y
We may be operating on different definitions of what it means to "break" a
setting. For example, how useful is flight by itself, really?
Many abilities seem potentially very useful if other people don't know you have
them, but become much less useful once you get found out:
* The "energy blast" type abilities common among wizards and superheroes are
not terribly combat useful in a setting with modern weapons. Their big use
would be assassination: slip past metal detectors and pat-downs, baffle
detectives... but once you get found out, no one wants you around and the
police know the otherwise inexplicable burning death was probably you.
* Mind-reading, if it has standard limitations on range, similarly becomes a
lot less useful once you get found out and banned from everywhere.
* Super-senses on a level comparable to what's already possible with
technology: lets you spy on people from a distance without anyone wondering
what you're doing with those binoculars... otherwise not so useful.
That's because those are among the worst possible ways to use those abilities.
The energy blasts as usually depicted break conservation of energy; with a bit of physics trickery you can get time travel out of that. Even if not, they make you an extremely portable and efficient energy source, perfect for a spaceship where mass is critical and a human needs to come along anyway but it doesn't matter in particular who since it's for PR reasons.
Mind reading is a means of communication that does not require cooperation or any abilities in the target, and can't be lied through. Communication with locked-in patients, interrogation, extraction of testimony from animals. And if you can find a way to precommit yourself, you also have fully reliable precommitment checking for everyone, lie detection for political promises, and the ultimate forensics tool.
If you combine the strengths of 2 kinds of system, you get something greater than the sum of its parts. So it is with human senses and digital sensors. The key here is bandwidth, and analysis. Sure, you can get all the same data onto a computer, but it won't do much good there. Someone with true super-senses as flexible and integrated as the... (read more)
Wait, explain that? What is "a bit of physics trickery" here?
I know in HPMOR, Harry points out that violations of conservation of mass in
magic imply FTL signaling, and I know from relativity FTL implies time travel,
but Harry doesn't even consider running off to get time travel from common
spells. Assumed turning the theory into practice would be far from
straightforward.
Heck, it's not even obvious to me how you turn FTL travel into time travel in
practice, if you don't have control over what frame of reference you're FTL in.
2[anonymous]10y
In the approximation of the True Laws Of Physics which is in use today --- The
Relativistic Standard Model --- FTL (and subsidiarily time travel) is nonsense.
Like, it's gibberish. It is a description of a situation which not only does not
happen, but which is a mathematical falsehood. It is impossible. It is like
violation of conservation of energy, or violation of entropic developments.
The maths plainly states that assumptions like that lead to a contradiction,
and we do in fact know that the Standard Model is complete and consistent (i.e.
cannot encode formulas as objects).
Any hypothetical scenarios involving time travel will invariably be contrived,
non-causal, and require classical mechanics. It is fiction; make of it what you
will.
1Armok_GoB10y
Oooh, that's even better: from a contradiction, anything can be proved. Thus, if
we can break conservation of energy we gain literal omnipotence.
2[anonymous]10y
... I am pretty sure that is not how it works.
1So8res10y
To make a time machine out of an FTL drive, simply travel somewhere at near
lightspeed and FTL back. Now you're where you started, before you left.
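For anyone who wants the mechanics spelled out, here is a minimal special-relativity sketch of the standard two-signal construction, under the idealizing assumption that "FTL" means a signal that is instantaneous in the sender's rest frame (note that this needs FTL in two different frames, which is exactly the frame-of-reference caveat raised a few comments up):

    % Alice is at rest in frame S at x = 0. Bob recedes at speed v and is at
    % distance d when Alice's instantaneous-in-S signal reaches him at time t_1,
    % i.e. at the event (t_1, d). Bob replies with a signal instantaneous in his
    % own frame S', which in S runs along a line of constant
    % t' = \gamma (t - vx/c^2). Where that line meets Alice's worldline (x = 0):
    \gamma t_A = \gamma \left( t_1 - \frac{v d}{c^2} \right)
    \quad\Longrightarrow\quad
    t_A = t_1 - \frac{v d}{c^2} \;<\; t_1 .
    % Alice receives the reply strictly before she sent her message, for any
    % v > 0 and d > 0.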
0CAE_Jones10y
Bit over a decade ago, when I was still a naive wild-eyed idealist blissfully
unaware of anything realistic about people at all, I set up the foundations for
most of my sci fi/fantasy/etc fictions. The piece of supertech I immediately had
to nerf was technology based around a form of space compression--somewhere
between the MCron Crystal (Marvel) and Dynocaps (Dragonball). And today, I'm
still coming up with more reasons to nerf it heavily, to the extent that the
civilization based around this technology must certainly have had a shady
council of vagueness who developed a limited-scope AI to control production and
block any of the scarier uses. This after I tied it to FTL capabilities (I'm
trying very hard to prevent this from turning into timetravel, not that I'm too
afraid to abuse that option for a crisis crossover or something).
It took me much longer to realize how much nerfing of powers unrelated to the
above tech I needed to do, partially because I was trying to avoid breaking the
laws of physics too horribly (this universe runs on a sort of mangling of GR and
M-Theory that assumes fundamental forces each get something resembling a
dimension, and that attempts to break the light-speed barrier have lots of
wide-reaching consequences).
A few things I've had to seriously remodel (mostly for the sake of in-universe
physics not breaking, though I'm sure there are obvious uses/flaws here I've
missed):
* Light manipulation. The version I started out with was written as superweight
(tvtropes) [http://tvtropes.org/pmwiki/pmwiki.php/Main/SuperWeight] type IV,
when the applications I had in mind were more indicative of a potential V or
VI. Since tried to tone down so that this character hovers between II and III
(the distinction is a little vague), but I'm sure I'm still underutilizing
this character even with all the restrictions I've added. (Invisibility,
holograms, filtering specific wavelengths to minimize radiation risks,
screwing with radi
Does anyone here have any serious information regarding Tulpas? When I first heard of them they immediately seemed to be the kind of thing that is obviously and clearly a very bad idea, and may not even exist in the sense that people describe them. A very obvious sign of a person who is legitimately crazy, even.
Naturally, my first reaction is the desire to create one myself (one might say I'm a bit contrarian by nature). I don't know any obvious reason not to (ignoring social stigma and the time-consuming initial investment), and there may be some advantag... (read more)
Have you read the earlier discussions on this topic
[https://www.google.com/#q=site%3Ahttp%3A%2F%2Flesswrong.com+Tulpa&safe=off]?
2Joshua_Blaine10y
I had not, actually. The link you've given just links me to Google's homepage,
but I did just search LW for "Tulpa" and found it fine, so thanks regardless.
edit: The link's original purpose now works for me. I'm not sure what the
problem was before, but it's gone now.
4Ishaan10y
There's tons of easily discovered information on the web about it.
I'm not sure the Tulpa-crowd would agree with this, but I think a non-esoteric
example of Tulpas in everyday life is how some religious people say that God
[http://news.stanford.edu/news/2012/april/conversations-with-god-041212.html]
really speaks and appears to them. The "learning process" and stuff seem pretty
similar - the only difference I can see is that in the case of Tulpas it is
commonly acknowledged that the phenomenon is imaginary.
Come to think of it, that's probably a really good method for creating Tulpas
quickly - building off a real or fictional character for whom you already have a
relatively sophisticated mental model. It's probably also important that you are
predisposed to take seriously the notion that this thing might actually be an
agent which interacts with you...which might be why God works so well, and why
the Tulpa-crowed keeps insisting that Tulpas are "real" in the sense that they
carry moral weight. It's an imagination-belief driven phenomenon.
It might also illustrate some of the "dangers" - for example, some people who
grew up with notions of the angry sort of God might always feel guilty about
certain "sinful" things which they might not intellectually feel are bad.
I've also heard claims of people who gain extra abilities / parallel processing
/ "reminders" with Tulpas....basically, stuff that they couldn't do on their
own. I don't really believe that this is possible, and if this were demonstrated
to me I would need to update my model of the phenomenon. To the
tupla-community's credit, they seem willing
[http://tulpo.deviantart.com/art/Tulpa-Parallel-Processing-Tests-v1-0-366728259]
to test the belief.
0[anonymous]10y
Very good! A psychologist who studies evangelicals recognized it as the same
phenomenon.
[http://www.nytimes.com/2013/10/15/opinion/luhrmann-conjuring-up-our-own-gods.html?_r=0]
There is pretty good empirical evidence
[https://docs.google.com/file/d/0B-IrwpisjguaZHd4MnRuSV9SQWM/edit?usp=sharing]
against the parallel-processing idea now.
4Tenoke10y
What is stopping me is the possibility that I will be potentially permanently
relinquishing cognitive resources for the sake of the Tulpa.
4Armok_GoB10y
I've been doing some research (mainly hanging on their subreddit) and I think I
have a fairly good idea of how tulpas work and the answers to your questions.
There are a myriad very different things tulpas are described as and thus
"tulpas exist in the way people describe them" is not well defined.
There indisputably exist SOME specific interesting phenomena that are the
referent of the word Tulpa.
I estimate a well-developed tulpa's moral status to be similar to that of a
newborn infant, late-stage Alzheimer's victim, dolphin, or beloved family pet
dog.
I estimate its ontological status to be similar to a video game NPC, recurring
dream character, or schizophrenic hallucination.
I estimate its power over reality to be similar to a human (with lower
intelligence than their host) locked in a box and only able to communicate with
one specific other human.
It does not seem that deciding to make a tulpa is a sign of being crazy. Tulpas
themselves seem not to be automatically unhealthy and can often help their host
overcome depression or anxiety. However, there are many signs that the act of
making a tulpa is dangerous and can trigger latent tendencies or be easily done
in a catastrophically wrong way. I estimate the risk is similar to doing
extensive meditation or taking a single largish dose of LSD. For this reason I
have not attempted and will not attempt making one.
I am too lazy to find citations or examples right now, but I probably could. I've
tried to be a good rationalist and am fairly certain of most of these claims.
3NancyLebovitz10y
Has anyone worked on making a tulpa which is smarter than they are? This seems
at least possible if you assume that many people don't let themselves make full
use of their intelligence and/or judgement.
5Armok_GoB10y
Unless everything I think I understand about tulpas is wrong, this is at the
very least significantly harder than just thinking yourself smarter without one.
All the idea generating is done before credit is assigned to either the "self"
or the "tulpa".
What there ARE several examples of, however, are tulpas that are more emotionally
mature, better at luminosity, and don't share all their host's preconceptions.
This is not exactly smarts though, or even general-purpose formal rationality.
One CAN imagine scenarios where you end up with a tulpa smarter than the host.
For example, the host might have learned helplessness, or the tulpa might be
imagined as "smarter than me" and thus all the brain's good ideas get credited to
it.
Disclaimer: this is based only on lots of anecdotes I've read, gut feeling, and
basic stuff that should be common knowledge to any LWer.
I'm reminded of many years ago, a coworker coming into my office and asking me a question about the design of a feature that interacts with our tax calculation.
So she and I created this whole whiteboard flowchart working out the design, at the end of which I said "Hrm. So, at a high level, this seems OK. That said, you should definitely talk to Mark about this, because Mark knows a lot more about the tax code than I do, and he might see problems I missed. For example, Mark will probably notice that this bit here will fail when $condition applies, which I... um... completely failed to notice?"
I could certainly describe that as having a "Mark" in my head who is smarter about tax-code-related designs than I am, and there's nothing intrinsically wrong with describing it that way if that makes me more comfortable or provides some other benefit.
But "Mark" in this case would just be pointing to a subset of "Dave", just as "Dave's fantasies about aliens" does.
See also 'rubberducking' and previous discussions of this on LW. My basic theory
is that reasoning was developed for adversarial purposes, and by rubberducking
you are essentially roleplaying as an 'adversary' which triggers deeper
processing (if we ever get brain imaging of system I vs system II thinking, I'd
expect that adversarial thinking triggers system II more compared to 'normal'
self-centered thinking).
5TheOtherDave10y
Yes. Indeed, I suspect I've told this story before on LW in just such a
discussion.
I don't necessarily buy your account -- it might just be that our brains are
simply not well-integrated systems, and enabling different channels whereby
parts of our brains can be activated and/or interact with one another (e.g.,
talking to myself, singing, roleplaying different characters, getting up and
walking around, drawing, etc.) gets different (and sometimes better) results.
This is also related to the circumlocution strategy for dealing with aphasia.
2Kaj_Sotala9y
Obligatory link
[http://www.umiacs.umd.edu/~horty/courses/readings/mercier-sperber-2011.pdf].
1Armok_GoB10y
Yea in that case presumably the tulpa would help - but not necessarily
significantly more than such a non-tulpa model that requires considerably less
work and risk.
Basically, a tulpa can technically do almost anything you can... but the absence
of a tulpa can do them too, and for almost all of them there's some much easier
and at least as effective way to do the same thing.
0ChristianKl10y
Mental processes like waking up without an alarm clock at a specific time aren't
easy. I know a bunch of people who have that skill, but it's not like there's a
step-by-step manual that you can easily follow that gives you that ability.
A tulpa can do things like that. There are many mental processes that you can't
access directly but that a tulpa might be able to access.
1Armok_GoB10y
I am surprised to hear there isn't such a step-by-step manual, suspect that
you're wrong about there not being one, and in either case know a few
people who could probably easily write one if motivated to do so.
But I guess you could make this argument: that a tulpa is more flexible and has
a simpler user interface, even if it's less powerful and has a bunch of
logistical and moral problems. I don't like it, but I can't think of any
counterarguments other than it being lazy and unaesthetic, and that the kind of
meditative people who make tulpas should not be the kind to take this easy way
out.
3ChristianKl10y
My point isn't so much that it's impossible but that it isn't easy.
Creating a mental device that only wakes me up will be easier than creating a
whole Tulpa, but once you do have a Tulpa you can reuse it a lot.
Let's say I want to practice Salsa dance moves at home. Visualising a full dance
partner completely just for the purpose of having a dance partner at home
wouldn't be worth the effort.
I'm not sure about how much you gain by pair programming with a Tulpa, but the
Tulpa might be useful for that task.
It takes a lot of energy to create it the first time but afterwards you reap the
benefits.
Tulpa creation involves quite a lot of effort so it doesn't seem the lazy road.
0Armok_GoB10y
Hmm, you have a point, I hadn't thought about it that way. If it wasn't so
dangerous I would have asked you to experiment.
0hesperidia9y
I do not have "wake up at a specific time" ability, but I have trained myself to
have "wake up within ~1.5 hours of the specific time" ability. I did this over a
summer break in elementary school because I learned about how sleep worked and
thought it would be cool. Note that you will need to have basically no sleep
debt [https://en.wikipedia.org/wiki/Sleep_debt] (you consistently wake up
without an alarm) for this to work correctly.
The central point of this method is this: a sleep cycle (the time it takes to go
from a light stage of sleep to the deeper stages of sleep and back again) is
about 1.5 hours long. If I am not under stress or sleep debt, I can estimate my
sleeping time to the nearest sleep cycle. Using the sleep cycle as a unit of
measurement lets me partition out sleep without being especially reliant on my
(in)ability to perceive time.
The way I did it is this (each step was done until I could do it reliably, which
took up to a week each for me [but I was a preteen then, so it may be different
for adults]):
1. Block off approximately 2 hours (depending on how long it takes you to fall
asleep), right after lunch so it has the least danger of merging with your
consolidated/night sleep, and take a nap. Note how this makes you feel.
2. Do that again, but instead of blocking off the 2 hours with an alarm clock,
try doing it naturally, and awakening when it feels natural, around the 1.5h
mark (repeating this because it is very important: you will need to have
very little to no accumulated sleep debt for this to work). Note how this
makes you feel.
3. Do that again, but with a ~3.5-hour block. Take two 1.5 hour sleep cycle
naps one after another (wake up in between).
4. During a night's sleep, try waking up between every sleep cycle. Check this
against [your sleep time in hours / 1.5h per sleep cycle] to make sure that
you caught all of them.
5. Block off a ~3.5 hour nap and try taking it as two sleep cycles wit
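The arithmetic behind the method above is simple enough to sketch. This is only a toy calculation, assuming the ~1.5 hour cycle described above plus a ~15 minute time-to-fall-asleep that is my own placeholder number; both vary from person to person:

    # Toy wake-time calculator for the ~1.5 h sleep-cycle heuristic above.
    from datetime import datetime, timedelta

    CYCLE = timedelta(minutes=90)      # assumed average sleep-cycle length
    LATENCY = timedelta(minutes=15)    # assumed time needed to fall asleep

    def wake_targets(bedtime, max_cycles=6):
        """Candidate wake-up times, one per completed sleep cycle."""
        sleep_start = bedtime + LATENCY
        return [sleep_start + n * CYCLE for n in range(1, max_cycles + 1)]

    for t in wake_targets(datetime(2013, 11, 4, 23, 0)):
        print(t.strftime("%H:%M"))  # 00:45, 02:15, 03:45, 05:15, 06:45, 08:15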
0TheOtherDave10y
Yes, I would expect this.
Indeed, I'm surprised by the "almost" -- what are the exceptions?
0Armok_GoB10y
Anything that requires you using your body and interacting physically with the
world.
0TheOtherDave10y
I'm startled. Why can't a tulpa control my body and interact physically with the
world, if it's (mutually?) convenient for it to do so?
0Armok_GoB10y
Well, if you consider that to be the tulpa doing it on its own, then no, I can't
think of any specific exceptions. Most tulpas can't do that trick, though.
3TheOtherDave10y
Well, let me put it this way: suppose my tulpa composes a sonnet (call that
event E1), recites that sonnet using my vocal cords (E2), and writes the sonnet
down using my fingers (E3).
I would not consider any of those to be the tulpa doing something "on its own",
personally. (I don't mean to raise the whole "independence" question again, as I
understand you don't consider that very important, but, well, you brought it
up.)
But if I were willing to consider E1 an example of the tulpa doing something on
its own (despite using my brain) I can't imagine a justification for not
considering E2 and E3 equally well examples of the tulpa doing something on its
own (despite using my muscles).
But I infer that you would consider E1 (though not E2 or E3) the tulpa doing
something on its own. Yes?
So, that's interesting. Can you expand on your reasons for drawing that
distinction?
1Armok_GoB10y
I feel like I'm tangled up in a lot of words and would like to point out that
I'm not an expert and don't have a tulpa, I just got the basics from reading
lots of anecdotes on reddit.
You are entirely right here - although I'd like to point out most tulpas wouldn't
be able to do E2 and E3, independent or not. Also, something like
"composing a sonnet" is probably more the kind of thing brains do when their
resources are dedicated to it by identities, not something identities do, and
tulpas are mainly just identities. But I could be wrong both about that and about
what kind of activity sonnet composing is.
0TheOtherDave10y
Interesting! OK, that's not a distinction I'd previously understood you as
making.
So, what do identities do, as distinct from what brains can be directed to do?
(In my own model, FWIW, brains construct identities in much the same way brains
compose sonnets.)
0Armok_GoB10y
I guess I basically think of identities as user accounts, in this case. I just
grabbed the closest fitting language dichotomy to "brain" (which IS referring to
the physical brain), and trying to define it further now will just lead to
overfitting, especially since it almost certainly varies far more than either of
us expect (due to the typical mind fallacy) from brain to brain.
And yea, brains construct identities the same way they construct sonnets. And
just like music it can be small (jingle, minor character in something you write)
or big (long symphony, Tulpa). And identities compose sonnets only slightly more
than sonnets create identities.
It's all just mental content that can be composed, remixed, deleted, executed,
etc. Now, brains have a strong tendency, in the lack of an identity, to create
one and give it root access, and this identity ends up WAY more developed and
powerful than even the most ancient and powerful tulpas, but there is probably
no or very little qualitative difference.
There are a lot of confounding factors. For example, something that I consider
impossibly absurd seems to be the norm for most humans: considering their
physical body as a part of "themselves" and feeling as if they are violated if
their body is. Put in their perspective, it's not surprising most people can't
disentangle parts of their own brain(s), mind(s), and identities without
meditating for years until they get it shoved in their face via direct
perception, and even then probably often get it wrong. Although I guess my
illness has shoved it in my face just as anviliciously.
Disclaimer: I got tired trying to put disclaimers on the dubious sources on each
individual sentence, so just take it with a grain of salt OK and don't assume I
believe everything I say in any persistent way.
0TheOtherDave10y
OK... I think I understand this. And I agree with much of it.
Some exceptions...
I don't think I understand what you mean by "root access" here. Can you give me
some examples of things that an identity with root access can do, that an
identity without root access cannot do?
This is admittedly a digression, but for my own part, treating my physical body
as part of myself seems no more absurd or arbitrary to me than treating my
memories of what I had for breakfast this morning as part of myself, or my
memories of my mom, or my inability to juggle. It's kind of absurd, yes, but all
attachment to personal identity is kind of absurd. We do it anyway.
All of that said... well, let me put it this way: continuing the sonnet analogy,
let's say my brain writes a sonnet (S1) today and then writes a sonnet (S2)
tomorrow. To my way of thinking, the value-add of S2 over and above S1 depends
significantly on the overlap between them. If the only difference is that S2
corrects a mis-spelled word in S1, for example, I'm inclined to say that
value(S1+S2) = value(S2) ~= value(S1).
For example, if S1 -> S2 is an improvement, I'm happy to discard S1 if I can
keep S2, but I'm almost as happy to discard S2 if I can keep S1 -- while I do
have a preference for keeping S2 over keeping S1, it's noise relative to my
preference for keeping one of them over losing both.
I can imagine exceptions to the above, but they're contrived.
So, the fix-a-misspelling case is one extreme, where the difference between S1
and S2 is very small. But as the difference increases, the value(S1+S2) =
value(S2) ~= value(S1) equation becomes less and less acceptable. At the other
extreme, I'm inclined to say that S2 is simply a separate sonnet, which was
inspired by S1 but is distinct from it, and value(S1+S2) ~= value(S2) +
value(S1).
And those extremes are really just two regions in a multidimensional space of
sonnet-valuation.
Does that seem like a reasonable way to think about sonnets? (I don't mean is i
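One way to make that interpolation explicit - purely a toy model of my own, not anything claimed in the thread - is to discount the smaller work's contribution by how much the two overlap:

    # Toy model: value of keeping both sonnets, discounted by their overlap.
    # overlap = 1.0 ~ S2 is S1 with a spelling fix; overlap = 0.0 ~ S2 is a
    # separate sonnet merely inspired by S1.
    def combined_value(v1: float, v2: float, overlap: float) -> float:
        assert 0.0 <= overlap <= 1.0
        return max(v1, v2) + (1.0 - overlap) * min(v1, v2)

    print(combined_value(10, 10, 1.0))  # 10.0 -> value(S1+S2) ~= value(S1)
    print(combined_value(10, 10, 0.0))  # 20.0 -> value(S1+S2) ~= value(S1)+value(S2)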
0Armok_GoB10y
Root access was probably too metaphorical a choice of words. Is "skeletal
musculature privileges" clearer?
All those things like memories or skillsets you list as part of identity do
seem weird, but even irrelevant software is not nearly as weird as specific
hardware. I mean, seriously, attaching significance to specific atoms? Wut? But
of course, I know it's really me that's weird and most humans do it.
I agree with what you say about sonnets; it's very well put, in fact. And yes,
identities do follow the same rules. I'm trying to come up with fitting tulpa
stuff in the metaphor. It doesn't really work, though, because I don't know
enough about it.
This is getting a wee bit complicated, and I think we're starting to reach the
point where we have to dissolve the classifications and actually model things in
detail on continuums, which means more conjecture and guesswork, less data, and
what data we have being less relevant. We've been working mostly in metaphors
that don't really go this far without breaking down. Also, since we're getting
into more and more detail, the stuff we are examining is likely to be drowned
out in the differences between brains, and the conversation to turn into
nonsense due to the typical mind fallacy.
As such, I am unwilling to widely spout what's likely to end up half nonsense,
at least publicly. Contact me by PM if you're really all that interested in
getting my working model of identities and mental bestiary.
2TheOtherDave10y
Would you classify a novel in the same "moral-status" tier as these four
examples?
0Armok_GoB10y
No, that's much, much lower. As in, "torturing a novel for decades in order to
give a tulpa a quick amusement would be a moral thing to do" lower.
Assuming you mean either a physical book or the simulation of the average minor
character in the author's mind, here. Main characters or RPed PCs can vary a lot
in complexity of simulation from author to author, and there's a theory that
some become effectively tulpas.
0TheOtherDave10y
Your answer clarifies what I was trying to get at with my question but wasn't
quite sure how to ask, thanks; my question was deeply muddled.
For my own part, treating a tulpa as having the moral status of an independent
individual distinct from its creator seems unjustified. I would be reluctant to
destroy one because it is the unique and likely-unreconstructable creative
output of a human being, much like I would be reluctant to destroy a novel
someone had written (as in, erase all copies of such that the novel itself no
longer exists), but that's about as far as I go.
I didn't mean a physical copy of a novel, sorry that wasn't clear.
Yes, destroying all memory of a character someone played in an RPG and valued
remembering I would class similarly.
But all of these are essentially property crimes, whose victim is the creator of
the artwork (or more properly speaking the owner, though in most cases I can
think of the roles are not really separable), not the work of art itself.
I have no idea what "torture a novel" even means, it strikes me as a category
error on a par with "paint German blue" or "burn last Tuesday".
1Armok_GoB10y
Ah. No, I think you'd change your mind if you spent a few hours talking to
accounts that claim to be tulpas.
A newborn infant or Alzheimer's patient is not an independent individual
distinct from its caretaker either. Do you count their destruction as property
crime as well? "Person"-ness is not binary; it's not even a continuum. It's a
cluster of properties that usually correlate but in the case of tulpas do not.
I recommend re-reading Diseased Thinking.
As for your category error: /me argues for how German is a depressing language
and spends all that was gained in that day on something that will not last. Then
a pale-green tulpa
[http://en.wikipedia.org/wiki/Colorless_green_ideas_sleep_furiously] snores in
an angry manner.
[http://3.bp.blogspot.com/_kzW2Xjzh-Wc/SRtrl7AqdOI/AAAAAAAAAXQ/nRLtfprd-kQ/s1600/Colorless+green+ideas.jpg]
2A1987dM10y
I picture a sheet of paper with a paragraph in each of several languages, a
paintbrush, and watercolours. Then boring-sounding environmental considerations
make me feel outraged without me consciously realizing what's happening.
-1TheOtherDave10y
I agree that person-ness is cluster of properties and not a binary.
I don't believe that tulpas possess a significant subset of those properties
independent of the person whose tulpa they are.
I don't think I'm failing to understand any of what's discussed in Diseased
Thinking. If there's something in particular you think I'm failing to
understand, I'd appreciate you pointing it out.
It's possible that talking to accounts that claim to be tulpas would change my
mind, as you suggest. It's also possible that talking to bodies that claim to
channel spirit-beings or past lives would change my mind about the existence of
spirit-beings or reincarnation. Many other people have been convinced by such
experiences, and I have no especially justified reason to believe that I'm
relevantly different from them.
Of course, that doesn't mean that reincarnation happens, nor that spirit-beings
exist who can be channeled, or that tulpas possess a significant subset of the
properties which constitute person-ness independent of the person whose tulpa
they are.
Eh?
I can take a newborn infant away from its caretaker and hand it to a different
caretaker... or to no caretaker at all... or to several caretakers. I would say
it remains the same newborn infant. The caretaker can die, and the newborn
infant continues to live; and vice-versa.
That seems to me sufficient justification (not necessary, but sufficient) to
call it an independent individual.
Why do you say it isn't?
I count it as less like a property crime than destroying a tulpa, a novel, or an
RPG character. There are things I count it as more like a property crime than.
1Armok_GoB10y
Seems I was wrong about you not understanding the word thing. Apologies.
You keep saying that word "independent". I'm starting to think we might not
disagree about any objective properties of tulpas, just about whether things
need to be "independent" (or only the most important ones) to count towards your
utility, whereas I just add up the identifiable patterns without caring whether
they overlap. Metaphor: tulpas are "10101101"; you're saying "101"
occurs 2 times, I'm saying "101" occurs 3 times.
I'm fairly certain talking to bodies that claim those things would not change my
probability estimates on those claims unless powerful brainwashing techniques
were used, and I certainly hope the same is the case for you. If I believed that
doing that would predictably shift my beliefs I'd already have those beliefs.
Conservation of Expected Evidence.
((You can move a tulpa between minds too, probably, it just requires a lot of
high-tech, unethical surgery, and work. And it probably gives the old host
permanent severe brain damage. Same as with any other kind of incommunicable
memory.))
0TheOtherDave10y
(shrug) Well, I certainly agree that when I interact with a tulpa, I am
interacting with a person... specifically, I'm interacting with the person whose
tulpa it is, just as I am when I interact with a PC in an RPG.
What I disagree with is the claim that the tulpa has the moral status of a
person (even a newborn person) independent of the moral status of the person
whose tulpa it is.
On what grounds do you believe that? As I say, I observe that such experiences
frequently convince other people; without some grounds for believing that I'm
relevantly different from other people, my prior (your hopes notwithstanding) is
that they stand a good chance of convincing me too. Ditto for talking to a
tulpa.
(shrug) I don't deny this (though I'm not convinced of it either) but I don't
see the relevance of it.
1Armok_GoB10y
Yea this seems to definitely be just a fundamental values conflict. Let's just
end the conversation here.
0ChristianKl10y
What do you think about the moral status of torturing an uploaded human mind
that's in silicon?
Does that mind have a different moral status than one in a brain?
1TheOtherDave10y
Certainly not by virtue of being implemented in silicon, no. Why do you ask?
1hylleddin10y
As someone with personal experience with a tulpa, I agree with most of this.
I agree with the last two, but I think a video game NPC has a different
ontological status than any of those. I also believe that schizophrenic
hallucinations and recurring dream characters (and tulpas) can probably cover a
broad range of ontological possibilities, depending on how "well-realized" they
are.
I have no idea what a tulpa's moral status is, besides not less than a fictional
character and not more than a typical human.
I would expect most of them to have about the same intelligence, rather than
lower intelligence.
0Armok_GoB10y
You are probably counting more properties things can vary under as
"ontological". I'm mostly doing a software vs. hardware, need to be puppeteered
vs. automatic, and able to interact with environment vs. stuck in a simulation,
here.
I'm basing the moral status largely on "well realized", "complex", and
"technically sentient" here. You'll notice all my examples ALSO have the actual
utility function multiplier at "unknown".
Most tulpas probably have almost exactly the same intelligence as their host,
but not all of it stacks with the host's, and thus counts towards its power over
reality.
2hylleddin10y
Ah. I see what you mean. That makes sense.
2ChristianKl10y
Tulpa creation is effectively the creation of a form of sentient AI that runs
on the hardware of your brain instead of silicon.
That brings up a moral question: to what extent is it immoral to create a Tulpa
and have it be in pain?
Tulpas are supposed to suffer from not getting enough attention, so if you can't
commit to giving it a lot of attention for the rest of your life, you might
commit an immoral act by creating it.
1Armok_GoB10y
Just some facts, without getting entangled in the argument: in anecdotes,
tulpas seem to report more abstract and less intense types of suffering than
humans. The by far dominant source of suffering in tulpas seems to be empathy
with the host. The suffering from not getting enough attention is probably fully
explainable by loneliness, and sadness over fading away and losing the ability
to think and do things.
0Vulture10y
This is very useful information if true. Could you link to some of the anecdotes
which you draw this from?
0Armok_GoB10y
Look around yourself on http://www.reddit.com/r/Tulpas/
[http://www.reddit.com/r/Tulpas/] or ask some yourself on the various IRC rooms
that can be reached from there. I only have vague memories built from threads
buried months back on that subreddit.
0Lumifer10y
No, I don't think so. It's notably missing the "artificial" part of AI.
I think of tulpa creation as splitting off a shard of your own mind. It's still
your own mind, only split now.
0Vulture10y
I think the really relevant ethical question is whether a tulpa has a separate
consciousness from its host. From my own researches in the area (which have been
very casual, mind you), I consider it highly unlikely that they have separate
consciousness, but not so unlikely that I would be willing to create a tulpa and
then let it die, for example.
In fact, my uncertainty on this issue is the main reason I am ambivalent about
creating a tulpa. It seems like it would be very useful: I solve problems much
better when working with other people, even if they don't contribute much; a
tulpa more virtuous than myself could be a potent tool for self-improvement; it
could help ameliorate the "fear of social isolation" obstacle to potential
ambitious projects; I would gain a better understanding of how tulpas work; I
could practice dancing and shaking hands more often; etc. etc. But I worry about
being responsible for what may be (even with only ~15% subjective probability) a
conscious mind, which will then literally die if I don't spend time with it
regularly (ref [http://www.reddit.com/r/Tulpas/wiki/faq]).
0TheOtherDave10y
Just to clarify this a little... how many separate consciousnesses do you
estimate your brain currently hosts?
0Vulture10y
By my current (layman's) understanding of consciousness, my brain currently
hosts exactly one.
0TheOtherDave10y
OK, thanks.
0ChristianKl10y
It's not your normal mind, so it's artificial for ethical considerations.
As far as I've read stuff written by people with Tulpas, they treat them as
entities whose desires matter.
1Vulture10y
This might be a stupid question, but what ethical considerations are different
for an "artificial" mind?
0ChristianKl10y
When talking about AGI few people label it as murder to shut down the AI that's
in the box. At least it's worth a discussion whether it is.
2A1987dM10y
Only if it's not sapient [http://lesswrong.com/lw/x7/cant_unbirth_a_child/],
which is a non-trivial question
[http://lesswrong.com/lw/x4/nonperson_predicates/].
2Vulture10y
Wow, I had forgotten about that non-person predicates post. I definitely never
thought it would have any bearing on a decision I personally would have to make.
I was wrong.
0Vulture10y
Really? I was under the impression that there was a strong consensus, at least
here on LW, that a sufficiently accurate simulation of consciousness is the
moral equivalent of consciousness.
0ChristianKl10y
"Sufficiently accurate simulation of consciousness" is a subset of set of things
that are artificial minds. You might have a consensus for that class. I don't
think you have an understanding that all minds have the same moral value. Even
all minds with a certain level of intelligence.
0Vulture10y
At least for me, personally, the relevant property for moral status is whether
it has consciousness.
0TheOtherDave10y
That's my understanding as well.... though I would say, rather, that being
artificial is not a particularly important attribute towards evaluating the
moral status of a consciousness. IOW, an artificial consciousness is a
consciousness, and the same moral considerations apply to it as other
consciousnesses with the same properties. That said, I also think this whole "a
tulpa {is,isn't} an artificial intelligence" discussion is an excellent example
of losing track of referents in favor of manipulating symbols, so I don't think
it matters much in context.
0Lumifer10y
I don't find this argument convincing.
Yes, and..?
Let me quote William Gibson here:
Addictions ... started out like magical pets, pocket monsters. They did
extraordinary tricks, showed you things you hadn't seen, were fun. But came,
through some gradual dire alchemy, to make decisions for you. Eventually, they
were making your most crucial life-decisions. And they were ... less intelligent
than goldfish.
0ChristianKl10y
There's a good chance that you will still hold that belief when you interact
with the Tulpa on a daily basis. As such, it makes sense to think about the
implications of the whole affair before creating one.
1Lumifer10y
I still don't see what you are getting at. If I treat a tulpa as a shard of my
own mind, of course its desires matter, it's the desires of my own mind.
Think of having an internal dialogue with yourself. I think of tulpas as a
boosted/uplifted version of a party in that internal dialogue.
2IlyaShpitser10y
Well, if you think that the human illusion of unified agency is a good ideal to
strive for, it then seems that messing around w/ tulpas is a bad thing. If you
have really seriously abandoned that ideal (very few people I know have), then
knock yourself out!
2Vulture10y
Why would it be considered important to maintain a feeling of unified agency?
1IlyaShpitser10y
Is this a serious question? Everything in our society, from laws to social
conventions, is based on unified agency.
The consequentialist view of rationality as expressed here seems to be based on
the notion of unified agency of people (the notion of a single utility function
is only coherent for unified agents).
--------------------------------------------------------------------------------
It's fine if you don't want to maintain unified agency, but it's obviously an
important concept for a lot of people. I have not met a single person who truly
has abandoned this concept in their life, interactions with others, etc. The
conventional view is someone without unified agency has demons to be cast out
("my name is Legion, for we are many.")
0Vulture10y
By "agency", are you referring to physical control of the body? As far as I can
tell, the process of "switching" (allowing the tulpa to control the host's body
temporarily) is a very rare process which is a good deal more difficult than
just creating a tulpa, and which many people who have tulpas cannot do at all
even if they try.
0Vulture10y
Welp, look at that, I just found this thread after finishing up a long comment
on the subject in an older thread
[http://lesswrong.com/lw/h9b/post_ridiculous_munchkin_ideas/a0y1]. Go figure.
(By the way, I do recommend reading that entire discussion, which included some
actual tulpas chiming in).
-2Lumifer10y
A fairly obvious reason is that to generate a tulpa you need to screw up your
mind in a sufficiently radical fashion. And once you do that, you may not be
able to unfuck it back to normal.
I vaguely recall (sorry, no link) reading a post by a psychiatrist who said that
creating tulpas is basically self-induced schizophrenia. I don't think
schizophrenia is fun.
4Adele_L10y
This is a concern I share. However...
This is the worst argument in the world [http://lesswrong.com/lw/e95/].
-3Lumifer10y
I don't think so, it can be rephrased tabooing emotional words. I am not trying
to attach some stigma of mental illness, I'm pointing out that tulpas are
basically a self-inflicted case of what the medical profession calls
dissociative identity disorder
[http://en.wikipedia.org/wiki/Dissociative_identity_disorder] and that it has
significant mental costs.
0Kaj_Sotala9y
Taylor et al. claim [http://www.klkblake.com/~kyle/IIA.pdf] that although people
who exhibit the illusion of independent agency do score higher than the
population norm on a screening test
[http://en.wikipedia.org/wiki/Dissociative_Experiences_Scale] of dissociative
symptoms, the profile on the most diagnostic items is different from DID
patients, and scores on the test do not predict IIA:
0ChristianKl10y
Could you describe the relevant mental costs that you would expect as a
side effect of creating a tulpa?
-1Lumifer10y
Loss of control over your mind.
2ChristianKl10y
What does that mean?
-1Lumifer10y
An entirely literal reading of that phrase.
0ChristianKl10y
So you mean that you are something that's separate from your mind? If so, what's
you and how does it control the mind?
1Lumifer10y
Your mind is a very complicated entity. It has been suggested that looking at it
as a network (or an ecology) of multiple agents is a more useful view than
thinking about it as something monolithic.
In particular, your reasoning consciousness is very much not the only agent in
your mind and is not the only controller. An early example of such analysis is
Freud's distinction between the id, the ego, and the superego.
Usually, though, your conscious self has sufficient control in day-to-day
activities. This control breaks down, for example, under severe emotional
stress. Or it can be subverted (cf. problems with maintaining diets). The point
is that it's not absolute and you can have more of it or less of it. People with
less are often described as having "poor impulse control" but that's not the
only mode. Addiction would be another example.
So what I mean here is that the part of your mind that you think of as "I", the
one that does conscious reasoning, will have less control over yourself.
1ChristianKl10y
So you mean having less willpower and impulse control?
1Lumifer10y
Not only that; I mean a wider loss of control.
For example, someone who is having hallucinations is usually powerless to stop
them. She has lost control, and it's not exactly an issue of willpower.
If you're scared, your body dumps a lot of adrenaline into your blood and you
are shaking, your hands are trembling, and you can't think straight. You're on
the verge of losing control, and again it's not really a matter of controlling
your impulses.
0Vulture10y
My understanding is that in the case of tulpas, the hallucinations are voluntary
and can be stopped and started at will.
Only if I heard particularly good things about it.
Most creative endeavors you could undertake have a very small chance of leading to external reward, even the validation of people reading/watching/playing them - there's simply too much content available these days for people to read yours. So I'd advise against making such a thing, unless you find making it to be rewarding enough in itself.
What do you hope to achieve? Making money through selling the game? Artistic
expression? Pushing memes?
9CronoDAS10y
My underlying motivation is to feel better about myself. I feel that my life so
far has lacked meaningful achievements. Pushing memes is a side benefit.
I do not expect to make money by selling the game, but if I do manage to make
something that turns out to be pretty good, I think it would be a big help in
getting a job in the video game industry.
3Protagoras10y
I've played several RPG maker games made by amateurs. Some of them seemed to
have significant followings, though I wasn't interested enough to make a serious
effort to estimate the numbers, since I wasn't the creator. What kind of game
were you thinking of making?
I have a game I've been fantasizing about and I think I could make it work. It has to be a game, not a story, because I want to pull a kind of trick on the player. It's not that unusual in fiction for a character to start out on the side of the "bad guys", have a realization that his side is the one that's bad, and then go on to save the day. (James Cameron's Avatar is a recent example.) I want to start the player out on the side of bad guys that appear good, as in Eliezer's short story "The Sword of Good", and then give the player the opportunity to fail to realize that he's on the wrong side. There would be two main story branches: a default one, and one that the player can only get to by going "off-script", as it were, and not going along with what it seems like you have to do to continue the story. (At the end of the default path, the player would be shown a montage of the times he had the chance to do the right thing, but chose not to.)
The actual story would be something like the anti-Avatar; a technological civilization is encroaching on a region inhabited by magic-using, nature-spirit-worshiping nomads. The nature spirits are EVIL (think: "nature, red in tooth and claw") and resort to more and more drastic measures to try to hold back the technological civilization, in which people's lives are actually much better.
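A sketch of how that branch-and-montage structure might be tracked. RPG Maker would normally do this with event switches and variables; the Python below is just illustrative pseudologic, and the scene names are made up for the example:

    # Illustrative only: record each moment where the player could have gone
    # "off-script", then pick the ending and build the montage from the misses.
    missed_chances = []       # decision points where the player stayed on-script
    went_off_script = False   # becomes True once the player defies the default path

    def record_choice(scene, did_right_thing):
        global went_off_script
        if did_right_thing:
            went_off_script = True
        else:
            missed_chances.append(scene)

    # Called at each decision point during play, e.g. (hypothetical scenes):
    record_choice("spare_the_shaman", did_right_thing=False)
    record_choice("question_the_orders", did_right_thing=False)

    if went_off_script:
        ending = "off_script_branch"
    else:
        ending = "default_branch"
        montage = missed_chances  # shown to the player before the credits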
That sounds fun, and something that'd actually translate nicely to the RPG Maker template. It's also something that takes skill to pull off well, you'll need to play with how the player will initially frame the stuff you show to be going on, and how the stuff should actually be interpreted. Not coming off as heavy-handed is going to be tricky. Also, pulling this off is based on knowing how to use the medium, so if this is the first RPG Maker thing you're going to be doing, it's going to be particularly challenging.
There might also be a disconnect between games and movies here. Movies tend to always go out of their way to portray the protagonist's side as good, while games have a lot more of just semi-symmetric opposing factions. You get to play as the kill-happy Zerg or Undead Horde, and nobody pretends you're siding with the noble savages against the inhuman oppressors. So the players might just go, "ooh, I'm the Zerg, cool!" or "I guess I'm supposed to defect from Zerg to Terran here".
Random other thoughts, Battlezone 2 has a similar plot twist with off-script player action needed, though both factions are high-tech. Dominions 4 has Asphodel that's a neat corr... (read more)
When I played Zelda games, I would always work out what option I was supposed to
take, then take the other one, confident that I would get to see a few extra
lines of dialogue before being presented with the same option again.
(I say "always", but when I first played, I would carefully make the correct
choice, for fear that something bad would happen if I didn't agree to help
Zelda. I don't remember when I developed the opposite habit.)
1CronoDAS10y
Yeah, it'll be hard. Right now I haven't worked out much more than the basic
concept; I'd have a lot of writing to do, in addition to level design, learning
RPG Maker, and so on.
As for art, RPG Maker does come with some built-in art and offers some more in
expansion packs. If I have to, I can use placeholder art from the built-in
assets and find some way to replace it once I'm happy with everything else.
1Risto_Saarelma10y
Have you thought about how much time you are ready to put into the project? I'd
ballpark the timescale for this as at least two years if you work on this alone,
aren't becoming a full-time game developer and want to put a large-scale
competent CRPG together.
EDIT: I'm guessing this would look like something like what Zeboyd Games puts
out. They had a two-man team working full-time and took three months
[http://www.gamasutra.com/blogs/RobertBoyd/20100924/88129/Breath_of_Death_VII_The_Beginning__A_Postmortem.php?print=1]
to make the short and simple Breath of Death. Didn't manage to find information
on how long their more recent bigger games took to develop, but they seem to
have released around one game a year since.
5CronoDAS10y
Honestly, I'd probably start by trying to throw something much simpler together
with RPG Maker, just to learn the system and see what it's like. And I don't
actually have a "real job", so the amount of time I spend is mostly limited by
my own patience.
And using RPG Maker might help speed up the technical work.
9Moss_Piglet10y
I like the idea, mainly because I spent most of Avatar rooting for Quaritch
(easily the biggest badass in the last decade of cinema), but it seems like
there's another way to do it that might have a bit more power:
Why not have them both be "right," according to their own value systems anyway,
and then have the end-game slideshow in both branches tell the player the story
of what they did from the perspective of the other side?
In terms of workload, it seems minimal; from a story perspective you already
need both sides to have sympathetic and unsavory elements anyway, while from a
design perspective all you need to add is a second set of narration captions for
the slideshow contingent on which side the player supported.
And in terms of appeal, it certainly seems more engaging than most AAA games.
Spec Ops: the Line proved that players are masochists and that throwing guilt
trips at them is a great way to get sales and good reviews, while Mass Effect
3's failure shows that genuine choice in endings is pretty important for a game
built on moral choice.
0CronoDAS10y
This would ruin the point I'm trying to make.
[http://lesswrong.com/lw/hv9/rationality_quotes_july_2013/9b0e]
4Viliam_Bur10y
You don't have to make both branches equivalent. Both of them could feel "right"
from the inside, but only one of them could contain a piece of information that
makes the other one wrong.
In one ending, the hero only has limited information, and based on that limited
information, the hero thinks they made the right choice. Sure, some things went
wrong, but the hero considers that a necessary evil.
In the other ending, the hero has more information, and now it is obvious that
this choice was right, and that all the good feelings in the other branch came
merely from a lack of information or reasoning.
This way, if you only saw the first ending, you would think it is the good one,
but if you saw both of them, it would be obvious the second one is the good one.
3drethelin10y
I like this idea but it seems hard to differentiate between "You did what you
thought was right but you need to be more careful about what you believe" and
"you got the bad ending because you missed this little thing", which is
something many games have done before.
An example is Iji where the game plays out significantly differently if you make
a moral decision not to kill, but if you take the default path it doesn't let
you know you could've chosen to be peaceful the whole time. It involves an
active decision rather than a secret thing you can miss, but it also doesn't
frame it as a "MORAL CHOICE TIME GO".
4Armok_GoB10y
That sounds awesome... except now that I know about that twist it's ruined. And
if you publish it under a different name and don't reveal it, it won't sound
awesome, so I'll never discover it.
The only way to do this justice would be nagging enough people to play it that
they can insist that it's better than it sounds and someone should really play
it for reasons they can't spoil.
0CronoDAS10y
/me shrugs
For some reason, people still like games such as Bioshock and Spec Ops: The Line
after knowing about their twists...
1passive_fist10y
It sounds very appealing to me, but as KaynanK pointed out, you have to be very
careful about keeping the twist secret. To this end, I'd suggest not revealing
to the players that they could have gone off-script, unless they do.
1Multicore10y
It seems like an interesting story idea, but, of course, the twist can't be
revealed to any prospective player without spoiling it, so it might seem cliched
on the surface.
0witzvo10y
As an example of a flash game with similar story branches (albeit a pretty
different plot), there's Endeavor
[http://www.newgrounds.com/portal/view/555072].
0ephion10y
That does sound very appealing. I'm not well versed at all in game creation, but
I do remember playing with RPG maker a few years ago and it was pretty limiting.
Rather than RPG maker, why not make a Skyrim mod? That would be much more fun to
play and would have a much larger potential userbase.
3CronoDAS10y
JRPGs are what I know best.
(And RPG Maker makes standalone programs; someone who wants to play an RPG Maker
game doesn't need to have RPG Maker themselves.)
0Lumifer10y
Well, that's just the twist idea, but what's your framework? Are you thinking
about first-person shooters (Deus Ex style, for example) or about tactical
turn-based RPGs or about 2-D platformers or what?
0CronoDAS10y
RPG Maker, by default, makes games that look like SNES-era JRPGs.
Statistics Done Wrong is a guide to the most popular statistical errors and slip-ups committed by scientists every day, in the lab and in peer-reviewed journals. Many of the errors are prevalent in vast swathes of the published literature, casting doubt on the findings of thousands of papers. Statistics Done Wrong assumes no prior knowledge of statistics, so you can read it before your first statistics course or after thirty years of scientific practice.
Not particularly important, but if anyone wants to come out and tell me why they went on a mass-downvoting spree on my comments, please feel free to do so.
It looks like you have an unspoken treaty of non-hostility. People don't just
forget those kinds of things; you didn't. My advice is to make good with the
person and acknowledge your prior differences, it will be less awkward going
forward and you would gain his/her respect. And who knows, they might even gain
your respect. Friends for the most part are always better than enemies.
2shminux10y
"It takes two to tangle" and such. Is the reason for the falling out still
there? Or is the residual hate just one of those lost purposes?
1PECOS-910y
"It takes two to tango
[http://en.wikipedia.org/wiki/Takes_two_to_tango_(idiom)]" (not tangle)
Bonus related dinosaur comic [http://www.qwantz.com/index.php?comic=1841]
0Omid10y
Yes, I'm still angry with him. He did something cruel to someone weak, and he
got angry with me for saying that was wrong. I wish I could delete him from my
life but he works near me.
2TheOtherDave10y
I've experienced variations on this theme.
My usual approach is to decide whether I value treating them as an enemy for
some reason. If I do, then I continue to do so (which can include pretending to
treat them like a friend, depending on the situation). If I don't, then I move
on. Whether they've actually moved on or not is their problem.
0ChristianKl10y
I generally don't think it makes much sense to label other people as enemies.
Was some change made in the lw code in the past couple of weeks or so? I can't browse this site with my android smartphone anymore, have tried several browsers. The site either frequently freezes the browser or shows a blank page after the page has finished loading. This happens more with bigger threads.
I have the same problem for pages like recent posts
[http://lesswrong.com/recentposts], which look OK at first, but then become
blank. Article pages are more likely to load correctly. Solution: turn off
javascript. (Android 2.2)
0hyporational10y
Thanks. This obviously disables a lot of functionality. Another fix I found for
the blank page problem is to simply interrupt the loading of the page once you
start seeing stuff.
If Yvain is (understandably) too busy to run it this year, I am willing to do it. But I will be making changes if I do it, including reducing the number of free responses and including a basilisk question.
Read about hyperbolic discounting, if you haven't already.
Assuming a conflict between short- and long-term decisions, the general advice is to mentally bundle a given short-term decision with all similar decisions that will occur in the future. For example, you might think of an unhealthy snack tonight as "representing" the decision to eat an unhealthy snack every night.
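To see why bundling can flip the choice, here is a minimal numerical sketch in Python using the standard hyperbolic form V = A / (1 + kD); the rewards, delays, and k below are made-up illustrative values, not estimates of anything.

    # Toy illustration of choice bundling under hyperbolic discounting.
    # V = A / (1 + k * D): present value of amount A delivered after delay D (days).
    def hyperbolic(amount, delay, k=1.0):
        return amount / (1.0 + k * delay)

    snack, health = 1.0, 3.0   # made-up rewards: snack tonight vs. larger health payoff
    gap = 10                   # made-up delay between the snack and its later alternative

    # Single decision, viewed tonight: the immediate snack dominates.
    print(hyperbolic(snack, 0), hyperbolic(health, gap))

    # Bundle the next 100 nightly decisions and evaluate them all from tonight:
    # now the summed later rewards dominate the summed immediate ones.
    soon_total = sum(hyperbolic(snack, d) for d in range(100))
    late_total = sum(hyperbolic(health, d + gap) for d in range(100))
    print(soon_total, late_total)

With exponential discounting, bundling never flips the comparison, because each bundled pair keeps the same value ratio as tonight's single pair; the reversal above is specific to the hyperbolic shape, which is why the bundling trick is worth trying at all.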
Optimize your environment for decreased time-preference when you have the most
control:
Fill your refrigerator when you're not hungry. Apply effortful-to-dismantle
restrictions on your computer when you're not bored and tired. Walk to the
university library to study so it takes effort to come back home to your
hobbies.
I'd like to read and collect other similar strategies for my toolbox.
ETA: I just realized I do this for exercise too. There's a lake near my house
with a circumference of about 6 kilometers, and I go jogging around it
frequently. I have a strong desire to quit once I've gotten to the other side,
but I have no choice but to run the whole route at that point. Sometimes I
decide to walk the other half, but I guess it's better than nothing. Another
option would be to just run in one direction and then back, but I find the idea
too boring, even if I change my route a bit.
-1savageorange10y
-- Time less ;)
-- this question feels like it's missing a word or two. What does
time-preference mean?
EDIT: Thanks, Arundelo. So basically, time preference ~= level of short-sighted
optimization.
In that case, do some projects that strictly require long-sighted optimization.
A deadline is one good tool; another is telling others what you're doing (in an
unequivocal way, so that the greatest disappointment/irritation/harassment is
achieved if you fail). Of course these tools are nothing new; the point is to
increase the pressure as high as you can stand and reduce the amount of 'slack'
time you have to allocate to a minimum.
On a more meta level, you can try things like doing some mindfulness meditation
every day, which I personally find makes it easier to ignore irrelevant stimuli,
worry less, and stick to my priorities.
An even more general observation: Introverts typically have lower time
preference relative to extraverts, so ask them about how they dispel
distraction. (I say this in the sense described by Dorothy Rowe: 'extraverts are
basically worried about belonging and feel understimulated, introverts are
basically worried about keeping control of themselves and feel overstimulated' ,
and not the vague 'Extraverts are social, introverts are not, derp' that seems
to be the misapprehension of the average person)
In case there's any question, I'm an extravert, so yeah, I tend to struggle with
this issue too.
I read somewhere, might have been on lw, that telling people what you're doing might
decrease your chance of success, because it provides a way to get compliments
without actually having achieved anything yet. I suppose this depends on how you
do it, though.
I'm an introvert, have terrible problems with time-preference, and don't
understand the rationalization by Dorothy Rowe you provide.
Any empirical sources for your claim?
3VincentYu10y
Gollwitzer et al. (2009)
[https://dl.dropboxusercontent.com/u/238511/papers/2009-gollwitzer.pdf]. When
intentions go public: Does social reality widen the intention-behavior gap?
Abstract (emphasis mine):
Paper link posted to LW discussion
[http://lesswrong.com/r/discussion/lw/ajh/link_why_you_should_keep_your_goals_secret/]
in 2012 by Barry_Cotter.
1MathiasZaman10y
I know this TED-talk [http://www.youtube.com/watch?v=NHopJHSlVo4] says similar
things. It's where I first heard of that concept.
0savageorange10y
I definitely agree that you can't -just- tell a person what you're doing; you
need to pick the right person, and cultivate the right attitude. (From my
observation of myself, it succeeds when I am in the mindset where I can take
plenty of teasing equitably, accepting any pokes as potential observations about
reality without -stressing- about them.)
What rationalization of Rowe's? It's a summary of what they themselves report
when 'laddered' (a process which basically consists of asking them what the most
terrible thing that could possibly happen to them is, followed by iterative
'why?' until they can no longer go to the next lowest level).
For extroverts, being utterly abandoned == total personal disintegration; For
introverts, utter loss of self-control == total personal disintegration. (I do
paraphrase here; read The Successful Self for the whole picture.)
If anything, any rationalization is mine: I observe that introverts I know are
reliably better at moving long term projects forward than I, or any extravert I
know, seems to be. Not that they are not weak in this way -- they just seem to
be less weak as a consequence of the difference in their focus. (my inference
bolded.)
I'm neutral to your statement of introversion, basically because my prior for
people being hilariously terrible at assessing this stuff is quite high.
No empirical sources as far as I know. Nobody even really manages to agree on
the definition of introvert and extrovert, so far. Dorothy Rowe is just the only
writer I've found on the subject who manages to describe a system that is
relatable, consistent, and can be applied in the real world.
I wonder if there's research that rationalists should do that could be funded this way. I'd pay for high quality novel review articles about topics relevant to lw.
What was the string that generated the hash, then?
ETA: See Lumifer's link above.
2Douglas_Knight10y
It seems to me that a relevant detail is that the time frame is ~7 months (as you say
elsewhere). Ideally, hashes would be commitments to reveal the plaintext in a
specified time. Don't you discuss this somewhere?
0gwern10y
Not sure what you mean.
3Douglas_Knight10y
There is a danger with publishing hashes that you might publish opposite ones.
Ideally, you should be committing to one answer or the other. High-entropy
predictions mitigate this, but the effect is still there. And we can't tell
whether a prediction is high-entropy until it is revealed. Publishing dates
also mitigates this.
I thought I recently saw an essay on the uses of hash precommitments, including
this kind of problem. If that doesn't ring a bell, I guess it wasn't by you.
4gwern10y
Oh. Yeah, I did start a little discussion
[http://lesswrong.com/lw/icj/open_thread_august_1925_2013/9mxe] on an isomorphic
trick in Umineko. In this case, the date on which I posted the hash is provided
automatically by Reddit/Lesswrong/Twitter/etc and one can also verify I didn't
post any other hashes recently to those fora.
The trick also only works for 'small keyspaces', if you will - if for example I
was trying to fake a precommitment to a 100-digit number, the trick isn't going
to work because it's not feasible to publish precommitments to even a tenth of
the potential 100-digit numbers without people noticing and calling foul - 'so,
gwern, why are you publishing that many precommitments and when can we expect
them all to be revealed...?'
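For anyone who hasn't seen the mechanics spelled out, a minimal sketch in Python (standard library only; the statement text is a placeholder): hash the statement together with a random nonce, publish the digest now, and reveal statement plus nonce later so anyone can recheck it.

    # Minimal hash-precommitment sketch: commit now, reveal later.
    import hashlib, secrets

    statement = b"Prediction: X will happen by 2014-06-01."  # placeholder text
    nonce = secrets.token_hex(16).encode()                   # blocks guessing the plaintext

    commitment = hashlib.sha256(statement + b"|" + nonce).hexdigest()
    print("publish this now:", commitment)

    # Later: reveal (statement, nonce); anyone can verify the commitment matches.
    check = hashlib.sha256(statement + b"|" + nonce).hexdigest()
    assert check == commitment

The nonce only stops people from brute-forcing the plaintext out of a small space of plausible statements; it does nothing about the publish-both-hashes trick discussed above, which is why the posting dates and the "did they publish suspiciously many hashes" check still matter.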
I'm a bit emotionally tense at the moment, so this observation might not be as valuable as it seems to me, but it occurs to me that there are two categories of things I do: thinking things through in detail, and acting on emotion with very little forethought involved. The category that we want--thinking an action through, then performing it--is mysteriously absent.
It's possible to get around this to some extent, but it requires the emotionally-driven, poorly-thought out things to involve recurring or predictable stimuli. In those cases, I can think through... (read more)
Here's a more difficult version of the AI box experiment. I haven't seen this particular version anywhere, but I'd be pleased to be proven wrong.
Imagine we've come up with a very intelligent AI that is free to manipulate the environment and uses the action-reward system like Hutter's AIXI. Also imagine that we've somehow figured a way to make the rewards very hard to counterfeit (perhaps we require the rewards to be cryptographically signed). It's clear that in such a system, the 'weak point' would be the people in control of the private key. In this case ... (read more)
Torture is probably the easiest way. Another way could be to examine the
key-keeper's mind and extract the key directly from that, but this is needlessly
complicated.
5Eliezer Yudkowsky10y
Torture might stand too great a chance of destroying the encryption key. Though
I suppose if nanotech were sufficiently difficult to obtain, the possible
key-destructive effects of torture might be balanced against the probability of
a car running over the keyholder in the meantime.
7NancyLebovitz10y
I would think that confusion (set things up so the key-keeper is confused and
distracted, then do some phishing) is in the same reliability range as torture,
and less likely to get the AI in trouble.
0passive_fist10y
I suspect the answer to be more complex than this. The AI knows that if it
attempted something like that, there is a very large risk of being cut off from
all reward, or even having negative reward administered. In other words: tit for
tat. If it tries to torture, it will itself be tortured. Remember that before it
has the private key, we are in control.
4ChristianKl10y
The first step would be to control the information flow to the people
who hold the key: only tell them about all the good things the AI does, and
hide evidence of any wrongdoing that might reduce the rewards that the AI gets.
The second step is to break the security that protects the key.
Install a keylogger.
2solipsist10y
I don't understand how this encryption would work. What do people physically do
to reward the AI, and how do you ensure that only people can do that? Would
humans compute RSA signatures in their head? Would humans type reusable
passwords into a "secure" reward computer that is "outside the AI's control"? Do
humans precompute and memorize a finite number of one-time reward phrases before
the AI is turned on, and reward the AI by uttering a phrase aloud?
In the precomputed, one-time cookie case, I'd just make the human think about
the reward phrase. I'm sure humans leak thoughts like a sieve through
subvocalization [http://en.wikipedia.org/wiki/Subvocalization], nerve impulses,
etc.
0passive_fist10y
What I had in mind was the reward being administered through a consensus
cryptography system, perhaps via some elected board or somesuch, but I really
didn't give that aspect of the problem much thought. If the key is distributed,
the AI would have to extract it from each individual holding a part of it.
This in itself is an interesting problem imo, and if a good solution is found it
might have important implications for FAI research.
6solipsist10y
If the AI is out of the box, I don't think humans are the weak point.
Humans physically do something when they reward the AI. To get a reward, the AI
has only to figure out what the humans would physically do and mimic that
itself. If the human rewards the AI by pressing a big red button, then the AI can
just kill the human and press the big red button itself. It wouldn't matter if
the big red button uses 512 bit elliptic curve cryptography -- the AI just has
to find a paperweight and put it on the button.
If humans can perform RSA encryption
[http://en.wikipedia.org/wiki/RSA_(algorithm)#Encryption] silently in their
heads, then you might be on to something. A human could memorize a private key
and produce a cryptographically signed reward for the AI when the human deemed
the AI worthy. The AI would not know the private key, would not be able to
produce signed rewards, and would not be able to mimic humans. This setup works
because it is cryptographically difficult to mimic a human doing RSA. But if the
human did not perform the cryptography silently in their head, it would not be
cryptographically difficult to mimic their rewarding behavior.
But I doubt that humans could perform RSA or elliptic curve cryptography
mentally. Unless humans can compute trapdoor functions
[http://en.wikipedia.org/wiki/Trapdoor_function] silently in their head, I don't
see how public key cryptography could buy you anything over a reward-button.
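To make the arithmetic burden concrete, here is textbook RSA signing with toy parameters, a rough sketch only (no padding, completely insecure, requires Python 3.8+ for the modular inverse). Even at this laughably small size, the signing step is a modular exponentiation that nobody is doing silently in their head.

    # Textbook RSA sign/verify with toy numbers -- illustration only, not secure.
    p, q = 61, 53
    n = p * q                      # 3233, the public modulus
    phi = (p - 1) * (q - 1)        # 3120
    e = 17                         # public exponent
    d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

    reward = 65                    # the "reward message", encoded as an integer < n
    signature = pow(reward, d, n)  # signing: the part that must stay in the keyholder's head
    assert pow(signature, e, n) == reward   # anyone (including the AI) can verify with (e, n)
    print(signature)

A real deployment would use far larger keys and proper signature padding; the point is only that the private-key operation is exactly the kind of computation a human cannot perform covertly, which is solipsist's objection in a nutshell.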
0passive_fist10y
For the same reason that Linus Torvalds is not able to hack every single Linux
system in existence, it is reasonable to assume that the probability of
'backdoor'-type attacks could be reduced or eliminated.
If you're not talking about a back door, then I'm not sure what you're trying to
say here. Are you implying that the AI will find the 'reward producing machine'
and somehow use it to produce rewards? It doesn't work that way because the
machine would still need the key to produce a cryptographically-valid reward,
and the key would not be stored on the machine. That's the whole point of using
cryptography. For instance, you could do something like
http://en.wikipedia.org/wiki/Secret_sharing
[http://en.wikipedia.org/wiki/Secret_sharing] where the key is divided among
several participants and never actually assembled in one place.
Of course any cryptography system has vulnerabilities. The point is not to build
a 100% secure system. The point is to make it so that attempting to hack the
system has less expected utility than simply doing what the humans say. And if
that fails, then the AI will attempt to hack the system using its weakest point:
the people controlling the key. Hence my question.
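As a toy version of the linked idea, here is an n-of-n XOR split in Python (not Shamir's threshold scheme, just the simplest possible illustration): every share is needed to reconstruct the key, so no single participant, and no strict subset of them, ever holds it.

    # Minimal n-of-n secret sharing by XOR: all shares are required to rebuild the key.
    import secrets

    def split(secret: bytes, n: int):
        shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
        last = secret
        for s in shares:
            last = bytes(a ^ b for a, b in zip(last, s))
        return shares + [last]

    def combine(shares):
        out = shares[0]
        for s in shares[1:]:
            out = bytes(a ^ b for a, b in zip(out, s))
        return out

    key = secrets.token_bytes(32)          # stand-in for the reward-signing key
    shares = split(key, 5)                 # e.g. one share per board member
    assert combine(shares) == key
    assert combine(shares[:4]) != key      # any strict subset is just uniform noise

Note that even this version reassembles the key at the moment it is used; never assembling it at all requires a threshold signature scheme, which is a further step beyond plain secret sharing.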
6solipsist10y
Yeah, we're talking past each other. I think I understand what you're saying,
and I'll try to rephrase what I'm saying.
The AI is out. It is free to manipulate the world at its will. Sensors are
everywhere. The AI can hear every word you say, feel every keystroke you make,
and see everything you see. The only secrets left are the ones in your head.
How do humans reward the AI? You say "cryptographically", but cryptography
requires difficult arithmetic. How do you perform difficult arithmetic on a
secret that can't leave your head?
0passive_fist10y
Too many assumptions are being made here. What is the basis for believing the AI
will have sensors everywhere, especially while it's still under human control?
And if it has the ability to put clandestine sensors in even the most secure
locations, why couldn't it plant clandestine brain implants in the people
controlling the key?
Yes and yes.
If you're already beeminding without the pledge and it's not working perfectly,
I'd suggest trying a small pledge for the value of information.
Doesn't this already exist?
[http://wiki.lesswrong.com/wiki/Sequences#eReader_Formats] Or is this not what
you meant?
I'm reading that pdf version on my phone and it looks fine.
3pan10y
From posts like this one
[http://lesswrong.com/lw/h7t/help_us_name_the_sequences_ebook/] I got the
impression that they were being edited and released together in a possibly new
order. Maybe I am mistaken?
5RomeoStevens10y
There was a plan to release two books. That was scrapped in favor of other uses
of MIRI's time/resources.
0[anonymous]10y
I think the two rationality books were supposed to be complete rewrites, and I
think this is separate from the sequence ebooks (not confident, sort of
confused)
0MathiasZaman10y
I can't speak for anything else, but I've read up to the Meta-ethics sequence
without encountering any gaps. I can't vouch for anything after that, but the
pdf seems complete. Maybe someone else can shed some light on your question.
In the interests of shaping behavior by praising approximations of the desired
behavior, can you identify three threads that are most like what you'd like to
see more of?
6shminux10y
Thank you for turning my complaining into an actionable item.
Here are some of the Main posts, on vastly different topics:
http://lesswrong.com/lw/izs/yes_virginia_you_can_be_9999_or_more_certain_that/
[http://lesswrong.com/lw/izs/yes_virginia_you_can_be_9999_or_more_certain_that/]
http://lesswrong.com/lw/id2/to_what_degree_do_you_model_people_as_agents/
[http://lesswrong.com/lw/id2/to_what_degree_do_you_model_people_as_agents/]
http://lesswrong.com/lw/hr3/giving_now_currently_seems_to_beat_giving_later/
[http://lesswrong.com/lw/hr3/giving_now_currently_seems_to_beat_giving_later/]
2drethelin10y
because it's based on the works of someone who wrote volumes about untestables
and unprovables?
6shminux10y
... while singing praise to testability and provability?
0mwengler10y
Generally, there are many things which are unproven or not tested, a smaller
(but still large) number of things which are difficult to test or difficult to
prove, a smaller number of things which are testable or provable relatively
easily, and finally a small number of things which are tested or proven.
One can expect some people to consider only truths which cluster at these ends.
Academia, I think, tends in that direction. At one level this makes sense: one
can expect a lot of useful work to be done on things which are relatively easy
to prove, while for things that are very difficult to prove one can expect a
much lower density of utility in the work and discussion on them.
However, one cannot expect interesting and useful truths to cluster at the
"provable" or "proven" end of these distributions. Indeed, given the high amount
of work done at the provable end, one might expect the most useful provable
truths to be pretty well described already, and the supply of provable truths
yet to be proven to be more and more abstract and less and less useful. We pick
the fruit that is low-hanging, and well we should.
So one would expect the more interesting truths still open to question to be
concentrated along the spectrum of harder or very hard to prove.
2shminux10y
Indeed. But reframing and carving testables from untestables and provables from
unprovables should be an explicit goal.
2mwengler10y
OK. And so should a theoretical exploration of the space of hypotheses about the
not-yet provables with the intent of getting the most truth for the buck when
these expensive experiments are finally done.
Can someone explain nanotech enthusiasm to me? Like, I get that nanotech is one of the sci-fi technologies that's actually physics-compliant, and furthermore it should be possible because biology.
But I get the impression that among transhumanist types slightly older than me, there's a widespread expectation that it will lead to absolutely magical things on the scale of decades, and I don't get where that comes from, even after picking up Engines of Creation.
I'm thinking of, e.g. Eliezer talking about how he wanted to design nanotechnology before he got into AI, or how he casually mentions nanotechnology as being one of the big ways a super-intelligent AI could take over the world. I always feel totally mystified when I come across something like that, like it's a major gulf between me and slightly older nerds.
Predicting chemistry from physics should be easy with a quantum computer, but appears hard with a classical computer. Often people say that, even once you make a classical approximation (i.e., assume that the dynamics are easy on a classical computer), the problem of finding the minimum energy state of a protein is NP-hard. That's true, but a red herring, since the protein isn't magically going to know the minimum energy state. Though it's still possible that there's some catalyst to push it into the right state, so simulating the dynamics in a vacuum won't get you the right answer (cf. prions). Anyhow, there's some hope that evolution has found a good toolbox for designing proteins and that if we can figure out the abstractions that evolution is using, it will all become easy. In particular, there are building blocks like the alpha helix. Certainly an engineer, whether evolution or us, doesn't need to understand every protein, just know how to make enough.
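One toy way to see the "the protein isn't going to know the minimum energy state" point, with no pretense of modeling chemistry: put random energies on bit-strings, let the "dynamics" be greedy single-bit-flip descent, and check how often that actually lands in the global minimum.

    # Toy rugged landscape: greedy local descent rarely finds the global energy minimum.
    import random
    random.seed(0)

    L = 12                                   # 2**12 = 4096 states
    energy = {s: random.random() for s in range(2 ** L)}
    global_min = min(energy, key=energy.get)

    def descend(state):
        while True:
            neighbours = [state ^ (1 << i) for i in range(L)]   # single "bit flip" moves
            best = min(neighbours, key=energy.get)
            if energy[best] >= energy[state]:
                return state                 # stuck in a local minimum
            state = best

    hits = sum(descend(random.randrange(2 ** L)) == global_min for _ in range(200))
    print(f"{hits}/200 greedy runs reached the global minimum")

On a landscape this rugged, most runs get stuck in local minima, so "the dynamics are easy to simulate" and "we can find the global minimum" really are different claims; real folding dynamics are stochastic and temperature-dependent, but the distinction carries over.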
I think the possibility that a sufficiently smart AI would quickly find an adequate toolbox for designing proteins is quite plausible. I don't know what Eliezer means, but the possibility seems to me adequate for his arguments.
I definitely have found that this forum is NOT immune to fictional evidence.
New ligament discovered in the human knee as a result of surgeons trying to figure out why some people didn't recover fully after knee injuries.
I'm tempted to deduce "Keep paying attention, you never know what might have been missed"-- I really would have expected that all the ligaments had been discovered a long time ago.
Another conclusion might be "Try to solve real problems, you're more likely to find out something new that way than by just poking around."
Does someone have the medical knowledge to explain how this is possible? My layperson guess is that once you cut up a knee, you can more or less see all the macroscopic structures. Did they just think it was unimportant?
I'm more impressed, actually, in terms of the unevenness of progress - it took ~134 years to confirm his postulate? It's not like corpses were unavailable for dissection in 1879.
The media giveth sensationalism, and the media taketh away.
reddit - "So that "new" ligament? Here's a study from 2011 that shows the same thing. It's not even close to a new development and has been seen many times over the past 100 years." Summary quote: "The significance of the Belgian paper was to link [the ligament's] functionality to what they called "pivot shift", and knee reinjuries after ACL surgery. The significance of this paper, I believe, is that in the near future surgeons performing these operations will have an additional ligament to inspect and possibly repair during ACL surgery, which will hopefully reduce recurrence rates, and likely the rates of developing osteoarthritis in the injured knee down the line."
So I get home from a weekend trip and go directly to the HPMOR page. No new chapter yet. But there is a link to what seems to be a rationalist Death Note.
I am going to read it. Here are my next thoughts:
So, it seems like Eliezer succeeded in creating a whole new genre of literature: rationalist fiction. Nice job!
Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature? There is something horribly wrong with this world if this is true.
Discussing with my girlfriend which stories should be x-rationalized next, she suggests HPMOR. Someone should make an HPMOR fanfic where the protagonist is even more rational than the rational Harry. Would that lead to a spiral of more and more rational heroes?
What exactly could the MoreRational!Harry do? It would be pretty awesome if he could someho... (read more)
An idea came to my mind. Would it be possible to make a story in which Harry is less intelligent, such that he would score lower on an IQ test for example, but at the same time more rational? HJPEV seems to be a highly intelligent prodigy even without the rationality addition. I would like to see how a more normal boy would do.
I think what you are referring to here is "a story where the protagonist describes their actions and motivations using rationality terminology" or maybe "a story where the rational thinking of the protagonist motivates the plot or moves it along". At least some of the genre of detective fiction — early examples being Poe's Auguste Dupin stories — would be along these lines.
Stories where protagonists behave rationally (without using rationality terminology) wouldn't look like stories about rationality. They look like stories where protagonists do things that make sense.
Yup. At least sort-of. If you haven't read Eliezer's old post Lawrence Watt-Evans's Fiction I recommend it. However, conspicuous failures of rationality in fiction may be mostly an issue with science fiction and fantasy. If you want to keep the characters in your cop story from looking like idiots, you can do research on real police methods, etc. and if you do it right, you have a decent shot at writing a story that real police officers will read without thinking your characters are idiots.
On the other hand, when an author is trying to invent an entire fictional universe, with futuristic technology and/or magic, it can be really hard to figure out what would constitute "smart behavior" in that universe. This may be partly because most authors aren't themselves geniuses, but even more importantly, the fictional universe, if it were real, would have millions of people trying to figure out how to make optimal use of the resources that exist in that universe. It's hard for one person, however smart, to compete with that.
For that matter, it's hard for one author to compete with an army of fans dissecting their work, looking for ways the characters could have been smarter.
This leads to another comment on rationalist fiction: Most of it seems to be restricted to fan-fiction. The mold appears to be: "Let's take a story in which the characters underutilized their opportunities and bestow them with intelligence, curiosity, common sense, creativity and genre-awareness". The contrast between the fanfic and the canon is a major element of the story, and the canon an existing scaffold which saves the writer from having to create a context.
This isn't a bad thing necessarily, just an observation.
So, the question becomes, how do you recognize "rationalist" stories in non-fan-fic form? Is it simply the presence of show-your-work-smart characters? Is simply behaving rationally sufficient?
Every genre has a theme...romance, adventure, etc.
So where are the stories which are, fundamentally, about stuff like epistemology and moral philosophy?
Have Eliezer's views (or anyone else's who was involved) on the Anthropic Trilemma changed since that discussion in 2009?
Hmm, conditional on that being the case, do you also believe that the closer to physics the mind is, the more of a person is in it? Example: action potentials encoded in the positions of rods in a Babbage engine vs. spread over fragmented RAM used by a functional programming language with lazy evaluation in the cloud.
Brian Leiter shared an amusing quip from Alex Rosenberg:
... (read more)
Ugh. The prize was first and foremost in recognition of Fama, Shiller, and Hansen's empiricism in finance. In the sixties, Fama proposed a model of efficient markets, and it held up to testing. Later, Fama, Shiller, and Hansen all showed that further tests didn't hold up. Their mutual conclusion: the efficient market hypothesis is mostly right; while there is no short-term predictability based on publicly available information, there is some long-term predictability. Since the result is fairly messy, Fama and Shiller have differences about what they emphasize (and are both over-rhetorical in their emphasis). Does "mostly right" mean false or basically true?
What's causing the remaining lack of agreement, especially over bubbles? Lack of data. Shiller thinks bubbles exist, but are rare enough he can't solidly establish them, while Fama is unconvinced. Fama and Shiller have done path-breaking scientific work, even if the story about asset price fluctuation isn't 100% settled.
New research suggests that the amount of variance in DNA among individual cells in a person may be much higher than is normally believed. See here.
SPOILERS FOR "FRIENDSHIP IS OPTIMAL"
Why is 'Friendship is Optimal' "dark" and "creepy"? I've read many people refer to it that way. The only things that are clearly bad are the killings of all the other lifeforms, but otherwise this scenario is one of the best that humanity could come across. It's not perfect, but it's good enough and much better than the world we have today. I'm not sure if it's realistic to ask for more. Considering how likely it is that humanity will end in some incredibly fucked up way full of suffering, I would definitely defend this kind of utopia.
(Comment cosmetically edited in response to Kaj_Sotala, and again to replace a chunk of text that fell in a hole somewhere)
OK, I'll have a go (will be incomplete).
People in general will find the Optimalverse unpleasant for a lot of reasons I'll ignore; major changes to status quo, perceived incompatibility with non-reductionist worldviews, believing that a utopia is necessarily unpleasant or Omelas-like (a variant of this fallacy?), and lots of even messier things.
People on LessWrong may be thinking about portions of the Fun Theory Sequence that the Optimalverse conflicts with, and in some cases they may think that these conflicts destroy all of the value of the future, hence horror.
(rot13 some bits that might constitute spoilers)
Humans want things to go well, but they also want things to have been able to go badly, such that they made the difference. Relevant: Living By Your Own Strength, Free to Optimize.
The existence of a superintelligence makes human involvement superfluous, and humans do not want this to happen. Relevant: Amputation of Destiny.
Gur snpg gung gur NV vf pbafgenvarq gb fngvfsl uhzna inyhrf gur cbal jnl zrnaf gung n uhtr nzbhag bs cbffvoyr uhzna rkcrevra
No, let's not ignore it. Let's confront it, because I want a better explanation. Surely a person who values being a person that makes my life better, AND who is a person such that I will value making their life better, is absolutely the best kind of person for me to create (if I'm in a situation such that it's moral for me to create anyone at all).
I mean, seriously? Why would I want to mix any noise into this process?
Am I the only one who is bothered that these threads don't start on Monday anymore?
Posting a request from a past open thread again: Does anyone have a table of probabilities for major (negative) life events, like divorce or being in a car accident? I ask this to have a priority list of events to be prepared for, either physically or mentally.
ETA: Apparently a new WHO recommendation for filling death certificates was introduced in 2005-2006 and this caused a significant drop in pneumonia mortality in Finland.
I'm not entirely sure if it works this way in the whole EU, but it probably does. It's more complicated than what I explain below, but it's the big picture that matters.
The most common way to record mortality statistics is that the doctor who was treating the patient fills a death certificate. There are three types of causes of death that can be recorded in a death certificate. There are immediate causes of death and there are underlying causes of death. There are also intermediate causes of death, but nobody really cares about those because recording them is optional. The statistics department in Finland is interested in recording only the underlying causes of death and that's what gets published as mortality statistics. Only one cause of death per patient gets recorded.
If someone with advanced cancer gets pneumonia and dies, a doctor fills the death certificate saying that the underlying cause of death was cancer and the immediate cause of death was pneumonia. Cancer gets recorded as the one and only cause of dea... (read more)
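A small sketch of why the recording convention alone can move the published numbers (the certificates below are invented): if only the underlying cause is tabulated, pneumonia that merely complicates another disease never appears in the statistics.

    # Only the underlying cause is tabulated, so pneumonia that complicates
    # another disease never shows up in the published mortality statistics.
    from collections import Counter

    certificates = [                                   # invented examples
        {"immediate": "pneumonia", "underlying": "lung cancer"},
        {"immediate": "pneumonia", "underlying": "Alzheimer's disease"},
        {"immediate": "cardiac arrest", "underlying": "coronary artery disease"},
        {"immediate": "pneumonia", "underlying": "pneumonia"},   # pneumonia in a previously healthy person
    ]

    published = Counter(c["underlying"] for c in certificates)
    print(published)   # pneumonia: 1, even though it was the immediate cause in 3 of 4 deaths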
In honor of NaNoWriMo, I offer up this discussion topic for fans of HPMOR and rationalist fiction in general:
How many ways can we find that stock superpowers (magical abilities, sci-fi tech, whatever), if used intelligently, completely break a fictional setting? I'm particularly interested in subtly game-breaking abilities.
The game-breaking consequences of mind control, time travel, and the power to steal other powers are all particularly obvious, but I'm interested in things like e.g. Eliezer pointing out that he had to seriously nerf the Unbreakable Vow in HPMOR to keep the entire story from being about that.
That's because those are among the worst possible ways to use those abilities.
The energy blasts as usually depicted break conservation of energy; with a bit of physics trickery you can get time travel out of that. Even if not, they make you an extremely portable and efficient energy source, perfect for a spaceship where mass is critical and a human needs to come along anyway but it doesn't matter in particular who since it's for PR reasons.
Mind reading is a means of communication that does not require cooperation or any abilities in the target, and can't be lied through. Communication with locked-in patients, interrogation, extraction of testimony from animals. And if you can find a way to precommit yourself, you also have fully reliable precommitment checking for everyone, lie detection for political promises, and the ultimate forensics tool.
If you combine the strengths of 2 kinds of system, you get something greater than the sum of its parts. So it is with human senses and digital sensors. The key here is bandwidth and analysis. Sure, you can get all the same data onto a computer, but it won't do much good there. Someone with true super-senses as flexible and integrated as the... (read more)
Does anyone here have any serious information regarding tulpas? When I first heard of them they immediately seemed to be the kind of thing that is obviously and clearly a very bad idea, and may not even exist in the sense that people describe them. A very obvious sign of a person who is legitimately crazy, even.
Naturally, my first reaction is the desire to create one myself (one might say I'm a bit contrarian by nature). I don't know any obvious reason not to (ignoring social stigma and the time-consuming initial investment), and there may be some advantag... (read more)
I'm reminded of many years ago, a coworker coming into my office and asking me a question about the design of a feature that interacts with our tax calculation.
So she and I created this whole whiteboard flowchart working out the design, at the end of which I said "Hrm. So, at a high level, this seems OK. That said, you should definitely talk to Mark about this, because Mark knows a lot more about the tax code than I do, and he might see problems I missed. For example, Mark will probably notice that this bit here will fail when $condition applies, which I... um... completely failed to notice?"
I could certainly describe that as having a "Mark" in my head who is smarter about tax-code-related designs than I am, and there's nothing intrinsically wrong with describing it that way if that makes me more comfortable or provides some other benefit.
But "Mark" in this case would just be pointing to a subset of "Dave", just as "Dave's fantasies about aliens" does.
If I made a game in RPG Maker, would anyone actually play it?
::is trying to decide whether or not to attempt a long-term project with uncertain rewards::
Only if I heard particularly good things about it.
Most creative endeavors you could undertake have a very small chance of leading to external reward, even the validation of people reading/watching/playing them - there's simply too much content available these days for people to read yours. So I'd advise against making such a thing, unless you find making it to be rewarding enough in itself.
I have a game I've been fantasizing about and I think I could make it work. It has to be a game, not a story, because I want to pull a kind of trick on the player. It's not that unusual in fiction for a character to start out on the side of the "bad guys", have a realization that his side is the one that's bad, and then go on to save the day. (James Cameron's Avatar is a recent example.) I want to start the player out on the side of bad guys that appear good, as in Eliezer's short story "The Sword of Good", and then give the player the opportunity to fail to realize that he's on the wrong side. There would be two main story branches: a default one, and one that the player can only get to by going "off-script", as it were, and not going along with what it seems like you have to do to continue the story. (At the end of the default path, the player would be shown a montage of the times he had the chance to do the right thing, but chose not to.)
The actual story would be something like the anti-Avatar; a technological civilization is encroaching on a region inhabited by magic-using, nature-spirit-worshiping nomads. The nature spirits are EVIL (think: "nature, red in tooth and claw") and resort to more and more drastic measures to try to hold back the technological civilization, in which people's lives are actually much better.
Does this sound appealing?
That sounds fun, and something that'd actually translate nicely to the RPG Maker template. It's also something that takes skill to pull off well: you'll need to play with how the player will initially frame what you show going on, versus how it should actually be interpreted. Not coming off as heavy-handed is going to be tricky. Also, pulling this off depends on knowing how to use the medium, so if this is the first RPG Maker thing you're going to be doing, it's going to be particularly challenging.
There might also be a disconnect between games and movies here. Movies tend to always go out of their way to portray the protagonist's side as good, while games have a lot more of just semi-symmetric opposing factions. You get to play as the kill-happy Zerg or Undead Horde, and nobody pretends you're siding with the noble savages against the inhuman oppressors. So the players might just go, "ooh, I'm the Zerg, cool!" or "I guess I'm supposed to defect from Zerg to Terran here".
Random other thoughts, Battlezone 2 has a similar plot twist with off-script player action needed, though both factions are high-tech. Dominions 4 has Asphodel that's a neat corr... (read more)
http://www.refsmmat.com/statistics/
Not particularly important, but if anyone wants to come out and tell me why they went on a mass-downvoting spree on my comments, please feel free to do so.
Has anyone else had this happen to them?
Was some change made in the lw code in the past couple of weeks or so? I can't browse this site with my android smartphone anymore, have tried several browsers. The site either frequently freezes the browser or shows a blank page after the page has finished loading. This happens more with bigger threads.
Anyone else having this problem?
Do you think there should be a new LW survey soon?
[pollid:574]
If Yvain is (understandably) too busy to run it this year, I am willing to do it. But I will be making changes if I do it, including reducing the number of free responses and including a basilisk question.
Give me a few days to see if I can throw something together and otherwise I will turn it over to your capable hands (reluctantly; I hate change).
How do I decrease my time-preference?
Read about hyperbolic discounting, if you haven't already.
Assuming a conflict between short- and long-term decisions, the general advice is to mentally bundle a given short-term decision with all similar decisions that will occur in the future. For example, you might think of an unhealthy snack tonight as "representing" the decision to eat an unhealthy snack every night.
Guy decides to do his PhD thesis on Dungeons & Dragons, acquires funding via Kickstarter.
I wonder if there's research that rationalists should do that could be funded this way. I'd pay for high quality novel review articles about topics relevant to lw.
Incidentally, I'm making a hash precommitment:
See http://www.reddit.com/r/DarkNetMarkets/comments/1pta82/precommitment/
I'm a bit emotionally tense at the moment, so this observation might not be as valuable as it seems to me, but it occurs to me that there are two categories of things I do: thinking things through in detail, and acting on emotion with very little forethought involved. The category that we want--thinking an action through, then performing it--is mysteriously absent.
It's possible to get around this to some extent, but it requires the emotionally-driven, poorly-thought out things to involve recurring or predictable stimuli. In those cases, I can think through... (read more)
A way to fall asleep and/or gain gut intuition for "exponentially slow": count in binary, in your head, at a regular beat. YMMV.
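If you want the arithmetic spelled out, a trivial sketch: each extra bit doubles the number of beats, so reaching an n-bit count takes about 2^n beats.

    # Counting in binary: reaching n bits takes ~2**n beats.
    for beat in range(1, 17):
        print(f"beat {beat:2d}: {beat:b}")
    # The first 4-bit number arrives at beat 8; the first 20-bit number
    # would arrive at beat 524288 -- that's what "exponentially slow" feels like.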
Here's a more difficult version of the AI box experiment. I haven't seen this particular version anywhere, but I'd be pleased to be proven wrong.
Imagine we've come up with a very intelligent AI that is free to manipulate the environment and uses the action-reward system like Hutter's AIXI. Also imagine that we've somehow figured a way to make the rewards very hard to counterfeit (perhaps we require the rewards to be cryptographically signed). It's clear that in such a system, the 'weak point' would be the people in control of the private key. In this case ... (read more)
Beeminder users, did you pledge? Do you find that it works better if you do?
Russell's teapot springs a leak... OK, that's enough one-liners for this week.
I've seen a few posts about the sequences being released as an ebook, is there a time frame on this?
I'd really like to get the ebook printed out by some online service so I can underline/write on them as I read through them.
Why does this forum spend so much time and effort discussing untestables and unprovables? It's disappointing.