There has recently been some speculation that life started on Mars, and then got blasted to Earth by an asteroid or something. Molybdenum is very important to life (eukaryote evolution was delayed by 2 billion years because it was unavailable), and the origin of life is easier to explain if molybdenum is available. The problem is that molybdenum wasn't available in the right time frame on Earth, but it was on Mars.
Anyway, assuming this speculation is true, Mars had the best conditions for starting life, but Earth had the best conditions for life existing, and it is unlikely conscious life would have evolved without either of these planets being the way they are. Thus, this could be another part of the Great Filter.
Side note: I find it amusing that Molybdenum is very important in the origin/evolution of life, and is also element 42.
As someone pointed out when I mentioned this to them, to be a candidate for the
Great Filter there would need to be something intrinsic about how planets are
formed that causes these two types of environments to be mutually exclusive;
otherwise it seems like there isn't a sufficient reduction in the probability of
their availability. Is this actually the case? Perhaps user:CellBioGuy can elucidate.
3[anonymous]10y
Well it took a while for that summoning to actually take effect.
The point I was making is that this is not necessarily a strong contender (as in
many orders of magnitude) because of two things. One, all stars slowly increase
in brightness as they age after settling down into their main sequence phase, and
their habitable zones thus sweep through the inner system over time (though this
effect is much stronger for larger stars, because they age and change faster).
Two, it's probable that most young smaller planets tend to have much more in
the way of atmospheres early on than later in their lives, just because they
are more geologically active then and haven't had time for the light molecules
to be blasted away. And if terrestrial planets follow any sort of power law
in their distribution of masses, there should be multiple Mars-sized planets for
every Earth-sized planet.
At this point I would say that any speculation on the exact place and time of
the origin of life is premature: there's nothing to suggest that it didn't
happen on Earth, but there is little to suggest that it couldn't have
happened elsewhere within our own solar system either, even if we have little
reason to think it had to (besides adding the necessity that life later moved to
the very clement surface of the Earth, the only place in the solar system that
can support a big, high-biomass biosphere like ours).
I honestly don't know much about the proposed molybdenum connection. Some
cursory looking around the internet suggests that molybdenum is necessary for
efficient fixation of nitrogen from the air into organic molecules by
nitrogenase, the enzyme that does most of that biological work on Earth. I
would be surprised, though, if that were the only way it could go, rather than
just the way it went here...
EDIT: upon further looking around, I am worried that the proposed molybdenum
connection could be correlation rather than causation. Most sources claiming
that the presence of lots o
-2Will_Newsome10y
(Are you Adele Lack Cotard?)
0Adele_L10y
Like in Synecdoche, New York? No... it is an abbreviation of my real name.
The ancient Stoics apparently had a lot of techniques for habituation and changing cognitive processes. Some of those live on in the form of modern CBT. One of the techniques is to write a personal handbook with advice and sayings to carry around at all times, so as never to be without guidance from a calmer self. Indeed, Epictetus advises learning this handbook by rote to further internalisation. So I plan to write such a handbook for myself: once in long form, with anything relevant to my life and lifestyle, and once in a short form that I update with things that are difficult at that time, be it strong feelings or being deluded by some biases.
In this book I intend to include a list of all known cognitive biases and logical fallacies. I know that some biases are helped simply by knowing about them; does anyone have a list of those? And should I complete the books, or have a clear concept of their contents, would you be interested in reading about the process of creating one and the possible perceived benefits?
Though lack of motivation or laziness is not a particularly interesting answer.
I have found "I thought X would be awesome, and then on doing X realized that the costs were larger than the benefits" to be useful information for myself and others. (If your laziness isn't well modeled by that, that's also valuable information for you.)
(mild exaggeration) Has anyone else transitioned from "I only read Main posts" to "I nearly only read Discussion posts" to "actually, I'll just take a look at the open thread and the people who responded to what I wrote" during their interactions with LW?
To be more specific: is there a relevant phenomenon about LW, or is it just a characteristic of my psyche and history that explains my pattern of reading LW?
I read the sequences and a bunch of other great old main posts but now mostly read discussion. It feels like Main posts these days are either repetitive of what I've read before, simply wrong or not even wrong, or decision theory/math that's above my head. Discussion posts are more likely to be novel things I'm interested in reading.
Selection bias alert: asking people whether they have transitioned to reading mostly discussion and then to mostly just open threads in an open thread isn't likely to give you a good perspective on the entire population, if that is in fact what you were looking for.
There would be far more selection bias if he asked about it outside an open
thread, though.
0blashimov10y
Really? Why?
6private_messaging10y
Because he's asking about people who only read the open thread. Here he could
get response from the people who do read LW in general, inclusive of the open
thread, and people who read only the open thread (he'll miss the people who
don't read the open thread). Outside the open thread, he gets no response at all
from people who only read the open thread.
4blashimov10y
I see what you mean.
9RolfAndreassen10y
I read the Sequences as they were posted; Main posts now rarely hold my interest
the same way. Eliezer's writing is just better than most people's.
Honestly, I don't know why Main is even an option for posting. It should really just be an automatically labeled/generated "Best of LW" section, where Discussion posts with, say, 30+ karma are linked. This is easy to implement, and easy to do manually using the Promote feature until it is. The way it is now, posting to Main is mostly driven by people thinking that they are making an important contribution to the site, which is more of a statement about their ego than about the quality of their posts.
I predict that some people will have been through the sequences, which are Main
posts, but then mainly cared about discussion. I suspect it has to do with
Morning Newspaper Bias - the bias of thinking that new stuff is more relevant,
when actually it is just pointless to read most of the time, only scrambles your
mind, and loses value very quickly.
1David_Gerard10y
The lower the barrier to entry, the more the activity. Thus, more posts are on
Discussion. My hypothesis is that this has worked well enough to make Discussion
where stuff happens. c.f. how physics happens on arXiv these days, not in
journals. (OTOH, it doesn't happen on viXra, whose barrier to entry may be too
low.)
1Username10y
I've definitely noticed this in my use of LW. I find that the open threads/media
threads with their consistent high-quality novelty in a wide range of subject
areas are far more enjoyable than the more academic main threads. Decision
theory is interesting, but it's going to be hard to hold my attention for a
3,000 word post when there are tasty 200-word bites of information over here.
2David_Gerard10y
Well, chat's always more fun.
0niceguyanon10y
I'll admit that much of the main Sequences is too heavy to understand without
prior knowledge, so I find discussions much easier to take in, and many times I
end up reading a sequence because it was posted in a discussion comment. For me,
discussion posts are like the gateway to Main.
0ygert10y
My experience is similar. I read the sequences as they were published on OB,
then when the move over to LW happened I just subscribed to the RSS feed and
only read Promoted posts for quite a few years. Only about a year ago I actually
signed up for an account here and started posting and reading Discussion and the
Open Thread.
The joke in that comic annoys me (and it's a very common one on SMBC, there must be at least five there with approximately the same setup). Human values aren't determined to align with the forces of natural selection. We happen to be the product of natural selection, and, yes, that made us have some values which are approximately aligned with long-term genetic fitness. But studying biology does not make us change our values to suddenly become those of evolution!
In other words, humans are a 'genie that knows, but doesn't care'. We have understood the driving pressures that created us. We have understood what they 'want', if that can really be applied here. But we still only care about the things which the mechanics of our biology happened to have made us care about, even though we know these don't always align with the things that 'evolution cares about.'
(Please if someone can think of a good way to say this all without anthropomorphising natural selection, help me. I haven't thought enough about this subject to have the clarity of mind to do that and worry that I might mess up because of such metaphors.)
For more on this topic, see for example these posts:
* Evolutionary Psychology [http://lesswrong.com/lw/l1/evolutionary_psychology/]
* Thou Art Godshatter [http://lesswrong.com/lw/l3/thou_art_godshatter/]
* Rebelling Within Nature [http://lesswrong.com/lw/s5/rebelling_within_nature/]
* The Gift We Give To Tomorrow
[http://lesswrong.com/lw/sa/the_gift_we_give_to_tomorrow/]
Anyone tried to use the outside view on our rationalist community?
I mean, we are not the first people on this planet who tried to become more rational. Who were our predecessors, and what happened to them? Where did they succeed and where they failed? What lesson can we take from their failures?
The obvious reply will be: no one has tried doing exactly the same thing as we are doing. That's technically true, but it's a fully general excuse against using the outside view, because if you look into enough details, no two projects are exactly the same. Yet it has been experimentally shown that even looking at sufficiently similar projects gives better estimates than just using the inside view. So, if there was no one exactly like us, who was the most similar?
I admit I don't have data on this, because I don't study history, and I have no personal experience with Objectivists (who are probably the most obvious analogy). I would probably put Objectivists, various secret societies, educational institutions, or self-help groups into the reference class. Did I miss something important? The common trait is that those people are trying to make their thinking better, avoid some frequent faults, and t... (read more)
I've seen a few attempts, mostly from outsiders. The danger involved there is
that an outsider has difficulty picking the right reference class: you don't
know how much they know about you, and how much they know about other things.
The things that the outside view has suggested we should be worried about that I
remember (in rough order of frequency):
* Being a cult.
* Being youth-loaded.
* Optimizing for time-wasting over goal-achieving.
Here are two critiques I remember from insiders that seem to rely on outside
view thinking: Yvain's Extreme Rationality: It's Not That Great
[http://lesswrong.com/lw/9p/rationality_its_not_that_great/], patrissimo's
Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental
Rationality
[http://lesswrong.com/lw/2po/selfimprovement_or_shiny_distraction_why_less/].
Eliezer's old posts on Every Cause Wants To Be A Cult
[http://lesswrong.com/lw/lv/every_cause_wants_to_be_a_cult/] and Guardians of
Ayn Rand [http://lesswrong.com/lw/m1/guardians_of_ayn_rand/] also seem relevant.
(Is there someone who keeps track of our current battle lines for cultishness?
Do we have an air conditioner on, and are we optimizing it deliberately?)
One of the things that I find interesting is in response to patrissimo's comment
in September 2010 that LW doesn't have enough instrumental rationality practice,
Yvain proposed that we use subreddits, and the result was a "discussion"
subreddit. Now in September 2013 it looks like there might finally be an
instrumental rationality subreddit
[http://lesswrong.com/r/discussion/lw/iip/which_subreddits_should_we_create_on_less_wrong/].
That doesn't seem particularly agile. (This is perhaps an unfair comparison, as
CFAR has been created in the intervening time and is a much more promising
development in terms of boosting instrumental rationality, and there are far
more meetups now than before, and so on.)
There's also been a handful of "here are other groups that we could try to
emulate," and the pri
The reason why I asked was not just "who can we be pattern-matched with?", but also "what can we predict from this pattern-matching?". Not merely to say "X is like Y", but to say "X is like Y, and p(Y) is true, therefore it is possible that p(X) is also true".
Here are two answers pattern-matching LW to a cult. For me, the interesting question here is: "how do cults evolve?". Because that can be used to predict how LW will evolve. Not connotations, but predictions of future experiences.
My impression of cults is that they essentially have three possible futures: Some of them become small, increasingly isolated groups, that die with their members. Others are viral enough to keep replacing the old members with new members, and grow. The most successful ones discover a way of living that does not burn out their members, and become religions. -- Extinction, virality, or symbiosis.
What determines which way a cult will go? Probably it's compatibility of long-term membership with ordinary human life. If it's too costly, if it requires too much sacrifice from members, symbiosis is impossible. The other two choices probably depend on how much ... (read more)
Agreed. One of the reasons why I wrote a comment that was a bunch of links to
other posts is because I think that there is a lot to say about this topic. Just
"LW is like the Mormon Church" was worth ~5 posts in main.
A related question: is LessWrong useful for people who are awesome, or just
people who want to become awesome? This is part of patrissimo's point: if you're
spending an hour a day on LW instead of an hour a day exercising, you may be
losing the instrumental rationality battle. If someone who used to be part of
the LW community stops posting because they've become too awesome, that has
unpleasant implications for the dynamics of the community.
I was interested in that because "difference between the time a good idea is
suggested and the time that idea is implemented" seems like an interesting
reference class.
8Viliam_Bur10y
Isn't this a danger that all online communities face? Those who procrastinate a
lot online get a natural advantage over those who don't. Thus, unless the
community is specifically designed against that (how exactly?), the
procrastinators will become the elite.
(It's an implication: not every procrastinator becomes a member of the elite,
but all members of the elite are procrastinators.)
Perhaps we could make an exception for Eliezer, because for him writing the
hundreds of articles was not procrastination. But unless writing a lot of stuff
online is one's goal, procrastination is almost a necessity for getting
celebrity status on a website.
Then we should perhaps think about how to prevent this effect. A few months ago
we had some concerned posts about "Eternal September" and such. But this is
more dangerous because it's less visible: a slow yet predictable change
towards procrastination.
8Vaniver10y
Yes, which is I think a rather good support for having physical meetups.
Agreed.
Note that many of the Eternal September complaints are about this, though
indirectly: the fear is that the most awesome members of a discussion are the
ones most aggravated by newcomers, because the distance between them and
newcomers is larger than the distance between a median member and a newcomer.
The most awesome people also generally have better alternatives, and thus are
more sensitive to shifts in quality.
4linkhyrule510y
Supporting this, I'll note that I don't see many posts from, say, Wei Dai or
Salamon in recent history - though as I joined all of a month ago, take that
with a dish of salt.
I wonder if something on the MIRI/CFAR end would help? Incentives on the actual
researchers to make occasional (not too many, they do have more important things
to do) posts on LessWrong would probably alleviate the effect.
2Viliam_Bur10y
Perhaps, to some degree, different karma coefficients could be used to support
what we consider useful on reflection (not just on impulse voting). For example,
if a well-researched article generated more karma than a month of
procrastinating while writing comments...
There is some support for this: articles in Main get 10× more karma than
comments. But 10 is probably not enough, and it is also not obvious what exactly
belongs in Main; it's very unclearly defined. Maybe there could be a Research
subreddit where only scientific-level articles are allowed, and there the karma
coefficient could be pretty high. (Alternatively, to prevent karma inflation,
the karma from comments could be divided by 10.)
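The differentiated-multiplier idea can be sketched in a few lines of Python. The section names and weight values below are invented purely for illustration; LW's actual karma system works differently:

```python
# Hypothetical karma multipliers per section; the values and the
# "research" section are illustrative assumptions, not real settings.
MULTIPLIERS = {"research": 25, "main": 10, "discussion": 1, "comment": 0.1}

def effective_karma(votes_by_section):
    """Sum net upvotes weighted by the section they were earned in."""
    return sum(MULTIPLIERS[section] * votes
               for section, votes in votes_by_section.items())

# One well-researched article vs. a month of prolific commenting:
article_author = effective_karma({"research": 20})      # 20 * 25 = 500
prolific_commenter = effective_karma({"comment": 300})  # 300 * 0.1 = 30
```

Under weights like these, one well-received research article outscores a month of comment karma, which is the incentive shift being proposed.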
0ChristianKl10y
I don't think that "sounding good" is an accurate description of how people in
the personal development field succeed.
Look at Tony Robbins, who is one of the most successful in the business. When
you ask most people whether walking on hot coals is impressive, they would tell
you that it is. Tony manages to get thousands of people in a seminar to walk
over hot coals.
Afterwards they go home and tell their friends about how they walked over hot
coals. That impresses people, and more people come to his seminars.
It's not only that his talk sounds good, but that he is able to provide
impressive experiences.
On the other hand, his success is also partly about being very good at building
a network marketing structure that works.
But in some sense that's not much different from the way universities work. The
evidence that universities actually make people successful in life isn't that
strong.
I don't think so. If you are a Scientologist and believe in Xenu, that reduces
your compatibility with ordinary human life. At the same time, it makes you more
committed to the group if you are willing to give something up to belong.
Opus Dei members wear the cilice to make themselves uncomfortable, to show that
they are committed.
I think the fact that you don't see many people as members of groups that
demand a lot of commitment is a feature of the 20th century, when mainstream
society had institutions such as television that were good at presenting a
culture which gave everyone in a country a shared identity.
At the moment, all sorts of groups like the Amish or the LDS that require more
commitment of their members seem to be growing. It could be that in a hundred
years we will have a lot more people as members of groups that require
sacrifice than we have now.
1Armok_GoB10y
I'd like to point out here a reference class that includes, if I understood
things right: the original Buddhist movement, the academic community of France
around the revolution, and the ancient Greek philosophical tradition. More
examples and a name for this reference class would be welcome.
I include this mainly to counterbalance the bias towards automatically looking
for the kind of cynical reference classes typically associated with, and primed
by, the concept of the outside view.
Looking at the reference class examples I came up with, there seems to be a
tendency towards having huge geniuses at the start that nobody later could
compete with, which led after a few centuries to dissolving into spinoff
religions dogmatically accepting or even perverting the original ideals.
DISCLAIMER: This is just one reference class and the conclusions reached by that
reference class; it was reached/constructed by trying to compensate for a
predicted bias, which tends to lead to even worse bias on average.
2Viliam_Bur10y
Thank you! This is the reference class I was looking for, so it is good to see
someone able to overcome the priming done by, uhm, not exactly the most neutral
people. I had a feeling that something like this was out there, but specific
examples did not come to my mind.
The danger of not being able to "replicate" Eliezer seems rather realistic to
me. Sure, there are many smart people in CFAR, and they will continue doing
their great work... but would they be able to create a website like LW and
attract many people if they had to start from zero, or if for whatever reason
the LW website disappeared? (I don't know how MIRI works, so I cannot
estimate how replaceable Eliezer would be there.) Also, for spreading
x-rationality to other countries, we need local leaders there if we want to
have meetups and seminars, etc. There are meetups all over the world, but they
are probably not the same thing as the communities in the Bay Area and New York
(although the fact that two such communities exist is encouraging).
1ChristianKl10y
I think the cult view is valuable for looking at some issues.
When you have someone asking
[http://lesswrong.com/lw/if2/open_thread_august_26_september_1_2013/9nfc]
whether he should cut ties with his non-rationalist family members, it's
valuable to keep in mind that that's culty behavior.
Normal groups in our society don't encourage their members to cut family ties.
Bad cults do those things. That doesn't mean that there's never a time when one
should rationally advise someone to cut those ties, but one should be careful.
Given the outside view of how cults got to a place where people literally drank
Kool-Aid, I think it's important to encourage people to keep ties to people who
aren't in the community.
Part of why Eliezer banned the basilisk might have been that having it more
actively around would push LessWrong in the direction of being a cult.
There's already the promise of nearly eternal life in an FAI-moderated heaven.
It's always useful to investigate where cult-like behaviors are useful and where
they aren't.
To maybe help others out and solve the trust bootstrapping involved, I'm offering for sale <=1 bitcoin at the current Bitstamp price (without the usual premium) in exchange for Paypal dollars to any LWer with at least 300 net karma. (I would prefer if you register with #bitcoin-otc, but that's not necessary.) Contact me on Freenode as gwern.
EDIT: as of 9 September 2013, I have sold to 2 LWers.
Pardon me, but - what is the trust bootstrapping involved?
6gwern10y
Paypal allows clawbacks for months, hence it's difficult to sell for Paypal to
anyone who is not already in the -otc web of trust; but by restricting sales to
high-karma LWers, I am putting their reputation here at risk if they scam me,
which enables me to sell to them. Hence, they can acquire bitcoins & get
bootstrapped into the -otc web of trust based on LW.
Thank you. All I need is a hand-held spray thermos to make Australia a viable
working vacation. I have a strong irrational aversion to spiders. This is much
more acceptable than the home-made flamer.
What makes money essential for the functioning of modern society? Through an experiment, we present evidence for the existence of a relevant behavioral dimension in addition to the standard theoretical arguments. Subjects faced repeated opportunities to help an anonymous counterpart who changed over time. Cooperation required trusting that help given to a stranger today would be returned by a stranger in the future. Cooperation levels declined when going from small to large groups of strangers, even if monitoring and payoffs from cooperation were invariant to group size. We then introduced intrinsically worthless tokens. Tokens endogenously became money: subjects took to reward help with a token and to demand a token in exchange for help. Subjects trusted that strangers would return help for a token. Cooperation levels remained stable as the groups grew larger. In all conditions, full cooperation was possible through a social norm of decentralized enforcement, without using tokens. This turned out to be especially demanding in large groups. Lack of trust among strangers thus made money behaviorally essential. To explain these results, we developed an evolutionary model. When behavior in society is heterogeneous, cooperation collapses without tokens. In contrast, the use of tokens makes cooperation evolutionarily stable.
Does this also work with macaques, crows or some other animals that can be taught to use money, but didn't grow up in a society where this kind of money use is taken for granted?
Not strictly the same, but there have been monkey money experiments. And the
results are hilarious.
www.zmescience.com/research/how-scientists-tught-monkeys-the-concept-of-money-not-long-after-the-first-prostitute-monkey-appeared/
Who is this and what has he done with Robin Hanson?
The central premise is in allowing people to violate patents if it is not "intentional". While reading the article the voice in my head which is my model of Robin Hanson was screaming "Hypocrisy! Perverse incentives!" in unison with the model of Eliezer Yudkowsky which was also shouting "Lost Purpose!". While the appeal to total invasive surveillance slightly reduced the hypocrisy concerns it at best pushes the hypocrisy to a higher level in the business hierarchy while undermining the intended purpose of intellectual property rights.
This may be an odd question, but what (if anything) is known on turning NPCs into PCs? (Insert your own term for this division here, it seems to be a standard thing AFAICT.)
I mean, it's usually easier to just recruit existing PCs, but ...
I suspect that finding people on the borderline between the categories and
giving them a nudge is part of the solution to this problem.
What do you need PCs to do that NPCs cannot do? Zeroing in on the exact quality
needed may make the problem easier.
3blashimov10y
Take the leadership feat, and hope your GM is lazy enough to let you level them.
More practically, is it a skills problem or, as I would guess, an agency
problem? Can you impress on them the importance of acting vs. not acting? Lend
them The Power of Accountability? 7 Habits of Highly Effective People? Can you
compliment them every time they show initiative? Etc. I think the solution is
too specific to individuals for general advice, nor do I know a general advice
book beyond those in the same theme as those mentioned.
1MugaSofer10y
Heh.
Agency. I've just noticed how many people I interact with are operating almost
totally on cached thoughts, and getting caught up in a lot of traps that they
could avoid if they were in the correct frame of mind (i.e., One Of Us). But you
have to be... motivated correctly, I guess, in order to turn to rationalism or
some other brand of originality. Goes my reasoning.
Yeah, could be. I figure it's always possible someone has already solved this,
though, so I'd rather find there's already a best practice than kick myself much
later for reinventing the wheel (or worse, giving up!).
2ChristianKl10y
Sometimes I even think that I would profit from having some cached thoughts that
give me effective habits I carry out on every occasion without thinking too
much.
When the alarm bell rings, it would be good if I had a cached thought that made
me automatically get up without thinking the decision through.
I don't think the state of being paralysed because you killed all cached
thoughts is particularly desirable. I think I spent too much time in that state
in the last year ;)
I think it's more a question of focusing your energy on questioning those cached
thoughts that actually matter.
When it comes to agency, I think there are some occasions where I show a lot but
others where I show little. Especially when you compare me to an average person,
the domains in which I show my agency are different.
I can remember one occasion where I took more responsibility for a situation
after reading the transition of McGonagall from NPC to PC in HPMOR.
I think that HPMOR is well written when it comes to installing the frame of mind
you are talking about.
0MugaSofer10y
Oh, we evolved them for a reason. Heck, your brain almost certainly couldn't
function without at least some. But when people start throwing type errors
whenever something happens and a cached thought doesn't kick in, they could
probably do with a little more original thought.
That said, there's more to agency and PC-ness than cached thoughts. It was just
particularly striking to see people around me fishing around for something
familiar they knew how to respond to, and that's what prompted me to wonder how
much we knew about the problem.
Powell’s biggest revelation in considering the role of humans in algorithms, though, was that humans can do it better. “I would go down to Yellow, we were trying to solve these big deterministic problems. We weren’t even close. I would sit and look at the dispatch center and think, how are they doing it?” That’s when he noticed: They are not trying to solve the whole week’s schedule at once. They’re doing it in pieces. “We humans have funny ways of solving problems that no one’s been able to articulate,” he says. Operations
In loading trucks for warehouses, some OR guys I know ran into the opposite
problem: they encoded all the rules as constraints, found a solution, and it was
way worse than what people were actually doing. Turns out it was because the
people actually loading the trucks didn't pay attention to whether or not the
load was balanced on the truck, and so on (i.e., "mathematically feasible" was a
harder set of constraints than "implementable", because the policy book was
stricter than actual practice).
(I also don't think it's quite fair to call the OR approach 'punting', since we
do quite a bit of optimization using heuristics.)
If anyone wants to teach English in China, my school is hiring. The pay is higher than the market rate and the management is friendly and trustworthy. You must have a Bachelor's degree and a passport from an English-speaking country. If you are at all curious, PM me for details.
I have updated on how important it is for Friendly AI to succeed (more now). I did this by changing the way I thought about the problem. I used to think in terms of the chance of Unfriendly AI; this led me to assign a chance to whether a fast, self-modifying, indifferent AI or FAI was possible at all.
Instead of thinking of the risk of UFAI, I started thinking of the risk of ~FAI. The more I think about it, the more I believe that a Friendly Singleton AI is the only way for us humans to survive. FAI mitigates other existential risks from nature, unknowns, hu... (read more)
So you want a god to watch over humanity -- without it we're doomed?
4niceguyanon10y
As of right now, yes. However, I could be persuaded otherwise.
-1ChristianKl10y
I find it unlikely that you are well calibrated when you put your credence at
99% for a 1,000 year forecast.
Human culture changes over time. It's very difficult to predict how humans in
the future will think about specific problems. We went in less than 100 years
from criminalizing homosexual acts to lawful same sex marriage.
Could you imagine that everyone would adopt your morality in 200 or 300
years? If so, do you think that would prevent humanity from being doomed?
If you don't think so, I would suggest you evaluate your own moral beliefs in
detail.
Is there a name for taking someone's being wrong about A as evidence of their being wrong about B? Is this a generally sound heuristic to have? In the case of crank magnetism, should I take someone's crank ideas as evidence against an idea that is new and unfamiliar to me?
It's evidence against them being a person whose opinion is strong evidence of B, which means it is evidence against B, but it's probably weak evidence, unless their endorsement of B is the main thing giving it high probability in your book.
I don't know if there's a name for this, but I definitely do it. I think it's
perfectly legitimate in certain circumstances. For example, the more B is a
subject of general dispute within the relevant grouping, and the more
closely-linked belief in B is to belief in A, the more sound the heuristic. But
it's not a short-cut to truth.
For example, suppose that you don't know anything about healing crystals, but
are aware that their effectiveness is disputed. You might notice that many of
the same people who (dis)believe in homeopathy also (dis)believe in healing
crystals, that the beliefs are reasonably well-linked in terms of structure, and
you might already know that homeopathy is bunk. Therefore it's legitimate to
conclude that healing crystals are probably not a sound medical treatment -
although you might revise this belief if you got more evidence. On the other
hand, note that reversed stupidity is not truth - healing crystals being bunk
doesn't indicate that conventional medicine works well.
The place where I find this heuristic most useful is politics, because the sides
are well-defined - effectively, you have a binary choice between A and ~A,
regardless of whether hypothetical alternative B would be better. If I stopped
paying attention to current affairs, and just took the opposite position to Bob
Crow [http://en.wikipedia.org/wiki/Bob_Crow] on every matter of domestic
political dispute, I don't think I'd go far wrong.
3shminux10y
I don't know if there is a name for it, but there ought to be one, since this
heuristic is so common: the reliability prior of an argument is the reliability
of the arguer. For example, one reason I am not a firm believer in the UFAI
doomsday scenarios is Eliezer's love affair with MWI.
2A1987dM10y
Yes, but in many cases it's very weak evidence. Overweighing it leads to the
“reversed stupidity” failure mode.
1Adele_L10y
Bayes' theorem to the rescue! Consider a crank C, who endorses idea A. Then the
probability of A being true, given that C endorses it equals the probability of
C endorsing A, given that A is true times the probability that A is true over
the probability that C endorses A.
In equations: P(A being true | C endorsing A) = P(C endorsing A | A being
true)*P(A being true)/P(C endorsing A).
Since C is known to be a crank, our probability for C endorsing A given that A
is true is rather low (cranks have an aversion to truth), while our probability
for C endorsing A in general is rather high (i.e. compared to a more sane
person). So you are justified in being more skeptical of A, given that C
endorses A.
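This calculation can be made concrete in a few lines. The probabilities below are made up purely for illustration; only the direction of the inequality (endorsement by a crank lowering the posterior) follows from the argument:

```python
# Toy numbers (assumptions, not measurements) for the crank-endorsement argument.
p_a = 0.5                     # prior P(A is true)
p_endorse_given_true = 0.1    # P(C endorses A | A true): cranks avoid truth
p_endorse_given_false = 0.4   # P(C endorses A | A false): cranks endorse much nonsense

# P(C endorses A) by the law of total probability.
p_endorse = (p_endorse_given_true * p_a
             + p_endorse_given_false * (1 - p_a))

# Bayes' theorem: P(A true | C endorses A).
posterior = p_endorse_given_true * p_a / p_endorse
print(posterior)  # ≈ 0.2, down from the 0.5 prior: the endorsement is evidence against A
```

Note the effect vanishes if the crank is indiscriminate (endorses true and false ideas at the same rate), which is the "weak evidence" caveat from the comments above.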
0David_Gerard10y
It's a logical fallacy, but is something humans evolved to do (or didn't evolve
not to do), so may in fact be useful when dealing with humans you know in your
group.
0satt10y
Somewhat related: The Correct Contrarian Cluster
[http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/].
0[anonymous]10y
Horrifically misnamed.
-1Douglas_Knight10y
ad hominem
Not that there's anything wrong with that.
This seems to be a hidden assumption of cryonics / transhumanism / anti-deathism: We should do everything we can to prevent people from dying, rather than investing these resources into making more or more productive children.
The usual argument (which I agree with) is that "Death events have a negative utility". Once a human already exists, it's bad for them to stop existing.
Complement it with the fact that it costs about 800 thousand dollars to raise a
mind, and an adult mind might be able to create value at rates high enough to
continue existing.
Macaulay Culkin and Haley Joel Osment notwithstanding, that is a good argument
against children.
4twanvl10y
An adult, yes. But what about the elderly? Of course this is an argument for
preventing the problems of old age.
Is it? It just says that you should value adults over children, not that you
should value children over no children. To get one of these valuable adult minds
you have to start with something.
2Mestroyer10y
How does that negative utility vary over time, though? Because if it stays the
same (or increases), then if we know now it's impossible to live 3^^^3 years,
the disutility from dying sooner than that is counterbalanced (or outweighed)
by the averted disutility from dying later, meaning decisions come out
basically the same as if you didn't disvalue death (or as if you valued it).
8Oscar_Cunningham10y
I think that part of the badness of death is the destruction of that person's
accumulated experience. Thus the negative utility of death does indeed increase
over time. However this is counterbalanced by the positive utility of their
continued existence. If someone lives to 70 rather than 50 then we're happy
because the 20 extra years of life were worth more than the worsening of the
death event.
0Armok_GoB10y
In this case, it seems like the best policy is cryopreserving then letting them
stay dead but extracting those experiences and inserting them in new minds.
Which sounds weird when you say it like that, but is functionally equivalent to
many of the scenarios you would intuitively expect and find good, like radically
improving minds and linking them into bigger ones before waking them up since
anything else would leave them unable to meaningfully interact with anything
anyway and human-level minds are unlikely to qualify for informed consent.
0Mestroyer10y
So if Bob is cryopreserved, and I can res him for N dollars, or create a
simulation of a new person and run them quickly enough to catch up a number of
years equal to Bob's age at death, for N - 1 dollars, I should spend all
available dollars on the latter?
Edit: to clarify why I think this is implied by your answer, what this is doing
is trading such that you gain a death at Bob's current age, but gain a life of
experience up to Bob's current age. If a life ending at Bob's current age is net
utility positive, this has to be net utility positive too.
3drethelin10y
broadly: yes, though all available dollars is actually all available dollars
(for making people), and you're ignoring considerations like keeping promises to
people unable to enforce them such as the cryopreserved or asleep or unconscious
etc.
Assuming Rawls's veil of ignorance, I would prefer to be randomly born in a world where a trillion people lead billion-year lifespans than one in which a quadrillion people lead million-year lifespans.
I agree, but is this the right comparison? Isn't this framing obscuring the fact that in the trillion-people world, you are much less likely to be born in the first place, in some sense?
Let us try this framing instead: Assume there are a very large number Z of possible different human "persons" (e.g. given by combinatorics on genes and formative experiences). There is a Rawlsian chance of 1/Z that a new created human will be "you". Behind the veil of ignorance, do you prefer the world to be one with X people living N years (where your chance of being born is X/Z) or the one with 10X people living N/10 years (where your chance of being born is 10X/Z)?
I am not sure this is the right intuition pump, but it seems to capture an aspect of the problem that yours leaves out.
Rawls's veil of ignorance + self-sampling assumption = average utilitarianism,
Rawls's veil of ignorance + self-indication assumption = total utilitarianism
(so to speak)? I had already kind-of noticed that, but hadn't given much thought
to it.
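The correspondence can be checked numerically on the two worlds from the grandparent comment. Z (the number of possible persons) and lifespan-as-utility are both assumptions of the toy model, not claims:

```python
from fractions import Fraction

def expected_utilities(pop, years, Z=10**18):
    """Expected utility for a random possible person, under each anthropic assumption.
    Utility of a life = its length in years (a crude assumed proxy)."""
    ssa = Fraction(years)           # SSA: you are a random *existing* person -> average
    sia = Fraction(pop, Z) * years  # SIA: chance of existing at all is pop/Z -> total
    return ssa, sia

# World A: 10^12 people living 10^9 years; World B: 10^15 people living 10^6 years.
ssa_a, sia_a = expected_utilities(10**12, 10**9)
ssa_b, sia_b = expected_utilities(10**15, 10**6)

# SSA reproduces average utilitarianism: it prefers the long-lived world A.
# SIA reproduces total utilitarianism: it is indifferent here, since both
# worlds contain exactly 10^21 life-years.
print(ssa_a > ssa_b, sia_a == sia_b)  # True True
```

Exact rationals (`Fraction`) are used so the SIA tie comes out as true equality rather than floating-point near-equality.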
9Mestroyer10y
Doesn't Rawls's veil of ignorance prove too much here though? If both worlds
would exist anyway, I'd rather be born into a world where a million people lived
101 year lifetimes than a world where 3^^^3 people lived 100 year lifetimes.
2TrE10y
So then, Rawls's veil has to be modified such that you are randomly chosen to be
one of a quadrillion people. In scenario A, you live a million years. In
scenario B, one trillion people live for one billion years each, the rest are
fertilized eggs which for some reason don't develop.
I'd still choose B over A.
0ShardPhoenix10y
Would you? A million probably isn't enough to sustain a modern economy, for
example. (Although in the 3^^^3 case it depends on the assumed density since we
can only fit a negligible fraction of that many people into our visible
universe).
6Mestroyer10y
If the economies would be the same, then yes. Don't fight the hypothetical.
2ShardPhoenix10y
I think "fighting the hypothetical" is justified in cases where the necessary
assumptions are misleadingly inaccurate - which I think is the case here.
5Creutzer10y
But compared to 3^^^3, it doesn't matter whether it's a million people, a
billion, or a trillion. You can certainly find a number that is sufficient to
sustain an economy and is still vastly smaller than 3^^^3, and you will end up
preferring the smaller number for a single additional year of lifespan. Of
course, for Rawls, this is a feature, not a bug.
9Izeinwinter10y
Existing people take priority over theoretical people. Infinitely so. This
should be obvious, as the reverse conclusion ends up with utter absurdities of
the "Every sperm is sacred" variety.
Mad grin
Once a child is born, it has as much claim on our consideration as every other
person in our light cone, but there is no obligation to have children. Not any
specific child, nor any at all. Reject this axiom and you might as well commit
suicide over the guilt of the billions of potentials children you could have
that are never going to be born. Right now.
Even if you stay pregnant till you die/never masturbate, this would effectively
not help at all - each conception moves one potential from the space of "could
be" to the space of "is", but at the same time eliminates at least several
hundred million other potential children from the possibility space - that is
just how human reproduction works.
TL;DR: yes, yes they are. It is a silly question.
8twanvl10y
Does this mean that I am free to build a doomsday weapon that kills everyone
born after September 4th 2013 100 years from now, if that gets me a cookie?
Not necessarily. It would merely be your obligation to have as many children as
possible, while still ensuring that they are healthy and well cared for. At some
point having an extra child will make all your children less well off.
Why is there a threshold at birth? I agree that it is a convenient point, but it
is arbitrary.
Why should I commit suicide? That reduces the number of people. It would be much
better to start having children. (Note that I am not saying that this is my
utility function).
3Eliezer Yudkowsky10y
The "infinitely so" part seems wrong, but the idea is that 4D histories which
include a sentient being coming into existence, and then dying, are dispreferred
to 4D world-histories in which that sentient being continues. Since the latter
type of such histories may not be available, we specify that continuing for a
billion years and then halting is greatly preferable to continuing for 10 years
then halting. Our degree of preference for such is substantially greater than
the degree to which we feel morally obligated to create more people, especially
people who shall themselves be doomed to short lives.
3Alejandro110y
The switch from consequentialist language ("4D histories which include… are
dispreferred") to deontological language ("…the degree to which we feel morally
obligated to create more people") is confusing. I agree that saving the lives of
existing people is a stronger moral imperative than creating new ones, at the
level of deontological rules and virtuous conduct which is a large part of
everyday human moral reasoning. I am much less clear that when evaluating 4D
histories I assign higher utility to one with few people living long lives than
to one with more people living shorter lives. Actually, I tend towards the
opposite intuition, preferring a world with more people who live less (as long
as their lives are still well worth living, etc.)
0Armok_GoB10y
Not sure what part of this comment tree this belongs so just posting it here
where it's likely to be seen:
It struck me with an image that it's not at all necessary that these tradeoffs
are actually a thing once you dissolve the "person" abstraction; it's possible
that something like the following is optimal: half the universe is dedicated to
searching the space of all experiences in order, starting with the highest
utility/most meaningful/lowest-hanging fruit. This is then aggregated, metadata
added, and sent to the other half, which is tiled with minimal
context-experiencing units equivalent to individual people's subjective
whatever. In the end, you end up with the equivalent of having half the number
of individual people as if that were your only priority, each having the
utility of a single person with the entire future history of half the universe
dedicated to it, including the context of history.
That's the best-case scenario. It's pretty certain SOME aspect or another of
the fragile godshatter will disallow it, obviously.
Yeah, these were basically pseudo-tangential musings.
0A1987dM10y
If by “old humans” you mean healthy adults, yes. If you mean this
[http://slatestarcodex.com/2013/07/17/who-by-very-slow-decay/], no. (IMO --
YMMV.)
0Alsadius10y
Death isn't just a negative for the dead person - it also causes paperwork and
expenses, destruction of relationships, and grief among the living.
2MugaSofer10y
This is true, but in my experience usually used to massage models that don't
consider death a disutility into giving the right answers. I can't think of ever
hearing this argument used for any other reason, in fact, in meatspace.
(Replying to this comment out of context on the Recent Comments.)
0Alsadius10y
The context is someone asking whether it's better to stop existing people from
dying or just make new people.
0MugaSofer10y
Hmm. I guess I'm going to cautiously say "called it!"
0drethelin10y
Yes.
1twanvl10y
Because?
7drethelin10y
A level 5 character is more valuable than a level 1 character.
A person who is older has more to give the world and has been more invested in
than a baby; they're a lot less replaceable.
Also, I like 'em more.
Since PB users' calibrations are not yet good enough to see the future, you can
easily avoid MoR spoilers by subscribing to the email or RSS alerts for new
chapters & reading them as appropriate.
5Adele_L10y
This is the obvious solution, but I want to reread what I've currently read, and
have some time to think about the story and try creating an accurate causal
model of events and such in the story as I read new!Adele material (Eliezer says
it's supposed to be a solvable puzzle). I don't have time to do this right now,
so in the meantime, I try to avoid spoilers.
2Douglas_Knight10y
I used feed43 [http://www.feed43.com/] to create an rss feed
[http://www.feed43.com/predictionbook_recent.xml] out of recent predictions
[http://predictionbook.com/predictions]. Then I used feedrinse
[http://www.feedrinse.com/] to filter out references to hpmor resulting in a
safe feed
[http://www.feedrinse.com/services/rinse/?rinsedurl=3ca3bbba4fc10f010b651d1f026b53de].
(Update: chaining unreliable services makes something even less reliable.)
You could do the same for the pages of recently judged or future or users you
follow. I think feedrinse offers to merge feeds (into a "channel") before or
after doing the filtering. But if you find someone new and just want to click on
the username, you'll leave the safe zone. Even if you see someone you have
processed, the username will take you to the unsafe page.
A better solution would be to write a greasemonkey script that modified each
predictionbook page as you look at it.
The final feedrinse feed works in a couple of my browsers, but not chrome.
Probably sending it through feedburner would fix it.
feed43 was finicky. The item search pattern was:
{_}
{_}{%} [{%}]{%}
The regexp I used in feedrinse was /hp.?mor/
It is case insensitive and manages to eliminate "HP MoR:", "[HPMOR]", etc. It
won't work if they spell it out, or just predict "Harry is orange" without
indicating which story they're predicting about. In that case, someone will
probably leave a hpmor comment, but this doesn't see such comments.
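For anyone rebuilding this filter elsewhere, the pattern's behavior is easy to sanity-check (Python shown purely for illustration; feedrinse itself only needs the regexp):

```python
import re

# The feedrinse pattern from above, applied case-insensitively.
pattern = re.compile(r"hp.?mor", re.IGNORECASE)

# Hypothetical prediction titles, covering the variants mentioned above.
titles = [
    "HP MoR: Harry learns Occlumency",  # matches: "hp" + space + "mor"
    "[HPMOR] Hermione returns",         # matches: .? also matches zero characters
    "hp-mor ch. 100 out by October",    # matches: hyphen separator
    "Harry is orange",                  # no match: the story isn't indicated
]

# Keep only titles the filter would let through (i.e. the spoiler-safe ones).
filtered = [t for t in titles if not pattern.search(t)]
print(filtered)  # ['Harry is orange']
```

As the comment notes, this fails open: a title that never mentions the story at all ("Harry is orange") sails through the filter.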
2Jayson_Virissimo10y
If you are skilled in the art of Ruby, then yes. Otherwise, maybe. People
(myself included) have been complaining about the lack of tagging/sorting system
on PB for quite some time, but so far, no one has played the hero.
The following query is sexual in nature, and is rot13'ed for the sake of those who would either prefer not to encounter this sort of content on Less Wrong, or would prefer not to recall information of such nature about my private life in future interactions.
V nz pheeragyl va n eryngvbafuvc jvgu n jbzna jub vf fvtavsvpnagyl zber frkhnyyl rkcrevraprq guna V nz. Juvyr fur cerfragyl engrf bhe frk nf "njrfbzr," vg vf abg lrg ng gur yriry bs "orfg rire," juvpu V ubcr gb erpgvsl.
Sbe pynevsvpngvba, V jbhyq fnl gung gur trareny urnygu naq fgnovy... (read more)
Well, I'm flattered that you think my position is so enviable, but I also think this would be a pretty reasonable course of action for someone who made a billion dollars.
This book
[http://www.amazon.com/Multi-Orgasmic-Man-Sexual-Secrets-Should/dp/0062513362/ref=sr_1_1?s=books&ie=UTF8&qid=1378190333&sr=1-1&keywords=The+Multi-Orgasmic+Man]
pbhyq uryc jvgu gur fgnzvan. Vg jbexrq sbe zl uhfonaq, jura ur gevrq vg n srj
lrnef ntb.
0Desrtopa10y
Are the instructions anything simple enough that I could replicate them without
needing to buy the entire book?
0Sabiola10y
Maybe, but then I'd have to read it to find out, and I have many other books I'd
like to read. Maybe you can find it in the library?
0Desrtopa10y
I'll check; I'm pretty sure my own library doesn't have a Sex section, but it
might be in network.
Asking to order it would be pretty embarrassing, I have to admit, especially at
my own library where a lot of the people who work there know me by name.
0khafra10y
Dewey Decimal number 613.96, IIRC from my internet-deprived adolescence.
0Douglas_Knight10y
If you're too cheap to spend $4 at amazon, pirate it
[http://gen.lib.rus.ec/search.php?req=multi+orgasmic+man].
2NancyLebovitz10y
Slow Sex
[http://www.amazon.com/Slow-Sex-Craft-Female-Orgasm/dp/0446567183/ref=cm_cr_pr_pb_t]
seems to help move at least some people to move from good to great.
1Desrtopa10y
Does that entail sex literally done slowly? We could try it out, but that
doesn't seem to be to her preference.
0NancyLebovitz10y
It involves learning to pay more attention as a meditative practice, but not (I
think) a recommendation to always go slowly.
1drethelin10y
Practice makes perfect. I think a lot of good sex is intuitively reading your
partner's signals and ramping things up/down with good timing in response to
them. I think this is something you might be able to learn via logos but I think
it's much more likely to be something you need to experience before you can get
good at it. When to pull hair, when to thrust deeper, etc.
In general I and whoever I'm with have had more fun when I felt I had a good
idea of what they wanted in the moment, which I think I've gotten better at
mainly through practice.
1Desrtopa10y
I suspect that I can continue to improve with practice, but I'd like to be able
to set out every option available to me on the table.
Even if I can attain the status of "best" without taking such extraordinary
measures, this is something I'm genuinely competitive on, which at least to me
means that simply taking first place isn't sufficient if I can still see avenues
to top myself.
People sometimes say that we don't choose to be born. Is this false if I myself choose to have kids for the same reason my parents did (or at least to have kids if I was ever in the relevantly same situation?) If so, can I increase my measure by having more children for these reasons?
Technically yes, but obviously if this is part of your motivation for doing so
then that's a meaningful difference, unless your parents also understood TDT
and had that as part of their reason; so if in fact they did not (which they
probably didn't, since it wasn't invented back then), then this answer is
entirely useless.
2A1987dM10y
Other forms of folk rule consequentialism (e.g. the Golden Rule) have existed
for quite a long time.
2Armok_GoB10y
Interesting point! This is quite a tricky problem that I've considered before.
My current stance is we need to know more of the specifics and causal history of
how those kind of rules are implemented in general before we can determine if
they count, and there is also the possibility that once we've done that it turns
out they "should" but our current formalizations won't... This is an
interesting, mostly unexplored (I think) subject that seems likely to spawn a
fruitful discussion though.
Has anyone here read up through ch18 of Jaynes' PT:LoS? I just spent two hours trying to derive 18.11 from 18.10. That step is completely opaque to me, can anybody who's read it help?
You can explain in a comment, or we can have a conversation. I've got gchat and other stuff. If you message me or comment we can work it out. I probably won't take long to reply, I don't think I'll be leaving my computer for long today.
EDIT: I'm also having trouble with 18.15. Jaynes claims that P(F|A_p E_aa) = P(F|A_p) but justifies it with 18.1... I just don't see how that... (read more)
I've just looked and I have no idea either. If anyone wants to help there's a
copy of the book here [http://www-biba.inrialpes.fr/Jaynes/prob.html].
EDIT: The numbers in that copy are off by 1 from the book. "18.10" = "18-9" and
so on.
0alex_zag_al10y
Yeah, so to add some redundancy for y'all, here's the text surrounding the
equations I'm having trouble with.
The 18.10 to 18.11 jump I'm having trouble with is the one in this part of the
text:
And equation 18.15, which I can't justify, is in this part of the text:
0Kindly10y
A not-quite-rigorous explanation of the thing in 18.15:
E_aa is, by construction, only relevant to A. A_p was defined (in 18.1) to
screen off all previous knowledge about A. So in fact, if we are given evidence
E_aa but then given evidence A_p, then E_aa becomes completely irrelevant: it's
no longer telling us anything about A, but it never told us anything about
anything else. Therefore P(F|A_p E_aa) can be simplified to P(F|A_p).
0alex_zag_al10y
That's not true though. By construction, every part of it is relevant to A.
That doesn't mean it's not relevant to anything else. For example, It could be
in this Bayes net: E_aa ---> A ----> F. Then it'd be relevant to F.
Although... thinking about that Bayes net might answer other questions...
Hmm. Remember that Ap screens A from everything. I think that means that A's
only connection is to Ap - everything else has to be connected through Ap.
So the above Bayes net is really
Eaa --> Ap --> F, with another arrow from Ap to A.
Which would mean that Ap screens Eaa from F, which is what 18.15 says.
The above Bayes net represents an assumption that Eaa and F's only relevance to
each other is that they're both evidence of A, which is often true I think.
Hmm. When I have some time I'm gonna draw Bayes nets to represent all of Jaynes'
assumptions in this chapter, and when something looks unjustified, figure out
what Bayes net structure would justify it.
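That screening-off structure can be checked numerically on a toy version of the chain Eaa --> Ap --> F. The conditional probabilities below are arbitrary assumptions; the point is that for *any* such numbers, the chain structure forces P(F | Ap, Eaa) = P(F | Ap):

```python
from itertools import product

# Assumed toy parameterization of the chain Eaa -> Ap -> F.
p_eaa = 0.3                         # P(Eaa = 1)
p_ap_given_eaa = {0: 0.2, 1: 0.7}   # P(Ap = 1 | Eaa)
p_f_given_ap = {0: 0.1, 1: 0.8}     # P(F = 1 | Ap)

# Joint distribution P(Eaa, Ap, F), factorized along the chain.
joint = {}
for eaa, ap, f in product([0, 1], repeat=3):
    p = p_eaa if eaa else 1 - p_eaa
    p *= p_ap_given_eaa[eaa] if ap else 1 - p_ap_given_eaa[eaa]
    p *= p_f_given_ap[ap] if f else 1 - p_f_given_ap[ap]
    joint[(eaa, ap, f)] = p

def cond_p_f(ap, eaa=None):
    """P(F=1 | Ap=ap [, Eaa=eaa]), computed by summing out the joint."""
    keep = [k for k in joint if k[1] == ap and (eaa is None or k[0] == eaa)]
    num = sum(joint[k] for k in keep if k[2] == 1)
    den = sum(joint[k] for k in keep)
    return num / den

# Conditioning on Eaa changes nothing once Ap is given: Jaynes' 18.15.
print(cond_p_f(1), cond_p_f(1, eaa=0), cond_p_f(1, eaa=1))  # all ≈ 0.8
```

If instead Eaa had its own arrow into F (not only through Ap), the three printed values would generally differ, which is exactly the failure case raised at the start of this subthread.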
In fact, I skipped over this before but this is actually recommended in the
comments of that errata page I posted:
I learned about Egan's Law, and I'm pretty sure it's a less-precise restatement of the correspondence principle. Anyone have any thoughts on that similarity?
The term is also used more generally, to represent the idea that a new theory should reproduce the results of older well-established theories in those domains where the old theories work.
Sounds good to me, although that's not what I would have guessed from a name like 'correspondence principle'.
I suppose some minor difference is that this "law" is also applicable to
meta-ethics, not just to physics. It's probably worth adding a link to the
standard terminology to the LW wiki page.
I found this interesting post over at lambda the ultimate about constructing a provably total (terminating) self-compiler. It looked quite similar to some of the stuff MIRI has been doing with the Tiling Agents thing. Maybe someone with more math background can check it out and see if there are any ideas to be shared?
This is the same basic idea as Benja's Parametric Polymorphism
[http://lesswrong.com/lw/e5j/how_to_cheat_l%C3%B6bs_theorem_my_second_try/],
with N in the post corresponding to kappa from parametric polymorphism.
The "superstition" is:
And from the section in Tiling Agents
[http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/] about
parametric polymorphism (recommended if you want to learn about parametric
polymorphism):
Anyway, it's interesting that someone else has a very similar idea for this kind
of problem. But as mentioned in Tiling Agents, the "superstitious belief" seems
like a bad idea for an epistemically rational agent.
2[anonymous]10y
It is neat that this problem is coming up elsewhere. It reminds me that MIRI's
work could be relevant to people working in other sub-fields of math, which is a
good sign and a good opportunity.
No law or even good idea is going to stop various militaries around the world, including our own, from working as fast as they can to create Skynet. Even if they tell you they've put the brakes on and are cautiously proceeding in perfect accordance with your carefully constructed rules of friendly AI, that's just their way of telling you you're stupid.
There are basically two outcomes possible here: They succeed in your lifetime, and you are killed by a Terminator, or
So, in other words, absolutely no engagement with the actual ideas/arguments of
the people the 'letter' is addressed to.
2brandyn10y
Clarify?
0somervta10y
He's ignoring everything Friendly AI proponents have said on the issue, and is
attacking a strawman instead of the real reasons FAI people think it's a
problem.
2brandyn10y
Trying to understand here. What's the strawman in this case?
Can you point me to an essay that addresses the points in this one?
2somervta10y
He doesn't really make any relevant points.
The closest is this:
Which is really just an assertion that you won't get FOOM (I mean, no one thinks
it'll take less time than it takes you to hit Ctrl-C, but that's just hyperbole
for writing style). He doesn't argue for that claim, he doesn't address any of
the arguments for FOOM (most notably and recently: IEM
[http://intelligence.org/files/IEM.pdf]).
2brandyn10y
Ah, thanks, I better understand your position now. I will endeavor to read IEM
(if it isn't too stocked with false presuppositions from the get-go).
I agree the essay did not endeavor to disprove FOOM, but let's say it's just
wrong on that claim, and that FOOM is really a possibility -- then are you
saying you'd rather let the military AI go FOOM than something homebrewed? Or
are you claiming that it's possible to rein in military efforts in this
direction (world round)? Or give me a third option if neither of those applies.
2somervta10y
As to the 'third option', most work that I'm aware of falls into either
educating people about AI risk, or trying to solve the problem before someone
else builds an AGI. Most people advocate both.
2somervta10y
FAI proponents (Of which I am one, yes) tend to say that, ceteris paribus, an
AGI which is constructed without first 'solving FAI'* will be 'Unfriendly',
military or otherwise. This would be very very bad for humans and human values.
*There is significant disagreement over what exactly this consists of and how
hard it will be.
2brandyn10y
Do you think people who can't implement AGI can solve FAI?
2somervta10y
The problem of Friendliness can be worked on before the problem of AGI has been
solved, yes
2brandyn10y
Which do you think is more likely: That you will die of old age, or of
unfriendly-AI? (Serious question, genuinely curious.)
2somervta10y
I have lots of uncertainty around these sorts of questions, especially
considering the timeline-dependency and the how-likely-are-MIRI(and
others)-to-succeed-at-avoiding-UFAI question. Suffice it to say that it's not
an obvious win for 'old age' (which for me is hopefully >50 years away).
0brandyn10y
Let's imagine you solve FAI tomorrow, but not AGI. (I see it as highly
improbable that anyone will meaningfully solve FAI before solving AGI, but let's
explore that optimistic scenario.) Meanwhile, various folks and institutions out
there are ahead of you in AGI research by however much time you've spent on FAI.
At least one of them won't care about FAI.
I have a hard time imagining any outcome from that scenario that doesn't involve
you wishing you'd been working on AGI and gotten there first. How do you imagine
the outcome?
2Randaly10y
"worked on" != "solved"
(In addition, MIRI claims that a FAI could be easier to implement than an AGI in
general- i.e. that if you solve the philosophical difficulties regarding FAI,
this also makes it easier to create an AGI in general. For example, MIRI's
specific most-likely scenario for the creation of an AGI is a sub-human AI that
self-modifies to become smarter very quickly; MIRI's research on modeling
self-modification, while aimed at solving one specific problem that stands in
the way of Friendliness, also has potential applications towards understanding
self-modification in general.)
2somervta10y
drethelin nailed it - If I had counterfactually spent that time working on AGI, I
wouldn't have solved Friendliness, and (unless someone else had solved FAI
without me) my AGI would be just as Unfriendly in expectation as the
competitors.
If FAI is solved first, however, it increases the probability that the first AGI
will be Friendly. Depending on the nature of the solution (how much of it is
something that can be published so others can use it with their AGIs?), this
could happen through AGI development by people already convinced of the problem,
or it could be 'added on' to existing AGi projects.
2brandyn10y
See reply below to drethlin.
2drethelin10y
Why would he wish that? His unfriendly AI that he'd been working on will
probably just kill him.
-2brandyn10y
Sigh.
Ok, I see the problem with this discussion, and I see no solution. If you
understood AGI better, you would understand why your reply is like telling me I
shouldn't play with electricity because Zeus will get angry and punish the
village. But that very concern prevents you from understanding AGI better, so we
are at an impasse.
It makes me sad, because with the pervasiveness of this superstition, we've lost
enough minds from our side that the military will probably beat us to it.
2drethelin10y
The other thing is: "Our Side" is not losing minds. People are going to try to
make AGI regardless of friendliness, but almost no one anywhere has ever heard
of AI friendliness and even fewer give a shit. That means that the marginal
person working on friendliness is HUGELY more valuable. And if someone discovers
friendliness, guess what? The military are going to want it too! Maybe someone
actually insane would not, but any organization that has goals and cares about
humans at all will be better off with a friendly AI than not.
2brandyn10y
"but almost no one anywhere has ever heard of AI friendliness"
Ok, if this is your vantage point, I understand better. I must hang in the wrong
circles 'cause I meet far more FAI than AGI folks.
2drethelin10y
You shouldn't play with radiation because you don't understand it and trying to
build a bomb might get you and everyone else killed. This isn't a question of
superstition, you fool, it's a question of NOT just throwing a bunch of
radioactive material in a pile to see what happens, only when you fuck it up you
don't just kill yourself like Marie Curie or possibly blow up a few square miles
like if the Manhattan Project hadn't been careful enough. You fuck over
EVERYTHING.
0brandyn10y
Apologies for the provocative phrasing--I was (inadvertently) asking for a
heated reply...
But to clarify the point in light of your response (which no doubt will get
another heated reply, though honestly trying to convey the point w/out
provoking...):
Piles of radioactive material are not a good analogy here. But I think their
appearance here is a good illustration of the very thing I'm hoping to convey:
There are a lot of (vague, wrong) theories of AGI which map well to the
radioactive pile analogy. Just put enough of the ingredients together in a pile,
and FOOM. But the more you actually work on AGI, the more you realize how
heuristic, incremental, and data bound it is; how a fantastic solution to monkey
problems (vision, planning, etc) confers only the weakest ability in symbolic
domains, and that, for instance, NP-hard problems most likely remain NP-hard
regardless of intelligence, and their solutions are limited by time, space, and
energy constraints--not cleverness. Can a hyper-intelligent AI improve upon
hardware design, etc, etc? Sure! But the whole system (of progress) we're
speaking of is a large complex system of differential equations with many
bottlenecks, at least some of which aren't readily amenable to
hyper-exponential change. Will there be a point where things are out of our
(human) hands? Yes. Will it happen overnight? No.
The radioactive pile analogy fails because AGI will not be had by heaping a
bunch of stuff in a pile--it will be had through extensive engineering and
design. It will progress incrementally, and it will be bounded by resource
constraints--for a long long time.
A better analogy might be to building a fusion reactor. Sure, you have to be
careful, especially in the final design, construction, and execution of the full
scale device, but there's a huge amount of engineering to be done before you get
anywhere near having to worry about that, and tons of smaller experiments that
need to be proven first, and so on. And you learn as you
2drethelin10y
How not hard is it? How long do you think it would take you to solve it?
0brandyn10y
I think it will be incidental to AGI. That is, by the time you are approaching
human-level AGI it will be essentially obvious (to the sort of person who groks
human-level AGI in the first place). Motivation (as a component of the process
of thinking) is integral to AGI, not some extra thing only humans and animals
happen to have. Motivation needs to be grokked before you will have AGI in the
first place. Human motivational structure is quite complex, with far more
ulterior motives (clan affiliation, reproduction, etc) than straightforward
ones. AGIs needn't be so-burdened, which in many ways makes the FAI problem
easier in fact than our human-based intuition might surmise. On the other hand,
simple random variation is a huge risk--that is, no matter the intentional
agenda, there is always the possibility that a simple error will put that very
abstract coefficient of feedback over unity, and then you have a problem. If AGI
weren't going to happen regardless, I might say it's worthy of a debate now what
the nature of that problem would be (but in that debate, I still say it's not a
huge problem--it's not instantaneous FOOM, it's time-to-unplug FOOM; and you have
the advantage of other FAIs by then with full ability to analyze each other so
you actually have a lot of tools available to put out fires long before they're
raging); but AGI is going to happen regardless, so the race is not FAI vs. AGI,
but whether the first to solve AGI wants FAI or something else. And like I say,
there is also the race against our own inevitable demise of old age (talk to
anybody who's been in the longevity community for > 20 years and you will learn
they once had your optimism about progress).
Don't get me wrong, FAI is not an uninteresting problem. My claim is quite
simply that for the goals of the FAI community (which I have to assume includes
your own long-term survival), y'all would do far better to be working (hard and
seriously) on AGI than not. All of this sofa-think today will be rep
0brandyn10y
Just to follow up, I'm seeing nothing new in IEM (or if it's there, it's too
buried in "hear me think" to find--Eliezer really would benefit from pruning
down to essentials). Most of it concerns the point where AGI approaches or
exceeds human intelligence. There's very little to support concern for the long
ramp up to that point (other than some matter of genetic programming, which I
haven't the time to address here). I could go on rather at length in rebuttal of
the post-human-intelligence FOOM theory (not discounting it entirely, but
putting certain qualitative bounds on it that justify the claim that FAI will be
most fruitfully pursued during that transition, not before it), but for the
reasons implied in the original essay and in my other comments here, it seems
moot against the overriding truth that AGI is going to happen without FAI
regardless--which means our best hope is to see AGI+FAI happen first. If it's
really not obvious that that has to lead with AGI, then tell me why.
Does anybody really think they are going to create an AGI that will get out of
their hands before they can stop it? That they will somehow bypass ant, mouse,
dog, monkey, and human and go straight to superhuman? Do you really think that
you can solve FAI faster or better than someone who's invented monkey-level AI
first?
I feel most of this fear is residual leftovers from the self-modifying
symbolic-program singularity FOOM theories that I hope are mostly left behind by
now. But this is just the point -- people who don't understand real AGI don't
understand what the real risks are and aren't (and certainly can't mitigate
them).
2somervta10y
Self-modifying AI is the point behind FOOM. I'm not sure why you're connecting
self-modification/FOOM/singularity with symbolic programming (I assume you mean
GOFAI), but everyone I'm aware of who thinks FOOM is plausible thinks it will be
because of self-modification.
0brandyn10y
Yes, I understand that. But the premises underlying AGI matter a lot for how
self-modification is going to impact it. The stronger fast-FOOM arguments spring
from older conceptions of AGI. Imo, a better understanding of AGI does not
support it.
Thanks much for the interesting conversation, I think I am expired.
0somervta10y
I don't think anyone is saying that an 'ant-level' AGI is a problem. The issue
is with 'relatively-near-human-level' AGI. I also don't think there's much
disagreement about whether a better understanding of AGI would make FAI work
easier. People aren't concerned about AI work being done today, except inasmuch
as it hastens better AGI work done in the future.
0brandyn10y
"I mean, no one thinks it'll take less time than it takes you to hit Ctrl-C" --
by the way, are you sure about this? Would it be more accurate to say "before
you realize you should hit control-C"? Because it seems to me, if it aint goin'
FOOM before you realize you should hit control-C (and do so) then.... it aint
goin' FOOM.
2drethelin10y
More importantly: If someone who KNOWS how important stopping it is is sitting
at the button, then they're more likely to stop it; but if someone who's like
"it's getting more powerful and better optimized! Let's see how it looks in a
week!" is in charge, then problems.
0brandyn10y
Well, then, I hope it's someone like you or me that's at the button. But that's
not going to be the case if we're working on FAI instead of AGI, is it...
2somervta10y
What do you think Ctrl-C does in this scenario?
2brandyn10y
"The Intelligence Explosion Thesis says that an AI can potentially grow in
capability on a timescale that seems fast relative to human experience due to
recursive self-improvement. This in turn implies that strategies which rely on
humans reacting to and restraining or punishing AIs are unlikely to be
successful in the long run, and that what the first strongly self-improving AI
prefers can end up mostly determining the final outcomes for Earth-originating
intelligent life. " -- Eliezer Yudkowsky, IEM.
I.e., Eliezer thinks it'll take less time than it takes you to hit Ctrl-C.
(Granted it takes Eliezer a whole paragraph to say what the essay captures in a
phrase, but I digress.)
6somervta10y
Eliezer's position is somewhat more nuanced than that. He admits a possibility
of a FOOM timescale on the order of seconds, but a timescale on the order of
weeks/months/years is also in line with the IE thesis.
0[anonymous]10y
FOOM on the order of seconds can be strongly argued against (Eli does a fair job
of it himself, but likes to leave everything open so he can cite himself later
no matter what happens), and if it's weeks/months/years, then Hit Control C.
Seriously. If your computer is trying to take over the world and is likely to
succeed in the next few weeks, then kill -9 the thing. I realize that at that
point you've likely got other AIs to worry about, but at least you're in a
position to understand it well enough to have some hope at making yours friendly
and re-activating it before Skynet goes live. (I know Eli has counters to this,
and counter-counters, and counter-counter-counters, but so do I--I just don't
assume you'll be interested in hearing them. The main point here really is that
the original statement wasn't so much hyperbole as refreshingly concise--whether
or not you agree with it.)
0ChristianKl10y
How would you know that you have to press Ctrl-C? What observation would you
need to make?
Framing effects (causing cognitive biases) can be thought of as a consequence
of the absence of logical transparency in System 1 thinking. Different mental
models that represent the same information are psychologically distinct, and
moving from one model to another requires thought. If this thought is not
expended, the equivalent models don't get constructed, and intuition doesn't
become familiar with these hypothetical mental models.
This suggests that framing effects might be counteracted by explicitly imagining
alternative framings in order to present a better sample to intuition; or,
alternatively, focusing on an abstract model that has abstracted away the
irrelevant details of the framing.
I recently realized that I have something to protect (or perhaps a smaller version of the same concept). I also realized that I've been spending too much time thinking about solutions that should have been obviously unworkable. And I've been avoiding thinking about the real root problem because it was too scary, and working on peripheral things instead.
Does anyone have any advice for me? In particular, being able to think about the problem without getting so scared of it would be helpful.
I would like recommendations for an Android / web-based to-do list / reminder application. I was happily using Astrid until a couple of months ago, when they were bought up and mothballed by Yahoo. Something that works with minimal setup, where I essentially stick my items in a list, and it tells me when to do them.
Wunderlist 2 has an Android app (it only speaks English in the phone app, but it
does Portuguese in the normal online version).
It puts your tasks in the cloud so you can catch up with what you wrote in other
services.
I'm amazed by David Allen's GTD at the moment, so I want to recommend it,
despite still being in the honeymoon phase.
2sixes_and_sevens10y
Looking into Wunderlist now.
Don't worry. I read GTD several years ago, and stole plenty of stuff from it.
2Metus10y
I want to tack onto this and ask for a solution that provides some privacy, that
is, one where I can run my own server.
0Vladimir_Golovin10y
I was on Astrid too. I switched to Wunderlist mostly because their import from
Astrid worked correctly. Wunderlist is OK, though I can't say I'm completely
satisfied with it. Its UI is laggy (on a Nexus 4!) and unreliable; for example,
the auto-sync often destroys the last task I just typed in, or when I
accidentally tap outside the task entry box the text I just typed is lost
forever.
I'm looking at alternatives, and the one I like the most so far is Remember the
Milk. Last time I tried it (probably a year ago) it was rubbish, but the latest
version has a clean and fast native Android GUI and some nice extra
functionality (e.g. geofencing). I'm thinking about switching, but it doesn't
have import from Wunderlist, so I'll have to move about 200 tasks manually.
0Ben_LandauTaylor10y
I've been happily using Remember the Milk
[http://www.rememberthemilk.com/] to manage my GTD system. It's got a simple,
intuitive interface, both on desktop and on Android. I'm not sure if it has the
reminder features you're after, since that's not something I've ever wanted.
Bruce Schneier wrote an article in the Guardian in which he argues that we should take seriously the possibility that the NSA can break more forms of encryption than we previously believed.
Prefer symmetric cryptography over public-key cryptography. Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can.
The security of bitcoin wallets rests on elliptic-curve cryptography. This could mean that the NSA has the power to turn the whole bitcoin economy into toast if bitcoin becomes a real problem for them on a political level.
What don't you understand? The 2 homo economicuses are aware that 'existence is
suffering' especially when they are the butt of the humor, and rationally commit
suicide.
0shminux10y
You mean, left the bar?
5gwern10y
The end of the joke is the end of them.
0shminux10y
The joke would end one way or another, regardless of what they decide to do.
8gwern10y
Er, yes, but that's like saying you should stop eating ice cream right now
because one day you will die.
So.... Thinking about using Familiar, and realizing that I don't actually know what I'd do with it.
I mean, some things are obvious - when I get to sleep, how I feel when I wake up, when I eat, possibly a datadump from RescueTime... then what? All told that's about 7-10 variables, and while the whole point is to find surprising correlations I would still be very surprised if there were any interesting correlations in that list.
Suggestions? Particularly from someone already trying this?
Has anyone got a recommendation for a nice RSS reader? Ideally I'm looking for one that runs on the desktop rather than in-browser (I'm running Ubuntu). I still haven't found a replacement that I like for Lightread for Google Reader.
I've been discussing the idea of writing a series of short-story fanfics in which Rapture, the underwater city from the computer game BioShock originally run on Objectivist/Libertarian principles, is instead run according to a different political philosophy. Possibly as a collaborative project with different people submitting different short stories. Would anyone here be interested in reading or contributing to something like that?
This is rather off-topic to the board, but my impression is that there is some sympathy here for alternative theories on heart disease/healthy diets, etc. (which I share).
Any for alternative cancer treatments? I don't find any that have been recommended to me as remotely plausible, but wonder if I'm missing something, if some disproving study is flawed, etc.
In the effective animal altruism movement, I've heard a bit (on LW) about wild animal suffering- that is, since raised animals are vastly outnumbered by wild animals (who encounter a fair bit of suffering on a frequent basis), we should be more inclined to prevent wild suffering than worry about spreading vegetarianism.
That said, I think I've heard it sometimes as a reason (in itself!) not to worry about animal suffering at all, but has anyone tried to solve or come up with solutions for that problem? Where can I find those? Alternatively, are there more resources I can read on wild animal altruism in general?
That doesn't sound true if you weight by intelligence (which I think you should
since intelligent animals are more morally significant). Surely the world's
livestock outnumber all the other large mammals.
2[anonymous]10y
That's... a very good point, now that you mention it. Thanks for suggesting it!
I looked into the comparisons in the USA (obviously, we're not only concerned
about the USA; some countries will have a higher population of wild or domestic
animals, like Canada vs. Egypt. I have no idea if the US represents the average,
but I figure it would be easiest to find information on).
That said; some very rough numbers:
Mule & black-tailed deer populations in USA: ~5 million (2003) (Source)
[http://www.muledeerworkinggroup.com/Docs/Proceedings/2005-Western%20States%20and%20Provinces%20Deer%20and%20Elk%20Workshop/Status%20and%20Trend%20of%20Population%20and%20Harvest%20for%20Deer%20and%20Elk%20.pdf]
White-tailed deer population in USA: ~15 million (2010?) (Source)
[http://tippnews.com/local/rising-u-s-deer-population-calls-for-more-effective-deer-control-for-home-gardens/]
Black bear population in USA: ~0.5 million (2011) (Source)
[http://www.blackbearsociety.org/bearPopulationbyState.html]
Coyote population in USA: No good number found
Elk population in USA: ~1 million (2008) (Source)
[http://ezinearticles.com/?Elk-Population-Facts-and-Statistics-For-the-United-States&id=3171991]
That totals 21.5 million large wild animals- obviously, these aren't the only
large wild animals in the USA, but I imagine that the rest added together
wouldn't equal more than a quarter more than that- so I'll guess 25 million.
Domesticated animals:
Cattle population in USA: ~100 million (2011) (Source)
[http://www.aphis.usda.gov/animal_health/nahms/downloads/Demographics2010_rev.pdf]
Hog & pig population in USA: ~120 million (2011) (Source)
[http://www.aphis.usda.gov/animal_health/nahms/downloads/Demographics2010_rev.pdf]
Again, there are other large animals kept on commercial farms (goats, sheep),
but they're probably not more than a quarter- so about 275 million large
domesticated animals.
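Those rough totals are easy to sanity-check. A minimal sketch in Python, using the figures quoted above (all in millions; the 25% padding for species left uncounted is just the guess made in the comment):

```python
# Back-of-envelope check of the US population figures quoted above
# (all numbers in millions, taken from the sources cited; the 25%
# padding for uncounted species is the comment author's own guess).
wild = {
    "mule & black-tailed deer": 5,
    "white-tailed deer": 15,
    "black bear": 0.5,
    "elk": 1,
}
domestic = {
    "cattle": 100,
    "hogs & pigs": 120,
}

wild_counted = sum(wild.values())          # 21.5 million counted
domestic_counted = sum(domestic.values())  # 220 million counted

# Pad each total by ~25% for species not counted.
wild_total = wild_counted * 1.25           # ~27; rounded down to 25 above
domestic_total = domestic_counted * 1.25   # 275, matching the figure above

print(f"wild ~ {wild_total:.0f}M, domestic = {domestic_total:.0f}M")
print(f"ratio ~ {domestic_total / wild_total:.1f} : 1")
```

By these numbers, large domesticated animals outnumber large wild animals in the US by roughly ten to one.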
Looking at that, that does put "wild animal suffering" into perspective- if you
accepted that philosophy, i
2blashimov10y
Large mammals only? Is a domesticated cow smarter than a rat? A pigeon? Tough
call.
0DanielLC10y
There's not a whole lot we can do now, so one thing I've heard suggested is to
spread vegetarianism so that people will be more sympathetic to animals in
general, and when we have the ability to engineer some retrovirus to make them
suffer less or something like that, we'll care more about helping animals than
not playing god.
Another possibility: nuke the rainforests.
2[anonymous]10y
Vegetarianism as seeding empathy, interesting- where have you heard that idea
brought up? (That is, was it a book or somewhere online I could see more on?)
Mass genetic engineering was the 'solution' I was wondering about especially.
(Obviously it's a little impractical at the moment.)
Nuking the rainforests doesn't seem like a good solution (aside from the obvious
impacts on OUR wellbeing!) for the same reasons that nuking currently-suffering
human populations doesn't seem like a good solution. Of course, you may have
been joking.
1DanielLC10y
I don't know exactly where I heard it, but I'm pretty sure it was somewhere on
felicifia.org [http://felicifia.org].
I am somewhat skeptical of wild animal suffering being bad enough to necessitate
nuking the rainforests, but I think we should try to find out exactly how good
their lives are. If their suffering really does significantly outweigh their
happiness, then I don't see how we could justify not nuking them. If an animal
is suffering and isn't likely to get better, you euthanize it. If this applies
to all the animals, you euthanize all of them.
Hi, I am taking a course in Existentialism. It is required for my degree. The primary authors are Sartre, de Beauvoir and Merleau-Ponty. I am wondering if anyone has taken a similar course, and how they prevented the material from driving them insane (I have been warned this may happen). Is there any way to frame the material so it makes sense to a naturalist/reductionist?
This could be a Lovecraft horror story: "The Existential Diary of JMiller."
Week 3: These books are maddeningly incomprehensible. Dare I believe that it all really is just nonsense?
Week 8: Terrified. Today I "saw" it - the essence of angst - and yet at the same time I didn't see it, and grasping that contradiction is itself the act of seeing it! What will become of my mind?
Week 12: The nothingness! The nothingness! It "is" everywhere in its not-ness. I can not bear it - oh no, "not", the nothingness is even constitutive of my own reaction to it - aieee -
(Here the manuscript breaks off. JMiller is currently confined in the maximum security wing of the Asylum for the Existentially Inane.)
1. If you do not have a preexisting tendency for depression as a result of
taking ideas seriously, you probably have nothing to worry about. If you are
already a reductionist materialist, you also probably have nothing to worry
about. Millions of college students have taken courses in existentialism.
Almost all of them are perfectly fine. Even if they're probably pouring
coffee right now.
2. In LW terms, it may be useful to brush up on your metaethics
[http://wiki.lesswrong.com/wiki/Metaethics_sequence], as metaethical problems are
usually the most troublesome part of these kinds of ideas in my social circle. Joy
in the Merely Real [http://wiki.lesswrong.com/wiki/Joy_in_the_Merely_Real]
may also be useful. I have no idea how your instructors will react if you
cache these answers and then offer them up in class, though. I would suggest
not doing that very often.
3. In the event that the material does overwhelm you beyond your ability to
cope, or prevents you from functioning, counseling services/departments on
college campuses are experienced in dealing with philosophy-related
depression, anxiety, etc. The use of the school counseling services should
be cheap/free with payment of tuition. I strongly suggest that you make use
of them if you need them. More generally, talking about the ideas you are
learning about with a study group, roommate, etc. will be helpful.
4. Eat properly. Sleep properly. Exercise. Keep up with your studying. Think
about things that aren't philosophy every once in a while. Your mind will
get stretched. Just take care of it properly to keep it supple and elastic.
(That was a really weird metaphor.)
3pragmatist10y
When reading Merleau-Ponty it might help to also read the work of contemporary
phenomenologists whose work is much more rooted in cognitive science and
neuroscience. A decent example is Shaun Gallagher's book How the Body Shapes the
Mind
[http://www.amazon.com/Body-Shapes-Mind-Shaun-Gallagher/dp/0199204160/ref=sr_1_1?ie=UTF8&qid=1378459732&sr=8-1&keywords=shaun+gallagher],
or perhaps his introductory book on naturalistic phenomenology
[http://www.amazon.com/The-Phenomenological-Mind-Shaun-Gallagher/dp/0415610370/ref=sr_1_3?ie=UTF8&qid=1378459732&sr=8-3&keywords=shaun+gallagher],
which I haven't read. Gallagher has a more or less Merleau-Pontyesque view on a
lot of stuff, but explicitly connects it to the naturalistic program and
expresses things in a much clearer manner. It might help you read Merleau-Ponty
sympathetically.
3fubarobfusco10y
All of those weird books were written by humans.
Those humans were a lot like other humans.
They had noses and butts and toes.
They ate food and they breathed air.
They could add numbers and spell words.
They knew how to have conversations and how to use money.
They had girlfriends or boyfriends or both.
Why did they write such weird books?
Was it because they saw other humans kill each other in wars?
Was it because writing weird books can get you a lot of attention and money?
Was it because they remembered feeling weird about their moms and dads?
People talk a lot about that.
Why do they talk a lot about that?
1ChristianKl10y
Ignorance isn't bliss. If the course brings you into contact with a few Ugh
fields that you hold, that should be a good thing.
0IlyaShpitser10y
I think existentialism is very compatible w/ naturalism/reductionism.
Existentialists just use a weird vocabulary. But one of the main points, I
think, is coping with an absent/insane deity.
Another feature suggestion that will probably never be implemented: a check box for "make my up/down vote visible to the poster". The information required is already in the database.
What happened with the Sequence Reruns? I was getting a lot out of them. Were they halted due to lack of a party willing to continue posting them, or was a decision made to end them?
I'm pretty sure they halted because they had gone through the Sequences. Final
Words [http://lesswrong.com/lw/cl/final_words/], the last rerun post
[http://lesswrong.com/lw/hev/seq_rerun_final_words/], was published after Go
Forth and Create The Art!
[http://lesswrong.com/lw/c4/go_forth_and_create_the_art/], which is listed as
the last of the Craft and Community
[http://lesswrong.com/lw/cz/the_craft_and_the_community/] sequence, which was
the last of the Major Sequences
[http://wiki.lesswrong.com/wiki/Sequences#Major_Sequences].
0wedrifid10y
I never heard of such a decision and if it was made then it can be ignored
because it was a bad decision. (Until power is applied to hinder
implementation.)
If you value the sequence reruns then by all means start making the posts!
I personally get my 'sequence reruns' via the audio versions. Originally I used
text to speech but now many of them have castify [http://castify.co/].
When I was a teenager I took a personality test as a requirement for employment at a retail clothing store. I didn't take it too seriously, I "failed" it and that was the end of my application. How do these tests work, and how do you pass or fail them? Is there evidence that these tests can actually predict certain behaviors?
You cannot fail a personality test unless the person administering the test
wants to filter out specific personality types that are similar to yours, for a
process unrelated to the test itself (e.g. employment).
The thing is, most possible personalities seem to be considered undesirable by
employers, and so many people simply resort to lying on these tests to present a
favourable image to employers (basically: extrovert, conformist,
"positive"/upbeat/optimistic, ambitious, responsible etc.). Looks like employers
know about this, but don't care anyway, because they think that if you aren't
willing to mold yourself into somebody else for the sake of the job, then you
don't want the job enough and there are many others who do.
(Disclaimer: I'm an outsider to the employment process and might not know what
I'm talking about. My impressions are gathered from job interview advice and job
descriptions.)
0[anonymous]10y
You might be interested in the Big Five model of personality, which seems to
be a rough scientific consensus, and is better empirically supported than other
models. In particular, measures of conscientiousness have a relatively strong
predictive value for things like grades, unemployment, crime, income, etc.
Myers-Briggs-style tests that sort people into buckets ("You're an extrovert!
You're an introvert!") are more common but don't seem to have a much predictive
value except insofar as they reduce to something like the Big Five model.
However, from what I remember of applying to crappy jobs, you may not have taken
a real test. I remember normal sounding items like, "I prefer large groups to
small groups", mixed in with "trick" questions like, "If I saw my best friend
stealing from the cash register, I would report him/her". I assume you got those
right. Either way, you're expected to just lie and say you'd be the perfect,
most hard-working employee ever and that cleaning toilets/selling shoes/washing
cars is what you've dreamed of doing since you were five.
0[anonymous]10y
I've heard (second-hand, but the original source was a counselor for
job-finding) that a trick for passing those, if it's a test that offers options
from "Strongly disagree" to "Strongly agree", is to always pick one of the
polarized ends ("Strongly" either way). The idea seems to be that they'll prefer
candidates who are less wishy-washy, have stronger convictions, etc.
Yet another article on the terribleness of schools as they exist today. It strikes me that Methods of Rationality is in large part a fantasy of good education. So is the Harry Potter/Sherlock Holmes crossover I just started reading. Alicorn's Radiance is a fair fit to the pattern as well, in that it depicts rapid development of a young character by incredible new experiences. So what solutions are coming out of the rational community? What concrete criteria would we like to see satisfied? Can education be 'solved' in a way that will sell outside this community?
The characters in those fics are also vastly more intelligent and conscientious
than average. True, current school environments are stifling for gifted kids,
but then they are also a very small minority. Self-directed learning is
counterproductive for the not-so-bright, and attempts to reform schools to encourage
"creativity" and away from the nasty test-based system tend to just be
smoke-screens for any number of political and ideological goals. Like the drunk
man and the lamppost, statistics and science are used for support rather than
illumination, and the kids are the ones who suffer.
There are massive structural problems wracking the educational system but I
wouldn't take the provincial perspectives of HPMoR or related fiction as good
advice for the changes with the biggest marginal benefit.
0ChristianKl10y
I think that's a bad question. I don't think that every school should follow the
same criteria. It's perfectly okay if different schools teach different things.
KIPP [http://www.kipp.org/] is an educational project financed by Bill Gates
that tries to use a lot of testing. On the other hand, you have unschooling and
environments like Sudbury Valley School. I don't think that every child has to
learn the same way. Both ways are viable.
When it comes to the more narrow rationality community, I think there's more
thought about building solutions that educate adults than about educating
children. If something like Anki helps adults learn, however, there's no real
reason why the same idea can't help children as well.
Similar things go for the Credence game and PredictionBook. If those tools can
help adults become more calibrated, they probably can also help kids, even if
some modifications might be needed.
Without having the money to start a completely new school, I think it's good to
focus on building tools that build a particular skill.
Fighting (in the sense of arguing loudly, as well as showing physical strength or using it) seems to be bad the vast majority of time.
When is fighting good? When does fighting lead you to Win, TDT-style (which instances of input should trigger the fighting instinct and pay off well)?
There is an SSA argument to be made for fighting in that taller people are stronger, stronger people are dominant, and bigger skulls correlate with intelligence. But it seems to me that this factor alone is far, far away from being sufficient justification for fighting, given the possible consequences.
If everyone agrees about how power is distributed, fighting is unnecessary.
Fighting can be necessary when another person claims to have power that they
actually don't have.
0Emile10y
Surely it's in nearly everyone's interest to have more power distributed to
themselves!
While fighting to get more power may have positive utility for oneself, it
usually has negative utility for others, so it's in everybody's interest that
everybody agrees not to fight for more power. This agreement can take the form
of alternative ways of getting power (elections, money), or of making power less
important to one's happiness (the rule of law).
0ChristianKl10y
If you don't have enough power to win a fight, fighting is also negative utility
for yourself. If everyone predicts that you would win a fight, you usually don't
actually have to fight it to get what you want.
5blashimov10y
Fighting has a huge signalling component: when viewed in isolation, a fight
might be trivially, obviously, a net negative for both participants. However,
either (or both!) participants might in the future win more concessions for
their willingness to fight than they lost in the fight itself. As humans are
adaptation executers, a certain willingness to fight, to seek revenge, etc. is
pretty common. At least, this seems to be the dominant theory, and sensible to me.
0wedrifid10y
Or even just CDT style. Human interaction is approximately an iterated prisoners
dilemma without a fixed duration. Reputation concerns are sufficient to account
for most of the (perceived and actual) benefit among humans. Then more can be
attributed to ethical inhibitions
[http://lesswrong.com/lw/v0/ethical_inhibitions/] on the 'pride' ethic.
0drethelin10y
Fighting makes a lot more sense in a tribe or in small groups of humans than it
does now. A big argument with someone now will very rarely keep you from
starving, and will probably never get you a child. On the other hand, showing
dominance in a situation where the women around you are choosing a mate out of 5
guys will get you laid a lot more.
2diegocaleiro10y
I haven't seen people who can get laid frequently getting into dominance
disputes/fights.
There is a distinction between dominance which is assertive and aversive, and
prestige, which is recognized and non-aversive.
Guys like Keanu Reeves, Tom Cruise, Brad Pitt have prestige which gets them
(potentially) laid.
Women have more reason to be attracted to a man if he is universally
recognized to be awesome than if he is all the time showing his power through
small agonistic interactions with other people -- males and females.
If Caesar had been universally prestigious instead of agonistically powerful,
Brutus wouldn't have had reason to kill him, leaving an unassisted widow and
children.
5wedrifid10y
I agree with your central point but I think this claim is something of an
overstatement (since I don't wish to accuse you of being sheltered). Crudely
speaking it tends to be sexier to win without fighting than to fight and win but
fighting (social status battles) and winning is still more than sufficiently
sexy.
I also note that it is hard to become the kind of person who does not need to
engage in any dominance disputes, and still maintain high social status, without
engaging in many dominance disputes on the way. To a certain extent the
process can be munchkined, since much of the record of who is dominant is stored
in the individual, but some actual dominance disputes will still be inevitable.
2diegocaleiro10y
Yes, also keep in mind that human cognition related to hierarchies of prestige
and dominance is flexible enough that it may be worth more to step up in a
different hierarchy than try to save yourself in this one by agonistic dispute.
We don't have the problem of being "stuck" with the same group forever, which
facilitates a lot.
3Viliam_Bur10y
First, for modern humans fighting is not the only method of achieving higher
status. There are other ways, too. Guys like Keanu Reeves are examples of
successfully using the other methods. If you are a movie superstar, you don't
have to fight with people to be recognized.
Second, even the fighters don't fight all the time. This is precisely why social
animals have pecking order -- cached results of the previous fights. If you won
against someone yesterday, most likely he will not challenge you today;
therefore today you can be admired as peaceful. The clearer your victory, the
longer it will be until someone dares to challenge you again. Therefore, if
someone is obviously stronger than all his competitors, he will actually
fight very rarely. It's like the first place is "does not have to fight because
no one dares to fight him", second place is "fights and wins", third place is
"fights and loses", and the last place is "too afraid to fight". Also, often the
real fight is avoided if both parties agree on their estimate of who would win.
(Analogically: a policeman has a gun, but he uses the gun very rarely. The mere
presence of the gun, and the knowledge that he would use it if necessary, causes
the psychological effect.)
So the best case is to be seen as so powerful that everyone else just gives up.
Then you can be dominant and peaceful. But if you don't have the real fighting
power, sooner or later someone will call your bluff. (In case of Keanu Reeves,
his power is social. If you try to go and kick him, his fans will come to his
defense, and his lawyers will destroy you. Your power is not just your
individual strength, but also all those people who would come to fight for you.)
-2Lumifer10y
To put it crudely, alpha males very rarely get into dominance fights because
part of being an alpha male is being acknowledged as an alpha male.
Betas and gammas status-fight more often since their position on the ladder is
less stable.
A large part of having status is not having to constantly prove it.
Just had a discussion with my in-law about the singularity. He's a physicist and his immediate response was: There are no singularities. They appear mathematically all the time and it only means that there is another effect taking over. Correspondingly a quick google thus brought up this:
On LW, 'singularity' does not refer to a mathematical singularity, and does not involve or require physical infinities of any kind. See Yudkowsky's post on the three major meanings of the term singularity. This may resolve your physicist friend's disagreement. In any case, it is good to be clear about what exactly is meant.
There aren't any that I'm aware of, except for "a disaster happens and everyone
dies," but that's bad luck, not a hard limit. I would respond with something
along the lines of "exponential growth can't continue forever, but where it
levels out has huge implications for what life will look like, and it seems
likely it will level out far above our current level, rather than just above our
current level."
-1Armok_GoB10y
One calculation per Planck time per cubic Planck length in the future light
cone.
I am seeking a mathematical construct to use as a logical coin for the purpose of making hypothetical decision theory problems slightly more aesthetically pleasing. The required features are:
Unbiased. It gives (or can be truncated or otherwise resolved to give) a 50/50 split on a boolean outcome.
Indexable. The coin can be used multiple times through a sequence number. eg. "The n-th digit of pi is even".
Intractable. The problem is too hard to solve. Either because there is no polynomial time algorithm to solve it or just because it is somewha
It looks to me like you want a cryptographically secure pseudo-random number
generator restricted to the output space {0, 1} and with a known seed. That's
unbiased and intractable pretty much by definition, indexable up to some usually
very large periodicity, and typically verifiable and simple to refer to because
that's standard practice in the security world.
There are plenty of PRNGs out there, and you can simply truncate or mod their
outputs to give you the binary output you want; Fortuna
[http://en.wikipedia.org/wiki/Fortuna_(PRNG)] looks like a strong candidate to
me.
(I was going to suggest the Mersenne twister
[http://en.wikipedia.org/wiki/Mersenne_twister], which I've actually implemented
before, but on further examination it doesn't look cryptographically strong.)
5pengvado10y
That works with caveats: You can't just publish the seed in advance, because
that would allow the player to generate the coin in advance. You can't just
publish the seed in retrospect, because the seed is an ordinary random number,
and if it's unknown then you're just dealing with an ordinary coin, not a
logical one. So publish in advance the first k bits of the pseudorandom stream,
where k > seed length, thus making it information-theoretically possible but
computationally intractable to derive the seed; use the k+1st bit as the coin;
and then publish the seed itself in retrospect to allow verification.
Possible desiderata that are still missing: If you take multiple coins from the
same pseudorandom stream, then you can't allow verification until the end of the
whole experiment. You could allow intermediate verification by committing to N
different seeds and taking one coin from each, but that fails wedrifid's
desideratum of a single indexable problem (which I assume is there to prevent
Omega from biasing the result via nonrandom choice of seed?).
I can get both of those desiderata at once using a different protocol: Pick a
public key cryptosystem, a key, and a hash function with a 1-bit output. You
need a cryptosystem where there's only one possible signature of any given
input+key, i.e. one that doesn't randomize encryption. To generate the Nth coin:
sign N, publish the signature, then hash the signature.
2bcoburn10y
My first idea is to use something based on cryptography. For example, using the
parity of the pre-image of a particular output from a hash function.
That is, the parity of x in this equation:
f(x) = n, where n is your index variable and f is some hash function assumed to
be hard to invert.
This does require assuming that the hash function is actually hard, but that
both seems reasonable and is at least something that actual humans can't provide
a counter example for. It's also relatively very fast to go from x to n, so this
scheme is easy to verify.
-2Watercressed10y
Hash functions map multiple inputs to the same hash, so you would need to limit
the input in some other way, and that makes it harder to verify.
0[anonymous]10y
No candidates, but I'd like to point out that your unbiased requirement may
perhaps be omitted, conditional on the implementation.
If you have a biased logical coin, you poll the coin twice until the results
differ, and then you pick the last result when they do differ. That results in
an unbiased logical coin.
My first instinct is to bet on properties of random graphs
[http://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model], but that's
not my field.
Now there is
[http://lesswrong.com/r/discussion/lw/jgb/open_thread_for_january_8_16_2014/].
The next time you miss an open thread you can make one. A lot more people will
see your comment than if you post in an old thread, and you might get a point or
two of karma.
Why should an AI have to self-modify in order to be super-intelligent?
One argument for self-modifying FAI is that "developing an FAI is an extremely difficult problem, and so we will need to make our AI self-modifying so that it can do some of the hard work for us". But doesn't making the FAI self-modifying make the problem much more difficult, since now we have to figure out how to make goals stable under self-modification, which is also a very difficult problem?
The increased difficulty could be offset by the ability for the AI to undergo a … (read more)
My immediate reaction is, 'Possibly -- wait, how is that different? I imagine
the AI would write subroutines or separate programs that it thinks will do a
better job than its old processes. Where do we draw the line between that and
self-modification or -replacement?'
If we just try to create protected code that it can't change, the AI can remove
or subvert those protections (or get us to change them!) if and when it acquires
enough effectiveness.
2[anonymous]10y
The distinction I have in mind is that a self-modifying AI can come up with a
new thinking algorithm to use and decide to trust it, whereas a
non-self-modifying AI could come up with a new algorithm or whatever, but would
be unable to trust the algorithm without sufficient justification.
Likewise, if an AI's decision-making algorithm is immutably hard-coded as "think
about the alternatives and select the one that's rated the highest", then the AI
would not be able to simply "write a new AI … and then just hand off all its
tasks to it"; in order to do that, it would somehow have to make it so that the
highest-rated alternative is always the one that the new AI would pick. (Of
course, this is no benefit unless the rating system is also immutably
hard-coded.)
I guess my idea in a nutshell is that instead of starting with a flexible system
and trying to figure out how to make it safe, we should start with a safe system
and try to figure out how to make it flexible. My major grounds for believing
this, I think, is that it's probably going to be much easier to understand a
safe but inflexible system than it is to understand a flexible but unsafe
system, so if we take this approach, then the development process will be easier
to understand and will therefore go better.
2ChristianKl10y
You're basically saying that the AI should be unable to learn to trust a process
that was effective in the past to also be effective in the future. I think that
would restrict intelligence a lot.
0[anonymous]10y
Yeah, that's a good point. What I want to say is, "oh, a non-self-modifying AI
would still be able to hand off control to a sub-AI, but it will automatically
check to make sure the sub-AI is behaving correctly; it won't be able to turn
off those checks". But my idea here is definitely starting to feel more like a
pipe dream.
0Armok_GoB10y
Hmm, there might still be something to glean from attempting to steelman this or
working in different related directions.
Edit: maybe something with an AI not being able to tolerate things it can't make
certain proofs about? The problem is it'd have to be able to make those proofs
about humans if they are included in its environment, and if they are not, it
might make UFAI there (intuition pump: a system that consists of a program it
can prove everything about, and humans that program asks questions to). Yeah,
this doesn't seem very useful.
0ChristianKl10y
You can't really tell whether something that is smarter than yourself is
behaving correctly. In the end a non-self-modifying AI checking on whether a
self-modifying sub-AI is behaving correctly isn't much different from a safety
perspective than a human checking whether the self modifying AI is behaving
correctly.
1drethelin10y
immutably hard-coding something in is a lot easier to say than to do.
1drethelin10y
Or it can write a new AI that's an improved version of itself and then just hand
off all its tasks to it.
2Vaniver10y
I'm not sure where the phrase "have to" is coming from. I don't think the
expectation that we will build a self-modifying intelligence that becomes a
superintelligence is because that seems like the best way to do it but because
it's the easiest way to do it, and thus the one likely to be taken first.
In broad terms, the Strong AI project is expected to look like "humans build
dumb computers, humans and dumb computers build smart computers, smart computers
build really smart computers." Once you have smart computers that can build
really smart computers, it looks like they will (in the sense that at least one
institution with smart computers will let them, and then we have a really smart
computer on our hands), and it seems likely that the modifications will occur at
a level that humans are not able to manage effectively (so it really will be
just smart computers making the really smart computers).
Yes. This is why MIRI is interested in goal stability under self-modification.
0[anonymous]10y
Yeah, I guess my real question isn't why we think an AI would have to
self-modify; my real question is why we think that would be the easiest way to
do things.
0drethelin10y
you'd have to actively stop it from doing so. An AI is just code: If the AI has
the ability to write code it has the ability to self modify.
0[anonymous]10y
If the AI has the ability to write code and the ability to replace parts of
itself with that code, then it has the ability to self-modify. This second
ability is what I'm proposing to get rid of. See my other comment
[http://lesswrong.com/lw/ii6/open_thread_september_28_2013/9pvi].
7drethelin10y
If an AI can't modify its own code it can just write a new AI that can.
1Vaniver10y
Unpack the word "itself."
(This is basically the same response as drethelin
[http://lesswrong.com/lw/ii6/open_thread_september_28_2013/9pvv]'s, except it
highlights the difficulty in drawing clear delineations between different kinds
of impacts the AI can have on the world. Even if version A doesn't alter itself,
it still alters the world, and it may do so in a way that brings about version B
(either indirectly or directly), so it would help if it knew how to design B.)
0[anonymous]10y
Well, I'm imagining the AI as being composed of a couple of distinct parts—a
decision subroutine (give it a set of options and it picks one), a thinking
subroutine (give it a question and it tries to determine the answer), and a
belief database. So when I say "the AI can't modify itself", what I mean more
specifically is "none of the options given to the decision subroutine will be
something that involves changing the AI's code, or changing beliefs in
unapproved ways".
So perhaps "the AI could write some code" (meaning that the thinking algorithm
creates a piece of code inside the belief database), but "the AI can't replace
parts of itself with that code" (meaning that the decision algorithm can't make
a decision to alter any of the AI's subroutines or beliefs).
Now, certainly an out-of-the-box AI would, in theory, be able to, say, find a
computer and upload some new code onto it, and that would amount to
self-modification. I'm assuming we're going to first make safe AI and then let
it out of the box, rather than the other way around.
LWers seem to be pretty concerned about reducing suffering by vegetarianism, charity, utilitarianism etc. which I completely don't understand. Can anybody explain to me what is the point of reducing suffering?
Commonly, humans have an amount of empathy that means that when they know about
suffering of entities within their circle of interest, they also suffer. EG, I
can feel sad because my friend is sad. Some people have really vast circles, and
feel sad when they think about animals suffering.
Do you understand suffering yourself? If so, presumably when you suffer you act
to reduce it, by not holding your hand in a fire or whatnot? Working to end
suffering of others can end your own empathic suffering.
4Oscar_Cunningham10y
I don't help people because of empathy for them. I just want to help them. It's
a terminal value for me that other people be happy. I do feel empathy, but
that's not why I help people.
Your utility function needn't be your own personal happiness! It can be anything
you want!
4drethelin10y
No it can't. You don't get to choose your utility function.
But anyway I was responding to rationalnoodles as someone who clearly doesn't
seem to understand wanting to help people.
9Oscar_Cunningham10y
My point was that you should never feel constrained by your utility function.
You should never feel like it's telling you to do something that isn't what you
want. But if you thought that utility=happiness then you might very well end up
feeling this way.
3drethelin10y
That's fair. I think a better way to put it is to not put too much value into
any explicit attempt to state your own utility function?
0Oscar_Cunningham10y
Yeah.
0Jayson_Virissimo10y
Are you implying that utility functions don't change or that they do, but you
can't take actions that will make it more likely to change in a given direction,
or something else?
2drethelin10y
More that any decision you make about trying to change your utility function is
not "choosing a utility function" but is actually just your current utility
function expressing itself.
-2[anonymous]10y
I understand wanting to help people. I have empathy and I feel all the things
you've mentioned. What I'm trying to say is: if you suffer when you think about
the suffering of others, why not try to stop thinking (caring) about it and
donate to science, instead of spending your time and money on reducing suffering?
2drethelin10y
do you think people should donate to science because that will reduce MORE
suffering in the long term?
0[anonymous]10y
Nope. I just like science.
Upd: I understand why my other comments were downvoted. But this?
1A1987dM10y
And some other people just like other people not suffering. Why should your like
count more than theirs?
0[anonymous]10y
Could you show me where I wrote that my like should count more than theirs?
1A1987dM10y
You didn't say that explicitly, but if yours doesn't count more than theirs, why
should we spend money on yours but not theirs?
0[anonymous]10y
Because they can (it looks like they can't) deal with their suffering from the
suffering of others without spending money on it, while enjoying spending money
on science?
1drethelin10y
I'm not sure, I didn't vote it. But my theory would be that you seem to be
making fun of people who like to reduce suffering for no better reason than you
like a different thing (I don't understand why you do x? is often code for x is
dumb or silly).
0[anonymous]10y
I don't think it's silly. I think it's silly to spend governmental money and
encourage others to spend money on it, since it makes no sense. But if you
personally enjoy it, well, that's great.
1drethelin10y
what do you mean by "makes no sense" ? Do you mean in the nihilistic sense that
nothing really matters? You keep using the phrase as if it's a knockdown
argument against reducing suffering, so it might be useful to clarify what you
mean.
0[anonymous]10y
Yes, in the nihilistic sense. If we follow the "what for?" question long enough,
we will inevitably get to a point where there is no explanation, and we may
therefore conclude that there is no sense in anything.
5drethelin10y
In that case, your question is already answered by the people who tell you that
they want to. If nothing really matters, then the only reasons to do things are
internal to minds. In which case reducing suffering is simply a very common
thing for minds in this area to want to do. Why? Evolutionary advantage, mayhaps.
If you buy nihilism there is no reason to reduce suffering, but there's also no
reason not to, and no reason to do anything else.
0[anonymous]10y
And this is exactly what I think, and exactly why I said that:
and
0drethelin10y
but why? Why is it silly? What makes it silly? Literally nothing. You act as if
government money should be reserved for things that "make sense" or have a
reason but nothing does. Spending gov money or encouraging others to reduce
suffering is exactly as meaningful as every other thing you could spend it on.
0[anonymous]10y
Senselessness makes it silly. I not only act so but also think that doing
anything is silly. What I'm doing right now is silly.
I shouldn't have included "encouraging others"; what makes governmental money
different is that the government acquired its money by force, without any reason
to use force. And your ethical system has to allow the use of force without
reason for government to be ethical.
0drethelin10y
What's wrong with usage of force? It's not like there's a reason not to.
0[anonymous]10y
I didn't say that there is anything wrong with usage of force. It's wrong to use
force in my ethical system because I don't like it and don't want it to be used
on me.
If your ethical system is different and allows usage of force without reason --
it's okay. But please -- only use it on other people who think like you.
0ChristianKl10y
I don't think you have given any argument in favor of that demand. If you really
think that nothing has any meaning, why should you follow the golden rule and
only use it on other people who think like you?
0[anonymous]10y
It's more of a request than a demand, and I understand that a person who likes
the use of force most likely will not listen to it, especially when I have no
arguments. They needn't follow this request. Its only intention is to show
what I would like them to do.
2Schlega10y
In my experience, trying to choose what I care about does not work well, and has
only resulted in increasing my own suffering.
Is the problem that thinking about the amount of suffering in the world makes
you feel powerless to fix it? If so then you can probably make yourself feel
better if you focus on what you can do to have some positive impact, even if it
is small. If you think "donating to science" is the best way to have a positive
impact on the future, than by all means do that, and think about how the
research you are helping to fund will one day reduce the suffering that all
future generations will have to endure.
0[anonymous]10y
It could be the problem, but, actually, the main one is that I see no point in
reducing suffering and it looks like nobody can explain it to me.
2DanielLC10y
It's an intrinsic value. Reducing suffering is the point.
I don't like to suffer. It's bad for me to suffer. Other people are like me.
Therefore, it's also bad for them to suffer.
0[anonymous]10y
When you say that "reducing suffering is the point", I suppose that there is a
reason to reduce it. How does it follow from "It's bad" to "needs to be
reduced"?
4DanielLC10y
No. It's a terminal value
[http://lesswrong.com/lw/l4/terminal_values_and_instrumental_values/]. When you
ask what the point of doing X is, the answer is that it reduces suffering, or
increases happiness, or does something else that's terminally valuable.
0[anonymous]10y
1. I don't see a justification for dividing values into these two categories in
that post.
2. Do I understand you right: you think that although there is no reason why we
should reduce suffering, and no reason what for we should reduce suffering, we
should do it anyway only because somebody called it a "terminal value"?
0DanielLC10y
Let me try this from the beginning.
Consider an optimization process. If placed in a universe, it will tend to
direct that universe towards maximizing a certain utility function. The end
result it moves the universe towards is called its terminal values.
Optimization processes do not necessarily have instrumental values. AIXI is the
most powerful possible optimization process, but it only considers the effect of
each action on its terminal values.
Evolution is another example. Species are optimized solely based on their
inclusive genetic fitness. It does not understand, for example, that if it got
rid of humans' blind spots, they'd do better in the long run, so it might be a
good idea to select for humans who are closer to having eyes with no blind
spots. Since you can't change gradually from "blind spot" to "no blind spot"
without getting "completely blind" for quite a few generations in between,
evolution is not going to get rid of our blind spots.
Humans are not like this. Humans can keep track of sub-goals to their goals. If
a human wants chocolate as a terminal value, and there is chocolate at the
store, a human can make getting to the store an instrumental value, and start
considering actions based on how they help get him/her to the store. These
sub-goals are known as instrumental values.
Perhaps you don't have helping people as a terminal value. However, you have
terminal values. I know this because you managed to type grammatically correct
English. Very few strings are grammatically correct English, and very few
patterns of movement would result in any string being sent as a comment to
LessWrong.
Perhaps typing grammatically correct English is a terminal value. Perhaps you're
optimizing something else, such as your own understanding of meta-ethics, and it
just so happens that grammatically correct English is a good way to get this
result. In this case, it's an instrumental value (unless you just have so much
computing power that you didn't even consider what helps you wri
0[anonymous]10y
Accident comment.
-2[anonymous]10y
Thanks for this wall of text, but you didn't even try to answer my question. I
asked for a justification of this division of values -- you just explained the
division to me.
If you are able to get the analogy, my argument sounds like this
[http://www.amazon.com/review/RW4KL1ZYJSLWB/ref=cm_cr_pr_perm?ie=UTF8&ASIN=1401922759&linkCode=&nodeID=&tag=]:
"The author has tried hard to tie various component of personal development into
three universal principles that can be applied to any situation. Unfortunately
human personality is a much more nuanced thing that defies such neat
categorizations. The attempt to force fit the 'fundamental principles of
personal development(!)' into neat categories can only result in such inanities
as love + truth = oneness; truth + power = courage; etc. There is no explanation
on why only these categories are considered universal, why not others? After all
we have a long list of desirable qualities say virtue, honor, commitment,
persistence, discipline etc. etc. On what basis do you pick 3 of them and
declare them to be 'fundamental principles'? If truth, love and power are the
fundamental principals of personality, then what about the others?
...
The point is that there is no scientific basis for claiming that truth, power
and love are the basic three principles and others are just a combination of
them. There are no hypothesis, no tests, no analysis and no proofs. No reference
to any studies in any university of repute. No double blind tests on sample
population. Just results. Whatever author says is a revelation that does not
require any external validation. His assertion is enough since it is based on
his personal experience. Believe it and you will see the results."
Btw, it's still extremely interesting to me how exactly the "terminality" of a
value gives sense to an action that has no reason to be done.
0DanielLC10y
Why do anything? It's not enough to have an infinite or circular chain of
reasoning. You can construct an infinite or circular chain of reasoning that
supports any conclusion. You have to have an ending to it. That is what we call
a terminal value.
Nobody said it has to be simple. Our values are complicated
[http://wiki.lesswrong.com/wiki/Complexity_of_value]. Love, truth, oneness,
power, courage, etc. are all terminal values. Some of them are also instrumental
values. Power is very useful in fulfilling other values, and you will put forth
more effort to achieve power than you would if it was just a terminal value.
There are also instrumental values that are not terminal values, such as going
to the store (assuming you don't particularly like the store, although even then
you could argue that it's the happiness you like).
0[anonymous]10y
I don't know why. The most plausible answer I know of -- because you like doing it.
1. Okay. However, these are only assertions with no justifications; but let's
assume that your first paragraph is right. Anyway, how does the "terminality"
of a value give sense to an otherwise senseless action?
2. I asked you why these two categories, and it looks like you even cite the
right piece out of my review-argument and... Bam! "Nobody said it has to be
simple."
But why? Why these two categories of values? Where is the justification? Or is
it just "too basic to be explained"? If you think so, please say so.
2DanielLC10y
What gives value to an otherwise senseless action is a meta-ethical question.
"Terminality" is just what you call it when you value something for reasons
other than it causing something else that you value.
Let me try making an example:
Suppose you're a paperclip-maximizer. You value paperclips. Paperclip factories
help build paperclips, so factories are good too. Given a choice between
building a factory immediately and a paperclip immediately, you'd probably pick
the former. It's like you value factories more than paperclips.
But if you're given the opportunity to build a factory-maximizer, you'd turn it
down. Those factories potentially could make a lot of paperclips, but they
won't, because the factory-maximizer would need that metal to make more
factories. You don't really value factories. They're just useful. You value
paperclips.
You could come up with an exception like this for any instrumental value. No
matter how much the instrumental value is maximized, you won't care unless it
helps with the terminal value. There is no such exception for your terminal
values. If there are more paperclips, it's better. End of story.
The actual utility function can be quite complicated. Perhaps you prefer
paperclips in a certain size range. Perhaps you want them to be easily bent, and
hard to break. In that case, your terminal value is more sophisticated than
"paperclips", but it's something.
0[anonymous]10y
Sorry for the pause. Have been thinking.
If there is a 'what for' reason to do something (What for did you buy this car? To drive to work), then it's an instrumental value. If there is only a 'why' reason (Why did you buy this car? Because I like it), then it's a terminal value. Right?
2DanielLC10y
I don't know the difference between "what for" and "why".
If you bought the car to drive to work, it's instrumental. If you bought it
because having nice cars makes you happy, it's instrumental. If you bought it
because you just prefer for future you to have a car, whether or not he's happy
about it or even wants a car, then it's terminal.
0[anonymous]10y
As for why: you can answer "why" with either "because" or "to", but you can only answer "what for" with "to". To avoid confusion I prefer to use "why" when I want a "because" and "what for" when I want a "to", e.g. Why did you buy this car? Because I like it. What for did you buy this car? To drive to work.
0[anonymous]10y
I'm not sure, are we talking about subjective or objective values?
0DanielLC10y
What's an objective value?
0[anonymous]10y
"existing freely or independently from a mind
[http://en.wikipedia.org/wiki/Objectivity_(philosophy)]"
1DanielLC10y
How are you defining value then?
It sounds to me like objective value is a contradiction in terms.
0[anonymous]10y
Value is just another way to say that something is liked or disliked by someone.
I'm sorry if all this time you were talking about subjective values. I have
nothing against them.
3wedrifid10y
With respect to vegetarianism there are a couple of vocal advocates. Don't
assume this applies to a majority. "Utilitarianism" is also considered outright
ridiculous by many (in contrast to the principles of consequentialism, expected
value maximising and altruistic values, which are generally popular.)
0[anonymous]10y
Wow. I was so heavily downvoted while not even one of my arguments was refuted.
Really?
3wedrifid10y
You were given replies, including to your previous complaint about these same comments. Read them.
0[anonymous]9y
Yeah, thanks. I think I just didn't understand what they were trying to say.
There has recently been some speculation that life started on Mars, and then got blasted to earth by an asteroid or something. Molybdenum is very important to life (eukaryote evolution was delayed by 2 billion years because it was unavailable), and the origin of life is easier to explain if Molybdenum is available. The problem is that Molybdenum wasn't available in the right time frame on Earth, but it was on Mars.
Anyway, assuming this speculation is true, Mars had the best conditions for starting life, but Earth had the best conditions for life existing, and it is unlikely conscious life would have evolved without either of these planets being the way they are. Thus, this could be another part of the Great Filter.
Side note: I find it amusing that Molybdenum is very important in the origin/evolution of life, and is also element 42.
The ancient Stoics apparently had a lot of techniques for habituation and changing cognitive processes. Some of those live on in the form of modern CBT. One of the techniques is to write a personal handbook with advice and sayings to carry around at all times, so as to never be without guidance from a calmer self. Indeed, Epictetus advises learning this handbook by rote to further internalisation. So I plan to write such a handbook for myself, once in long form with anything relevant to my life and lifestyle, and once in a short form that I update with things that are difficult at that time, be it strong feelings or being deluded by some biases.
In this book I intend to include a list of all known cognitive biases and logical fallacies. I know that some biases are helped by simply knowing them, does anyone have a list of those? And should I complete the books or have a clear concept of their contents, are you interested in reading about the process of creating one and possible perceived benefits?
I'm also interested in hearing from you again about this project if you decide to not complete it. Rock on, negative data!
I have found "I thought X would be awesome, and then on doing X realized that the costs were larger than the benefits" to be useful information for myself and others. (If your laziness isn't well modeled by that, that's also valuable information for you.)
(mild exaggeration) Has anyone else transitioned from "I only read Main posts" to "I nearly only read discussion posts" to "actually, I'll just take a look at the open thread and the people who responded to what I wrote" during their interactions with LW?
To be more specific, is there a relevant phenomenon about LW or is it just a characteristic of my psyche and history that explain my pattern of reading LW?
I read the sequences and a bunch of other great old main posts but now mostly read discussion. It feels like Main posts these days are either repetitive of what I've read before, simply wrong or not even wrong, or decision theory/math that's above my head. Discussion posts are more likely to be novel things I'm interested in reading.
Selection bias alert: asking people whether they have transitioned to reading mostly discussion and then to mostly just open threads in an open thread isn't likely to give you a good perspective on the entire population, if that is in fact what you were looking for.
Honestly, I don't know why Main is even an option for posting. It should really be just an automatically labeled/generated "Best of LW" section, where Discussion posts with, say, 30+ karma are linked. This is easy to implement, and easy to do manually using the Promote feature until it is. The way it is now, Main is mostly used by people who think they are making an important contribution to the site, which says more about their ego than about the quality of their posts.
Background: "The genie knows, but doesn't care" and then this SMBC comic.
The joke in that comic annoys me (and it's a very common one on SMBC, there must be at least five there with approximately the same setup). Human values aren't determined to align with the forces of natural selection. We happen to be the product of natural selection, and, yes, that made us have some values which are approximately aligned with long-term genetic fitness. But studying biology does not make us change our values to suddenly become those of evolution!
In other words, humans are a 'genie that knows, but doesn't care'. We have understood the driving pressures that created us. We have understood what they 'want', if that can really be applied here. But we still only care about the things which the mechanics of our biology happened to have made us care about, even though we know these don't always align with the things that 'evolution cares about.'
(Please if someone can think of a good way to say this all without anthropomorphising natural selection, help me. I haven't thought enough about this subject to have the clarity of mind to do that and worry that I might mess up because of such metaphors.)
Anyone tried to use the outside view on our rationalist community?
I mean, we are not the first people on this planet who tried to become more rational. Who were our predecessors, and what happened to them? Where did they succeed and where they failed? What lesson can we take from their failures?
The obvious reply will be: No one has tried doing exactly the same thing as we are doing. That's technically true, but that's a fully general excuse against using outside view, because if you look into enough details, no two projects are exactly the same. Yet it is experimentally proved that even looking at sufficiently similar projects gives better estimates than just using the inside view. So, if there was no one exactly like us, who was the most similar?
I admit I don't have data on this, because I don't study history, and I have no personal experience with Objectivists (which are probably the most obvious analogy). I would probably put Objectivists, various secret societies, educational institutions, or self-help groups into the reference class. Did I miss something important? The common trait is that those people are trying to make their thinking better, avoid some frequent faults, and t... (read more)
The reason why I asked was not just "who can we be pattern-matched with?", but also "what can we predict from this pattern-matching?". Not merely to say "X is like Y", but to say "X is like Y, and p(Y) is true, therefore it is possible that p(X) is also true".
Here are two answers pattern-matching LW to a cult. For me, the interesting question here is: "how do cults evolve?". Because that can be used to predict how LW will evolve. Not connotations, but predictions of future experiences.
My impression of cults is that they essentially have three possible futures: Some of them become small, increasingly isolated groups, that die with their members. Others are viral enough to keep replacing the old members with new members, and grow. The most successful ones discover a way of living that does not burn out their members, and become religions. -- Extinction, virality, or symbiosis.
What determines which way a cult will go? Probably it's compatibility of long-term membership with ordinary human life. If it's too costly, if it requires too much sacrifice from members, symbiosis is impossible. The other two choices probably depend on how much ... (read more)
To maybe help others out and solve the trust bootstrapping involved, I'm offering for sale <=1 bitcoin at the current Bitstamp price (without the usual premium) in exchange for Paypal dollars to any LWer with at least 300 net karma. (I would prefer if you register with #bitcoin-otc, but that's not necessary.) Contact me on Freenode as gwern.
EDIT: as of 9 September 2013, I have sold to 2 LWers.
Liquid nitrogen user
http://www.pnas.org/content/early/2013/08/21/1301888110
Does this also work with macaques, crows or some other animals that can be taught to use money, but didn't grow up in a society where this kind of money use is taken for granted?
Who is this and what has he done with Robin Hanson?
The central premise is in allowing people to violate patents if it is not "intentional". While reading the article the voice in my head which is my model of Robin Hanson was screaming "Hypocrisy! Perverse incentives!" in unison with the model of Eliezer Yudkowsky which was also shouting "Lost Purpose!". While the appeal to total invasive surveillance slightly reduced the hypocrisy concerns it at best pushes the hypocrisy to a higher level in the business hierarchy while undermining the intended purpose of intellectual property rights.
That post seemed out of place on the site.
This may be an odd question, but what (if anything) is known on turning NPCs into PCs? (Insert your own term for this division here, it seems to be a standard thing AFAICT.)
I mean, it's usually easier to just recruit existing PCs, but ...
The Travelling Salesman Problem
... (read more)
If anyone wants to teach English in China, my school is hiring. The pay is higher than the market rate and the management is friendly and trustworthy. Must have a Bachelor's degree and a passport from an English-speaking country. If you are at all curious, PM me for details.
I have updated on how important it is for Friendly AI to succeed (more now). I did this by changing the way I thought about the problem. I used to think in terms of the chance of Unfriendly AI, which led me to assign a chance to whether a fast, self-modifying AI, indifferent or Friendly, was possible at all.
Instead of thinking of the risk of UFAI, I started thinking of the risk of ~FAI. The more I think about it the more I believe that a Friendly Singleton AI is the only way for us humans to survive. FAI mitigates other existential risks of nature, unknowns, hu... (read more)
Is there a name for, taking someone being wrong on A as evidence as being wrong on B? Is this a generally sound heuristic to have? In the case of crank magnetism; should I take someone's crank ideas, as evidence against an idea that is new and unfamiliar to me?
It's evidence against them being a person whose opinion is strong evidence of B, which means it is evidence against B, but it's probably weak evidence, unless their endorsement of B is the main thing giving it high probability in your book.
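A toy Bayesian model makes the direction of this update concrete. All the numbers below are arbitrary, chosen only for illustration: the idea is that an endorsement of B is evidence for B only insofar as the endorser is reliable, so evidence of crankery (which lowers your estimate that they are reliable) weakens the endorsement.

```python
def p_b_given_endorsement(p_b, p_reliable):
    """Posterior P(B) after hearing the person endorse B.

    Assumed (illustrative) likelihoods:
      - a reliable person endorses B with prob 0.9 if B is true, 0.2 if false
      - an unreliable person endorses B with prob 0.5 either way
    """
    def p_endorse(b_true):
        reliable_rate = 0.9 if b_true else 0.2
        return p_reliable * reliable_rate + (1 - p_reliable) * 0.5

    numerator = p_b * p_endorse(True)
    denominator = numerator + (1 - p_b) * p_endorse(False)
    return numerator / denominator

# Same prior on B (0.5), but the crank evidence drops P(reliable) from 0.5 to 0.1:
before = p_b_given_endorsement(0.5, 0.5)  # ~0.667
after = p_b_given_endorsement(0.5, 0.1)   # ~0.535
```

The endorsement still pushes P(B) above the prior in both cases, but far less once the endorser looks unreliable, matching the point above that it is usually weak evidence.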
Are old humans better than new humans?
This seems to be a hidden assumption of cryonics / transhumanism / anti-deathism: We should do everything we can to prevent people from dying, rather than investing these resources into making more or more productive children.
The usual argument (which I agree with) is that "Death events have a negative utility". Once a human already exists, it's bad for them to stop existing.
So every human has a right to their continued existence. That's a good argument. Thanks.
Assuming Rawls's veil of ignorance, I would prefer to be randomly born in a world where a trillion people lead billion-year lifespans than one in which a quadrillion people lead million-year lifespans.
I agree, but is this the right comparison? Isn't this framing obscuring the fact that in the trillion-people world, you are much less likely to be born in the first place, in some sense?
Let us try this framing instead: Assume there are a very large number Z of possible different human "persons" (e.g. given by combinatorics on genes and formative experiences). There is a Rawlsian chance of 1/Z that a new created human will be "you". Behind the veil of ignorance, do you prefer the world to be one with X people living N years (where your chance of being born is X/Z) or the one with 10X people living N/10 years (where your chance of being born is 10X/Z)?
I am not sure this is the right intuition pump, but it seems to capture an aspect of the problem that yours leaves out.
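Worth noting: under this second framing, the two worlds in the original comparison come out exactly equal in expected life-years, so any preference between them has to rest on something other than the expectation. A minimal sketch, where Z is an arbitrary stand-in for the number of possible persons:

```python
from fractions import Fraction

Z = 10**30  # hypothetical number of possible persons (purely illustrative)

def expected_life_years(people, years, possible=Z):
    # Behind the veil of ignorance: P(being one of the born) * lifespan if born.
    return Fraction(people, possible) * years

world_a = expected_life_years(10**12, 10**9)  # a trillion billion-year lives
world_b = expected_life_years(10**15, 10**6)  # a quadrillion million-year lives
# Both worlds contain 10**21 total life-years, so the expectations coincide.
```

Exact rational arithmetic is used so the equality is not an artifact of floating-point rounding; the result is independent of the choice of Z.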
Is there a good way to avoid HPMOR spoilers on prediction book?
The following query is sexual in nature, and is rot13'ed for the sake of those who would either prefer not to encounter this sort of content on Less Wrong, or would prefer not to recall information of such nature about my private life in future interactions.
V nz pheeragyl va n eryngvbafuvc jvgu n jbzna jub vf fvtavsvpnagyl zber frkhnyyl rkcrevraprq guna V nz. Juvyr fur cerfragyl engrf bhe frk nf "njrfbzr," vg vf abg lrg ng gur yriry bs "orfg rire," juvpu V ubcr gb erpgvsl.
Sbe pynevsvpngvba, V jbhyq fnl gung gur trareny urnygu naq fgnovy... (read more)
Well, I'm flattered that you think my position is so enviable, but I also think this would be a pretty reasonable course of action for someone who made a billion dollars.
People sometimes say that we don't choose to be born. Is this false if I myself choose to have kids for the same reason my parents did (or at least to have kids if I were ever in the relevantly same situation)? If so, can I increase my measure by having more children for these reasons?
Has anyone here read up through ch18 of Jaynes' PT:LoS? I just spent two hours trying to derive 18.11 from 18.10. That step is completely opaque to me, can anybody who's read it help?
You can explain in a comment, or we can have a conversation. I've got gchat and other stuff. If you message me or comment we can work it out. I probably won't take long to reply, I don't think I'll be leaving my computer for long today.
EDIT: I'm also having trouble with 18.15. Jaynes claims that P(F|A_p E_aa) = P(F|A_p) but justifies it with 18.1... I just don't see how that... (read more)
A Singularity conference around a project financed by a Russian oligarch; it seems to be mostly about uploading and ems.
Looks curious.
I learned about Egan's Law, and I'm pretty sure it's a less-precise restatement of the correspondence principle. Anyone have any thoughts on that similarity?
Sounds good to me, although that's not what I would have guessed from a name like 'correspondence principle'.
I found this interesting post over at lambda the ultimate about constructing a provably total (terminating) self-compiler. It looked quite similar to some of the stuff MIRI has been doing with the Tiling Agents thing. Maybe someone with more math background can check it out and see if there are any ideas to be shared?
The post: Total Self-Compiler via Superstitious Logics
An Open Letter to Friendly AI Proponents by Simon Funk (who wrote the After Life novel):
... (read more)
How do you pronounce "Yvain"?
Framing effects (causing cognitive biases) can be thought of as a consequence of the absence of logical transparency in System 1 thinking. Different mental models that represent the same information are psychologically distinct, and moving from one model to another requires thought. If this thought was not expended, the equivalent models don't get constructed, and intuition doesn't become familiar with these hypothetical mental models.
This suggests that framing effects might be counteracted by explicitly imagining alternative framings in order to present a better sample to intuition; or, alternatively, focusing on an abstract model that has abstracted away the irrelevant details of the framing.
I recently realized that I have something to protect (or perhaps a smaller version of the same concept). I also realized that I've been spending too much time thinking about solutions that should have been obviously unworkable. And I've been avoiding thinking about the real root problem because it was too scary, and working on peripheral things instead.
Does anyone have any advice for me? In particular, being able to think about the problem without getting so scared of it would be helpful.
I would like recommendations for an Android / web-based to-do list / reminder application. I was happily using Astrid until a couple of months ago, when they were bought up and mothballed by Yahoo. Something that works with minimal setup, where I essentially stick my items in a list, and it tells me when to do them.
Bruce Schneier wrote an article on the Guardian in which he argues that we should give plausibility to the idea that the NSA can hack more forms of encryption than we previously believed.
The security of bitcoin wallets rests on elliptic-curve cryptography. This could mean that the NSA has the power to turn the whole bitcoin economy into toast if bitcoin becomes a real problem for them on a political level.
Need help understanding the latest SMBC comic strip on rationality and microeconomics...
So.... Thinking about using Familiar, and realizing that I don't actually know what I'd do with it.
I mean, some things are obvious - when I get to sleep, how I feel when I wake up, when I eat, possibly a datadump from RescueTime... then what? All told that's about 7-10 variables, and while the whole point is to find surprising correlations I would still be very surprised if there were any interesting correlations in that list.
Suggestions? Particularly from someone already trying this?
Has anyone got a recommendation for a nice RSS reader? Ideally I'm looking for one that runs on the desktop rather than in-browser (I'm running Ubuntu). I still haven't found a replacement that I like for Lightread for Google Reader.
Is the layout for anyone else weird? The thread titles are more spaced out, like three times. Maybe something broke during my last Firefox upgrade.
I've been discussing the idea of writing a series of short story fanfics where Rapture, an underwater city from the computer game Bioshock run by an Objectivist/Libertarian, is run by a different political philosophy. Possibly as a collaborative project with different people submitting different short stories. Would anyone here be interested in reading or contributing to something like that?
This is rather off-topic to the board, but my impression is that there is some sympathy here for alternative theories on heart disease/healthy diets, etc. (which I share). Any for alternative cancer treatments? I don't find any that have been recommended to me as remotely plausible, but wonder if I'm missing something, if some disproving study is flawed, etc.
An awful lot of politics seems to be variations on the theme of "let's you and him fight".
In the effective animal altruism movement, I've heard a bit (on LW) about wild animal suffering- that is, since raised animals are vastly outnumbered by wild animals (who encounter a fair bit of suffering on a frequent basis), we should be more inclined to prevent wild suffering than worry about spreading vegetarianism.
That said, I think I've heard it sometimes as a reason (in itself!) not to worry about animal suffering at all, but has anyone tried to solve or come up with solutions for that problem? Where can I find those? Alternatively, are there more resources I can read on wild animal altruism in general?
Hi, I am taking a course in Existentialism. It is required for my degree. The primary authors are Sartre, de Beauvoir and Merleau-Ponty. I am wondering if anyone has taken a similar course, and how they prevented the material from driving them insane (I have been warned this may happen). Is there any way to frame the material to make sense to a naturalist/reductionist?
This could be a Lovecraft horror story: "The Existential Diary of JMiller."
Week 3: These books are maddeningly incomprehensible. Dare I believe that it all really is just nonsense?
Week 8: Terrified. Today I "saw" it - the essence of angst - and yet at the same time I didn't see it, and grasping that contradiction is itself the act of seeing it! What will become of my mind?
Week 12: The nothingness! The nothingness! It "is" everywhere in its not-ness. I can not bear it - oh no, "not", the nothingness is even constitutive of my own reaction to it - aieee -
(Here the manuscript breaks off. JMiller is currently confined in the maximum security wing of the Asylum for the Existentially Inane.)
Another feature suggestion that will probably never be implemented: a check box for "make my up/down vote visible to the poster". The information required is already in the database.
What happened with the Sequence Reruns? I was getting a lot out of them. Were they halted due to lack of a party willing to continue posting them, or was a decision made to end them?
When I was a teenager I took a personality test as a requirement for employment at a retail clothing store. I didn't take it too seriously; I "failed" it and that was the end of my application. How do these tests work, and how do you pass or fail them? Is there evidence that these tests can actually predict certain behaviors?
I recently read Luminosity/radiance, was there ever a discussion thread on here about it?
SPOILERS for the end
V jnf obgurerq ol gur raq bs yhzvabfvgl. Abg gb fnl gung gur raq vf gur bayl rknzcyr bs cbbe qrpvfvba znxvat bs gur punenpgref, naq cresrpgyl engvbany punenpgref jbhyq or obevat naljnl. Ohg vg frrzf obgu ernfbanoyr nf fbzrguvat Oryyn jbhyq unir abgvprq naq n terng bccbeghavgl gb vapyhqr n engvbanyvgl yrffba. Anzryl, Oryyn artrypgrq gb fuhg hc naq zhygvcyl. Fur vf qribgvat yvzvgrq erfbheprf gbjneqf n irel evfxl cyna bs unygvat nyy uhzna zheqre ol ... (read more)
Yet another article on the terribleness of schools as they exist today. It strikes me that Methods of Rationality is in large part a fantasy of good education. So is the Harry Potter/Sherlock Holmes crossover I just started reading. Alicorn's Radiance is a fair fit to the pattern as well, in that it depicts rapid development of a young character by incredible new experiences. So what solutions are coming out of the rational community? What concrete criteria would we like to see satisfied? Can education be 'solved' in a way that will sell outside this community?
Fighting (in the sense of arguing loudly, as well as showing physical strength or using it) seems to be bad the vast majority of the time.
When is fighting good? When does fighting lead you to Win TDT style (which instances of input should trigger the fighting instinct and payoff well?)
There is an SSA argument to be made for fighting in that taller people are stronger, stronger people are dominant, and bigger skulls correlate with intelligence. But it seems to me that this factor alone is far, far away from being sufficient justification for fighting, given the possible consequences.
Just had a discussion with my in-law about the singularity. He's a physicist and his immediate response was: There are no singularities. They appear mathematically all the time and it only means that there is another effect taking over. Correspondingly a quick google thus brought up this:
http://www.askamathematician.com/2012/09/q-what-are-singularities-do-they-exist-in-nature/
So my question is: what are the 'obvious' candidates for limits that take over before everything optimizable is optimized by runaway technology?
On LW, 'singularity' does not refer to a mathematical singularity, and does not involve or require physical infinities of any kind. See Yudkowsky's post on the three major meanings of the term singularity. This may resolve your physicist friend's disagreement. In any case, it is good to be clear about what exactly is meant.
Lack of cheap energy.
Ecological disruption.
Diminishing returns of computation.
Diminishing returns of engineering.
Inability to precisely manipulate matter below certain size thresholds.
All sorts of 'boring' engineering issues by which things that get more and more complicated get harder and harder faster than their benefits increase.
I am seeking a mathematical construct to use as a logical coin for the purpose of making hypothetical decision theory problems slightly more aesthetically pleasing. The required features are:
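One candidate that seems to fit the "logical coin" brief (my own suggestion, offered without knowing the full list of requirements) is the low bit of a cryptographic hash of a fixed statement. It is fully deterministic, so every agent that performs the computation agrees on the outcome, yet it is unpredictable in advance to anyone who has not actually computed it. A minimal sketch:

```python
import hashlib

def logical_coin(statement: str) -> bool:
    """Deterministic 'coin flip' derived from a statement.

    The outcome is a mathematical fact about the statement (the low bit of
    its SHA-256 digest), not a physical random event: any two reasoners who
    compute it will get the same answer, but there is no known shortcut to
    predicting it without doing the hash computation.
    """
    digest = hashlib.sha256(statement.encode("utf-8")).digest()
    return bool(digest[0] & 1)
```

For a decision theory problem, the "coin" can then be stated as, e.g., "the coin comes up heads iff `logical_coin('problem 7')` is true", giving a fact that is fixed at the time the problem is posed but opaque to the agents in it.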
No open thread for Jan 2014 so I'll ask here. Is anybody interested in enactivism? Does anybody think that there is a cognitivist bias in LessWrong?
Why should an AI have to self-modify in order to be super-intelligent?
One argument for self-modifying FAI is that "developing an FAI is an extremely difficult problem, and so we will need to make our AI self-modifying so that it can do some of the hard work for us". But doesn't making the FAI self-modifying make the problem much more difficult, since we then also have to figure out how to make goals stable under self-modification, which is itself a very difficult problem?
The increased difficulty could be offset by the ability for the AI to undergo a &quo... (read more)
.
LWers seem to be pretty concerned about reducing suffering by vegetarianism, charity, utilitarianism etc. which I completely don't understand. Can anybody explain to me what is the point of reducing suffering?
Thanks.